40m Log, Page 151 of 344
ID   Date   Author   Type   Category   Subject
  15341   Wed May 20 20:10:34 2020   rana, John Z   Update   Computer Scripts / Programs   NDS2 server / conf updated - seems OK now

We noticed about a week ago that the NDS2 channel lists were not getting updated on megatron. JZ and I investigated; he was able to fix it all up this afternoon by logging in and snooping around Megatron.

Please try it out and tell me about any problems in getting fresh data.
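For anyone trying it out, here is a minimal sketch of grabbing data with the python NDS2 client (the hostname, port, channel name, and GPS times below are only placeholders, not necessarily the right values for this setup):

import nds2

# connect to the NDS2 server (replace host/port with the correct values)
conn = nds2.connection('megatron', 31200)

# fetch 16 s of one DQ channel; times are GPS seconds
bufs = conn.fetch(1273900000, 1273900016, ['C1:PEM-SEIS_BS_X_OUT_DQ'])
print(bufs[0].data.mean())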


  1. The NDS2 server is what we connect to through our python NDS2 client software to download some data.
  2. It has been working for years, but it looks like the channel lists that it generates got corrupted back in 2017.
  3. Since the NDS2 server code tries to make incremental updates, it was failing to make a new channel list: it could not parse the corrupted file.
  4. There was a controls crontab entry to restart the server every morning, but the file name in that tab had a typo, so that wasn't working. I commented it out, since it shouldn't be necessary (let's see how it goes...).
  5. The nds2mgr account also has a crontab, but that was failing since it didn't have sudo permission. JZ added nds2mgr to the sudoers list, so that should work now.
  6. I was able to get new channels as of 4 PM today, so it seems to be working.

* we should remember to rebuild the NDS2 server code for Ubuntu. The binary currently running there is built for CentOS / SL7, but we moved to Ubuntu recently since SL7 support is going away.

** the nds2 code & conf files are not backed up anywhere since they're not on /cvs/cds. There are 52 GB (!!) of txt channel lists & archives which we don't need to back up.

  5094   Tue Aug 2 16:43:23 2011   jamie   Update   CDS   NDS2 server on mafalda restarted for access to new channels

In order to get access to new DQ channels from the NDS2 server, the NDS2 server needs to be told about the new channels and restarted.  The procedure is as follows:

ssh mafalda
cd /users/jzweizig/nds2-mafalda
./build_channel_history
./install_channel_list
pkill nds2
# wait a few seconds for the process to quit and release the server port
./start_nds2

This procedure needs to be run every time new _DQ channels are added.

We need to set this up as a proper service, so the restart procedure is more elegant.

An additional comment from John Z.:

    The --end-gps parameter in ./build_channel_history seems to be causing
    some trouble. It should work without this parameter, but there is a
    directory with a gps time of 1297900000 (evidently a test for GPS1G)
    that might screw up the channel list generation. So, it appears that
    the end time requires a time for which data already exists. This
    wouldn't seem to be a big deal, but it means that it has to be modified
    by hand before running. I haven't fixed this yet, but I think that I
    can probably pick out the most recent frame and use that as an end-time
    point. I'll see if I can make that work...

  10279   Sat Jul 26 15:30:15 2014   Joseph Areeda   Update   Computer Scripts / Programs   NDS2 server problem on megatron

The NDS2 server on megatron was unresponsive for what I think was the last couple of days.

The NDS2 log file (~nds2mgr/logs/nds2-201407151045.log) started reporting "Stage: parser output queue is full." at 2014.7.24 14:47:54. Also, there are 16 connections still not closed with LindmeierLaptop.cacr.caltech.edu (131.215.146.102), 15 of them in CLOSE_WAIT.

To identify these zombie sockets we use "netstat -an | grep 31200"

The server was in a condition where /etc/init.d/nds2 stop didn't work, so the process had to be manually kill -9'ed; about 3 or 4 minutes later the zombie sockets were gone and /etc/init.d/nds2 start was used to restart the server.

The Lindmeier laptop was using pynds to get a bunch of channels at once to test drive a streaming visualization code for glitches. It's unclear whether this bumped into a server limitation. We have seen similar states in ldvw that seem to be the result of errors which leave client-server connections not closed properly, leaving data in an output buffer and causing Linux to wait for the other side to empty the buffer.

  13293   Tue Sep 5 14:41:58 2017   gautam   Update   CDS   NDS2 server restarted on megatron

I was unable to download data using nds2. Gabriele had reported similar problems a week ago but I hadn't followed up on this.

I repeated steps 5-7 from elog 13161, and now it seems that I can get data from the nds2 servers again. Unclear why the nds2 server had to be restarted. I wonder if this is somehow related to the mysterious acromag EPICS server tmux session dropout.

  13331   Tue Sep 26 13:40:45 2017   gautam   Update   CDS   NDS2 server restarted on megatron

Gabriele reported problems with the nds2 server again. I restarted it again.

update: had to do it again at 1730 today - unclear why nds2 is so flaky. Log files don't suggest anything obvious to me...

Quote:

I was unable to download data using nds2. Gabriele had reported similar problems a week ago but I hadn't followed up on this.

I repeated steps 5-7 from elog 13161, and now it seems that I can get data from the nds2 servers again. Unclear why the nds2 server had to be restarted. I wonder if this is somehow related to the mysterious acromag EPICS server tmux session dropout.

 

  13161   Thu Aug 3 00:59:33 2017   gautam   Update   CDS   NDS2 server restarted, /frames mounted on megatron

[Koji, Nikhil, Gautam]

We couldn't get data using python nds2. There seems to have been many problems.

  1. /frames wasn't mounted on megatron, which was the nds2 server. Solution: added /frames 192.168.113.209(sync,ro,no_root_squash,no_all_squash,no_subtree_check) to /etc/exportfs on fb1, followed by sudo exportfs -ra. Using showmount -e, we confirmed that /frames was being exported.
  2. Edited /etc/fstab on megatron to be fb1:/frames/ /frames nfs ro,bg,soft 0 0. Tried to run mount -a, but console stalled.
  3. Used nfsstat -m on megatron. Found out that megatron was trying to mount /frames from old FB (192.168.113.202). Used sudo umount -f /frames to force unmount /frames/ (force was required).
  4. Re-ran mount -a on megatron.
  5. Killed nds2 using /etc/init.d/nds2 stop - didn't work, so we manually kill -9'ed it.
  6. Restarted nds2 server using /etc/init.d/nds2 start.
  7. Waited for ~10mins before everything started working again. Now usual nds2 data getting methods work.

I have yet to check about getting trend data via nds2, can't find the syntax. EDIT: As Jamie mentioned in his elog, the second trend data is being written but is inaccessible over nds (either with dataviewer, which uses fb as the ndsserver, or with python NDS, which uses megatron as the ndsserver). So as of now, we cannot read any kind of trends directly, although the full data can be downloaded from the past either with dataviewer or python nds2. On the control room workstations, this can also be done with cds.getdata.
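For the record, I believe the usual NDS2 trend syntax is to append a statistic and a trend type to the channel name, e.g. 'CHAN.mean,m-trend' or 'CHAN.rms,s-trend', with the GPS start/stop times being multiples of 60 for minute trends. Not verified against our server, and the channel below is only an example:

import nds2

conn = nds2.connection('megatron', 31200)
# minute-trend mean and rms of an example channel; start/stop are multiples of 60
bufs = conn.fetch(1185926400, 1185930000,
                  ['C1:PEM-SEIS_BS_X_OUT_DQ.mean,m-trend',
                   'C1:PEM-SEIS_BS_X_OUT_DQ.rms,m-trend'])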

  13162   Thu Aug 3 10:51:32 2017   rana   Update   CDS   NDS2 server restarted, /frames mounted on megatron

same issue on NODUS; I edited the /etc/fstab and tried mount -a, but it gives this error:

controls@nodus|~ 1> sudo mount -a
mount.nfs: access denied by server while mounting fb1:/frames

needs more debugging - this is the machine that allows us to have backed up frames in LDAS. Permissions issues from fb1 ?

  13163   Thu Aug 3 11:11:29 2017   gautam   Update   CDS   NDS2 server restarted, /frames mounted on nodus

I added nodus' eth0 IP (192.168.113.200) to the list of allowed nfs clients in /etc/exportfs on fb1, and then ran sudo mount -a on nodus. Now /frames is mounted.

Quote:

needs more debugging - this is the machine that allows us to have backed up frames in LDAS. Permissions issues from fb1 ?

 

  15342   Thu May 21 15:31:26 2020   gautam   Update   Computer Scripts / Programs   NDS2 service restarted

The service had failed at 16:09 yesterday. I just restarted it and am now able to fetch data again. 

Unrelated to this work: I restarted the httpd service on nodus a couple of times this afternoon while experimenting with the summary pages.

Quote:

Please try it out and tell me about any problems in getting fresh data.

  15345   Fri May 22 10:37:41 2020   rana   Update   Computer Scripts / Programs   NDS2 service restarted

was dead again this morning - JZ notified

current restart instructions (after ssh to megatron):

cd /home/nds2mgr/nds2-megatron

sudo su nds2mgr

make -f test_restart

  15346   Mon May 25 10:54:41 2020   rana   Update   Computer Scripts / Programs   NDS2 service restarted

so far it has run through the weekend with no problems (except that there are huge log files as usual).

I have started to set up monit to run on megatron to watch this process. In principle this would send us alerts when things break and also give a web interface to watch monit. I'm not sure how to do web port forwarding between megatron and nodus, so for now it's just on the terminal, e.g.:

monit>sudo monit status
Monit 5.25.1 uptime: 4m

System 'megatron'
  status                       OK
  monitoring status            Monitored
  monitoring mode              active
  on reboot                    start
  load average                 [0.15] [0.22] [0.25]
  cpu                          0.6%us 1.0%sy 0.2%wa
  memory usage                 1001.4 MB [25.0%]
  swap usage                   107.2 MB [1.9%]
  uptime                       40d 17h 55m
  boot time                    Tue, 14 Apr 2020 17:47:49
  data collected               Mon, 25 May 2020 11:43:03

Process 'nds2'
  status                       OK
  monitoring status            Monitored
  monitoring mode              active
  on reboot                    start
  pid                          25007
  parent pid                   1
  uid                          4666
  effective uid                4666
  gid                          4666
  uptime                       3d 1h 22m
  threads                      53
  children                     0
  cpu                          0.0%
  cpu total                    0.0%
  memory                       19.4% [776.1 MB]
  memory total                 19.4% [776.1 MB]
  security attribute           unconfined
  disk read                    0 B/s [2.3 GB total]
  disk write                   0 B/s [17.9 MB total]
  data collected               Mon, 25 May 2020 11:43:03

 

  15067   Tue Dec 3 20:32:37 2019   rana   Omnistructure   DAQ   NDS2 situation

Recently, according to Gautam, the NDS2 server has been dying on Megatron ~daily or weekly. The prescription is to restart the server.

  1. I could find no instructions (that work) in the elog or wiki. We must remove the misleading entries from the wiki and update it with whatever works as of today.
  2. There is a line (which has been commented out) in the Megatron crontab which is close to the right command, but it has the wrong path.
  3. Running the command from the CRON (/home/nds2mgr/nds2-megatron/test_restart) gives several errors.
  4. When I run the init.d command which is in the script, it seems to run fine.
  5. The server then takes several minutes to get itself together; i.e. just because it is running doesn't mean that you can get data. I recommend waiting 5-10 minutes.

Also, megatron is running Ubuntu 12!! Let's decide on a day to upgrade it to a Debian 18ish... Word from Rolf is that Scientific Linux is fading out everywhere, so Debian is the new operating system for all conformists.

Attachment 1: getData.py
#!/usr/bin/env python
# this function gets some data (from the 40m) and saves it as
# a .mat file for the matlabs
# Ex. python -O getData.py


from scipy.io import savemat,loadmat
import scipy.signal as sig
from astropy.time import Time
import nds2
... 116 more lines ...
Attachment 2: chanlist.txt
PEM-SEIS_BS_X_OUT_DQ
PEM-SEIS_BS_Y_OUT_DQ
PEM-SEIS_BS_Z_OUT_DQ
PEM-SEIS_EX_X_OUT_DQ
PEM-SEIS_EX_Y_OUT_DQ
PEM-SEIS_EX_Z_OUT_DQ
PEM-SEIS_EY_X_OUT_DQ
PEM-SEIS_EY_Y_OUT_DQ
PEM-SEIS_EY_Z_OUT_DQ
  17075   Thu Aug 11 16:48:59 2022   rana   Update   Computer Scripts / Programs   NDS2 updates

We had several problems with our NDS2 server configuration. It runs on megatron, but I think it may have had issues since perhaps not everyone was aware of it running there.

  1. Channel lists were supposed to be updated regularly, but the nds2_nightly script did not exist in the specified directory. I have moved it from Joe Areeda's personal directory (/home/nds2mgr/joework/server/src/utils/) to nds2mgr/channel-tracker/.
  2. The channel history files (/home/nds2mgr/channel-tracker/channel_history/) are stored on the local megatron disk. These files had grown to ~50 GB over the past several years. I backed these up to /users/rana/, and then wiped them out so that the NDS could regenerate them. Now that the megatron local disk is not full, it seems to work in giving raw data.
  3. Need to confirm that this serves up trend data (second and minute)
  4. I think there is an nds2-server package for Debian, so we should update megatron's OS to the preferred flavour of Debian and use that. Whom should we get to help with this install?

Since Megatron is currently running the "Shanghai" quad-core Opteron processor from ~2009, it's ~time to replace it with a more up-to-date machine. I'll check with Neo to see if he has any old LDAS leftovers that are better.

  14267   Fri Nov 2 12:07:16 2018   rana   Update   CDS   NDScope

https://alog.ligo-wa.caltech.edu/aLOG/index.php?callRep=44971

Let's install Jamie's new Data Viewer

  14344   Tue Dec 11 14:33:29 2018   gautam   Update   CDS   NDScope

NDscope is now running on pianosa. To be really useful, we need the templates, so I've made /users/Templates/NDScope_templates where these will be stored. Perhaps someone can write a parser to convert dataviewer .xml to something ndscope can understand. To get it installed, I had to run:

sudo yum install ndscope
sudo yum install python34-gpstime
sudo yum install python34-dateutil
sudo yum install python34-requests

 I also changed the PYTHONPATH variable to include the python3.4 site-packages directory in .bashrc

Quote:

https://alog.ligo-wa.caltech.edu/aLOG/index.php?callRep=44971

Let's install Jamie's new Data Viewer

Attachment 1: ndscope.png
ndscope.png
  269   Fri Jan 25 17:11:07 2008   Max, Andrey   Configuration   General   NEW_FETCH_SHOUROV and GET_DATA do not work

The problem which started yesterday after Andrey's framebuilder restart still persists.

It is still impossible to read data in the past from the channels using "get_data" which in turn uses "new_fetch_shourov".

Max was trying to read data from the channel
"C1:LSC-DARM_CTRL",

and he got the same error messages as Andrey.

Andrey tried earlier today to read data from "C1:SUS-ITMS_SUS" or "C1:SUS-ETMX_SUS" with the error message
Error in ==> new_fetch_shourov at 22
at (start_time+duration) > stops(end)

So, it seems that Robert Ward fixed just one problem out of two problems.

Robert revived the realtime signals in Dataviewer,
but did not revive the memory of channels for new_fetch_shourov.

To be more precise, channels have memory (it is possible to see the "Playback" curves in Dataviewer),
but "get_data" and "new_fetch_shourov" do not see the data from those channels. The problem appeared immediately after Andrey's clicking on blue buttons to restart the framebuilder.

Andrey again apologizes.
  14480   Sun Mar 17 00:42:20 2019   gautam   Update   ALS   NF1611 cannot be shot-noise limited?

Summary:

Per the manual (pg12) of the NF 1611 photodiode, the "Input Noise Current" is 16 pA/rtHz. It also specifies that for "Linear Operation", the max input power is 1 mW, which at 1um corresponds to a current shot noise of ~14 pA/rtHz. Therefore,

  1. This photodiode cannot be shot-noise limited if we also want to stay in the spec-ed linear regime.
  2. We don't need to worry so much about the noise figure of the RF amplifier that follows the photodiode. In fact, I think we can use a higher gain RF amplifier with a slightly worse noise figure (e.g. ZHL-3A) as we will benefit from having a larger frequency discriminant with more RF power reaching the delay line.

Details:

Attachment #1: Here, I plot the expected voltage noise due to shot noise of the incident light, assuming 0.75 A/W for InGaAs and 700V/A transimpedance gain. 

  • For convenience, I've calibrated on the twin axes the current shot noise (X) and equivalent amplifier noise figure at a given voltage noise, assuming a 50 ohm system (Y).
  • The 16 pA/rtHz input current noise exceeds the shot noise contribution for powers as high as 1 mW.
  • Even at 0.5 mW power on the PD, we can use the ZHL-3A rather than the Teledyne:
    • This calculation was motivated by some suspicious features in the Teledyne amplifier gain, I will write a separate elog about that. 
    • For the light levels we have, I expect ~3dBm RF signal from the photodiode. With the 24dB of gain from the ZHL-3A, the signal becomes 27dBm, which is smaller than (but close to) the spec-ed max output of the ZHL-3A, which is 29.5 dBm. Is this too close to the edge?
    • I will measure the gain/noise of the ZHL-3A to get a better answer to these questions.
  • If in the future we get a better photodiode setup that reaches sub-1nV/rtHz (dark/electronics) voltage noise, we may have to re-evaluate what is an appropriate RF amplifier.
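As a quick sanity check of the numbers quoted above (a back-of-the-envelope sketch; the responsivity and transimpedance are the values assumed in this entry, and the ~14 pA/rtHz quoted in the Summary corresponds to a slightly lower responsivity):

import numpy as np

e = 1.602e-19      # electron charge [C]
R = 0.75           # assumed InGaAs responsivity [A/W]
P = 1e-3           # max "linear" input power [W]
Z = 700            # transimpedance [V/A]

I_dc   = R * P                   # DC photocurrent, 0.75 mA
i_shot = np.sqrt(2 * e * I_dc)   # shot noise current, ~15.5 pA/rtHz
print(i_shot * Z, 16e-12 * Z)    # ~10.9 nV/rtHz (shot) vs ~11.2 nV/rtHz (input noise)

So even at the full 1 mW, the spec-ed 16 pA/rtHz input current noise is comparable to or above the shot noise contribution, consistent with the claim above.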
Attachment 1: PDnoise.pdf
PDnoise.pdf
  12232   Thu Jun 30 14:31:02 2016   Chemistry   Update   SUS   NO

  12045   Thu Mar 24 07:56:09 2016   Steve   Update   Calibration-Repair   NO Noise Eater for 1W Innolight

The 1W Innolight is NOT getting a Noise Eater, as was decided yesterday at the 40m meeting. Corrected 3-25-2016.

The repair quote, with the noise eater added, is in the 40m wiki.

Quote:

 

Quote:
Quote:

After adjusting the alignment of the two beams onto the PD, I managed to recover a stronger beatnote of ~ -10dBm. I managed to take some measurements with the PLL locked, and will put up a more detailed post later in the evening. I turned the IMC autolocker off, turned the 11MHz Marconi output off, and closed the PSL shutter for the duration of my work, but have reverted these to their nominal state now. The are a few extra cables running from the PSL table to the area near the IOO rack where I was doing the measurements from, I've left these as is for now in case I need to take some more data later in the evening...I

Innolight 1W 1064nm, sn 1634 was purchased on 9-18-2006 at CIT. It came to the 40m around 2010.

Its diodes should be replaced, based on its age and performance.

RIN and noise eater are bad. I will get a quote on this job.

The Innolight manual frequency noise plot is the same as Lightwave's, elog 11956.

Diagnosis from Glasgow:

“So far we have analyzed the laser. The pump diode is degraded. Next we would replace it with a new diode. We would realign the diode output beam into the laser crystal. We check all the relevant laser parameters over the whole tuning range. Parameters include single direction operation of the ring resonator, single frequency operation, beam profile and others. If one of them is out of spec, then we would take actions accordingly. We would also monitor the output power stability over one night. Then we repackage and ship the laser.”

 

  12371   Thu Aug 4 10:57:58 2016   rana   Update   Computer Scripts / Programs   NODUS update / restarts underway

Usual Ubuntu apt-get upgrades; long delayed but now happening.

  13761   Wed Apr 18 17:15:35 2018   rana   Configuration   Computers   NODUS: no xmgrace for dataviewer

Turns out, there is no RPM for XmGrace on Scientific Linux 7. Since this is the graphic output of dataviewer, we can't use dataviewer through X windows until this gets fixed. CDS is looking into an xmGrace replacement, but it would be better if we can hijack an alt RH repo to steal a temporary xmgrace RPM. KT has been pinged.

  13897   Wed May 30 12:13:13 2018   rana   Update   Computers   NODUS: rsyncd + frames

To get our rsync back to LDAS back up, I followed instructions from Dan Kozak:

  1. mounted /frames from fb1: I modified /etc/fstab
  2. modified /etc/rsyncd.conf to allow access from LDAS
  3. restarted rsync as daemon: 'sudo /usr/bin/rsync --daemon --config=/etc/rsyncd.conf'

Next need to figure out what the SL7 protocol is for running this as a daemon after boot - some kind of init.d thing probably

  15302   Mon Apr 13 16:51:49 2020   rana   Summary   DAQ   NODUS: rsyncd daemon / service set up

I just now modified the /etc/rsyncd.conf file as per Dan Kozak's instructions. The old conf file is still there with the file name appended with today's date.

I then enabled the rsync daemon to run on boot using 'enable'. I'll ask Dan to start the file transfers again and see if this works.

controls@nodus|etc> sudo systemctl start rsyncd.service
controls@nodus|etc> sudo systemctl enable rsyncd.service
Created symlink from /etc/systemd/system/multi-user.target.wants/rsyncd.service to /usr/lib/systemd/system/rsyncd.service.
controls@nodus|etc> sudo systemctl status rsyncd.service
● rsyncd.service - fast remote file copy program daemon
   Loaded: loaded (/usr/lib/systemd/system/rsyncd.service; enabled; vendor preset: disabled)
   Active: active (running) since Mon 2020-04-13 16:49:12 PDT; 1min 28s ago
 Main PID: 4950 (rsync)
   CGroup: /system.slice/rsyncd.service
           └─4950 /usr/bin/rsync --daemon --no-detach

Apr 13 16:49:12 nodus.martian.113.168.192.in-addr.arpa systemd[1]: Started fast remote file copy program daemon.
Apr 13 16:49:12 nodus.martian.113.168.192.in-addr.arpa systemd[1]: Starting fast remote file copy program daemon...

  12141   Tue May 31 16:52:58 2016   Steve   Update   safety   NONO

Please do not place anything on the top of the cabinets that is not tied down. It will end up on our heads in an earthquake.

 

Attachment 1: nono.jpg
nono.jpg
  1045   Mon Oct 13 18:59:39 2008   Yoichi   Update   PSL   NPRO EMI and FSS error signal correlation
I made a simple loop antenna to measure the electro-magnetic interference (EMI) around the master oscillator NPRO.

The first plot shows the comparison of the FSS error signal with the EMI measured when the antenna was put next to the NPRO (the MOPA box was opened).
There are harmonics of 78.1kHz which are present in both spectra. It is probably coming from the DC-DC converter in the NPRO board.

The second plot is the same spectra when the antenna was put far from the NPRO (just outside of the PSL enclosure).
The 78.1kHz harmonics are gone. So these are very likely to be coming from the NPRO.

The third plot shows the coherence functions between the signal from the antenna and the FSS error signal.
When the antenna was put near the NPRO, there is a strong coherence seen around 78.2kHz, whereas there is no strong coherence
when the antenna is far away from the NPRO.
This is strong evidence that the 78.2 (or 78.1) kHz harmonics are coming from the NPRO itself.

There are many peaks other than 78.1kHz harmonics in the FSS error signal spectrum. For most of them you can also find corresponding peaks in the EMI spectrum.
We have to hunt down those peaks to avoid the slew-rate saturation of the FSS.
Attachment 1: IMG_1692.JPG
IMG_1692.JPG
Attachment 2: Spectrum.png
Spectrum.png
Attachment 3: SpectrumFar.png
SpectrumFar.png
Attachment 4: Coherence.png
Coherence.png
  2158   Thu Oct 29 13:48:32 2009   Koji   Update   PSL   NPRO LTMP lowered 9.5deg

13:00 Found MC TRANS less than 7.
13:50 Go into the PSL table.
14:20 Work done. Now I am running SLOWscan script.
15:10 SLOWscan finished. It was not satisfactory. I go into the table again.
15:15 Running SLOWscan again.
16:00 SLOWscan done. Lock PMC. Adjust NPRO current so as to maximize PMC TRANS.
16:10 Lock RC, PMC, MZ, MC. Align PMC / MZ on the table. Align MC WFS beams on the QPDs.
16:30 Work done.

New FSS-SLOWDC nominal is -4.0

Now MC TRANS is 7.9. This is a +12% increase. ENJOY!
HEPA is on at 90%. Light is off.

---------

NPRO TEMP trimmer adjustment
o PSL NPRO TEMP trimmer at the back of the laser head was turned 6.5 times in CW.
o It reduced NPRO crystal temp by 9.5deg. (43.5deg -> 34.0deg for FSS_SLOWDC -5.5)

To revert the previous setting, refer to the former measurement
c.f. http://nodus.ligo.caltech.edu:8080/40m/2008

NPRO Thermal scan
o 2 scans are performed.
o I selected the colder side of the second scan. i.e. SLOWDC=-4.0

NPRO Current adjustment
o Tweaked C1:PSL-126MOPA_126CURADJ while looking at PMC TRANS.
o CURADJ was changed from -2.25 to -1.9. This corresponds to a change of C1:PSL-126MOPA_CURMON from 2.503A to 2.547A.

Attachment 1: 091028_PSL.png
091028_PSL.png
  2161   Thu Oct 29 20:21:14 2009   Koji   Update   PSL   NPRO LTMP lowered 9.5deg

Here is the plots for the powers. MC TRANS is still rising.

What I noticed was that C1:PSL-FSS_PCDRIVE no longer hits the yellow alert.
The mean reduced from 0.4 to 0.3. This is good, at least for now.

Attachment 1: PSL_MC.png
PSL_MC.png
  5202   Fri Aug 12 03:49:45 2011   Jenny   Summary   PSL   NPRO PDH-Locked to Ref Cav

DMass and I locked the NPRO laser (Model M126-1064-700, S/N 238) on the AP table to the reference cavity on the PSL table using the PDH locking setup shown in the block diagram below (the part with the blue background):

 

LIGO_block_diagram_2.png

 

A Marconi IFR 2023A signal generator outputs a sine wave at 230 kHz and 13 dBm, which is split. One output of the splitter drives the laser PZT while the other is sent to a 7dBm mixer. Also sent to the mixer is the output of a photodiode that is detecting the reflected power from off the cavity. (A DC block is used so that only RF signal from the PD is sent to the mixer). The output of the mixer goes through an SR560 low-noise preamp, which is set to act as a low pass filter with a gain of 5 and a pole at 30 kHz. That error signal is then sent to the –B port of the LB1005 PDH servo, which has the following settings: PI corner at 10kHz, LF gain limit of 50 dB, and gain of 2.7 (1.74 corresponds to a decade, so the signal is multiplied by 35). The output signal from the LB1005 is added to the 230 kHz dither using another SR560 preamp, and the sum of the signals drive the PZT.

 

I am monitoring the transmission through the cavity on a digital oscilloscope (not shown in the diagram) and with a camera connected to a TV monitor. I sweep the NPRO laser temperature set point manually until the 0,0 mode of the carrier frequency resonates in the cavity and is visible on the monitor. Then I close the loop and turn on the integrator on the LB1005.

 

The laser locks to the cavity both when the error signal is sent into the A port and when it is sent into the –B port of the PDH servo. I determined that –B is the right sign by comparing the transmission through the cavity on the oscilloscope for both ways.

 

When using the A port, the transmission when it was locked swept from ~50 to ~200 mV (over ~10 second intervals) but had large high frequency fluctuations of around +/- 50 mV. Looking at the error signal on the oscilloscope as well, the RMS fluctuations of the error signal were at best ~40 mV peak to peak, which was at a gain of 2.9 on the LB1005.

 

Using the –B port yielded a transmission that swept from 50 to 250 mV but had smaller high frequency fluctuations of around +/- 20 mV. The error signal RMS was at best 10mV peak to peak, which was at a gain of 2.7. (Although over the course of 10 minutes the gain for which the error signal RMS was smallest would drift up or down by ~0.1).

 

 

The open loop error signal peak-to-peak voltage was 180 mV, which is more than an order of magnitude larger than the RMS error signal fluctuations when the loop is closed, indicating that it is staying in the range in which the response is linear.

openlooperror.jpg

 

In the above plot the transmission signal is offset by 0.1 V for clarity.

Below is the closed loop error signal. The inset plot shows the signal viewed over a 1.6 ms time period. You can see ~60 microsecond fluctuations in the signal (~17 kHz)

closedlooperror.jpg

The system remained locked for ~45 minutes, and may have stayed locked for much longer, but I stopped it by opening the loop and turning off the function generator. Below is a picture of the transmitted light showing up on a monitor, the electronics I'm using, and a semi-ridiculous mess of wires.

 

IMG_3034.JPG

 

I determined that it’s not dangerous to leave the system locked and leave for a while. The maximum voltage that the SR560 will output to the PZT is 10Vpp. This means that it will not drive the PZT at more than +/-5 V DC. At low modulation rates, the PZT can take a voltage on the order of 30 Vpp, according to the Lightwave Series 125-126 user’s manual, so the control signal will not push the PZT too hard such that it’s harmful to the laser.

 

 

  5217   Fri Aug 12 20:33:57 2011   Dmass   Summary   PSL   NPRO PDH-Locked to Ref Cav

To aid Jenny's valiant attempt to finish her SURF project, I did some things with the front end system over the last couple days, largely tricking Jamie into doing things for me lest I ruin the 40m RCG system. Several tribulations have been omitted.

We stole a channel in the frontend, in the proccess:

  1. Modified the C1GFD simulink model (now analog) to be "ADC -> TMP -> DAC" where TMP is a filter bank
    • C1GFD_TMP.adl (in /opt/rtcds/caltech/c1/medm/c1gfd) is the relevant part which connects the ADC to the DAC in the frontend
  2. Confirmed that the ADC was working by putting a signal in and seeing it in the frontend
  3. Could not get a signal out of the anti aliasing board
  4. Looked sad until Kiwamu found a breakout board for the SCSI cable coming from the DAC
  5. Used SR560 to buffer DAC output
    • drove a triangle wave with AWG into the TMP EXC channel (100 counts 1 Hz) and looked at it after the ~25 ft of BNC cable running between the DAC and the NPRO driver
    • wave looked funny (not like a triangle wave), maybe the DAC is not meant to push a signal so far, so added buffer
  6. Took the control signal going to the fast input of the NPRO driver (using the 500 Ohm SR560 output - see Jenny's diagram) and put it into the anti aliasing board of the ADC
  7. Added switchable integrator to filter bank with Foton
    • I couldn't get the names to display in the filter bank, so I looked sad again
    • Jamie and Koji both poked at the "no name displayed" problem but had no conclusions, so I decided to ignore it
    • I confirmed that when the two filters were toggled "on", the transfer function was as expected: simple integrator with a unity gain at ~10mHz - agrees with what Foton's Bode Plot tool says it should be (see attached DTT plot)
  8. I got Jamie to manually add the two epics channels from the TMP model to the appropriate .ini file so they would be recorded
    • C1:GFD-TMP_OUTPUT  (16 Hz)
    • C1:GFD-TMP_INMON    (16 Hz)
  9. RefCav heater servo seems to still be set up, so we can use existing channels:
    • C1:PSL-FSS_RCPID_SETPOINT (temp setpoint - will do +/-1C steps about 35 C)
    • C1:PSL-FSS_MINCOMEAS (In loop temp sensor - in C)
    • C1:PSL-FSS_RCTEMP (out of loop temp sensor - in C)
    • C1:PSL-FSS_TIDALSET (Voltage to heater - rails @ +/- 2V)
  10.  Closed loop on the control signal for the NPRO driver with an integrator, saw error signal go to zero
    • Turned up gain a little bit, saw some oscillations, then turned gain down to stop them, final gain = 2
  11. Left system on for Jenny to come in and do step responses
Attachment 1: TMP_INT_TF.pdf
TMP_INT_TF.pdf
  3587   Sun Sep 19 18:52:52 2010   rana   Configuration   PSL   NPRO SLOW servo settings updated for Innolight NPRO

Our new 2W Mephisto has a pretty zippy "SLOW" temperature input. Tuning the perl PID servo, I found that the best response came from setting the "P" and "D" terms to zero. This is because the internal temperature stabilization servo has a fairly high UGF. In the attached image you can see how the open-loop step response looks (the loop is open when the "KI" parameter is set to zero). The internal servo really has too little damping. There is a 30% overshoot when it gets a temperature step. For this kind of servo, Innolight would have done better to back off on the gain until they got back some phase margin.

New SLOW parameters:

timestep = 1.9 s

KP = 0

KI = 0.035

KD = 0
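
For reference, a minimal sketch of how these parameters would enter a discrete PID step (just an illustration, not the actual perl FSSSlowServo code):

def pid_step(error, state, KP=0.0, KI=0.035, KD=0.0, dt=1.9):
    """One servo update; state = (integral, previous_error)."""
    integral, prev_error = state
    integral += error * dt                    # accumulate the I term
    derivative = (error - prev_error) / dt    # D term (unused here since KD = 0)
    output = KP * error + KI * integral + KD * derivative
    return output, (integral, error)

With KP = KD = 0 the loop is a pure integrator, which is what works best given the high-UGF internal temperature servo described above.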

Attachment 1: Untitled.png
Untitled.png
  13749   Thu Apr 12 18:12:49 2018   gautam   Update   ALS   NPRO channels hijacked

Summary:

  1. Today, the measured IR ALS noise for the X arm was dramatically improved. The main change was that I improved the alignment of the PSL pickoff beam into its fiber coupler.
  2. The noise level was non-stationary, leading me to suspect power modulation of the RF beat amplitude.
  3. I am now measuring the stability of the power in the two polarizations coming from EX table to the PSL table, using the PSL diagnostic connector channels.
  4. The EX beam is S-polarized when it is coupled into the fiber. The PSL beam is P-polarized. However, it looks like I have coupled light along orthogonal axes into the fiber, such that when the EX light gets to the PSL table, most of it is in the P-polarization, as judged by my PER measurement setup (i.e. the alignment keys at the PSL table and at the EX table are orthogonal). So it still seems like there is something to be gained by trying to improve the PER a bit more.

Details:

Today, I decided to check the power coupled into the PSL fiber for the BeatMouth. Surprisingly, it was only 200uW, while I had ~3.15mW going into it in January. Presumably some alignment drifting happened. So I re-aligned the beam into the fiber using the steering mirror immediately before the fiber coupler. I managed to get ~2.9mW in without much effort, and I figured this is sufficient for a first pass, so I didn't try too much more. I then tried making an ALS beat spectrum measurement (arm locked to IMC length using POX, green following the arm using end PDH servo). Surprisingly, the noise performance was almost as good as the reference! See Attachment #1, in which the red curve is an IR beat (while all others are green beats). The Y arm green beat performance isn't stellar, but one problem at a time. Moreover, the kind of coherence structure between the arm error signal and the ALS beat signal that I reported here was totally absent today.

Upon further investigation, I found that the noise level was actually breathing quite significantly on timescales of minutes. While I was able to successfully keep the TEM00 mode of the PSL beam resonant inside the arm cavity by using the ALS beat frequency as an error signal and MC2 as a frequency actuator, the DC stability was very poor and TRX was wandering around by 50%. So my new hypothesis is that the excess ALS noise is because of one or more of

  • Beam jitter at coupling point into fiber.
  • Polarization drift of the IR beams.

While I did some work in trying to align the PSL IR pickoff into the fiber along the fast (P-pol) axis, I haven't done anything for the X end pickoff beam. So perhaps the fluctuations in the EX IR power are causing beatnote amplitude fluctuations. In the delay line + phase tracker frequency discriminator, I think RF beatnote amplitude fluctuations can couple into phase noise linearly. For such an apparently important noise source, I can't remember ever including it in any of the ALS noise budgets.

Before Ph237 today, I decided to use my polarization monitoring setup and check the "RIN" of power in the two polarizations coming out of the fiber on the PSL table. For this purpose, I decided to hijack the Acromag channels used for the PSL diagnostics connector. Attachment #2 shows that there are fluctuations at the level of ~10% in the p-polarization. This is the "desired" polarization in that I aligned the PSL beam into the fiber to maximize the power in this polarization. So assuming the power fluctuations in the PSL beam are negligible, this translates to sqrt(10) ~3% fluctuation in the RF beat amplitude. This is at best a conservative estimate, as in reality, there is probably more AM because of the non PM fibers inside the beatmouth.

All of this still doesn't explain the coherence between the measured ALS noise and the arm error signal at 100s of Hz (which presumably can only happen via frequency noise in the PSL).

Another "mystery" - yesterday, while I was working on recovering the Y arm green beat signal on the PSL table, I eventually got a beat signal that was ~20mVpp into 50ohms, which is approximately the same as what I measured when the Y arm ALS performance was "nominal", more than a year ago. But while viewing the Y arm beats (green and IR) simultaneously on an o'scope, I wasn't able to keep both signals synchronised while triggering on one (even though the IR beat frequency was half the green beat frequency). This means there is a huge amount of relative phase noise between the green and IR beats. What (if anything) does this mean? The differential noise between these two beats should be (i) phase noise at the fiber coupler/ inside the fiber itself, and (ii) scatter noise in the green light transmitted through the cavity. Is it "expected" that the relative phase noise between these two signals is so large that we can't view both of them on a common trigger signal on an o'scope? surpriseAlso - the green mode-matching into the Y arm is abysmal.

Anyways - I'm going to try and tweak the PER and mode-matching into the X end fiber a little and monitor the polarization stability (nothing too invasive for now; eventually, I want to install the new fiber couplers I acquired, but for now I'll only change alignment into and rotation of the fiber coupler on the EX table). It would also be interesting to compare my "optimized" PSL drift to the unoptimized EX power drift. So the PSL diagnostic channels will not show any actual PSL diagnostic information until I plug it back in. But I suspect that the EPICS record names and physical channel wiring are wrong anyways - I hooked up my two photodiode signals into what I would believe is the "Diode 1 Power" and "Laser crystal temperature" monitors (as per the schematic), but the signals actually show up for me in "Diode 2 Power" (p-pol) and "Diode 1 Temperature" (s-pol).

Annoyingly, there is no wiring diagram - on my todo list I guess...

@Steve - could you please take a photo of the EX table and update the wiki? I think the photo we have is a bit dated, the fiber coupler and transmon PDs aren't in it...

Attachment 1: IR_ALS_20180412.pdf
IR_ALS_20180412.pdf
Attachment 2: BeatMouthDrift.png
BeatMouthDrift.png
Attachment 3: ETMX_20180416.jpg
ETMX_20180416.jpg
  1651   Thu Jun 4 15:53:15 2009   steve   Update   PSL   NPRO cooling flowrate adjusted

The Neslab chiller is working well. Its temp display shows 20.0 C rock solid. The flow meter is rotating at 13.5 Hz at the output of the chiller.

The MOPA temp was measured with a hand-held thermocouple. The PA was 34 C, and 29 C at the NPRO heat sink.

The NPRO flow meter was not rotating at this time. There was just a trickling water flow through the meter.

I closed the needle valve at this point. It needed 8 turns clockwise. This drove the head temp to 19.9 C.

Then I opened the needle valve 9 turns and the flow meter wheel was rotating at ~ 1 Hz.

We gained a little power. Can you explain this?

 

Attachment 1: needlevalve.jpg
needlevalve.jpg
  1646   Wed Jun 3 03:30:52 2009   rana   Update   MOPA   NPRO current adjust
I increased the NPRO's current to the max allowed via EPICS before the chiller shutdown. Yesterday, I did this
again just to see the effect. It is minimal.

If we trust the LMON as a proportional readout of the NPRO power, the current increase from 2.3 to 2.47 A gave us
a power boost from 525 to 585 mW (a factor of 1.11). The corresponding change in MOPA output is 2.4 to 2.5 W
( a factor of 1.04).

Therefore, I conclude that the amplifier's pump has degraded so much that it is partially saturating on the NPRO
side. So the intensity noise from NPRO should also be suppressed by a similar factor.

We should plan to replace this old MOPA with a 2 W Innolight NPRO and give the NPRO from this MOPA back to the
bridge labs. We can probably get Eric G to buy half of our new NPRO as a trade in credit.
Attachment 1: Untitled.png
Untitled.png
  14687   Sun Jun 23 08:09:53 2019   gautam   Update   IOO   NPRO diagnostics

Summary:

Over the last few days, I've been doing some (complementary) measurements to what Aaron and Koji have been looking at. The motivation was to identify if the problems we are seeing are optical (i.e. imprinted on the PSL light) or electronic. My findings:

  1. 60 Hz line noise in PMC REFL and PMC TRANS is heavily dependent on whether I connect cables between the measuring PDs and Acromag ADC or not - but even with the Acromag cable disconnected, the 60 Hz RIN is HUGE - 10 mVpp out of 670 mV DC, and lines are much dirtier if you have connections to the SLOW ADCs. Measurement was made by looking at the time-domain signals on a battery powered Tektronix oscilloscope. See Attachment #1. I believe this line noise is higher than it was. Cause is unknown to me at this point.
  2. The NPRO noise eater seems to function as advertised. The measured RIN with the noise eater enabled (our nominal operating condition) is in line with what the manual tells us it should be. See Attachment #2.
  3. There isn't strong evidence of excess frequency noise (measured with PLL) out to 100 kHz. I didn't measure the high-frequency part yet, but maybe I'm doing something wrong with the PLL setup which should be first corrected. See Attachments #3, #4.
  4. The beat note frequency between the free-running PSL and EX NPRO's is definitely slewing more than the quadrature sum of the advertised 1 MHz/min slewing per the manual.

Evidence:

Attachment #1: Time domain look at PMC Refl and Trans signals under various operating conditions. During this work, I took the chance to remove ~4 BNC T connectors that were connected on the PMC TRANS photodiode (Thorlabs). Now, there is one cable going to the Acromag ADC, and one going to the Oscilloscope used to monitor these signals. Any further T-ing can be done at the oscilloscope.

Attachment #2: RIN measurement of the NPRO light. I opted to place a Thorlabs PDA55 in the IR ALS pickoff light path. This is before the light sees the PMC. A DC block was inserted between the PDA55 and the AG4395 used to make the measurement. DC level of the PD output was 3.1 V into high-Z and I used half this value to normalize the measurement made by the 50-ohm input AG4395 into RIN units. The measurement was made with the PZT and slow temperature controls to the NPRO connected/disconnected, but I saw no significant difference. 

Attachment #3: Frequency noise measurement via PLL. This shows the loop transfer function for the PLL. Some details of the setup:

  • The beat note for locking the PLL was made between the PSL NPRO and the EX NPRO (output of the IR ALS BeatMouth). ~4dBm beatnote.
  • Local oscillator was sourced by a Marconi, f_carrier=33 MHz, RF level = +10dBm.
  • Level 7 Mixer and LB1005 controller from the mode-spectroscopy PLL setup.
  • PLL control signal routed to EX NPRO PZT via Heliax cable running along south arm. 
  • Why EX and not PSL or Marconi FM? Latter has limited range, ~1/10th of that offered by NPRO PZT. PSL PZT has a 2.9 Hz corner freq Pomona box. I could disconnect this for the purpose of PLL locking, but I thought it may be interesting to see if there’s any hints of the problem being electrical, by looking at PLL spectra with / without Pomona box. The expected delay due to cabling is only 400 ns, so not really a limiting factor for the PLL bandwidth.
  • LB 1005 settings:
    • PI corner = 3 kHz.
    • G = 2.30 (I could not increase this further - with the PSL+Lightwave NPRO PLL, we could achieve a UGF of ~60 kHz, but in this setup, I can't do much better than ~7kHz before the loop starts oscillating, not sure if the fact that the PZT actuation coefficient for the Innolight is ~5x lower than for the Lightwave is enough to explain this?).
    • LFGL = 90 dB.
  • Mixer output had a maximum value of 800 mVpp => PLL discriminant is 400 mV/rad.
  • The "eye fit" is just the transfer function of two poles at DC (one for frequency to phase conversion in the PLL and one for the LB1005 integrator), and a zero at 3kHz (PI corner). I scaled the gain till the "fit" and measurement lined up, and then used this model to undo the loop suppression of the error signal to extract the frequency noise without worrying about the frequency vector of the measurement being limited.
  • Once again, slow temperature control and PZT controls to the PSL NPRO were disconnected so this measurement was made with two free-running NPROs.

Attachment #4: Frequency noise measurement via PLL. This shows the frequency noise. I've overlaid the expected frequency noise between 2 free-running NPROs, model used is in the text box in the plot. There isn't strong evidence of excess high frequency noise in this measurement. The fact that the "LB 1005 input terminated" trace is below all the others supports the hypothesis that I'm measuring real frequency noise. The bump around a few kHz could indicate some gain peaking?

However, I'm unable to find good agreement between the measured frequency noise using the error point and the control point. For the former, I used the PLL discriminant mentioned above of 400 mV/rad, and undid the loop suppression, and for the latter I used a PZT discriminant of 1.7 MHz/V. However, there is still a constant scale difference between these two traces. So I'm doing something wrong?
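
To make the calibration explicit, here is a sketch of what I described above (the loop model and numbers are the ones quoted in this entry; the error-point spectrum is a placeholder, and this is not a vetted analysis script):

import numpy as np

D     = 0.4                      # PLL discriminant [V/rad] (800 mVpp mixer output / 2)
f     = np.logspace(1, 5, 500)   # Fourier frequency [Hz]
f_ugf = 7e3                      # approximate UGF [Hz]
f_z   = 3e3                      # PI corner (zero) [Hz]

# "eye fit" open-loop magnitude: two poles at DC, zero at f_z, unity gain at f_ugf
G = (f_ugf / f)**2 * np.sqrt(1 + (f / f_z)**2) / np.sqrt(1 + (f_ugf / f_z)**2)

V_err     = 1e-6 * np.ones_like(f)   # placeholder error-point ASD [V/rtHz]
phase_asd = (V_err / D) * (1 + G)    # undo the loop suppression -> rad/rtHz
freq_asd  = phase_asd * f            # phase -> frequency noise [Hz/rtHz]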

Next steps:

  1. More interpretation of the PLL measurement results required.
  2. Measure the PLL error signal spectrum to higher frequencies using the AG4395. 
  3. ???

I've not disturbed the PLL setup in case anyone else wants to repeat these measurements, but I have restored the normal electrical connections to the PSL PZT and temperature control.

Some other activity:

  1. Alignment into the PMC was tweaked.
  2. NPRO laser pump current was increased from 1.9 A to 2.0 A.
  3. PMC servo gain was changed from +18 to +17 to prevent the servo from oscillating.
Attachment 1: consolidatedOscopeScreenCaps.pdf
consolidatedOscopeScreenCaps.pdf
Attachment 2: RINcomp.pdf
RINcomp.pdf
Attachment 3: PLL_OLTF.pdf
PLL_OLTF.pdf
Attachment 4: PLLnoise.pdf
PLLnoise.pdf
  13308   Mon Sep 11 15:58:02 2017   Steve   Update   General   NPRO for repair

This NPRO has a tripping power output.

 

" Hi Eric,

I checked with the Engineer as Vincent is travelling.

“The lasers have serial number below 2000 which we cannot repair them, we only can repair NPRO laser has serial number 2000 or later.”

Thanks,

Betty-Ann Watt

Customer Service Professional
Global Customer Service/Communication & Commercial Optical Products "

www.lumentum.com

 

 

Attachment 1: NPRO_tripping.jpg
NPRO_tripping.jpg
  3709   Wed Oct 13 21:08:40 2010   kiwamu   Update   PSL   NPRO is still alive

The NPRO at the PSL table can still generate a 2W laser beam! He is still alive.

When I reduced the temperature to 25 deg, the output power successfully increased to 2W.

As Steve wrote in his last entry (see here), the NPRO output was currently at 1.6 W, whereas it is supposed to be 2W.

We were suspicious about the laser crystal's temperature because the current temperature looked a bit high.

In fact, the setpoint of the temperature was 45.9 deg instead of the previous setpoint of 25 deg.

 

  2211   Mon Nov 9 13:17:07 2009   Alberto   Configuration   ABSL   NPRO on

I turned the auxiliary NPRO for the AbsL Experiment on. Its shutter stays closed.

  621   Wed Jul 2 06:46:05 2008   Alberto   Configuration   General   NPRO on to warm up
This morning I turned on the NPRO on the AP table so that it can warm up for a few hours before I start using it today.
The flipping mirror is down so no beam is injected into the IFO.


Alberto
  8365   Thu Mar 28 07:58:15 2013   Steve   Update   Green Locking   NPRO repair

Quote:

 

 JDSU can repair the Lightwave M126-1064-700 NPRO, sn 415. They do not need the controller, sn 516.

 Posted in the 40m Wiki (PSL / NPRO): repair cost and/or option to buy an Innolight laser as replacement.

 NPRO shipped out for evaluation yesterday under RMA 18022707

  1632   Sat May 30 11:24:56 2009   rob   Configuration   PSL   NPRO slow scan

I'm setting SLOWDC to about -5.

I had to edit FSSSlowServo because it had hard limits on SLOWDC at (-5 and 5).  It now goes from -10 to 10.

 

 

Attachment 1: slowSCAN.png
slowSCAN.png
  16142   Sat May 15 12:39:54 2021   gautam   Update   PSL   NPRO tripped/switched off

It looks like the NPRO has been off since ~1 AM this morning. Is this intentional? Can I turn it back on (or at least try to)? The interlock signal we are recording doesn't report getting tripped, but I think this has been the case in the past too.


After getting the go ahead from Koji, I turned the NPRO back on, following the usual procedure of diode current ramping. PMC and IMC locked. Let's see if this was a one-off or something chronic.

Attachment 1: NPRO.png
NPRO.png
  744   Sun Jul 27 20:49:21 2008   rana   Configuration   Computers   NTP
After Aidan did whatever he did on op440m, I had to restart ntpd. I noticed it didn't actually do
anything so I restarted it by hand with the '-l' option to make a logfile. Essentially, the
problem is that NTPD is not allowed access to the outside world's NTP servers by our NAT router;
this should be fixed.

So for now I set all of the .conf files to point to rana and nodus' IP addresses. According to the
log files, that is successful. Rosalba and Mafalda, however, seem to have correct time but are
looking at rhel.ntp.pool.org and time.nist.gov, respectively. Maybe these have special rules?

For reference, the linux machines' conf files are /etc/ntp.conf
and the solaris machines' conf files are /etc/inet/ntp.conf

I also logged into dcuepics (aka scipe25) and did as instructed.
  9664   Mon Feb 24 16:26:14 2014   Jenne   Update   CDS   NTP fell out of sync on front end machines - fixed

[Koji, Jenne]

Koji noticed that the time on the front-end detail screens was not correct, and that the GPS time was not matching up between different models.  Koji ran the following on all front-end machines, and on nodus:

sudo ntpdate -b -s -u pool.ntp.org

Now, everything is fine, and every status light on the cds overview screen is green.

  9761   Fri Mar 28 23:28:13 2014   Koji   Configuration   General   NTP setting on nodus

[Koji Rana]

We are looking at NTP settings. I looked at nodus.

- xntpd is running

- We decided to start over the configuration file /etc/inet/ntp.conf

    - The old configuration was moved to ntp.conf.bak

    - The server configuration file was copied from ntp.server to ntp.conf

    - Caltech NTP servers 131.215.239.14 and 131.215.220.22 were selected as the servers we are referring to

    - Commented out the lines for "fudge" and "broadcast"

- xntpd was restarted

    - sudo /etc/init.d/xntpd stop
    - sudo /etc/init.d/xntpd start

- check how the daemon is running
      tail -50 /var/adm/messages

   Mar 28 23:00:49 nodus xntpd[27800]: [ID 702911 daemon.notice] xntpd 3-5.93e Mon Sep 20 15:47:11 PDT 1999 (1)
   Mar 28 23:00:49 nodus xntpd[27800]: [ID 301315 daemon.notice] tickadj = 5, tick = 10000, tvu_maxslew = 495, est. hz = 100
   Mar 28 23:00:49 nodus xntpd[27800]: [ID 798731 daemon.notice] using kernel phase-lock loop 0041
   Mar 28 23:00:49 nodus last message repeated 1 time
   Mar 28 23:00:49 nodus xntpd[27800]: [ID 132455 daemon.error] trusted key 0 unlikely
   Mar 28 23:00:49 nodus xntpd[27800]: [ID 581490 daemon.error] 0 makes a poor request keyid

- check the syncing staus by ntpq -p

        remote           refid      st t when poll reach   delay   offset    disp
   ==============================================================================
   *ntp-02.caltech. .CDMA.           1 u   37   64  377     0.56    3.010    0.08
   +ntp-03.caltech. ntp1.symmetrico  2 u   36   64  377     0.52    2.727    0.12

      This * means nodus is synced to ntp-02. "+" means it is standing by as a valid secondary server. "when" increases every second.
      When "when" reaches "poll", the clock is synced to the server. These marks are not set at the beginning.
      It was necessary to wait for some time to get synced to the server.

- Once nodus became a synced server, the realtime systems starts syncing to nodus automatically.

   controls@c1sus ~ 0$ cat /var/log/ntpd
   25 Mar 01:41:00 ntpd[4443]: logging to file /var/log/ntpd
   (...)
   28 Mar 23:13:46 ntpd[4983]: synchronized to 192.168.113.200, stratum 2
   28 Mar 23:14:25 ntpd[4983]: time reset +39.298455 s
   28 Mar 23:14:25 ntpd[4983]: kernel time sync status change 2001
   28 Mar 23:25:19 ntpd[4983]: synchronized to 192.168.113.200, stratum 2
   controls@c1sus ~ 0$ ntpq -p
        remote           refid      st t when poll reach   delay   offset  jitter
   ==============================================================================
   *nodus.martian   131.215.239.14   2 u   42   64  377    0.140   42.222  11.373

  9762   Sat Mar 29 00:11:39 2014   Koji   Configuration   General   NTP setting on nodus

FB: /etc/ntp.conf was updated as below such that it refers to nodus, and also to the Caltech server when nodus is not available

server 192.168.113.200
server 131.215.239.14

ntpd was restarted and

diskless RT machines: they are booted from /diskless/root on fb.
Therefore /diskless/root/etc/ntp.conf was updated in the same way as above.
When the machines are rebooted, this setting will be active.

control machines:

now they are referring to nodus and other external servers too.

  2005   Fri Sep 25 19:56:08 2009   rana   Configuration   Computers   NTPD restarted on c1dcuepics (to fix the MEDM screen times)
restarted ntp on op440m using this syntax
>su
>/etc/init.d/xntpd start -c /etc/inet/ntp.conf

getting the time on scipe25 (for the MEDM screen time) working was tougher. The /etc/ntp.conf file was pointing
to the wrong server. Our NAT / Firewall settings require some of our internal machines to go through the gateway
to get NTPD to work. Curiously, some of the linux workstations don't have this issue.
The internal network machines should all have the same file as scipe25's /etc/ntp.conf:

server nodus

and here's how to check that it's working:

[root@c1dcuepics sbin]# ./ntpq -p
     remote           refid      st t when poll reach   delay   offset  jitter
==============================================================================
 nodus.ligo.calt 0.0.0.0         16 u    -   64    0    0.000    0.000 4000.00
*nodus.ligo.calt usno.pa-x.dec.c  2 u   29   64  377    1.688  -65.616   6.647
-lime7.adamantsy clock.trit.net   3 u   32   64  377   37.448  -72.104   4.641
-montpelier.ilan .USNO.           1 u   19   64  377   18.122  -74.984   8.305
+spamd-0.gac.edu nss.nts.umn.edu  3 u   28   64  377   72.086  -66.787   0.540
-mighty.poclabs. time.nist.gov    2 u   30   64  377   71.202  -61.127   4.067
+monitor.xenscal clock.sjc.he.ne  2 u   16   64  377   11.855  -67.105   6.368
  4162   Mon Jan 17 04:10:10 2011   rana   Configuration   Computers   NTPD restarted on c1dcuepics (to fix the MEDM screen times)

I installed NTPD on Megatron (it's Ubuntu, so different from the CentOS workstations). Here's the terminal cap to show that it's actually working:

megatron:/etc>sudo /etc/init.d/ntp restart
 * Stopping NTP server ntpd                                              [ OK ]
 * Starting NTP server ntpd                                              [ OK ]
megatron:/etc>/etc/init.d/ntp status
 * NTP server is running
megatron:/etc>ntpq -p
     remote           refid      st t when poll reach   delay   offset  jitter
==============================================================================
 nodus.martian   204.123.2.72     2 u    7   64    1    0.217    3.833   0.001
 europium.canoni 193.79.237.14    2 u    6   64    1  155.354    3.241   0.001
megatron:/etc>date
Mon Jan 17 04:07:08 PST 2011

Along the way, I also updated the /etc/inet/ntp.conf file for nodus. It was using the USNO as an NTP server and I've pointed it to the Caltech NTP server as per the IMSS NTP page.

  4283   Mon Feb 14 01:40:14 2011   Koji   Omnistructure   CDS   Name of the green related channels

I propose to use C1:ALS-xxx_xxx for the names of the green related channels, instead of GCV, GCX, GCY, GFD...

Like C1:SUS or C1:LSC, we name the channels by the subsystems first, then probably we can specify the place.

We can keep the names of the processes as they are now.

  965   Thu Sep 18 14:36:54 2008   josephb   Configuration   Computers   Name server and Epics
The problems Rob was experiencing last night were due to part of the setup (or rather testing of the setup) of the new nameserver running on linux1.

The name server was setup on linux1 by doing the following:

1) Installed xorg-x11-xauth via yum, which was necessary to get remote X windows to work on linux1

2) Installed xorg-x11-fonts-Type1 in order to get the gui system-config-* programs to work

3) Ran system-config-bind, which created a default set of nameserver files. I unfortunately didn't understand the gui all that well, so I manually edited and added files to these base ones. The base files were generated in /var/named/chroot/etc/ and /var/named/chroot/var/named.

4) I added martian.zone and 113.215.131.in-addr.arpa.zone, named.conf.local, and edited named.conf so it loaded named.conf.local. The martian.zone file acts as a forward lookup (i.e. give it a name and it returns an IP number like 131.215.113.20). The 113.215.131.in-addr.arpa.zone acts as a reverse lookup (i.e. give it an IP number like 131.215.113.20 and it tells you the name). The file named.conf.local merely points to these two files.

Note: One can add or change IP lookup by simply updating these two files. The format should be obvious from the files.

5) I specifically ssh'd in as root to linux1 (using su wasn't sufficient) and then typed "service named start" (without quotes). You can also use "restart" or "stop" instead of "start". This started the name server, giving an [Ok] message.

6) I edited the /etc/resolv.conf file on linux1 so that it pointed to itself first ("nameserver 127.0.0.1" at the top of the file). I also added the line "search martian", which allows one to simply use linux1 as opposed to linux1.martian.

I also edited the /etc/resolv.conf file on linux2, and it seems to resolve names fine.

7) And here is where I broke things. As a test, I moved /etc/hosts to /etc/hosts.bak, and then tested to see if names were being resolved correctly. By using the command host, I determined they were in fact working. I also tested with ssh.

However, something basic didn't like me moving the hosts file. Apparently, when a front-end machine needed to reboot, it wouldn't come back up, with no ability to SSH or telnet into it.

Yoichi and I did quite a bit of debugging this morning and determined that the nameserver itself isn't conflicting; merely the lack of the hosts file was the source of the problem. One theory is that services don't know to go to DNS to resolve host names. I think that by modifying the /etc/nsswitch.conf file to include dns as an option for services and other programs, it might work without the hosts file; however, I'm going to leave that to tomorrow morning, which is less likely to interfere with current operations.

As it stands, things are working with the nameserver running and the host file in place.
  204   Wed Dec 19 20:28:27 2007   Andrey   DAQ   PEM   Names for all 6 accelerometers have been changed

I eventually changed the names for all 6 accelerometers (see my ELOG entry # 172 from Dec. 05 about my intent to do that).

I removed the word "BS" from their names,
and I replaced the word combination "ACC_BS_EAST" in the old names with "ACC_ITMX" in the new names;
likewise, "ACC_BS_WEST" is now replaced by "ACC_ETMX".
(the reasoning behind such a change should become clear from my ELOG entry #172).

New accelerometer names are:
(note: there are no spaces anywhere in the actual channel names; spaces were only added in the original ELOG entry because ELOG renders ":P" written without a space as an emoticon)

C1:PEM-ACC_ETMX_X
C1:PEM-ACC_ETMX_Y
C1:PEM-ACC_ETMX_Z
C1:PEM-ACC_ITMX_X
C1:PEM-ACC_ITMX_Y
C1:PEM-ACC_ITMX_Z

One can find them under "C1:PEM-ACC" in Dataviewer.
