ID | Date | Author | Type | Category | Subject
  3702   Tue Oct 12 23:45:55 2010 | rana | Configuration | DAQ | NDS2

I installed the NDS2 Client onto the workstations today using the instructions that Zach put onto the Wiki with a couple of modifications.

1) Instead of adding the path stuff within Matlab, I added the LD_LIBRARY_PATH and MATLABPATH variables to the .cshrc, as instructed by JZ's NDS2 Wiki.

2) I installed the stuff into the shared /cvs/cds/caltech/apps/linux64/ partition so that it works now on all the 64-bit CentOS 5.5 workstations.

To run it you do:

> kinit albert.einstein

> matlab -nodesktop -nosplash

> help NDS2_GetData

(set the server to the NDS2 server that you like - the example in the help is fine)

> result = NDS2_GetData({'L1:LSC-DARM_ERR'}, 957313530, 10, server);

> plot(result.data)

Now you can get any of the S6 data super fast.

(** Remember to run kdestroy as soon as you are finished so that no one else in the control room can use your personal credentials. **)
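For reference, the same fetch can be sketched with the python nds2 client (this is just a sketch: it assumes the python nds2 bindings are installed and that you have a valid kerberos ticket; the channel, GPS time, and server are taken from the Matlab example above):

# Sketch only: python nds2 equivalent of the Matlab example above.
import nds2

conn = nds2.connection('nds.ligo.caltech.edu', 31200)               # or whatever NDS2 server you like
bufs = conn.fetch(957313530, 957313530 + 10, ['L1:LSC-DARM_ERR'])   # 10 seconds of data
print(bufs[0].channel, len(bufs[0].data))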

Attachment 1: cerberus.jpg
  6381   Wed Mar 7 21:13:30 2012 | rana | Update | DAQ | NDS2

 I noticed that NDS2 was not running on mafalda as it should be. Instead, there were a couple of zombie MEDMs using up 99% of the CPU. I killed the zombies and have run the 'build channel list' script. When it finished, I tried to restart the nds server, but got the following error in the log file. Email has been dispatched to JZ.

mafalda:logs>less nds2-mafalda-201203072111.log

Configuring from file: nds2.conf
Allow list: ALL
terminate called after throwing an instance of 'std::runtime_error'
  what():  Insufficient arguments
  8750   Tue Jun 25 23:57:30 2013 | rana | Update | SUS | NDS2 Status

I've restarted the NDS2 process on Megatron so that we can use it for getting past data and eventually from outside the 40m.

1) from /home/controls/nds2 (which is not a good place for programs to run) I ran nds2-megatron/start-nds2

2) this is just a script that runs the binary from /usr/bin/ and then leaves a log file in ~/nds2/log/

3) I tested with DTT that I could access megatron:31200 and get data that way.

There is a script in /usr/bin called nds2_nightly which seems to be the thing we should run by cron to get the channel list updated, but I'm not sure. Let's see if we can get an ELOG entry about how this works.

Then we want Jamie to allow some kind of tunneling so that the 40m data can be accessed from outside, etc.

  8757   Wed Jun 26 15:11:08 2013 | John Zweizig | Update | SUS | NDS2 Status

Quote:

I've restarted the NDS2 process on Megatron so that we can use it for getting past data and eventually from outside the 40m.

1) from /home/controls/nds2 (which is not a good place for programs to run) I ran nds2-megatron/start-nds2

2) this is just a script that runs the binary from /usr/bin/ and then leaves a log file in ~/nds2/log/

3) I tested with DTT that I could access megatron:31200 and get data that way.

There is a script in /usr/bin called nds2_nightly which seems to be the thing we should run by cron to get the channel list updated, but I'm not sure. Let's see if we can get an ELOG entry about how this works.

Then we want Jamie to allow some kind of tunneling so that the 40m data can be accessed from outside, etc.

 I have done the following:

  * installed the nds2-client in /ligo/apps/nds2-client

  * moved the nds2 configuration directories to /ligo/apps/nds2/nds2-megatron

  * set up a cron job to update the channel list every morning at 5 am. The cron line is:

     15 5 * * * /usr/bin/nds2_nightly /ligo/apps/nds2/channel-tracker /ligo/apps/nds2/nds2-megatron

    cron will send an email each time the channel list changes, at which point you will have to restart the server with:

     cd /ligo/apps/nds2/nds2-megatron
     pkill nds2
     ./start-nds2

  * restarted nds2 with updated channel lists.

  8787   Fri Jun 28 17:33:33 2013 | John Zweizig | Update | SUS | NDS2 Status

Quote:

Quote:

I've restarted the NDS2 process on Megatron so that we can use it for getting past data and eventually from outside the 40m.

1) from /home/controls/nds2 (which is not a good place for programs to run) I ran nds2-megatron/start-nds2

2) this is just a script that runs the binary from /usr/bin/ and then leaves a log file in ~/nds2/log/

3) I tested with DTT that I could access megatron:31200 and get data that way.

There is a script in /usr/bin called nds2_nightly which seems to be the thing we should run by cron to get the channel list updated, but I'm not sure. Let's see if we can get an ELOG entry about how this works.

Then we want Jamie to allow some kind of tunneling so that the 40m data can be accessed from outside, etc.

 I have done the following:

  * installed the nds2-client in /ligo/apps/nds2-client

  * moved the nds2 configuration directories to /ligo/apps/nds2/nds2-megatron

  * set up a cron job to update the channel list every morning at 5 am. The cron line is:

     15 5 * * * /usr/bin/nds2_nightly /ligo/apps/nds2/channel-tracker /ligo/apps/nds2/nds2-megatron

    cron will send an email each time the channel list changes, at which point you will have to restart the server with:

     cd /ligo/apps/nds2/nds2-megatron
     pkill nds2
     ./start-nds2

  * restarted nds2 with updated channel lists.

 I have set the cron job up to restart the nds2 server automatically if the channel list changes. The only change is that the cron command was changed to /ligo/apps/nds2/nds2-megatron/test-restart.
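(The real test-restart lives in /ligo/apps/nds2/nds2-megatron and is not reproduced here; the python sketch below just illustrates the restart-on-change logic it presumably implements. The file names are made up.)

# Hypothetical sketch of the restart-on-change logic (the actual test-restart script differs).
import filecmp, shutil, subprocess

NDS_DIR = '/ligo/apps/nds2/nds2-megatron'
NEW     = NDS_DIR + '/channel_list.new'        # made-up file names
CUR     = NDS_DIR + '/channel_list.current'

if not filecmp.cmp(NEW, CUR, shallow=False):   # channel list changed
    shutil.copy(NEW, CUR)
    subprocess.call(['pkill', 'nds2'])         # same restart steps as listed above
    subprocess.call(['./start-nds2'], cwd=NDS_DIR)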

 

  8861   Tue Jul 16 19:16:12 2013 | rana | Update | DAQ | NDS2 Status

I have modified the settings on the router that connects our Martian network to the outside world so that one can access the NDS2 server running on megatron:31200.

To get at the data, you point your data-getting client (Matlab, ligoDV, DTT, etc.) at our router and the megatron port will be forwarded to you:

131.215.115.189:31200

is what you should point to. Now, it should be possible to run DetChar jobs (e.g. our 40m Summary pages) from the outside on some remote server. You can also grab 40m data on your laptop directly by using matlab or python NDS software.
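For example, from a laptop with the python nds2 client installed, something like the following should work (the channel name and GPS times here are placeholders, not a tested example):

import nds2

conn = nds2.connection('131.215.115.189', 31200)                    # the forwarded megatron port
bufs = conn.fetch(1057000000, 1057000010, ['C1:LSC-TRX_OUT_DQ'])    # example channel only
print(bufs[0].data.mean())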

  5780   Tue Nov 1 23:13:28 2011 | Zach | Update | Computers | NDS2 channel files

I did some messing around with the NDS2 config and channel files and things seem to be working as expected... for now. SENSOR channel data can be acquired for all sensors on all hanging optics.

What I did:

  • NDS2 gets its channel lists from .../users/jzweizig/nds2-mafalda/nds2.conf, which is called in the start-nds2 script. Within this, there are channel.file lines that specify which channels are available for raw and trend data. The four files that were listed were:
    • C-R-raw-channel_list.txt
    • C-M-ChanList.txt
    • C-T-ChanLIst.txt
    • C-R-online-channel_list.txt (this one was listed after a hashed line, which was suspicious---see below)
  • I noticed that a grep for SENSOR only returned lines for non-MC mirrors in both "R" files
  • I also noticed that calling NDS2_GetChannels('mafalda.martian:31200') did not return any non-suffixed (i.e., raw) channel names for MCx_SENSOR channels, while non-MC SENSOR channels each had two non-suffixed listings. I thought this was strange.
  • I manually added the line "C1:SUS-MC1_SENSOR_UL 0 real_4 2048 C-R" to one of the "R" channel files, then restarted the NDS2 server, and that channel was still not served. I figured that the second "R" channel file might have been left in the config file as a mistake, so I commented it out, restarted the NDS2 server, and was able to get MC1_SENSOR_UL data. I have left the comment-out there, with a signed EDIT.
  • Wary of (and too lazy to) manually add lines for all 5 sensors for each MC mirror, I decided to try generating a channel file using the most recent .gwf file in the frames, as indicated in Joe and John's elog post. To do this, while in .../nds2-mafalda/, I ran:
    • /cvs/cds/caltech/users/jzweizig/nds2-server/bin/buildChannelList /frames/full/10042/C-R-1004246528-16.gwf > C-R-raw-channel_list.txt
  • A grep for SENSOR in the new C-R-raw-channel_list.txt now returned lines for all MC mirror sensors... BUT NOT FOR ETMY(?!). I tried some slightly older .gwf files (all from today), but the ETMY channels never showed up. I had no choice but to enter them manually. Another odd thing is that the channel file generated this way seems to be fairly jumbled up, in the sense that there is no clear top-down order (e.g. SUS-BS_blah then SUS-ETMX_blah). Instead, some SUS-BS channels are here, some are after SUS-ETMX or SUS-PRM channels, etc. Look at the file to know what I mean.
  • The original raw channel file is there as C-R-raw-channel_list.txt.bak.1004247481.

In any case, as I said, everything appears to be working now, but as soon as we try to generate a new channel file using the prescribed means, there will inevitably be channels omitted. Someone who knows more than me should get to the bottom of this and wiki a strict, detailed procedure for how this is to be done.
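As a stopgap, a quick sanity check of a freshly generated channel file could look something like this (a hypothetical helper, not part of the NDS2 tools; the optic and sensor names are the usual 40m suspension ones):

# Hypothetical check: make sure every suspension has all 5 SENSOR channels in the new list.
optics  = ['MC1', 'MC2', 'MC3', 'BS', 'PRM', 'SRM', 'ITMX', 'ITMY', 'ETMX', 'ETMY']
sensors = ['UL', 'UR', 'LL', 'LR', 'SD']

listed = open('C-R-raw-channel_list.txt').read()
for opt in optics:
    missing = [s for s in sensors if 'SUS-%s_SENSOR_%s' % (opt, s) not in listed]
    if missing:
        print('%s is missing: %s' % (opt, ', '.join(missing)))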

  6912   Wed Jul 4 18:25:44 2012 | Zach | Update | Computers | NDS2 client now working on Ubuntu machines

After plenty of work, NDS2 can now be used to get site data within MATLAB using the following machines:

  • allegra
  • megatron
  • ottavia
  • pianosa
  • rosalba
  • rossa

What I did

NDS2 was not working on any of the machines, so the first thing I did was simply to install the newest version. I downloaded the latest tarball (0.9.1) from the LDAS Wiki, unzipped and installed it

/users/zach $ tar -xvf nds2-client-0.9.1.tar

/users/zach $ cd nds2-client-0.9.1

/users/zach $ sudo ./configure --prefix=/cvs/cds/caltech/apps/linux64 --with-matlab=/cvs/cds/caltech/apps/linux64/matlab/bin/matlab

/users/zach $ sudo make

/users/zach $ sudo make install

 

Even with the new version, it still didn't work.

Solution: The main problem was that the cyrus-sasl-gssapi authentication protocol was not installed on these machines, so that even with a kerberos ticket the datalink could not be established. Using information from the LDAS Wiki, I used aptitude to install it as:

$ sudo aptitude install lscsoft-auth

This group installs both the SASL protocol and the package python-kerberos

 

I also needed to update the kerberos config file for each machine, which is located at /etc/krb5.conf. I found that ottavia had a nice one with many realms, so I copied that one over to the other machines. In any case where there was an old config file overwritten, it is now /etc/krb5.conf.old.

Finally, the matlab path for NDS2 was still set to the old 2010a directory (/cvs/cds/caltech/apps/linux64/lib/matlab2010a) that was created by the NDS2 install when Rana originally did it. The new install I made above created the appropriate 2010b mexa64 files, so I changed the matlab path within matlab to this one:

>> rmpath /cvs/cds/caltech/apps/linux64/lib/matlab2010a

>> addpath /cvs/cds/caltech/apps/linux64/lib/matlab2010b

>> savepath

 

Now everything works fine on all these machines. As in Rana's original post, you get data in the following way:

$ kinit albert.einstein %then enter password

$ matlab -nosplash -nodesktop

>> d = NDS2_GetData({'H1:LSC-NPTRX_OUT16.mean'},963968415,6000,'nds.ligo.caltech.edu:31200')

 

d = 


            name: 'H1:LSC-NPTRX_OUT16.mean' 

            chan_type: 'm-trend'             

            rate: 0.0167       

            data_type: 'real_8'     

            signal_gain: 1   

            signal_offset: 0     

            signal_slope: 1     

            signal_units: ''   

            start_gps_sec: 963968415     

            duration_sec: 6000             

            data: [100x1 double]           

            exists: 1

 

>> quit % since you've seen that the data is really here

$ kdestroy % so that no one uses your credentials

 

Some thoughts

  • I would like to extend this to the 32-bit machines, but I have to figure out the best way to install the proper NDS2 client without interfering with the 64-bit version. I think it is just a matter of specifying the matlabroot in the .../linux/ instead of .../linux64/
  • It would be nice to find a way that the nice tool gps('MM/DD/YYYY XX:XX:XX UTC'), which calls the ligotool executable tconvert, can be automatically usable when calling NDS2 functions. Right now, there seems to be an issue preventing that: even though tconvert can be run in the terminal, gps() returns an error and even directly running unix('tconvert now') or !tconvert returns the same error. I have emailed Peter Shawhan to see if he has any advice.

 

 

  6919   Thu Jul 5 12:06:35 2012 | Jamie | Update | Computers | NDS2 client now working on Ubuntu machines

What I did

NDS2 was not working on any of the machines, so the first thing I did was simply to install the newest version. I downloaded the latest tarball (0.9.1) from the LDAS Wiki, unzipped and installed it

/users/zach $ tar -xvf nds2-client-0.9.1.tar

/users/zach $ cd nds2-client-0.9.1

/users/zach $ sudo ./configure --prefix=/cvs/cds/caltech/apps/linux64 --with-matlab=/cvs/cds/caltech/apps/linux64/matlab/bin/matlab

/users/zach $ sudo make

/users/zach $ sudo make install

No no, this is totally unnecessary.  NDS2 was already installed on every machine from the official packaged releases (apt-get install nds2-client), and it's known to work fine. We use it with pynds all the time. If the matlab component is not working we should figure out the right way to fix it with the existing packages.

In general, please only manually install software as a very last resort.  Manually installed software doesn't get maintained, whereas the officially packaged stuff is being actively maintained by the collaboration. If there is a problem with the distributed packaging we should report it and get it fixed (and hint: I was the one who built the original Debian packaging for nds2, so I know how to fix all the issues).  I'm trying to bring the 40m out of the dark days of complete chaos, where random software was installed in random locations.

Even with the new version, it still didn't work. 

That's because this wasn't the problem!

Solution: The main problem was that the cyrus-sasl-gssapi authentication protocol was not installed on these machines, so that even with a kerberos ticket the datalink could not be established. Using information from the LDAS Wiki, I used aptitude to install it as:

$ sudo aptitude install lscsoft-auth

This group installs both the SASL protocol and the package python-kerberos

I also needed to update the kerberos config file for each machine, which is located at /etc/krb5.conf. I found that ottavia had a nice one with many realms, so I copied that one over to the other machines. In any case where there was an old config file overwritten, it is now /etc/krb5.conf.old.

Finally, the matlab path for NDS2 was still set to the old 2010a directory (/cvs/cds/caltech/apps/linux64/lib/matlab2010a) that was created by the NDS2 install when Rana originally did it. The new install I made above created the appropriate 2010b mexa64 files, so I changed the matlab path within matlab to this one:

>> rmpath /cvs/cds/caltech/apps/linux64/lib/matlab2010a

>> addpath /cvs/cds/caltech/apps/linux64/lib/matlab2010b

>> savepath

This sounds like it's more likely the issue. You did the right thing by going to apt to fix the authentication packages.  It's curious to me that you did that here, whereas you went totally out of band for the nds2 client stuff.  Why?

The matlab mex files are the other problem.  But there is also a nds2-client-matlab Debian/Ubuntu package for that as well.  The problem is that the package just distributes the source, and it needs to be compiled.  I'll help figure out a good way to do that.

  • I would like to extend this to the 32-bit machines, but I have to figure out the best way to install the proper NDS2 client without interfering with the 64-bit version. I think it is just a matter of specifying the matlabroot in the .../linux/ instead of .../linux64/

Again, this is handled by the packaging!  Just use apt and the right architecture is installed automatically.

But what 32 bit machines are you referring to?  I think basically everything is 64 bit nowadays.

  • It would be nice to find a way that the nice tool gps('MM/DD/YYYY XX:XX:XX UTC'), which calls the ligotool executable tconvert, can be automatically usable when calling NDS2 functions. Right now, there seems to be an issue preventing that: even though tconvert can be run in the terminal, gps() returns an error and even directly running unix('tconvert now') or !tconvert returns the same error. I have emailed Peter Shawhan to see if he has any advice. 

We are now using lalapps_tconvert for tconvert.  We're not using that ligotools crap anymore.  I've aliased that to tconvert on the command line, but maybe matlab isn't getting the message.  I'll try to think of a more robust solution (e.g. make a wrapper script).

  6921   Thu Jul 5 13:12:12 2012 | Zach | Update | Computers | NDS2 client now working on Ubuntu machines

From my conversations with JZ and Leo, it seemed there was no package that generated the appropriate mex files. It was clear that the right ones weren't there from the absence of a /cvs/cds/caltech/apps/linux64/lib/matlab2010b directory. I'm sorry if I screwed anything up with pynds, but I have repeatedly asked for help with NDS2+matlab and no one has done anything.

It would be nice to do it via apt if there indeed is a versioned package that can make the mexs. Sorry again if I jumped the gun, but I didn't think anyone was going to do anything.

I guess the only 32-bit machine I can think of is mafalda.

About tconvert, I think the best solution is to make a new wrapper M-file. gps was just a convenient remnant of mDV, but all that we need is some matlab function that can output a GPS time given a date/time string. We can use whatever command-line utility you want.
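For what it's worth, the python-side version of that helper is essentially a one-liner with astropy (which the getData.py script further down this page already imports); a sketch, not a drop-in replacement for the matlab gps():

from astropy.time import Time

def gps(date_string):
    """Return GPS seconds for a UTC 'YYYY-MM-DD HH:MM:SS' string."""
    return Time(date_string, format='iso', scale='utc').gps

print(gps('2012-07-05 13:12:12'))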

  6928   Fri Jul 6 09:00:34 2012 | not Zach | Update | Computers | NDS2 client now working on Ubuntu machines

Quote:

From my conversations with JZ and Leo, it seemed there was no package that generated the appropriate mex files. It was clear that the right ones weren't there from the absence of a /cvs/cds/caltech/apps/linux64/lib/matlab2010b directory. I'm sorry if I screwed anything up with pynds, but I have repeatedly asked for help with NDS2+matlab and no one has done anything.

It would be nice to do it via apt if there indeed is a versioned package that can make the mexs. Sorry again if I jumped the gun, but I didn't think anyone was going to do anything.

There is a package that provides the mex source, but it doesn't actually provide the mex binaries.  The problem is that the binary depends on the matlab version, so you can't possibly provide binaries for every version.

The solution is to just build the binaries from the source package.  We should put together a nice script that builds the binaries from the source, and installs them in the directory of your choosing.  If we get something nice working, we can probably get them to include it with the package, to make it easier in the future.

Here's what's included in the source package:

controls@pianosa:~ 0$ sudo apt-get install nds2-client-matlab
...
controls@pianosa:~ 0$ dpkg -L nds2-client-matlab | sort
/.
/usr
/usr/share
/usr/share/doc
/usr/share/doc/nds2-client-matlab
/usr/share/doc/nds2-client-matlab/changelog.Debian.gz
/usr/share/doc/nds2-client-matlab/changelog.gz
/usr/share/doc/nds2-client-matlab/copyright
/usr/share/matlab
/usr/share/matlab/NDS2_GetChannels.m
/usr/share/matlab/NDS2_GetData.m
/usr/share/matlab/NDS_GetChannels.m
/usr/share/matlab/NDS_GetData.m
/usr/share/matlab/NDS_GetMinuteTrend.m
/usr/share/matlab/NDS_GetSecondTrend.m
/usr/share/matlab/src
/usr/share/matlab/src/NDS2_GetChannels.c
/usr/share/matlab/src/NDS2_GetData.c
/usr/share/matlab/src/NDS_GetChannels.c
/usr/share/matlab/src/NDS_GetData.c
/usr/share/matlab/src/nds_mex_utils.c
/usr/share/matlab/src/nds_mex_utils.h
controls@pianosa:~ 0$ 
  4407   Sun Mar 13 00:00:58 2011 | jzweizig, rana | Configuration | DAQ | NDS2 code change and restart

 John has changed the NDS2 code and restarted it on Mafalda. The issue is that it goes off the rails every time the DAQD is restarted on FB, because of a filename convention war between GDS and CDS.

Until this is resolved, please make sure to restart the NDS2 process on Mafalda every time you restart DAQD by doing this:

pkill -KILL nds2

/users/jzweizig/nds2-mafalda/start_nds2

  4926   Thu Jun 30 21:55:16 2011 | rana | Configuration | DAQ | NDS2 conf change

As I recently had trouble getting all of the SUS SENSOR channels at once from NDS2, I asked J.Z. for help. He found that the number of buffers on mafalda was set to only allow a small amount of data to be requested at one time.

He's going to have to figure out a more permanent fix, but for now he's increased the data buffer size to allow somewhat larger chunks to be fetched. I have made a workaround in matlab, which gets smaller chunks and then cats them together.

It's in SUS/peakFit/.

Attachment 1: Untitled.png
  6392   Fri Mar 9 11:59:38 2012 | Zweizig the ELOG Maven | Summary | CDS | NDS2 restart

 Hi Rana,

It looks like the channel list file has a few blank lines that the channel list reader is choking on. I removed the lines and it is working now. I have made the error message a bit more obvious (gave the file name and line number) and allowed it to ignore empty lines so this won't cause problems with future versions (when installed). The bottom line is nds2 is now running on mafalda.
Best regards,

John   

  15341   Wed May 20 20:10:34 2020 | rana, John Z | Update | Computer Scripts / Programs | NDS2 server / conf updated - seems OK now

We noticed about a week ago that the NDS2 channel lists were not getting updated on megatron. JZ and I investigated; he was able to fix it all up this afternoon by logging in and snooping around Megatron.

Please try it out and tell me about any problems in getting fresh data.


  1. The NDS2 server is what we connect to through our python NDS2 client software to download some data.
  2. It has been working for years, but it looks like there was a file corruption of the channel lists that it makes back in 2017.
  3. Since the NDS2 server code tries to make incremental changes, it was failing to make a new channel list: it could not parse the corrupted file.
  4. There was a controls crontab entry to restart the server every morning, but the file name in that tab had a typo, so that wasn't working. I commented it out, since it shouldn't be necessary (let's see how it goes...)
  5. The nds2mgr account also has a crontab, but that was failing since it didn't have sudo permission. JZ added nds2mgr to the sudoers list so that should work now.
  6. I was able to get new channels as of 4 PM today, so it seems to be working.

* we should remember to rebuild the NDS2 server code for Ubuntu. The thing running on there is for CentOS / SL7, but we moved to Ubuntu recently since the SL7 support is going away.

** the nds2 code & conf files are not backed up anywhere since it's not on /cvs/cds. It has 52 GB(!!) of txt channel lists & archives which we don't need to back up

  5094   Tue Aug 2 16:43:23 2011 | jamie | Update | CDS | NDS2 server on mafalda restarted for access to new channels

In order to get access to new DQ channels from the NDS2 server, the NDS2 server needs to be told about the new channels and restarted.  The procedure is as follows:

ssh mafalda
cd /users/jzweizig/nds2-mafalda
./build_channel_history
./install_channel_list
pkill nds2
# wait a few seconds for the process to quit and release the server port
./start_nds2

This procedure needs to be run every time new _DQ channels are added.

We need to set this up as a proper service, so the restart procedure is more elegant.

An additional comment from John Z.:

    The --end-gps parameter in ./build_channel_history seems to be causing
    some trouble. It should work without this parameter, but there is a
    directory with a gps time of 1297900000 (evidently a test for GPS1G)
    that might screw up the channel list generation. So, it appears that
    the end time requires a time for which data already exists. This
    wouldn't seem to be a big deal, but it means that it has to be modified
    by hand before running. I haven't fixed this yet, but I think that I
    can probably pick out the most recent frame and use that as an end-time
    point. I'll see if I can make that work...
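A sketch of that "pick the most recent frame" idea in python (hypothetical; the real fix would go inside build_channel_history itself):

# Find the GPS start time of the newest raw frame and use it as --end-gps.
import glob, os

frames = glob.glob('/frames/full/*/C-R-*.gwf')
latest = max(frames, key=os.path.getmtime)
# frame names look like C-R-<gps>-16.gwf
end_gps = int(os.path.basename(latest).split('-')[2])
print('--end-gps %d' % end_gps)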

  10279   Sat Jul 26 15:30:15 2014 | Joseph Areeda | Update | Computer Scripts / Programs | NDS2 server problem on megatron

The NDS2 server on megatron was unresponsive for what I think was the last couple of days.

The NDS2 log file (~nds2mgr/logs/nds2-201407151045.log) started reporting "Stage: parser output queue is full." at 2014.7.24 14:47:54. There are also 16 connections still not closed with LindmeierLaptop.cacr.caltech.edu (131.215.146.102), 15 of them in CLOSE_WAIT.

To identify these zombie sockets we use "netstat -an | grep 31200"

The server was in a condition where /etc/init.d/nds2 stop didn't work, and the process had to be manually kill -9'ed. About 3 or 4 minutes later the zombie sockets were gone, and /etc/init.d/nds2 start was used to restart the server.

The LindmeierLaptop was using pynds to get a bunch of channels at once to test drive a streaming visualization code for glitches.  It's unclear whether this bumped into a server limitation. We have seen similar states in ldvw that seem to be the result of errors which cause client-server connections not to be closed properly, leaving data in an output buffer and causing Linux to wait for the other side to empty the buffer.
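A small monitoring snippet along these lines (same information as the netstat check above, just countable; python, hypothetical):

import subprocess

out = subprocess.check_output(['netstat', '-an']).decode()
zombies = [l for l in out.splitlines() if ':31200' in l and 'CLOSE_WAIT' in l]
print('%d CLOSE_WAIT connections on port 31200' % len(zombies))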

  13293   Tue Sep 5 14:41:58 2017 | gautam | Update | CDS | NDS2 server restarted on megatron

I was unable to download data using nds2. Gabriele had reported similar problems a week ago but I hadn't followed up on this.

I repeated steps 5-7 from elog 13161, and now it seems that I can get data from the nds2 servers again. Unclear why the nds2 server had to be restarted. I wonder if this is somehow related to the mysterious acromag EPICS server tmux session dropout.

  13331   Tue Sep 26 13:40:45 2017 | gautam | Update | CDS | NDS2 server restarted on megatron

Gabriele reported problems with the nds2 server again. I restarted it again.

update: had to do it again at 1730 today - unclear why nds2 is so flaky. Log files don't suggest anything obvious to me...

Quote:

I was unable to download data using nds2. Gabriele had reported similar problems a week ago but I hadn't followed up on this.

I repeated steps 5-7 from elog 13161, and now it seems that I can get data from the nds2 servers again. Unclear why the nds2 server had to be restarted. I wonder if this is somehow related to the mysterious acromag EPICS server tmux session dropout.

 

  13161   Thu Aug 3 00:59:33 2017 | gautam | Update | CDS | NDS2 server restarted, /frames mounted on megatron

[Koji, Nikhil, Gautam]

We couldn't get data using python nds2. There seems to have been many problems.

  1. /frames wasn't mounted on megatron, which was the nds2 server. Solution: added /frames 192.168.113.209(sync,ro,no_root_squash,no_all_squash,no_subtree_check) to /etc/exportfs on fb1, followed by sudo exportfs -ra. Using showmount -e, we confirmed that /frames was being exported.
  2. Edited /etc/fstab on megatron to be fb1:/frames/ /frames nfs ro,bg,soft 0 0. Tried to run mount -a, but console stalled.
  3. Used nfsstat -m on megatron. Found out that megatron was trying to mount /frames from old FB (192.168.113.202). Used sudo umount -f /frames to force unmount /frames/ (force was required).
  4. Re-ran mount -a on megatron.
  5. Killed nds2 using /etc/init.d/nds2 stop - didn't work, so we manually kill -9'ed it.
  6. Restarted nds2 server using /etc/init.d/nds2 start.
  7. Waited for ~10mins before everything started working again. Now usual nds2 data getting methods work.

I have yet to check getting trend data via nds2; I can't find the syntax. EDIT: As Jamie mentioned in his elog, the second trend data is being written but is inaccessible over nds (either with dataviewer, which uses fb as the ndsserver, or with python NDS, which uses megatron as the ndsserver). So as of now, we cannot read any kind of trends directly, although the full data can be downloaded from the past either with dataviewer or python nds2. On the control room workstations, this can also be done with cds.getdata.

  13162   Thu Aug 3 10:51:32 2017 | rana | Update | CDS | NDS2 server restarted, /frames mounted on megatron

Same issue on NODUS; I edited the /etc/fstab and tried mount -a, but it gives this error:

controls@nodus|~ 1> sudo mount -a
mount.nfs: access denied by server while mounting fb1:/frames

needs more debugging - this is the machine that allows us to have backed up frames in LDAS. Permissions issues from fb1 ?

  13163   Thu Aug 3 11:11:29 2017 | gautam | Update | CDS | NDS2 server restarted, /frames mounted on nodus

I added nodus' eth0 IP (192.168.113.200) to the list of allowed nfs clients in /etc/exportfs on fb1, and then ran sudo mount -a on nodus. Now /frames is mounted.

Quote:

needs more debugging - this is the machine that allows us to have backed up frames in LDAS. Permissions issues from fb1 ?

 

  15342   Thu May 21 15:31:26 2020 | gautam | Update | Computer Scripts / Programs | NDS2 service restarted

The service had failed at 16:09 yesterday. I just restarted it and am now able to fetch data again. 

Unrelated to this work: I restarted the httpd service on nodus a couple of times this afternoon while experimenting with the summary pages.

Quote:

Please try it out and tell me about any problems in getting fresh data.

  15345   Fri May 22 10:37:41 2020 | rana | Update | Computer Scripts / Programs | NDS2 service restarted

was dead again this morning - JZ notified

current restart instructions (after ssh to megatron):

cd /home/nds2mgr/nds2-megatron

sudo su nds2mgr

make -f test_restart

  15346   Mon May 25 10:54:41 2020 | rana | Update | Computer Scripts / Programs | NDS2 service restarted

so far it has run through the weekend with no problems (except that there are huge log files as usual).

I have started to set up monit to run on megatron to watch this process. In principle this would send us alerts when things break and also give a web interface to watch monit. I'm not sure how to do web port forwarding between megatron and nodus, so for now it's just on the terminal, e.g.:

monit>sudo monit status
Monit 5.25.1 uptime: 4m

System 'megatron'
  status                       OK
  monitoring status            Monitored
  monitoring mode              active
  on reboot                    start
  load average                 [0.15] [0.22] [0.25]
  cpu                          0.6%us 1.0%sy 0.2%wa
  memory usage                 1001.4 MB [25.0%]
  swap usage                   107.2 MB [1.9%]
  uptime                       40d 17h 55m
  boot time                    Tue, 14 Apr 2020 17:47:49
  data collected               Mon, 25 May 2020 11:43:03

Process 'nds2'
  status                       OK
  monitoring status            Monitored
  monitoring mode              active
  on reboot                    start
  pid                          25007
  parent pid                   1
  uid                          4666
  effective uid                4666
  gid                          4666
  uptime                       3d 1h 22m
  threads                      53
  children                     0
  cpu                          0.0%
  cpu total                    0.0%
  memory                       19.4% [776.1 MB]
  memory total                 19.4% [776.1 MB]
  security attribute           unconfined
  disk read                    0 B/s [2.3 GB total]
  disk write                   0 B/s [17.9 MB total]
  data collected               Mon, 25 May 2020 11:43:03

 

  15067   Tue Dec 3 20:32:37 2019 | rana | Omnistructure | DAQ | NDS2 situation

Recently, according to Gautam, the NDS2 server has been dying on Megatron ~daily or weekly. The prescription is to restart the server.

  1. I could find no instructions (that work) in the elog or wiki. We must remove the misleading entries from the wiki and update it with whatever works as of today.
  2. There is a line (which has been commented out) in the Megatron crontab which is close to the right command, but it has the wrong path.
  3. Running the command from the CRON (/home/nds2mgr/nds2-megatron/test_restart) gives several errors.
  4. when I run the init.d command which is in the script, it seems to run fine
  5. the server then takes several minutes to get itself together; i.e. just because it is running doesn't mean that you can get data. I recommend waiting 5-10 minutes.

Also, megatron is running Ubuntu 12 !! Let's decide on a day to upgrade it to a Debian 18ish....word from Rolf is that Scientific Linux is fading out everywhere, so Debian is the new operating system for all conformists.

Attachment 1: getData.py
#!/usr/bin/env python
# this function gets some data (from the 40m) and saves it as
# a .mat file for the matlabs
# Ex. python -O getData.py


from scipy.io import savemat,loadmat
import scipy.signal as sig
from astropy.time import Time
import nds2
... 116 more lines ...
Attachment 2: chanlist.txt
PEM-SEIS_BS_X_OUT_DQ
PEM-SEIS_BS_Y_OUT_DQ
PEM-SEIS_BS_Z_OUT_DQ
PEM-SEIS_EX_X_OUT_DQ
PEM-SEIS_EX_Y_OUT_DQ
PEM-SEIS_EX_Z_OUT_DQ
PEM-SEIS_EY_X_OUT_DQ
PEM-SEIS_EY_Y_OUT_DQ
PEM-SEIS_EY_Z_OUT_DQ
  17075   Thu Aug 11 16:48:59 2022 | rana | Update | Computer Scripts / Programs | NDS2 updates

We had several problems with our NDS2 server configuration. It runs on megatron, but I think it may have had issues since perhaps not everyone was aware of it running there.

  1. channel lists were supposed to be updated regularly, but the nds2_nightly script did not exist in the specified directory. I have moved it from Joe Areeda's personal directory (/home/nds2mgr/joework/server/src/utils/) to nds2mgr/channel-tracker/.
  2. The channel history files (/home/nds2mgr/channel-tracker/channel_history/) are stored on the local megatron disk. These files had grown to ~50 GB over the past several years. I backed these up to /users/rana/, and then wiped them out so that the NDS could regen them. Now that the megatron local disk is not full, it seems to work in giving raw data.
  3. Need to confirm that this serves up trend data (second and minute) - a quick check is sketched just below this list.
  4. I think there is an nds2-server package for Debian, so we should update megatron's OS to the preferred flavour of Debian and use that. Who can we get to help with this install?
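Regarding point 3, a quick check from any workstation with the python nds2 client would be something like the following (the channel name comes from the chanlist attached to elog 15067 above, with the C1: prefix added; the GPS times are placeholders):

import nds2

conn = nds2.connection('megatron', 31200)
# minute trends use the NDS2 ".mean,m-trend" style names and minute-aligned GPS times
bufs = conn.fetch(1344000000, 1344003600, ['C1:PEM-SEIS_BS_X_OUT_DQ.mean,m-trend'])
print(len(bufs[0].data))   # expect 60 points for one hour of minute trend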

Since Megatron is currently running the "Shanghai" Quad-core Opteron processor from ~2009, it's about time to replace it with a more up-to-date thing. I'll check with Neo to see if he has any old LDAS leftovers that are better.

  14267   Fri Nov 2 12:07:16 2018 | rana | Update | CDS | NDScope

https://alog.ligo-wa.caltech.edu/aLOG/index.php?callRep=44971

Let's install Jamie's new Data Viewer

  14344   Tue Dec 11 14:33:29 2018 | gautam | Update | CDS | NDScope

NDscope is now running on pianosa. To be really useful, we need the templates, so I've made /users/Templates/NDScope_templates where these will be stored. Perhaps someone can write a parser to convert dataviewer .xml to something ndscope can understand. To get it installed, I had to run:

sudo yum install ndscope
sudo yum install python34-gpstime
sudo yum install python34-dateutil
sudo yum install python34-requests

 I also changed the PYTHONPATH variable in .bashrc to include the python3.4 site-packages directory.

Quote:

https://alog.ligo-wa.caltech.edu/aLOG/index.php?callRep=44971

Let's install Jamie's new Data Viewer

Attachment 1: ndscope.png
  269   Fri Jan 25 17:11:07 2008 | Max, Andrey | Configuration | General | NEW_FETCH_SHOUROV and GET_DATA do not work

The problem which started yesterday after Andrey's framebuilder restart still persists.

It is still impossible to read data in the past from the channels using "get_data" which in turn uses "new_fetch_shourov".

Max was trying to read data from the channel
"C1:LSC-DARM_CTRL",

and he got the same error messages as Andrey.

Andrey tried earlier today to read data from "C1:SUS-ITMS_SUS" or "C1:SUS-ETMX_SUS" with the error message
Error in ==> new_fetch_shourov at 22
at (start_time+duration) > stops(end)

So, it seems that Robert Ward fixed just one problem out of two problems.

Robert revived the realtime signals in Dataviewer,
but did not revive the memory of channels for new_fetch_shourov.

To be more precise, channels have memory (it is possible to see the "Playback" curves in Dataviewer),
but "get_data" and "new_fetch_shourov" do not see the data from those channels. The problem appeared immediately after Andrey's clicking on blue buttons to restart the framebuilder.

Andrey again apologizes.
  14480   Sun Mar 17 00:42:20 2019 | gautam | Update | ALS | NF1611 cannot be shot-noise limited?

Summary:

Per the manual (pg12) of the NF 1611 photodiode, the "Input Noise Current" is 16 pA/rtHz. It also specifies that for "Linear Operation", the max input power is 1 mW, which at 1um corresponds to a current shot noise of ~14 pA/rtHz. Therefore,

  1. This photodiode cannot be shot-noise limited if we also want to stay in the spec-ed linear regime.
  2. We don't need to worry so much about the noise figure of the RF amplifier that follows the photodiode. In fact, I think we can use a higher gain RF amplifier with a slightly worse noise figure (e.g. ZHL-3A) as we will benefit from having a larger frequency discriminant with more RF power reaching the delay line.

Details:

Attachment #1: Here, I plot the expected voltage noise due to shot noise of the incident light, assuming 0.75 A/W for InGaAs and 700V/A transimpedance gain. 

  • For convenience, I've calibrated on the twin axes the current shot noise (X) and equivalent amplifier noise figure at a given voltage noise, assuming a 50 ohm system (Y).
  • The 16 pA/rtHz input current noise exceeds the shot noise contribution for powers as high as 1 mW.
  • Even at 0.5 mW power on the PD, we can use the ZHL-3A rather than the Teledyne:
    • This calculation was motivated by some suspicious features in the Teledyne amplifier gain, I will write a separate elog about that. 
    • For the light levels we have, I expect ~3dBm RF signal from the photodiode. With the 24dB of gain from the ZHL-3A, the signal becomes 27dBm, which is smaller than (but close to) the spec-ed max output of the ZHL-3A, which is 29.5 dBm. Is this too close to the edge?
    • I will measure the gain/noise of the ZHL-3A to get a better answer to these questions.
  • If in the future we get a better photodiode setup that reaches sub-1nV/rtHz (dark/electronics) voltage noise, we may have to re-evaluate what is an appropriate RF amplifier.
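For reference, the back-of-envelope numbers behind Attachment #1, using the responsivity and transimpedance quoted above (0.75 A/W, 700 V/A); with these assumptions one gets ~15 pA/rtHz at 1 mW, in the same ballpark as the ~14 pA/rtHz quoted in the summary:

# Rough version of the Attachment #1 numbers (values assumed from the text above).
import numpy as np

e, R, Z = 1.602e-19, 0.75, 700.0       # electron charge [C], responsivity [A/W], transimpedance [V/A]
for P in [0.5e-3, 1.0e-3]:             # optical power on the PD [W]
    i_shot = np.sqrt(2 * e * R * P)    # shot-noise current [A/rtHz]
    print('P = %.1f mW: %.1f pA/rtHz -> %.1f nV/rtHz at the transimpedance output'
          % (P * 1e3, i_shot * 1e12, i_shot * Z * 1e9))
# compare with the 16 pA/rtHz input current noise from the NF1611 manual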
Attachment 1: PDnoise.pdf
  12232   Thu Jun 30 14:31:02 2016 | Chemistry | Update | SUS | NO

  12045   Thu Mar 24 07:56:09 2016 | Steve | Update | Calibration-Repair | NO Noise Eater for 1W Innolight

The 1W Innolight is NOT getting a Noise Eater, as was decided yesterday at the 40m meeting. Corrected 3-25-2016

The repair quote with the added noise eater is in the 40m wiki

Quote:

 

Quote:
Quote:

After adjusting the alignment of the two beams onto the PD, I managed to recover a stronger beatnote of ~ -10dBm. I managed to take some measurements with the PLL locked, and will put up a more detailed post later in the evening. I turned the IMC autolocker off, turned the 11MHz Marconi output off, and closed the PSL shutter for the duration of my work, but have reverted these to their nominal state now. There are a few extra cables running from the PSL table to the area near the IOO rack where I was doing the measurements from; I've left these as is for now in case I need to take some more data later in the evening...I

Innolight 1W 1064nm, sn 1634 was purchased on 9-18-2006 at CIT. It came to the 40m around 2010

Its diodes should be replaced, based on its age and performance.

RIN and noise eater bad. I will get a quote on this job.

The Innolight manual's frequency noise plot is the same as the Lightwave's; see elog 11956

Diagnosis from Glasgow:

“So far we have analyzed the laser. The pump diode is degraded. Next we would replace it with a new diode. We would realign the diode output beam into the laser crystal. We check all the relevant laser parameters over the whole tuning range. Parameters include single direction operation of the ring resonator, single frequency operation, beam profile and others. If one of them is out of spec, then we would take actions accordingly. We would also monitor the output power stability over one night. Then we repackage and ship the laser.”

 

  12371   Thu Aug 4 10:57:58 2016 | rana | Update | Computer Scripts / Programs | NODUS update / restarts underway

Usual Ubuntu apt-get upgrades; long delayed but now happening.

  13761   Wed Apr 18 17:15:35 2018 | rana | Configuration | Computers | NODUS: no xmgrace for dataviewer

Turns out, there is no RPM for XmGrace on Scientific Linux 7. Since this is the graphics output of dataviewer, we can't use dataviewer through X windows until this gets fixed. CDS is looking into an xmGrace replacement, but it would be better if we can hijack an alt RH repo to steal a temporary xmgrace RPM. KT has been pinged.

  13897   Wed May 30 12:13:13 2018 | rana | Update | Computers | NODUS: rsyncd + frames

To get our rsync back to LDAS back up, I followed instructions from Dan Kozak:

  1. mounted /frames from fb1: I modified /etc/fstab
  2. modified /etc/rsyncd.conf to allow access from LDAS
  3. restarted rsync as a daemon: 'sudo /usr/bin/rsync --daemon --config=/etc/rsyncd.conf'

Next need to figure out what the SL7 protocol is for running this as a daemon after boot - some kind of init.d thing probably

  15302   Mon Apr 13 16:51:49 2020 | rana | Summary | DAQ | NODUS: rsyncd daemon / service set up

I just now modified the /etc/rsyncd.conf file as per Dan Kozak's instructions. The old conf file is still there with the file name appended with today's date.

I then enabled the rsync daemon to run on boot using 'enable'. I'll ask Dan to start the file transfers again and see if this works.

controls@nodus|etc> sudo systemctl start rsyncd.service
controls@nodus|etc> sudo systemctl enable rsyncd.service
Created symlink from /etc/systemd/system/multi-user.target.wants/rsyncd.service to /usr/lib/systemd/system/rsyncd.service.
controls@nodus|etc> sudo systemctl status rsyncd.service
● rsyncd.service - fast remote file copy program daemon
   Loaded: loaded (/usr/lib/systemd/system/rsyncd.service; enabled; vendor preset: disabled)
   Active: active (running) since Mon 2020-04-13 16:49:12 PDT; 1min 28s ago
 Main PID: 4950 (rsync)
   CGroup: /system.slice/rsyncd.service
           └─4950 /usr/bin/rsync --daemon --no-detach

Apr 13 16:49:12 nodus.martian.113.168.192.in-addr.arpa systemd[1]: Started fast remote file copy program daemon.
Apr 13 16:49:12 nodus.martian.113.168.192.in-addr.arpa systemd[1]: Starting fast remote file copy program daemon...

  12141   Tue May 31 16:52:58 2016 | Steve | Update | safety | NONO

Please do not place anything on the top of the cabinets that is not tied down. It will end up on our heads in an earthquake.

 

Attachment 1: nono.jpg
  1045   Mon Oct 13 18:59:39 2008 | Yoichi | Update | PSL | NPRO EMI and FSS error signal correlation
I made a simple loop antenna to measure the electro-magnetic interference (EMI) around the master oscillator NPRO.

The first plot shows the comparison of the FSS error signal with the EMI measured when the antenna was put next to the NPRO (the MOPA box was opened).
There are harmonics of 78.1kHz which are present in both spectra. It is probably coming from the DC-DC converter in the NPRO board.

The second plot is the same spectra when the antenna was put far from the NPRO (just outside of the PSL enclosure).
The 78.1kHz harmonics are gone. So these are very likely to be coming from the NPRO.

The third plot shows the coherence functions between the signal from the antenna and the FSS error signal.
When the antenna was put near the NPRO, there is a strong coherence seen around 78.2kHz, whereas there is no strong coherence
when the antenna is far away from the NPRO.
This is strong evidence that the 78.2 (or 78.1) kHz harmonics are coming from the NPRO itself.

There are many peaks other than 78.1kHz harmonics in the FSS error signal spectrum. For most of them you can also find corresponding peaks in the EMI spectrum.
We have to hunt down those peaks to avoid the slew-rate saturation of the FSS.
Attachment 1: IMG_1692.JPG
Attachment 2: Spectrum.png
Attachment 3: SpectrumFar.png
Attachment 4: Coherence.png
  2158   Thu Oct 29 13:48:32 2009 | Koji | Update | PSL | NPRO LTMP lowered 9.5deg

13:00 Found MC TRANS less than 7.
13:50 Go into the PSL table.
14:20 Work done. Now I am running SLOWscan script.
15:10 SLOWscan finished. It was not satisfactory. I go into the table again.
15:15 Running SLOWscan again.
16:00 SLOWscan done. Lock PMC. Adjust NPRO current so as to maximize PMC TRANS.
16:10 Lock RC, PMC, MZ, MC. Align PMC / MZ on the table. Align MC WFS beams on the QPDs.
16:30 Work done.

New FSS-SLOWDC nominal is -4.0

Now MC TRANS is 7.9. This is a +12% increase. ENJOY!
HEPA is on at 90%. Light is off.

---------

NPRO TEMP trimmer adjustment
o PSL NPRO TEMP trimmer at the back of the laser head was turned 6.5 times in CW.
o It reduced NPRO crystal temp by 9.5deg. (43.5deg -> 34.0deg for FSS_SLOWDC -5.5)

To revert the previous setting, refer to the former measurement
c.f. http://nodus.ligo.caltech.edu:8080/40m/2008

NPRO Thermal scan
o 2 scans are performed.
o I selected the colder side of the second scan. i.e. SLOWDC=-4.0

NPRO Current adjustment
o Tweaked C1:PSL-126MOPA_126CURADJ while looking at PMC TRANS.
o CURADJ was changed from -2.25 to -1.9. This corresponds to change of C1:PSL-126MOPA_CURMON from 2.503A to 2.547A.

Attachment 1: 091028_PSL.png
  2161   Thu Oct 29 20:21:14 2009 | Koji | Update | PSL | NPRO LTMP lowered 9.5deg

Here are the plots for the powers. MC TRANS is still rising.

What I noticed was that C1:PSL-FSS_PCDRIVE no longer hits the yellow alert.
The mean was reduced from 0.4 to 0.3. This is good, at least for now.

Attachment 1: PSL_MC.png
  5202   Fri Aug 12 03:49:45 2011 | Jenny | Summary | PSL | NPRO PDH-Locked to Ref Cav

DMass and I locked the NPRO laser (Model M126-1064-700, S/N 238) on the AP table to the reference cavity on the PSL table using the PDH locking setup shown in the block diagram below (the part with the blue background):

 

LIGO_block_diagram_2.png

 

A Marconi IFR 2023A signal generator outputs a sine wave at 230 kHz and 13 dBm, which is split. One output of the splitter drives the laser PZT while the other is sent to a 7dBm mixer. Also sent to the mixer is the output of a photodiode that is detecting the reflected power from off the cavity. (A DC block is used so that only RF signal from the PD is sent to the mixer). The output of the mixer goes through an SR560 low-noise preamp, which is set to act as a low pass filter with a gain of 5 and a pole at 30 kHz. That error signal is then sent to the –B port of the LB1005 PDH servo, which has the following settings: PI corner at 10kHz, LF gain limit of 50 dB, and gain of 2.7 (1.74 corresponds to a decade, so the signal is multiplied by 35). The output signal from the LB1005 is added to the 230 kHz dither using another SR560 preamp, and the sum of the signals drive the PZT.

 

I am monitoring the transmission through the cavity on a digital oscilloscope (not shown in the diagram) and with a camera connected to a TV monitor. I sweep the NPRO laser temperature set point manually until the 0,0 mode of the carrier frequency resonates in the cavity and is visible on the monitor. Then I close the loop and turn on the integrator on the LB1005.

 

The laser locks to the cavity both when the error signal is sent into the A port and when it is sent into the –B port of the PDH servo. I determined that –B is the right sign by comparing the transmission through the cavity on the oscilloscope for both ways.

 

When using the A port, the transmission when it was locked swept from ~50 to ~200 mV (over ~10 second intervals) but had large high frequency fluctuations of around +/- 50 mV. Looking at the error signal on the oscilloscope as well, the RMS fluctuations of the error signal were at best ~40 mV peak to peak, which was at a gain of 2.9 on the LB1005.

 

Using the –B port yielded a transmission that swept from 50 to 250 mV but had smaller high frequency fluctuations of around +/- 20 mV. The error signal RMS was at best 10mV peak to peak, which was at a gain of 2.7. (Although over the course of 10 minutes the gain for which the error signal RMS was smallest would drift up or down by ~0.1).

 

 

The open loop error signal peak-to-peak voltage was 180 mV, which is more than an order of magnitude larger than the RMS error signal fluctuations when the loop is closed, indicating that it is staying in the range in which the response is linear.

openlooperror.jpg

 

In the above plot the transmission signal is offset by 0.1 V for clarity.

Below is the closed loop error signal. The inset plot shows the signal viewed over a 1.6 ms time period. You can see ~60 microsecond fluctuations in the signal (~17 kHz)

closedlooperror.jpg

The system remained locked for ~45 minutes, and may have stayed locked for much longer, but I stopped it by opening the loop and turning off the function generator. Below is a picture of the transmitted light showing up on a monitor, the electronics I'm using, and a semi-ridiculous mess of wires.

 

IMG_3034.JPG

 

I determined that it’s not dangerous to leave the system locked and leave for a while. The maximum voltage that the SR560 will output to the PZT is 10Vpp. This means that it will not drive the PZT at more than +/-5 V DC. At low modulation rates, the PZT can take a voltage on the order of 30 Vpp, according to the Lightwave Series 125-126 user’s manual, so the control signal will not push the PZT too hard such that it’s harmful to the laser.

 

 

  5217   Fri Aug 12 20:33:57 2011 | Dmass | Summary | PSL | NPRO PDH-Locked to Ref Cav

To aid Jenny's valiant attempt to finish her SURF project, I did some things with the front end system over the last couple days, largely tricking Jamie into doing things for me lest I ruin the 40m RCG system. Several tribulations have been omitted.

We stole a channel in the frontend, in the process:

  1. Modified the C1GFD simulink model (now analog) to be "ADC -> TMP -> DAC" where TMP is a filter bank
    • C1GFD_TMP.adl (in /opt/rtcds/caltech/c1/medm/c1gfd) is the relevant part which connects the ADC to the DAC in the frontend
  2. Confirmed that the ADC was working by putting a signal in and seeing it in the frontend
  3. Could not get a signal out of the anti aliasing board
  4. Looked sad until Kiwamu found a breakout board for the SCSI cable coming from the DAC
  5. Used SR560 to buffer DAC output
    • drove a triangle wave with AWG into the TMP EXC channel (100 counts 1 Hz) and looked at it after the ~25 ft of BNC cable running between the DAC and the NPRO driver
    • wave looked funny (not like a triangle wave), maybe the DAC is not meant to push a signal so far, so added buffer
  6. Took the control signal going to the fast input of the NPRO driver (using the 500 Ohm SR560 output - see Jenny's diagram) and put it into the anti aliasing board of the ADC
  7. Added switchable integrator to filter bank with Foton
    • I couldn't get the names to display in the filter bank, so I looked sad again
    • Jamie and Koji both poked at the "no name displayed" problem but had no conclusions, so I decided to ignore it
    • I confirmed that when the two filters were toggled "on", the transfer function was as expected: a simple integrator with unity gain at ~10 mHz, which agrees with what Foton's Bode Plot tool says it should be (see attached DTT plot)
  8. I got Jamie to manually add the two epics channels from the TMP model to the appropriate .ini file so they would be recorded
    • C1:GFD-TMP_OUTPUT  (16 Hz)
    • C1:GFD-TMP_INMON    (16 Hz)
  9. RefCav heater servo seems to still be set up, so we can use existing channels:
    • C1:PSL-FSS_RCPID_SETPOINT (temp setpoint - will do +/-1C steps about 35 C)
    • C1:PSL-FSS_MINCOMEAS (In loop temp sensor - in C)
    • C1:PSL-FSS_RCTEMP (out of loop temp sensor - in C)
    • C1:PSL-FSS_TIDALSET (Voltage to heater - rails @ +/- 2V)
  10.  Closed loop on the control signal for the NPRO driver with an integrator, saw error signal go to zero
    • Turned up gain a little bit, saw some oscillations, then turned gain down to stop them, final gain = 2
  11. Left system on for Jenny to come in and do step responses
Attachment 1: TMP_INT_TF.pdf
  3587   Sun Sep 19 18:52:52 2010 | rana | Configuration | PSL | NPRO SLOW servo settings updated for Innolight NPRO

Our new 2W Mephisto has a pretty zippy "SLOW" temperature input. Tuning the perl PID servo, I found that the best response came from setting the "P" and "D" terms to zero. This is because the internal temperature stabilization servo has a fairly high UGF. In the attached image you can see how the open loop step response looks (loop is open, then the "KI" parameter is set to zero). The internal servo really has too little damping. There is a 30% overshoot when it gets a temperature step. For this kind of servo, Innolight would have done better to back off on the gain until they got back some phase margin.

New SLOW parameters:

timestep = 1.9 s

KP = 0

KI = 0.035

KD = 0
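(For illustration only: with these settings the update the perl script performs each timestep reduces to a pure integrator. A python sketch of that update law, not the actual servo code:)

KP, KI, KD = 0.0, 0.035, 0.0
TIMESTEP   = 1.9                      # seconds

def slow_servo_step(error, integral, last_error):
    # with KP = KD = 0 this is just an integrator on the error signal
    integral += error * TIMESTEP
    output = KP * error + KI * integral + KD * (error - last_error) / TIMESTEP
    return output, integral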

Attachment 1: Untitled.png
  13749   Thu Apr 12 18:12:49 2018 | gautam | Update | ALS | NPRO channels hijacked

Summary:

  1. Today, the measured IR ALS noise for the X arm was dramatically improved. The main change was that I improved the alignment of the PSL pickoff beam into its fiber coupler.
  2. The noise level was non-stationary, leading me to suspect power modulation of the RF beat amplitude.
  3. I am now measuring the stability of the power in the two polarizations coming from EX table to the PSL table, using the PSL diagnostic connector channels.
  4. The EX beam is S-polarized when it is coupled into the fiber. The PSL beam is P-polarized. However, it looks like I have coupled light along orthogonal axes into the fiber, such that when the EX light gets to the PSL table, most of it is in the P-polarization, as judged by my PER measurement setup (i.e. the alignment keys at the PSL table and at the EX table are orthogonal). So it still seems like there is something to be gained by trying to improve the PER a bit more.

Details:

Today, I decided to check the power coupled into the PSL fiber for the BeatMouth. Surprisingly, it was only 200uW, while I had ~3.15mW going into it in January. Presumably some alignment drifting happened. So I re-aligned the beam into the fiber using the steering mirror immediately before the fiber coupler. I managed to get ~2.9mW in without much effort, and I figured this is sufficient for a first pass, so I didn't try too much more. I then tried making an ALS beat spectrum measurement (arm locked to IMC length using POX, green following the arm using end PDH servo). Surprisingly, the noise performance was almost as good as the reference! See Attachment #1, in which the red curve is an IR beat (while all others are green beats). The Y arm green beat performance isn't stellar, but one problem at a time. Moreover, the kind of coherence structure between the arm error signal and the ALS beat signal that I reported here was totally absent today.

Upon further investigation, I found that the noise level was actually breathing quite significantly on timescales of minutes. While I was able to successfully keep the TEM00 mode of the PSL beam resonant inside the arm cavity by using the ALS beat frequency as an error signal and MC2 as a frequency actuator, the DC stability was very poor and TRX was wandering around by 50%. So my new hypothesis is that the excess ALS noise is because of one or more of

  • Beam jitter at coupling point into fiber.
  • Polarization drift of the IR beams.

While I did some work in trying to align the PSL IR pickoff into the fiber along the fast (P-pol) axis, I haven't done anything for the X end pickoff beam. So perhaps the fluctuations in the EX IR power are causing beatnote amplitude fluctuations. In the delay line + phase tracker frequency discriminator, I think RF beatnote amplitude fluctuations can couple into phase noise linearly. For such an apparently important noise source, I can't remember ever including it in any of the ALS noise budgets.

Before Ph237 today, I decided to use my polarization monitoring setup and check the "RIN" of power in the two polarizations coming out of the fiber on the PSL table. For this purpose, I decided to hijack the Acromag channels used for the PSL diagnostics connector. Attachment #2 shows that there are fluctuations at the level of ~10% in the p-polarization. This is the "desired" polarization in that I aligned the PSL beam into the fiber to maximize the power in this polarization. So assuming the power fluctuations in the PSL beam are negligible, this translates to sqrt(10) ~3% fluctuation in the RF beat amplitude. This is at best a conservative estimate, as in reality, there is probably more AM because of the non PM fibers inside the beatmouth.

All of this still doesn't explain the coherence between the measured ALS noise and the arm error signal at 100s of Hz (which presumably can only happen via frequency noise in the PSL).

Another "mystery" - yesterday, while I was working on recovering the Y arm green beat signal on the PSL table, I eventually got a beat signal that was ~20mVpp into 50ohms, which is approximately the same as what I measured when the Y arm ALS performance was "nominal", more than a year ago. But while viewing the Y arm beats (green and IR) simultaneously on an o'scope, I wasn't able to keep both signals synchronised while triggering on one (even though the IR beat frequency was half the green beat frequency). This means there is a huge amount of relative phase noise between the green and IR beats. What (if anything) does this mean? The differential noise between these two beats should be (i) phase noise at the fiber coupler / inside the fiber itself, and (ii) scatter noise in the green light transmitted through the cavity. Is it "expected" that the relative phase noise between these two signals is so large that we can't view both of them on a common trigger on an o'scope? Also - the green mode-matching into the Y arm is abysmal.
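
As a sanity check on whether losing a common trigger is really surprising, here is a rough estimate (assumptions: the scope re-arms every T seconds between sweeps, and the two traces only look synchronised if the green-IR differential phase moves by much less than ~1 rad between sweeps):

import math

for T in (1e-3, 10e-3, 100e-3):          # time between scope sweeps [s]
    df = 1.0/(2*math.pi*T)               # differential frequency wander giving 1 rad in T
    print("T = %5.1f ms -> ~%6.1f Hz of differential frequency wander" % (T*1e3, df))

So even a few Hz to a few tens of Hz of relative frequency noise between the green and IR beats is enough to lose a common trigger - the observation doesn't require an enormous amount of differential phase noise in absolute terms.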

Anyways - I'm going to try to tweak the PER and mode-matching into the X end fiber a little and monitor the polarization stability (nothing too invasive for now; eventually I want to install the new fiber couplers I acquired, but for now I'll only change the alignment into, and rotation of, the fiber coupler on the EX table). It would also be interesting to compare my "optimized" PSL drift to the unoptimized EX power drift. So the PSL diagnostic channels will not show any actual PSL diagnostic information until I plug the connector back in. I also suspect that the EPICS record names and physical channel wiring are wrong anyway - I hooked up my two photodiode signals to what I believe are the "Diode 1 Power" and "Laser crystal temperature" monitors (as per the schematic), but the signals actually show up in "Diode 2 Power" (p-pol) and "Diode 1 Temperature" (s-pol).
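
For the record, the comparison I have in mind is something like the following sketch (assuming the nds2 python client is available; the server, port, GPS times and especially the channel names below are placeholders until the wiring is confirmed):

import nds2
import numpy as np

conn  = nds2.connection('fb1', 8088)            # assumed NDS server / port
chans = ['C1:PSL-DIAG_DIODE2_POW',              # hypothetical name: p-pol monitor PD
         'C1:PSL-DIAG_DIODE1_TEMP']             # hypothetical name: s-pol monitor PD
start, stop = 1207700000, 1207703600            # example GPS times, 1 hour

for b in conn.fetch(start, stop, chans):
    x = np.asarray(b.data)
    print("%s: mean = %.3f, (max-min)/mean = %.1f%%"
          % (b.channel.name, x.mean(), 100*(x.max()-x.min())/x.mean()))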

Annoyingly, there is no wiring diagram - on my to-do list, I guess...

@Steve - could you please take a photo of the EX table and update the wiki? I think the photo we have is a bit dated, the fiber coupler and transmon PDs aren't in it...

Attachment 1: IR_ALS_20180412.pdf
IR_ALS_20180412.pdf
Attachment 2: BeatMouthDrift.png
BeatMouthDrift.png
Attachment 3: ETMX_20180416.jpg
ETMX_20180416.jpg
  1651   Thu Jun 4 15:53:15 2009 steveUpdatePSLNPRO cooling flowrate adjusted

The Neslab chiller is working well. Its temperature display shows 20.0 C, rock solid. The flow meter at the output of the chiller is rotating at 13.5 Hz.

The MOPA temperature was measured with a hand-held thermocouple. The PA was at 34 C and the NPRO heat sink at 29 C.

The NPRO flow meter was not rotating at this time; there was just a trickling water flow through the meter.

At this point I closed the needle valve; it needed 8 turns clockwise. This drove the head temperature to 19.9 C.

Then I opened the needle valve 9 turns, and the flow meter wheel was rotating at ~1 Hz.

We gained a little power. Can you explain this?

 

Attachment 1: needlevalve.jpg
needlevalve.jpg
  1646   Wed Jun 3 03:30:52 2009 ranaUpdateMOPANPRO current adjust
I increased the NPRO's current to the max allowed via EPICS before the chiller shutdown. Yesterday, I did this
again just to see the effect. It is minimal.

If we trust the LMON as a proportional readout of the NPRO power, the current increase from 2.3 to 2.47 A gave us
a power boost from 525 to 585 mW (a factor of 1.11). The corresponding change in MOPA output is 2.4 to 2.5 W
(a factor of 1.04).
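
Quick check of the numbers above (assumes LMON is indeed proportional to NPRO power):

npro_ratio = 585.0/525.0      # ~1.11, seed power increase
mopa_ratio = 2.5/2.4          # ~1.04, amplifier output increase
print(npro_ratio, mopa_ratio) # an 11% seed increase only buys ~4% at the MOPA output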

Therefore, I conclude that the amplifier's pump has degraded so much that it is partially saturating on the NPRO
side. So the intensity noise from NPRO should also be suppressed by a similar factor.

We should plan to replace this old MOPA with a 2 W Innolight NPRO and give the NPRO from this MOPA back to the
bridge labs. We can probably get Eric G to buy half of our new NPRO as a trade in credit.
Attachment 1: Untitled.png
Untitled.png
  14687   Sun Jun 23 08:09:53 2019 gautamUpdateIOONPRO diagnostics

Summary:

Over the last few days, I've been doing some (complementary) measurements to what Aaron and Koji have been looking at. The motivation was to identify if the problems we are seeing are optical (i.e. imprinted on the PSL light) or electronic. My findings:

  1. 60 Hz line noise in PMC REFL and PMC TRANS is heavily dependent on whether I connect cables between the measuring PDs and the Acromag ADC or not - but even with the Acromag cable disconnected, the 60 Hz RIN is HUGE - 10 mVpp out of 670 mV DC - and the lines are much dirtier if you have connections to the SLOW ADCs. The measurement was made by looking at the time-domain signals on a battery-powered Tektronix oscilloscope. See Attachment #1. I believe this line noise is higher than it was. The cause is unknown to me at this point.
  2. The NPRO noise eater seems to function as advertised. The measured RIN with the noise eater enabled (our nominal operating condition) is in line with what the manual tells us it should be. See Attachment #2.
  3. There isn't strong evidence of excess frequency noise (measured with a PLL) out to 100 kHz. I haven't measured the high-frequency part yet, but maybe I'm doing something wrong with the PLL setup, which should be corrected first. See Attachments #3, #4.
  4. The beat note frequency between the free-running PSL and EX NPROs is definitely slewing more than the quadrature sum of the advertised 1 MHz/min slewing per the manual (i.e. more than ~1.4 MHz/min).

Evidence:

Attachment #1: Time domain look at PMC Refl and Trans signals under various operating conditions. During this work, I took the chance to remove ~4 BNC T connectors that were connected on the PMC TRANS photodiode (Thorlabs). Now, there is one cable going to the Acromag ADC, and one going to the Oscilloscope used to monitor these signals. Any further T-ing can be done at the oscilloscope.

Attachment #2: RIN measurement of the NPRO light. I opted to place a Thorlabs PDA55 in the IR ALS pickoff light path. This is before the light sees the PMC. A DC block was inserted between the PDA55 and the AG4395 used to make the measurement. DC level of the PD output was 3.1 V into high-Z and I used half this value to normalize the measurement made by the 50-ohm input AG4395 into RIN units. The measurement was made with the PZT and slow temperature controls to the NPRO connected/disconnected, but I saw no significant difference. 
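
For reference, the normalization into RIN units went along these lines (a sketch - the data file name and column format are placeholders):

import numpy as np

V_dc_highZ = 3.1                    # V, PDA55 DC level measured into high impedance
V_dc_50ohm = V_dc_highZ/2.0         # V, effective DC level at the 50 ohm analyzer input

f, asd_V = np.loadtxt('AG4395_spectrum.txt', unpack=True)   # Hz, V/rtHz (placeholder file)
rin = asd_V/V_dc_50ohm              # 1/rtHz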

Attachment #3: Frequency noise measurement via PLL. This shows the loop transfer function for the PLL. Some details of the setup:

  • The beat note for locking the PLL was made between the PSL NPRO and the EX NPRO (output of the IR ALS BeatMouth). ~4dBm beatnote.
  • Local oscillator was sourced by a Marconi, f_carrier=33 MHz, RF level = +10dBm.
  • Level 7 Mixer and LB1005 controller from the mode-spectroscopy PLL setup.
  • PLL control signal routed to EX NPRO PZT via Heliax cable running along south arm. 
  • Why EX and not PSL or Marconi FM? The latter has limited range, ~1/10th of that offered by the NPRO PZT. The PSL PZT has a 2.9 Hz corner freq Pomona box. I could disconnect this for the purpose of PLL locking, but I thought it may be interesting to see if there are any hints of the problem being electrical by looking at PLL spectra with / without the Pomona box. The expected delay due to cabling is only 400 ns, so not really a limiting factor for the PLL bandwidth.
  • LB 1005 settings:
    • PI corner = 3 kHz.
    • G = 2.30 (I could not increase this further - with the PSL+Lightwave NPRO PLL we could achieve a UGF of ~60 kHz, but in this setup I can't do much better than ~7 kHz before the loop starts oscillating. I'm not sure whether the fact that the PZT actuation coefficient of the Innolight is ~5x lower than that of the Lightwave is enough to explain this.)
    • LFGL = 90 dB.
  • Mixer output had a maximum value of 800 mVpp => PLL discriminant is 400 mV/rad.
  • The "eye fit" is just the transfer function of two poles at DC (one for frequency to phase conversion in the PLL and one for the LB1005 integrator), and a zero at 3kHz (PI corner). I scaled the gain till the "fit" and measurement lined up, and then used this model to undo the loop suppression of the error signal to extract the frequency noise without worrying about the frequency vector of the measurement being limited.
  • Once again, slow temperature control and PZT controls to the PSL NPRO were disconnected so this measurement was made with two free-running NPROs.

Attachment #4: Frequency noise measurement via PLL. This shows the frequency noise. I've overlaid the expected frequency noise between 2 free-running NPROs, model used is in the text box in the plot. There isn't strong evidence of excess high frequency noise in this measurement. The fact that the "LB 1005 input terminated" trace is below all the others supports the hypothesis that I'm measuring real frequency noise. The bump around a few kHz could indicate some gain peaking?

However, I'm unable to find good agreement between the frequency noise inferred from the error point and from the control point. For the former, I used the PLL discriminant of 400 mV/rad mentioned above and undid the loop suppression; for the latter I used a PZT discriminant of 1.7 MHz/V. There is still a constant scale difference between the two traces - so I'm doing something wrong?
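
To make the error-point / control-point cross-check reproducible, here is a sketch of the calibration described above (the loop model is the "eye fit" - two poles at DC and a zero at the PI corner; the file name, column format and exact UGF are assumptions):

import numpy as np

K_phi = 0.4          # V/rad, PLL discriminant (800 mVpp mixer output / 2)
K_pzt = 1.7e6        # Hz/V, EX NPRO PZT actuation coefficient
f_z   = 3e3          # Hz, LB1005 PI corner
f_ugf = 7e3          # Hz, approximate measured UGF

f, err_V, ctrl_V = np.loadtxt('pll_spectra.txt', unpack=True)   # Hz, V/rtHz, V/rtHz (placeholder)

G = (1 + 1j*f/f_z)/(1j*f)**2
G = G/abs((1 + 1j*f_ugf/f_z)/(1j*f_ugf)**2)     # scale gain so |G| = 1 at the UGF

nu_err  = (err_V/K_phi)*abs(1 + G)*f            # error point: undo loop suppression, rad -> Hz
nu_ctrl = ctrl_V*K_pzt*abs((1 + G)/G)           # control point: valid where |G| >> 1

print("err/ctrl ratio at 100 Hz:", np.interp(100.0, f, nu_err)/np.interp(100.0, f, nu_ctrl))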

Next steps:

  1. More interpretation of the PLL measurement results required.
  2. Measure the PLL error signal spectrum to higher frequencies using the AG4395. 
  3. ???

I've not disturbed the PLL setup in case anyone else wants to repeat these measurements, but I have restored the normal electrical connections to the PSL PZT and temperature control.

Some other activity:

  1. Alignment into the PMC was tweaked.
  2. NPRO laser pump current was increased from 1.9 A to 2.0 A.
  3. PMC servo gain was changed from +18 to +17 to prevent the servo from oscillating.
Attachment 1: consolidatedOscopeScreenCaps.pdf
consolidatedOscopeScreenCaps.pdf
Attachment 2: RINcomp.pdf
RINcomp.pdf
Attachment 3: PLL_OLTF.pdf
PLL_OLTF.pdf
Attachment 4: PLLnoise.pdf
PLLnoise.pdf
  13308   Mon Sep 11 15:58:02 2017 SteveUpdateGeneralNPRO for repair

This NPRO has a tripping power output.

 

" Hi Eric,

I checked with the Engineer as Vincent is travelling.

“The lasers have serial number below 2000 which we cannot repair them, we only can repair NPRO laser has serial number 2000 or later.”

Thanks,

Betty-Ann Watt

Customer Service Professional
Global Customer Service/Communication & Commercial Optical Products "

www.lumentum.com

 

 

Attachment 1: NPRO_tripping.jpg
NPRO_tripping.jpg
  3709   Wed Oct 13 21:08:40 2010 kiwamuUpdatePSLNPRO is still alive

 The NPRO at the PSL table can still generate 2 W of laser power! It is still alive.

  When I reduced the temperature to 25 deg C, the output power successfully increased to 2 W.

  As Steve wrote in his last entry (see here), the NPRO output was currently 1.6 W, while it is supposed to be 2 W.

We were suspicious about the laser crystal's temperature because the current setting looked a bit high.

In fact, the temperature setpoint was 45.9 deg instead of the previous setpoint of 25 deg.

 
