ID | Date | Author | Type | Category | Subject
10927 | Wed Jan 21 15:27:47 2015 | Jenne | Update | LSC | Fixed LSC model bug

This problem with the CARM loop last night was the fault of a bug that I had put into the LSC model last week.  When I gave the input matrix and normalization matrix double rows, I had put the goto tags for the CARM normalization matrix rows backwards.  So, even though I thought I was not normalizing CARM, in fact I was normalizing by POPDC, which was near zero since the PRM was misaligned.

Anyhow, found, fixed, currently locking, and all seems well.

4308 | Wed Feb 16 12:16:14 2011 | josephb | Update | CDS | Fixed Optical level SUM channel names

[Joe,Valera]

Valera pointed out that the OPLEV SUM channels were incorrect.  We had changed the optical lever sum channel to _OPLEV_SUM when it should have been OL_SUM.  This has been fixed in the activateDAQ.py script.

5178 | Wed Aug 10 19:18:26 2011 | Nicole | Summary | SUS | Fixed Reflective Photosensors; Recalibrated Photosensor 2

Thanks to Koji's help, the second photosensor, which was not working, has been fixed. I have re-calibrated the photosensor after fixing a problem with the circuit.  I have determined the new linear region to lie between 7.6 mm and 19.8 mm. The slope defining the linear region is -0.26 V/mm (no longer the same as the first photosensor's, which is -0.32 V/mm).

 

Here is the calibration plot.

ps2.jpg

3642 | Mon Oct 4 11:20:45 2010 | josephb | Update | CDS | Fixed Suspension binary output list and sus model

I've updated the CDS wiki page listing the wiring of the 40m suspensions with the correct binary output channels.  I had previously confused the wiring of the auxiliary crate XY220 (watchdogs) with the SOS coil dewhitening bypasses, so I had wound up with the wrong channels (the physical cables we plugged in were correct; just what I thought was going on channel X of each cable was wrong).  This has been corrected in the plan now.  The updated channel/cable list is at http://lhocds.ligo-wa.caltech.edu:8000/40m/Upgrade_09/CDS/Suspension_wiring_to_channels

4130 | Mon Jan 10 16:47:08 2011 | josephb, alex, rolf | Update | CDS | Fixed c1lsc dolphin reflected memory

While Alex and Rolf were visiting, I pointed out that the Dolphin card was not sending any data, not even a time stamp, from the c1lsc machine.

After some poking around, we realized the IOP (input/output processor) was coming up before the Dolphin driver had even finished loading. 

We uncommented the line

#/etc/dolphin_wait

in the /diskless/root/etc/rc.local file on the frame builder.  This waits until the dolphin module is fully loaded, so it can hand off a correct pointer to the memory location that the Dolphin card reads and writes to.  Previously, the IOP had been receiving a bad pointer since the Dolphin driver had not finished loading.

So now the c1lsc machine can communicate with c1sus via Dolphin, and from there with the rest of the network via the traditional GE Fanuc RFM.

4666 | Mon May 9 15:21:36 2011 | josephb | Update | PSL | Fixed channel names for PSL QPDs, fixed saturation, changed signs

[Valera, Joe]

Software Changes:

First we changed all the C1:IOO-QPD_*_* channels to C1:PSL-QPD_*_* channels in the /cvs/cds/caltech/target/c1iool0/c1ioo.db file, as well as the /opt/rtcds/caltech/c1/chans/daq/C0EDCU.ini file.  We then rebooted the frame builder via "telnet fb 8087" and then "shutdown".

This change breaks continuity for these channels prior to today.

The C1:PSL-QPD_POS_HOR and C1:PSL-QPD_POS_VERT channels were found to be backwards as well.  So we modified the /cvs/cds/caltech/target/c1iool0/c1ioo.db file to switch them.

Lastly, we changed the ASLO and AOFF values for the C1:PSL-QPD_POS_SUM and the C1:PSL-QPD_ANG_SUM so as to provide positive numbers.  This was done by flipping the sign for each entry.

ASLO went from 0.004883 to -0.004883, and AOFF went from -10 to 10, for both channels.
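
For reference, assuming the usual EPICS linear scaling VAL = RVAL * ASLO + AOFF, flipping the sign of both coefficients simply negates the output, which is why both were changed together.  A toy sketch with hypothetical raw counts:

    def qpd_sum(rval, aslo, aoff):
        # assumed EPICS ai-record scaling: VAL = RVAL * ASLO + AOFF
        return rval * aslo + aoff

    rval = -2000                           # hypothetical raw ADC counts
    old = qpd_sum(rval, 0.004883, -10)     # about -19.77, a negative reading
    new = qpd_sum(rval, -0.004883, 10)     # about +19.77, equal and opposite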

Hardware Changes:

The C1:PSL-QPD_ANG_SUM channel had been saturated at -10V.  Valera reduced the power on the QPD to drop it to about 4V by placing an ND attenuator in the ANG QPD path.

4677 | Tue May 10 10:06:23 2011 | josephb | Update | PSL | Fixed channel names for PSL QPDs, fixed saturation, changed signs

I added calculation entries to the /cvs/cds/caltech/target/c1iool0/c1ioo.db file which are named C1:IOO-QPD_*_*, as the channels were originally named.  These calculation channels have the identical data to the C1:PSL-QPD_*_* channels.  I then added the channels to the C0EDCU.ini file, so as to once again have continuity for the channels, in addition to having the newer, better named channels.

The c1iool0 machine ("telnet c1iool0", "reboot") and the framebuilder process ("telnet fb 8087", "shutdown") were both restarted after these changes.

These channels were brought up in dataviewer and compared.  The appropriate channels were identical.

Quote:

[Valera, Joe]

Software Changes:

First we changed all the C1:IOO-QPD_*_* channels to C1:PSL-QPD_*_* channels in the /cvs/cds/caltech/target/c1iool0/c1ioo.db file, as well as the /opt/rtcds/caltech/c1/chans/daq/C0EDCU.ini file.  We then rebooted the frame builder via "telnet fb 8087" and then "shutdown".

This change breaks continuity for these channels prior to today.

 

16179 | Thu Jun 3 17:35:31 2021 | Anchal | Summary | IMC | Fixed medm button

I fixed the PSL shutter button on the Shutters summary page C1IOO_Mech_Shutter.adl. Now the PSL switch changes the C1:PSL-PSL_ShutterRqst channel. Earlier it was C1:AUX-PSL_ShutterRqst, which doesn't do anything.

 

Attachment 1: C1IOO_Mech_Shutters.png
C1IOO_Mech_Shutters.png
4404 | Fri Mar 11 11:33:24 2011 | josephb | Update | CDS | Fixed mistake in Matrix of Filter banks naming convention

While fixing up some medm screens and getting spectra of the simulated plant, I realized that the naming convention for the Matrices of Filter banks was backwards when compared to that of the normal matrices (and the rest of the world).  The naming was incorrectly column, row.

This has several ramifications:

1) I had to change the suspensions screens for the TO_COIL  output filters.

2) I had to move the suspensions' TO_COIL output filters so that they go in the correct filter banks.

3) Burt restores to times previous to March 11th around noon will put your TO_COIL output filters in a funny state that will need to be fixed.

4) The simplant RESPONSE filters had to be moved to the correct filter banks.

5) If you have some model I'm not aware of that uses the FiltMuxMatrix piece, it is going to build correctly now, but you're going to have to move any filters you may have created with foton.

16391 | Mon Oct 11 17:31:25 2021 | Anchal | Summary | CDS | Fixed mounting of mx devices in fb. daqd_dc is running now.
 
 

However, lspci | grep 'Myri' shows the following output on both computers:

controls@fb1:/dev 0$ lspci | grep 'Myri'
02:00.0 Ethernet controller: MYRICOM Inc. Myri-10G Dual-Protocol NIC (rev 01)

This means that the computer detects the card in the PCIe slot.

 

I tried to add this to /etc/rc.local to run this script at every boot, but it did not work. So for now, I'll just do this step manually every time. Once the devices are loaded, we get:

controls@fb1:/etc 0$ ls /dev/*mx*
/dev/mx0  /dev/mx4  /dev/mxctl   /dev/mxp2  /dev/mxp6         /dev/ptmx
/dev/mx1  /dev/mx5  /dev/mxctlp  /dev/mxp3  /dev/mxp7
/dev/mx2  /dev/mx6  /dev/mxp0    /dev/mxp4  /dev/open-mx
/dev/mx3  /dev/mx7  /dev/mxp1    /dev/mxp5  /dev/open-mx-raw

Then, restarting all the daqd_ processes, I found that daqd_dc was running successfully. Here is the status:

controls@fb1:/etc 0$ sudo systemctl status daqd_* -l
● daqd_dc.service - Advanced LIGO RTS daqd data concentrator
   Loaded: loaded (/etc/systemd/system/daqd_dc.service; enabled)
   Active: active (running) since Mon 2021-10-11 17:48:00 PDT; 23min ago
 Main PID: 2308 (daqd_dc_mx)
   CGroup: /daqd.slice/daqd_dc.service
           ├─2308 /usr/bin/daqd_dc_mx -c /opt/rtcds/caltech/c1/target/daqd/daqdrc.dc
           └─2370 caRepeater

Oct 11 17:48:07 fb1 daqd_dc_mx[2308]: mx receiver 006 thread priority error Operation not permitted[Mon Oct 11 17:48:06 2021]
Oct 11 17:48:07 fb1 daqd_dc_mx[2308]: mx receiver 005 thread put on CPU 0
Oct 11 17:48:07 fb1 daqd_dc_mx[2308]: [Mon Oct 11 17:48:06 2021] [Mon Oct 11 17:48:06 2021] mx receiver 006 thread put on CPU 0
Oct 11 17:48:07 fb1 daqd_dc_mx[2308]: mx receiver 007 thread put on CPU 0
Oct 11 17:48:07 fb1 daqd_dc_mx[2308]: [Mon Oct 11 17:48:06 2021] mx receiver 003 thread - label dqmx003 pid=2362
Oct 11 17:48:07 fb1 daqd_dc_mx[2308]: [Mon Oct 11 17:48:06 2021] mx receiver 003 thread priority error Operation not permitted
Oct 11 17:48:07 fb1 daqd_dc_mx[2308]: [Mon Oct 11 17:48:06 2021] mx receiver 003 thread put on CPU 0
Oct 11 17:48:07 fb1 daqd_dc_mx[2308]: warning:regcache incompatible with malloc
Oct 11 17:48:07 fb1 daqd_dc_mx[2308]: [Mon Oct 11 17:48:06 2021] EDCU has 410 channels configured; first=0
Oct 11 17:49:06 fb1 daqd_dc_mx[2308]: [Mon Oct 11 17:49:06 2021] ->4: clear crc

● daqd_fw.service - Advanced LIGO RTS daqd frame writer
   Loaded: loaded (/etc/systemd/system/daqd_fw.service; enabled)
   Active: active (running) since Mon 2021-10-11 17:48:01 PDT; 23min ago
 Main PID: 2318 (daqd_fw)
   CGroup: /daqd.slice/daqd_fw.service
           └─2318 /usr/bin/daqd_fw -c /opt/rtcds/caltech/c1/target/daqd/daqdrc.fw

Oct 11 17:48:09 fb1 daqd_fw[2318]: [Mon Oct 11 17:48:09 2021] [Mon Oct 11 17:48:09 2021] Producer thread - label dqproddbg pid=2440
Oct 11 17:48:09 fb1 daqd_fw[2318]: Producer crc thread priority error Operation not permitted
Oct 11 17:48:09 fb1 daqd_fw[2318]: [Mon Oct 11 17:48:09 2021] [Mon Oct 11 17:48:09 2021] Producer crc thread put on CPU 0
Oct 11 17:48:09 fb1 daqd_fw[2318]: Producer thread priority error Operation not permitted
Oct 11 17:48:09 fb1 daqd_fw[2318]: [Mon Oct 11 17:48:09 2021] Producer thread put on CPU 0
Oct 11 17:48:09 fb1 daqd_fw[2318]: [Mon Oct 11 17:48:09 2021] Producer thread - label dqprod pid=2434
Oct 11 17:48:09 fb1 daqd_fw[2318]: [Mon Oct 11 17:48:09 2021] Producer thread priority error Operation not permitted
Oct 11 17:48:09 fb1 daqd_fw[2318]: [Mon Oct 11 17:48:09 2021] Producer thread put on CPU 0
Oct 11 17:48:10 fb1 daqd_fw[2318]: [Mon Oct 11 17:48:10 2021] Minute trender made GPS time correction; gps=1318034906; gps%60=26
Oct 11 17:49:09 fb1 daqd_fw[2318]: [Mon Oct 11 17:49:09 2021] ->3: clear crc

● daqd_rcv.service - Advanced LIGO RTS daqd testpoint receiver
   Loaded: loaded (/etc/systemd/system/daqd_rcv.service; enabled)
   Active: active (running) since Mon 2021-10-11 17:48:00 PDT; 23min ago
 Main PID: 2311 (daqd_rcv)
   CGroup: /daqd.slice/daqd_rcv.service
           └─2311 /usr/bin/daqd_rcv -c /opt/rtcds/caltech/c1/target/daqd/daqdrc.rcv

Oct 11 17:50:21 fb1 daqd_rcv[2311]: Creating C1:DAQ-NDS0_C1X07_CRC_SUM
Oct 11 17:50:21 fb1 daqd_rcv[2311]: Creating C1:DAQ-NDS0_C1BHD_STATUS
Oct 11 17:50:21 fb1 daqd_rcv[2311]: Creating C1:DAQ-NDS0_C1BHD_CRC_CPS
Oct 11 17:50:21 fb1 daqd_rcv[2311]: Creating C1:DAQ-NDS0_C1BHD_CRC_SUM
Oct 11 17:50:21 fb1 daqd_rcv[2311]: Creating C1:DAQ-NDS0_C1SU2_STATUS
Oct 11 17:50:21 fb1 daqd_rcv[2311]: Creating C1:DAQ-NDS0_C1SU2_CRC_CPS
Oct 11 17:50:21 fb1 daqd_rcv[2311]: Creating C1:DAQ-NDS0_C1SU2_CRC_SUM
Oct 11 17:50:21 fb1 daqd_rcv[2311]: Creating C1:DAQ-NDS0_C1OM[Mon Oct 11 17:50:21 2021] Epics server started
Oct 11 17:50:24 fb1 daqd_rcv[2311]: [Mon Oct 11 17:50:24 2021] Minute trender made GPS time correction; gps=1318035040; gps%120=40
Oct 11 17:51:21 fb1 daqd_rcv[2311]: [Mon Oct 11 17:51:21 2021] ->3: clear crc

Now, even before starting the FE models, I see the DC status as 0x2bad on the CDS screens of the IOP and user models. The mx_stream service remains in a failed state on the FE machines, and stays that way even after restarting the service.

controls@c1sus2:~ 0$ sudo systemctl status mx_stream -l
● mx_stream.service - Advanced LIGO RTS front end mx stream
   Loaded: loaded (/etc/systemd/system/mx_stream.service; enabled)
   Active: failed (Result: exit-code) since Mon 2021-10-11 17:50:26 PDT; 15min ago
  Process: 382 ExecStart=/etc/mx_stream_exec (code=exited, status=1/FAILURE)
 Main PID: 382 (code=exited, status=1/FAILURE)

Oct 11 17:50:25 c1sus2 systemd[1]: Starting Advanced LIGO RTS front end mx stream...
Oct 11 17:50:25 c1sus2 systemd[1]: Started Advanced LIGO RTS front end mx stream.
Oct 11 17:50:25 c1sus2 mx_stream_exec[382]: Failed to open endpoint Not initialized
Oct 11 17:50:26 c1sus2 systemd[1]: mx_stream.service: main process exited, code=exited, status=1/FAILURE
Oct 11 17:50:26 c1sus2 systemd[1]: Unit mx_stream.service entered failed state.

But if I restart the mx_stream service before starting the rtcds models, the mx_stream service starts successfully:

controls@c1sus2:~ 0$ sudo systemctl restart mx_stream
controls@c1sus2:~ 0$ sudo systemctl status mx_stream -l
● mx_stream.service - Advanced LIGO RTS front end mx stream
   Loaded: loaded (/etc/systemd/system/mx_stream.service; enabled)
   Active: active (running) since Mon 2021-10-11 18:14:13 PDT; 25s ago
 Main PID: 1337 (mx_stream)
   CGroup: /system.slice/mx_stream.service
           └─1337 /usr/bin/mx_stream -e 0 -r 0 -w 0 -W 0 -s c1x07 c1su2 -d fb1:0

Oct 11 18:14:13 c1sus2 systemd[1]: Starting Advanced LIGO RTS front end mx stream...
Oct 11 18:14:13 c1sus2 systemd[1]: Started Advanced LIGO RTS front end mx stream.
Oct 11 18:14:13 c1sus2 mx_stream_exec[1337]: send len = 263596
Oct 11 18:14:13 c1sus2 mx_stream_exec[1337]: Connection Made

However, the DC status on the CDS screens still shows 0x2bad. As soon as I start the rtcds model c1x07 (the IOP model for c1sus2), the mx_stream service fails:

controls@c1sus2:~ 0$ sudo systemctl status mx_stream -l
● mx_stream.service - Advanced LIGO RTS front end mx stream
   Loaded: loaded (/etc/systemd/system/mx_stream.service; enabled)
   Active: failed (Result: exit-code) since Mon 2021-10-11 18:18:03 PDT; 27s ago
  Process: 1337 ExecStart=/etc/mx_stream_exec (code=exited, status=1/FAILURE)
 Main PID: 1337 (code=exited, status=1/FAILURE)

Oct 11 18:14:13 c1sus2 systemd[1]: Starting Advanced LIGO RTS front end mx stream...
Oct 11 18:14:13 c1sus2 systemd[1]: Started Advanced LIGO RTS front end mx stream.
Oct 11 18:14:13 c1sus2 mx_stream_exec[1337]: send len = 263596
Oct 11 18:14:13 c1sus2 mx_stream_exec[1337]: Connection Made
Oct 11 18:18:03 c1sus2 mx_stream_exec[1337]: isendxxx failed with status Remote Endpoint Unreachable
Oct 11 18:18:03 c1sus2 mx_stream_exec[1337]: disconnected from the sender
Oct 11 18:18:03 c1sus2 mx_stream_exec[1337]: c1x07_daq mmapped address is 0x7fe3620c3000
Oct 11 18:18:03 c1sus2 mx_stream_exec[1337]: c1su2_daq mmapped address is 0x7fe35e0c3000
Oct 11 18:18:03 c1sus2 systemd[1]: mx_stream.service: main process exited, code=exited, status=1/FAILURE
Oct 11 18:18:03 c1sus2 systemd[1]: Unit mx_stream.service entered failed state.

This shows that starting the rtcds model causes mx_stream to fail, possibly due to an inability to find the endpoint on fb1. I've again reached the edge of my knowledge here. Maybe the fiber optic connection between fb and the network switch that connects to the FEs is bad, or the connection between the switch and the FEs is bad.

But we are just one step away from making this work.

 

 

3008 | Fri May 28 13:17:05 2010 | josephb | Update | CDS | Fixed problem with channel access on c1iscex

Talked with Alex and finally tracked down why the codes were not working on the new c1iscex.  The .bashrc and .cshrc files in /home/controls/ on c1iscex had the following lines:

setenv EPICS_CA_ADDR_LIST 131.215.113.255
setenv EPICS_CA_AUTO_ADDR_LIST NO

These were interfering with channel access and preventing reads and writes from working properly.  We simply commented them out. After logging out and back in, things like ezcaread and ezcawrite started working, and we were able to get the models passing data back and forth.

Next up: testing RFM communications between megatron and c1iscex.  To do this, I'd like to move megatron down to 1Y3 and set up a firewall for it and c1iscex, so I can test the frame builder and testpoints at the same time on both machines.

16282 | Wed Aug 18 20:30:12 2021 | Anchal | Update | ASS | Fixed runASS scripts

Late elog: Original time of work Tue Aug 17 20:30 2021


I locked the arms remotely yesterday and tried running the runASS.py scripts (generally run by clicking the Run ASS buttons on the IFO OVERVIEW screen of the ASC screen). We have known for a few weeks that this script stopped working for some reason. It would start the dithering and optimize the alignment, but would then fail to freeze the state and save the alignment.

I found that caget('C1:LSC-TRX_OUT') and caget('C1:LSC-TRY_OUT') were not working on any of the workstations. This is weird, since caget was able to acquire these fast channel values earlier, and we had seen this script work for about a month without any issue.

Anyways, to fix this, I just changed the channel name to 'C1:LSC-TRY_OUT16' in the step where the script checks at the end whether the arm has indeed been aligned. It was only this step that was failing. Now the script is working fine, and I tested it on both arms. On the Y arm, I misaligned the arm by adding a bias in yaw, changing C1:SUS-ITMY_YAW_OFFSET from -8 to 22, and the script was able to align the arm back.
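
In essence, the fixed check looks something like this (a sketch using pyepics, with a hypothetical threshold; the actual logic in runASS.py differs in detail):

    from epics import caget

    # the slow 16 Hz EPICS copy works where the fast C1:LSC-TRY_OUT did not
    tra = caget('C1:LSC-TRY_OUT16')
    if tra is not None and tra > 0.9:      # hypothetical threshold
        print('Y arm aligned, transmission = %.2f' % tra)
    else:
        print('alignment check failed')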

3907 | Fri Nov 12 11:36:06 2010 | Aidan | Configuration | Computers | Fixed the PERL PATH errors from the scripts directory change

I've been trying to get a PID loop running for the green locking, and I've discovered that some of the directories in the path for the Perl modules are now obsolete. Specifically, since the scripts directory has been moved from /cvs/cds/caltech/scripts to /cvs/cds/rtcds/caltech/c1/scripts, the following locations in the Perl @INC path list need to be changed:

  • /cvs/cds/caltech/scripts/general
  • /cvs/cds/caltech/scripts/general/perlmodules

I've added the above directories to the PERL5LIB path in /cvs/cds/caltech/cshrc.40m.

 

 

setenv PERL5LIB /cvs/cds/caltech/scripts/general:/cvs/cds/caltech/scripts/general/perlmodules:/cvs/cds/caltech/libs/solaris9/usr_local_lib/perl5/5.8.0/:/cvs/cds/rtcds/caltech/c1/scripts/general:/cvs/cds/rtcds/caltech/c1/scripts/general/perlmodules

This seems to fix the problem... at least, you no longer get an error if you try nodus:~>perl -e 'use EpicsTools'

 

8452 | Sun Apr 14 15:03:17 2013 | Manasa | Update | Locking | Fixing - progress

Quote:

TRY signals are all gone!  Both the PD and the camera show no signal.  I went down there to turn off the lights, and look to see what was up, and I don't see any obvious things blocking the beam path on the table.  However, Steve has experimentally bungeed the lids down, so I didn't open the box to really look to see what the story is.

Absent TRY, I redid the IFO alignment.  Yarm locked, so I assumed it was close enough.  I redid Xarm alignment pretty significantly.  Transmission was ~0.5, which I got up to ~0.85 (which isn't too bad, since the PMC transmission is 0.74 instead of the usual 0.83).  I then aligned MICH, and PRM.  After fixing up the BS alignment, the POP beam wasn't hitting the POP PD in the center any more.  I centered the beam on the PD, although as Gabriele pointed out to me a week or two ago, we really need to put a lens in front of POP, since the beam is so big.  We're never getting the full beam when the cavity flashes, which is not so good.

Den is still working on locking, so I'll let him write the main locking report for the night.

We see that the PRC carrier lock seems to be more stable when we lock MICH with +1 for ITMY and -1 for ITMX, and PRCL with -1 for both ITMs.  This indicates that we need to revisit the systematic problem with using the PRM oplev to balance the coils, since that oplev has a relatively wide opening angle.  I am working on how to do this.

I'm fixing the TRY path.

I misaligned PRM and restored ETMY, but did not see the Y arm flashing. I am going ahead and moving the optics to get the Y arm flashing again.

The slider values on the medm screen before touching any of them (for the record):

       tt1        tt2      itmy        etmy
p    -1.3886    0.8443     0.9320    -3.2583
y     0.3249    1.1407    -0.2849    -0.2751
4455 | Tue Mar 29 00:00:55 2011 | Koji | Update | IOO | Fixing MC/Freq Divider Box

This is the log of the work on Wednesday 23rd.

1. Power Supply of the freq divider box

Kiwamu claimed that the comparator output of the freq div box only had a small output, ~100mV.

The box worked on the electronics bench, so we tracked down the power supply and found that the fuse of the +15V line had blown out. It took some time to notice this fact, as the blown-fuse LED was not on and the power supply terminal had +15V without the load. But this was because 1) the fuse is rated for 24V, and 2) there is a large resistor on the fuse holder for lighting the LED when the fuse is blown.

I found another 24V fuse and put it there. Kiwamu is working on getting the correct fuses.

2. MC locking problem

After the hassle with the freq divider, the MC didn't lock. I tracked down the problem at the rack and found there was no LO for the MC. This was fixed by pushing in the power line cable of the AM Stabilizer for the MC LO, which was a bit loose.

8273 | Mon Mar 11 22:28:30 2013 | Max Horton | Update | Summary Pages | Fixing Plot Limits

Quick Note on Multiprocessing:  The multiprocessing was plugged into the codebase on March 4. Since then, the various pages that appear when you click on certain tabs (such as the page found here: https://nodus.ligo.caltech.edu:30889/40m-summary/archive_daily/20130304/ifo/dc_mon/ from clicking the 'IFO' tab) don't display graphs.  But, the graphs are being generated (if you click here or here, you will find the two graphs that are supposed to be displayed).  So, for some reason, the multiprocessing is preventing these graphs from appearing, even though they are being generated.  I rolled back the multiprocessing changes temporarily, so that the newly generated pages look correct until I find the cause of this.

Fixing Plot Limits:  The plots generated by the summary_pages.py script have a few problems, one of which is: the graphs don't choose their boundaries in a very useful way.  For example, in these pressure plots, the dropout 0 values 'ruin' the graph in the sense that they cause the plot to be scaled from 0 to 760, instead of a more useful range like 740 to 760 (which would allow us to see details better).

The call to the plotting functions begins in process_data() of summary_pages.py, around line 972, with a call to plot_data().  This function takes in a data list (which represents the x-y data values, as well as a few other fields such as axes labels).  The easiest way to fix the plots would be to "cleanse" the data list before calling plot_data().  In doing so, we would remove dropout values and obtain a more meaningful plot.

To observe the data list that is passed to plot_data(), I added the following code:

      # outfile is a string that represents the name of the .png file that will be generated by the code.
      print_verbose("Saving data into a file.")
      print_verbose(outfile)
      outfile_mch = open(outfile + '.dat', 'w')

      # at this point in process_data(), data is an array that should contain the desired data values.
      if (data == []):
          print_verbose("Empty data!")
      print >> outfile_mch, data
      outfile_mch.close()

When I ran this in the code midday, it gave a human-readable array of values that appeared to match the plots of pressure (i.e. values between 740 and 760, with a few dropout 0 values).  However, when I let the code run overnight, instead of observing a nice list in 'outfile.dat', I observed:

[('Pressure', array([  1.04667840e+09,   1.04667846e+09,   1.04667852e+09, ...,
         1.04674284e+09,   1.04674290e+09,   1.04674296e+09]), masked_array(data = [ 744.11076965  744.14254761  744.14889221 ...,  742.01931356  742.05930208
  742.03433228],
             mask = False,
       fill_value = 1e+20)
)]

I.e. there was an ellipsis (...) instead of actual data, for some reason.  Python does this when printing lists in a few specific situations, the most common of which is that the list is recursively defined.  For example:

INPUT:
a = [5]
a.append(a)
print a

OUTPUT:
[5, [...]]

It doesn't seem possible that the definitions for the data array have become recursive (especially since the test worked midday).  More likely this is numpy's array summarization: by default, numpy prints an ellipsis instead of the full contents of a large array.

Instead, I will use cPickle to save the data.  The disadvantage is that the output is not human readable.  But cPickle is very simple to use.  I added the lines:

      import cPickle
      cPickle.dump(data, open(outfile + 'pickle.dat', 'w'))

This should save the 'data' array into a file, from which it can be later retrieved by cPickle.load().
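
For the record, the retrieval step would then just be (a sketch, with outfile as above):

      import cPickle
      data = cPickle.load(open(outfile + 'pickle.dat'))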

Next:
There are other modules I can use that will produce human-readable output, but I'll stick with cPickle for now since it's well supported.  Once I verify this works, I will be able to do two things:
1) Cut out the dropout data values to make better plots.
2) When the process_data() function is run in its current form, it reprocesses all the data every time.  Instead, I will be able to draw the existing data out of the cPickle file I create.  So, I can load the existing data, and only add new values.  This will help the program run faster.

8286 | Wed Mar 13 15:30:37 2013 | Max Horton | Update | Summary Pages | Fixing Plot Limits

Jamie has informed me of numpy's numpy.savetxt() method, which is exactly what I want for this situation (human-readable text storage of an array).  So, I will now be using:

      # outfile is the name of the .png graph.  data is the array with our desired data.
      numpy.savetxt(outfile + '.dat', data)

to save the data.  I can later retrieve it with numpy.loadtxt().
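
The corresponding load step would then be (a sketch, with the same outfile):

      data = numpy.loadtxt(outfile + '.dat')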

605 | Mon Jun 30 15:56:22 2008 | Jenne | Update | Electronics | Fixing the LO demod signal
To make the alarm handler happy, at Rana and John's suggestion I replaced R14 of the MC's Demod board, changing it from 4.99 Ohms to 4.99 kOhms. This increased the gain of the LO portion of the demod board by a factor of 10. Sharon and I have remeasured the table of LO input to the demod board, and the output on the C1:IOO-MC_DEMOD_LO channel:

Input Amplitude to LO input on demod board [dBm]: | Value of channel C1:IOO-MC_DEMOD_LO
------------------------------------------------- | -----------------------------------
-10 | -0.00449867
-8 | 0.000384331
-6 | 0.0101503
-4 | 0.0296823
-2 | 0.0882783
0 | 0.2543
2 | 0.542397
4 | 0.962335
6 | 1.65572
8 | 2.34911
10 | 2.96925
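
To read off intermediate values, the table can be interpolated (a sketch, assuming numpy; the numbers are copied from the table above):

    import numpy as np

    lo_dbm = np.arange(-10, 11, 2)      # LO drive levels from the table
    readback = np.array([-0.00449867, 0.000384331, 0.0101503, 0.0296823,
                         0.0882783, 0.2543, 0.542397, 0.962335, 1.65572,
                         2.34911, 2.96925])
    # e.g. the LO level that gives a 1 V readback (readback is monotonic)
    print(np.interp(1.0, readback, lo_dbm))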
3617 | Tue Sep 28 21:11:52 2010 | koji, tara | Update | Electronics | Fixing the new TTFSS

We found a small PCB defect: excess copper shorting a circuit on the daughter board.

It was removed, and the signal on the mixer monitor path is now working properly.

 

We were checking the new TTFSS up to test 10a of the instructions, E1000405-V1. There was no signal at the MIXER mon channel.

It turned out that the U3 OpAmp on the daughter board, D040424, was not working because the circuit path for leg 15 was shorted by the board's defect. We can see from fig 1 that the contact for the OpAmp's leg (2nd from left) touches ground.

We used a knife to scrape it out (see fig 2), and now this part is working properly.

 

Attachment 1: before.jpg
before.jpg
Attachment 2: after.jpg
after.jpg
3811 | Thu Oct 28 16:38:54 2010 | josephb | Update | CDS | Flaky fb, reverted inittab changes on fb

Problem:

Yuta reported that many of the signals being displayed by dataviewer were "fuzzier" than normal, and diaggui was not working.

Running "diag -i" reported:

Diagnostics configuration:
awg 21 0 192.168.113.85 822095893 1 192.168.113.85
awg 36 0 192.168.113.85 822095908 1 192.168.113.85
awg 37 0 192.168.113.85 822095909 1 192.168.113.85
tp 21 0 192.168.113.85 822091797 1 192.168.113.85
tp 36 0 192.168.113.85 822091812 1 192.168.113.85
tp 37 0 192.168.113.85 822091813 1 192.168.113.85

This seems to be missing an nds type line between the 3 awgs and the 3 tp lines.

The daqd code (the framebuilder) is being especially flaky today, and I'm starting to see new errors.

[Thu Oct 28 16:13:46 2010] Couldn't open full trend frame file
`/frames/trend/second/9723/C-T-972342780-60.gwf' for writing; errno 13
epicsThreadOnceOsd epicsMutexLock failed.
epicsThreadOnceOsd epicsMutexLock failed.
epicsThreadOnceOsd epicsMutexLock failed.
epicsThreadOnceOsd epicsMutexLock failed.
epicsThreadOnceOsd epicsMutexLock failed.
Segmentation fault (core dumped)

or

[Thu Oct 28 16:17:06 2010] Couldn't open full frame file
`/frames/full/9723/.C-R-972343024-16.gwf' for writing; errno 13
CA client library tcp receive thread terminating due to a C++ exception
FATAL: exception not rethrown
CA client library tcp receive thread terminating due to a C++ exception
CA client library tcp receive thread terminating due to a C++ exception
FATAL: exception not rethrown
cac: tcp send thread received an unexpected exception - disconnecting
Aborted (core dumped)

What was done today that might have affected it:

A new c1ioo chassis from Downs was connected to c1ioo.  I also connected c1ioo to the DAQ network (192.168.114.xxx) which talks to the frame builder.

I started downloading the necessary files to be able to follow Keith's instructions for a standard control / teststand setup in /opt/apps , /opt/rtapps, etc.  However, it has not actually been installed yet.

Yuta added additional OL channels to the DAQ config to be recorded.

Attempted Fixes:

I reverted the inittab changes I made in this elog.  Didn't help.

I disconnected c1ioo from the DAQ network.  Didn't help.

Rebooted the frame builder machine.  Didn't help.

I've sent an e-mail to Alex describing the problem to see if he has any idea where we went wrong.

Yuta may try restoring the old DAQ channel choices and see if that makes a difference.

Current Status:

daqd framebuilder code still won't stay up.  So no channels at the moment.

3814 | Thu Oct 28 21:20:11 2010 | yuta | Update | CDS | Flaky fb, tried DAQ re-install, but no help

Summary:
  Unfortunately, fb is flakier than normal. We can't use dataviewer or diaggui now.
  I thought it might be because editing the .ini files (the list of DAQ channels) in /cvs/cds/rtcds/caltech/c1/chans/daq/ without using the GUI was doing something wrong.
  So, I re-installed the DAQ, but it didn't help.

What I did:
1. ssh c1sus, went to /opt/rtcds/caltech/c1/core/advLigoRTS/ and ran
  make uninstall-daq-c1SYS
  make install-daq-c1SYS

It didn't help.
Worse, the MC suspension damping went wrong. So:

2. Rebooted c1sus machine.
 I restored the MC suspension damping by doing this.
 (A similar thing happened Tuesday when we were trying to lock the MC.)

Conclusion:
  Editing the .ini DAQ channel list file wasn't the problem (or at least I have failed to find anything wrong with it so far).

Quote:

Attempted Fixes:

Yuta may try restoring the old DAQ channel choices and see if that makes a difference.

 

16003 | Wed Apr 7 02:50:49 2021 | Koji | Update | SUS | Flange Inspections

Basically, I went around all the chambers and all the DB25 flanges to check the in-vac cable configurations. I also took more time to check the coil Rs and Ls.

Exceptions are the TTs. To avoid unexpected misalignment of the TTs, I didn't try to disconnect the TT cables from the flanges.

Upon disconnection of the SOS cables, the following steps were taken to avoid a large impact on the SOSs:

  • The alignment biases were saved or recorded.
  • Gradually moved the biases to 0
  • Turned off the watchdogs (thus damping)

After the measurement, the IMC was locked and aligned. The two arms were locked and aligned with ASS, and the PRM alignment (when "misalign" was disengaged) was checked with the REFL CCD.
So I believe the SOSs are functioning as before, but if you find anything, please let me know.
 

16404 | Thu Oct 14 18:30:23 2021 | Koji | Summary | VAC | Flange/Cable Stand Configuration

Flange Configuration for BHD

We will need a total of 5 new cable stands, so Qty. 6 is the number to be ordered.


Looking at the Accuglass drawing, the in-vacuum cables are standard D-sub 25-pin cables, with only two standard fixing threads.

https://www.accuglassproducts.com/sites/default/files/PDF/Partpdf/110070_3.pdf

For SOSs, the standard 40m style cable bracket works fine. https://dcc.ligo.org/D010194-x0

However, for the OMCs, we need to keep the thread holes available so that we can mate DB25 male cables to these cables.
One possibility is to improvise with this cable bracket and suspend the cables using clean Cu wires or something. I think we can deal with this issue in situ.


Ha! The male side has the 4-40 standoff (jack) screws. So we can hold the male side on the bracket using the standoff (jack) screws and plug in the female cables. OK! Issue solved!

https://www.accuglassproducts.com/sites/default/files/PDF/Partpdf/110029_3.pdf

Attachment 1: 40m_flange_layout_20211014.pdf
40m_flange_layout_20211014.pdf
6196 | Fri Jan 13 16:16:05 2012 | Leo Singer | Update | Stewart platform | Flexure type for leg joints

I had been thinking of using this flexure for the bearings for the leg joints, but I finally realized that it was not the right type of bearing.  The joints for the Stewart platform need to be free to both yaw and pitch, but this bearing actually constrains yaw (while leaving out-of-plane translation free).

257 | Wed Jan 23 20:52:40 2008 | rana | Summary | Environment | Flooding from construction area
We noticed tonight around 7 PM that there was a lot of brown water in the control room and also in the interferometer area mostly concentrated around the north wall between the LSC rack and the AP table.

The leak was mainly in the NW corner of the interferometer area.

The construction crew had set up sandbags, plastic sheet, and gravel to block the drains outside of the 40m along the north wall. The rain had produced ponds and lakes outside in the construction area. Once the level got high enough this leaked through holes in the 40m building walls (these are crappy walls).

We called the on-call facilities team (1 guy). He showed up, cut through the construction fence lock, and then unblocked the drains. This guy was pretty good (although inscrutable); he adjusted the sandbags to control the flow of the lake into the drains. He went along the wall and unblocked all 3 drains; there were mini-lakes forming there which he felt would eventually start leaks all along our north wall.

In the morning we'll need volunteers to move equipment around under Steve's direction while the floor gets mopped up. There's dirt and mud all over, underneath the chambers and racks.

Luckily Alberto spotted this early and he, Jon, Andrey and Steve kept the water from spreading and then scooped it all up with a wet-vac that the facilities guy brought over.
Extra Napoleon to them for late evening mud clearing work.

Many pictures were taken: Update and pictures will appear later.
Attachment 1: Shop-Vac_Action.MOV
Attachment 2: Flooding.pdf
Flooding.pdf
9246 | Wed Oct 16 10:32:40 2013 | Steve | Update | SUS | Flowbench effect on oplev error signals

ETMX _OPLEV_...ERROR signals are affected by turning ON the flow bench motor at the south end.

The broadband noise [~5-60 Hz] is much higher when the motor is running.

The flow bench was turned off.

 

 

Attachment 1: FlowBenchOnOff.png
FlowBenchOnOff.png
5951 | Fri Nov 18 19:07:07 2011 | Jenne | Update | RF System | Foam house on EOM

[Jenne, Zach, Frank]

Frank helped Zach and me cable up a PT-100 RTD, and make sure it worked with the Newport Model 6000 Laser Diode Controller.  We're using this rather than the Newport 3040 Temperature Controller because Dmass says the output of that one isn't working.  So we're using just the temperature control part of the Laser Diode Controller.

The back of the controller has a 15-pin D-sub, with the following useful connections.  All others were left Not Connected. 

1 & 2 (same) - Pin 2 is one side of TEC output (we have it connected to one side of a resistive heater)

3 & 4 (same) - Pin 4 is the other side of the TEC output (connected to the other side of the resistive heater)

7 - connected to one side of PT-100 temp sensor

8 - connected to other side of PT-100 temp sensor

I used aluminum tape to attach the sensor and heater to the 40m's EOM, and we plugged in the controller.  It seems to be kind of working.  Zach figured out the GPIB output stuff, so we can talk to it remotely.

 

5954 | Sat Nov 19 00:09:02 2011 | Zach | Update | RF System | Foam house on EOM

Quote:

I used aluminum tape to attach the sensor and heater to the 40m's EOM, and we plugged in the controller.  It seems to be kind of working.  Zach figured out the GPIB output stuff, so we can talk to it remotely. 

I stole the Prologix wireless GPIB interface from the SR785 that's down the Y-Arm temporarily. The address is 192.168.113.108. (Incidentally, I think some network settings have been changed since the GPIB stuff was initially configured. All the Prologix boxes have 131.215.X.X written on them, while they are only accessible via the 192.168.X.X addresses. Also, the 40MARS wireless router is only accessible from Martian computers at 192.168.113.226---not 131.215.113.226).

In any case, the Newport 6000 is controllable via telnet. I went through the remote RTD calibration process in the manual, by measuring the exact RTD resistance with an ohmmeter and entering it in. Despite this, when the TEC output is turned on, the heating way overshoots the entered set temperature. This is probably because the controller parameters (gain, etc.) are not set right. We have left it off for the moment.

Here are a couple command examples:

1. Turning on the TEC output 

nodus:~>telnet 192.168.113.108 1234
Trying 192.168.113.108...
Connected to 192.168.113.108.
Escape character is '^]'.
TEC:OUT on
TEC:OUT
TEC:OUT?
++read eoi
1
 
2. Measuring the current temperature
 
TEC:T?
++read eoi
32.9837
 
3. Reading and then changing the set temperature
 
TEC:SET:T?
++read eoi
34.0000
TEC:T 35.0
TEC:SET:T?
++read eoi
35.0000
 
4. Figuring out that the temperature is unstable and then turning off the TEC (this is important)
 
TEC:T?
++read eoi
36.2501
TEC:OUT off
TEC:OUT?
++read eoi
0
 
(The "++read eoi" lines are the commands you give the Prologix to read the controlled device output.)
 
As I understand, Frank has some code that will pull data in realtime and put it into EPICS. This would be nice.
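
In the meantime, polling it from Python would look something like this (a hypothetical sketch, not Frank's code; the address and the Prologix "++read eoi" handshake are as in the transcript above):

    import telnetlib
    import time

    tn = telnetlib.Telnet('192.168.113.108', 1234)  # Prologix GPIB-LAN box
    for _ in range(6):
        tn.write(b'TEC:T?\n')        # query the actual temperature
        tn.write(b'++read eoi\n')    # tell the Prologix to fetch the reply
        reply = tn.read_until(b'\n', timeout=2).strip()
        print(time.ctime(), float(reply))
        time.sleep(10)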
6104 | Mon Dec 12 11:16:02 2011 | Jenne | Update | RF System | Foam house on EOM

Foam house installed on EOM a few min ago.  We'll leave it until ~tomorrow, then try out the heater loop.

5242 | Mon Aug 15 17:38:07 2011 | jamie | Update | General | Foil aperture placed in front of ETMY

We have placed a foil aperture in front of ETMY, to aid in aligning the Y-arm, and then the PRC.  It obviously needs to be removed before we close up.

15022 | Wed Nov 13 19:34:45 2019 | gautam | Update | PEM | Follow-up on seismometer discussion

Attachment #1 shows the spectra of our three available seismometers over a period of ~10ksec.

  • I don't understand why the z-axis motion reported by the T240 is ~10x lower at 10 mHz compared to the X and Y motions. Is this some electronics noise artefact?
  • The difference in the low frequency (<100mHz) shapes of the T240 compared to the Guralps is presumably due to the difference in the internal preamps / readout boxes (?). I haven't checked yet.
There is almost certainly some issue with the EX Guralp. IIRC this is the one that had cabling issues in the past, and is also the one that was being futzed around with for Tctrl, but it could also be that its masses need re-centering, since it is EX_X that is showing the anomalous behaviour.
  • The coherence structure between the other pairs of sensors is consistent.

Attachment #2 shows the result of applying frequency domain Wiener filter subtraction to the POP QPD (target) with the vertex seismometer signals as witness channels (a schematic of the calculation is sketched after the list below).

  • The dataset was PRMI locked with the carrier resonant, ETMs misaligned.
  • The dashed lines in these plots correspond to the RMS for the solid line with the same color.
  • For both PIT and YAW, I am using BS_X and BS_Y seismometer channels for the MISO filter inputs.
  • In particular for PIT, I notice that I am unable to get the same level of performance as in the past, particularly around ~2-3 Hz.
  • The BS seismometer health indicators don't signal any obvious problems with the seismometer itself - so something has changed w.r.t. how the ground motion propagates to the PR2/PR3? Or has the seismometer sensing truly degraded? I don't think the dataset I collected was particularly bad compared to the past, and I confirmed similar performance with a separate PRMI lock from a different time period.
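
For the record, the frequency-domain MISO Wiener estimate amounts to the following (a schematic sketch with placeholder data in place of the real target/witness channels; the actual analysis code may differ):

    import numpy as np
    from scipy import signal

    fs, nseg = 256, 4096                    # assumed rate and FFT length
    t = np.random.randn(fs * 1000)          # target, e.g. POP QPD PIT
    w = np.random.randn(2, fs * 1000)       # witnesses, e.g. BS_X and BS_Y

    f, Ptt = signal.welch(t, fs=fs, nperseg=nseg)
    Cww = np.empty((len(f), 2, 2), complex)     # witness CSD matrix
    Cwt = np.empty((len(f), 2), complex)        # witness-target CSDs
    for i in range(2):
        Cwt[:, i] = signal.csd(w[i], t, fs=fs, nperseg=nseg)[1]
        for j in range(2):
            Cww[:, i, j] = signal.csd(w[i], w[j], fs=fs, nperseg=nseg)[1]

    W = np.linalg.solve(Cww, Cwt)               # optimal filter, per frequency
    Pres = Ptt - np.einsum('fi,fi->f', Cwt.conj(), W).real  # residual PSD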
Attachment 1: seisAll_20191111.pdf
seisAll_20191111.pdf
Attachment 2: ffPotential.pdf
ffPotential.pdf
15023 | Wed Nov 13 20:15:56 2019 | rana | Update | PEM | Follow-up on seismometer discussion

this is due to the Equivalence Principle: local accelerations are indistinguishable from spacetime curvature. On a spherical Earth, the local gradient of the metric points in the direction towards the center of the Earth, which is colloquially known as "down".

Quote:

I don't understand why the z-axis motion reported by the T240 is ~10x lower at 10 mHz compared to the X and Y motions. Is this some electronics noise artefact?

 

15024 | Wed Nov 13 23:40:15 2019 | gautam | Update | PEM | Follow-up on seismometer discussion

Here is some disturbance in the spacetime curvature, where the local gradient of the metric seems to have been modulated (in the "downward" as well as in the other two orthogonal Cartesian directions) at ~1 Hz - seems real as far as I can tell, all the suspensions were being shaken about and all the seismometers witnessed it, though the peak is pretty narrow. A broader, less prominent peak also shows up around 0.5 Hz. We couldn't identify any clear source (no LN2 fill-up / obvious CES activity). This event lasted for ~45 mins, and stopped around 2315 local time. Shortly (~5min) after the ~1 Hz peak died down, however, the 3-10 Hz BLRMS channel reports an increase by ~factor of 2. 

Onto trying some locking now that the suspensions have settled down somewhat.

Quote:

this is due to the Equivalence Principle: local accelerations are indistinguishable from spacetime curvature. On a spherical Earth, the local gradient of the metric points in the direction towards the center of the Earth, which is colloquially known as "down".

Attachment 1: seisAll_20191111_1Hz.pdf
seisAll_20191111_1Hz.pdf
15025 | Thu Nov 14 12:11:04 2019 | rana | Update | PEM | Follow-up on seismometer discussion

at 1 Hz this effect is not large, so that's real translation. at lower frequencies a ground tilt couples into the horizontal sensors at first order, and so the apparent signal is amplified by the double integral. drawing a free body diagram you can see that

x_apparent = (g / s^2) * theta

but for vertical this is not true, because the vertical sensor already measures the full free fall and the tilt only shows up at 2nd order

15027 | Fri Nov 15 00:18:41 2019 | rana | Update | PEM | Follow-up on seismometer discussion

The large ground motion at 1 Hz started up again tonight at around 23:30. I walked around the lab and nearby buildings with a flashlight and couldn't find anything whumping. The noise is very sinusoidal and seems like it must be a 1 Hz motor rather than any natural disturbance or traffic, etc. Suspect that it is a pump in the nearby CES building which is waking up and running to fill up some liquid level. Will check out in the morning.

Estimate of displacement noise based on the observed MC_F channel showing a 25 MHz peak-peak excursion for the laser:

dL = 25e6 Hz * 13 m / (c / lambda) ≈ 1 micron

So this is a lot. Probably our pendulum is amplifying the ground motion by 10x, so I suspect a ground noise of ~0.1 micron peak-peak.

(this is a native PDF export using qtgrace rather than XMgrace. uninstall xmgrace and symlink to qtgrace.)

Attachment 1: MCshake.pdf
MCshake.pdf
15030 | Fri Nov 15 12:16:48 2019 | gautam | Update | PEM | Follow-up on seismometer discussion

Attachment #1 is a spectrogram of the BS seismometer signals for a ~24 hour period (from Wednesday night to Thursday night local time, zipped because it's a large file). I've marked the nearly pure tones that show up for some time and then turn off. We need to get to the bottom of this and ideally stop it from happening at night, because it is eating ~1 hour of lockable time.

We considered if we could look at the phasing between the vertex and end seismometers to localize the source of the disturbance.

Attachment 1: BS_ZspecGram.pdf.zip
15032 | Mon Nov 18 14:32:53 2019 | gautam | Update | PEM | Follow-up on seismometer discussion

The nightly seismic activity enhancement continued during the weekend. It always shows up around 10pm local time, persists for ~1 hour, and then goes away. This isn't a show stopper as long as it stops at some point, but it is annoying that it is eating up >1 hour of possible locking time. I walked over to CES; no one there admitted to anything. There is an "Earth Surface Dynamics Laboratory" there that runs some heavy equipment right next to us, but they claim they aren't running anything past ~5:30pm. Rick (building manager?) also doesn't know of anything that turns on with the periodicity we see. He suggested contacting Watson, but I have no idea who to talk to there who has an overview of what goes on in the building. 😢 

15036 | Tue Nov 19 21:53:57 2019 | gautam | Update | PEM | Follow-up on seismometer discussion

The shaking started earlier today than yesterday, at ~9pm local time.

While the IFO is shaking, I thought (as Jan Harms suggested) I'd take a look at the cross-spectra between our seismometer channels at the dominant excitation frequency, which is ~1.135 Hz. Attachment #1 shows the phase of the cross spectrum taken for 10 averages (with 30mHz resolution) during the time period when the shaking was strong yesterday (~1500 seconds with 50% overlap). The logic is that we can use the relative phasing between the seismometer channels to estimate the direction of arrival and hence, the source location. However, I already see some inconsistencies - for example, the relative phase between BS_Z and EX_Z suggests that the signal arrives at the EX seismometer first. But the phasing between EX_Y and BS_Y suggests the opposite. So maybe my thinking about the problem as 3 co-located sensors measuring plane-wave disturbances originating from the same place is too simplistic? Moreover, Koji points out that for two sensors separated by ~40m, for a ground wave velocity of 1.5 km/s, the maximum phase delay we should see between sensors is 30 msec, which corresponds to ~10 degrees of phase. I guess we have to undo the effects of the phasing in the electronics chain.

Does anyone have some code that's already attempted something similar that I can put the data through? I'd like to not get sucked into writing fresh code.
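
Failing that, the core of the calculation is short. A sketch (with an assumed sample rate and placeholder arrays standing in for the real seismometer channels):

    import numpy as np
    from scipy import signal

    fs = 256                                  # Hz, assumed channel rate
    # x, y would be e.g. the BS_Z and EX_Z seismometer time series
    x, y = np.random.randn(2, fs * 1500)      # placeholder data

    f, Pxy = signal.csd(x, y, fs=fs, nperseg=int(fs / 0.03))  # ~30 mHz bins
    i = np.argmin(np.abs(f - 1.135))
    print('phase at %.3f Hz: %.1f deg' % (f[i], np.degrees(np.angle(Pxy[i]))))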

🤞 this means that the shaking is over for today and I get a few hours of locking time later today evening.

Another observarion is that even after the main 1.14 Hz peak dies out, there is elevated seismic acitivity reported by the 1-3 Hz BLRMS band. This unfortunately coincides with some stack resonance, and so the arm cavity transmission reports greater RIN even after the main peak dies out. Today, it seems that all the BLRMS return to their "nominal" nighttime levels ~10 mins after the main 1.14 Hz peak dies out.

Attachment 1: seisxSpec.pdf
seisxSpec.pdf
15417 | Fri Jun 19 14:03:50 2020 | Jordan | Update | VAC | Forepump Tip Seal Replacement

Tip Seals were replaced on the forepumps for TP2 and TP3, and both are ready to be installed back onto the forelines.

TP2 Forepump Ultimate Pressure: 180 mtorr

TP3 Forepump Ultimate Pressure: 120 mtorr

7348 | Thu Sep 6 10:57:27 2012 | Jenne | Update | General | Forgot to turn green refl pd back on

Quote:

I couldn't understand the Y-End green setup as the PD was turned off and the sign of the servo was flipped. Once they are fixed, I could lock the cavity with the green beams.

Quote:

[EricQ, Jenne, brains of other people]

Get green spots co-located with IR spots on ETMs, ITMs, check path of leakage through the arms, make sure both greens get out to PSL table

 

 I had turned the green refl PD off on Tuesday while we were doing the IPANG alignment, since the beam was not so bright, and the LED on top of the PD was very annoyingly bright.  I forgot to turn it back on.  The sign flip on the servo, I can't explain.

5793 | Thu Nov 3 13:00:52 2011 | Jenne | Update | Photos | Formatting of MEDM screen names

Quote:

After lots of trial and error, and a little inspiration from Koji, I have written a new script that will run when you select "update snapshot" in the yellow ! button on any MEDM screen. 

Right now, it's only live for the OAF_OVERVIEW screen.  View snapshot and view prev snapshot also work. 

Next on the list is to make a script that will create the yellow buttons for each screen, so I don't have to type millions of things in by hand.

The script lives in:  /cvs/cds/rtcds/caltech/c1/scripts/MEDMsnapshots, and it's called....wait for it....... "updatesnap".

 Currently the update snapshot script looks at the 3 letters after "C1" to determine what folder to put the snapshots in.  (It can also handle the case when there is no C1, ex. OAF_OVERVIEW.adl still goes to the c1oaf folder).  If the 3 letters after C1 are SYS, then it puts the snapshot into /opt/rtcds/caltech/c1/medm/c1sys/snap/MEDM_SCREEN_NAME.adl

Mostly this is totally okay, but a few subsystems seem to have incongruous names.  For example, there are screens called "C1ALS...." in the c1gcv folder.  Is it okay if these snapshots go into a /c1als/snap folder, or do I need to figure out how to put them in the exact same folder they currently exist in?  Or, perhaps, why aren't they just in a c1als folder to begin with? It seems like we just weren't careful when organizing these screens.

Another problematic one is the C1_FE_STATUS.adl screen.  Can I create a c1gds folder and rename that screen to C1GDS_FE_STATUS.adl?  Objections?
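
For reference, the folder-selection logic described above amounts to something like this (a hypothetical sketch, not the actual updatesnap script):

    import os
    import re

    def snap_dir(screen):
        # 'C1SUS_ETMX.adl' -> .../c1sus/snap; 'OAF_OVERVIEW.adl' -> .../c1oaf/snap
        name = os.path.basename(screen)
        m = re.match(r'C1([A-Z0-9]{3})', name)
        subsys = m.group(1).lower() if m else name.split('_')[0].lower()
        return '/opt/rtcds/caltech/c1/medm/c1%s/snap' % subsys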

 

6041 | Tue Nov 29 18:31:40 2011 | Den | Update | digital noise | Foton error

In the previous elog we compared the Matlab and Foton SOS representations using a low-order filter. Now we move on to high-order filters and see that Foton is pretty bad here.

We consider a Chebyshev filter of the first type with cutoff frequency 12 Hz and ripple 1 dB. In the table below we summarize the GAINS obtained by Matlab and Foton for different digital filter orders.

Order | Matlab        | Foton
  2   | 5.1892960e-06 | 5.1892960e-06
  4   | 6.8709377e-12 | 6.8709378e-12
  6   | 5.4523201e-16 | 9.0950127e-18
  8   | 5.3879305e-21 | 1.2038559e-23
We can see that for high orders the gains are completely different (by about two orders of magnitude!). Interestingly, apart from the very bad GAIN, Foton calculates the SOS matrix pretty well. I checked up to 5 digits: full agreement. Only the GAIN is very bad.

The filter considered is cheby1("LowPass",6,1,12) and is a part of the bad Cheby filter where we lose coherence and see some other strange things.
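
For an independent check, the same filters can be designed with scipy (a sketch, assuming scipy is available; like Matlab, its cheby1 takes the cutoff normalized to the Nyquist frequency):

    from scipy import signal

    for order in (2, 4, 6, 8):
        # 1 dB ripple Chebyshev type I low-pass, 12 Hz cutoff at fs = 16384 Hz
        z, p, k = signal.cheby1(order, 1, 2 * 12. / 16384, output='zpk')
        print(order, k)   # k is the overall gain, to compare with the table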

6061 | Thu Dec 1 18:30:39 2011 | Vladimir, Den | Update | digital noise | Foton error

Foton Matlab Error     

We investigated the discrepancy between the Matlab and Foton numbers some more. The comparison of cheby1(k, 1, 2*12/16384) was done between the versions implemented in Matlab, R and Octave. Filters created by R and Octave agree with Foton.

Also, we found that Matlab has gross precision errors for cutoff frequencies just smaller than the one used in our filter; for example, cheby1(6, 1, 2*3/16384) fails miserably.

6062 | Fri Dec 2 17:43:46 2011 | rana | Update | digital noise | Foton error

It would be useful to see some plots so we could figure out exactly what magnitude and phase error correspond to "gross" and "miserable".

15287 | Tue Mar 31 09:39:41 2020 | gautam | Update | CDS | Foton for shaped noise injections

I'd like to re-measure the transfer function from driving MC2 position to the MC_L_DQ channel (for feedforward purposes). Swept sine would be one option, but I can't get the "Envelope" feature of DTT to work: the excitation amplitude isn't getting scaled as specified in the envelope, so I'm unable to make the measurement near 1 Hz (which is where the FF is effective). I see some scattered mentions of such an issue in past elogs but no mention of a fix (I also feel like I have gotten the envelope function to work for some other loop measurement templates). So then I thought I'd try broadband noise injection, since that seems to have been the approach followed in the past. Again, the noise injection needs to be shaped around ~1 Hz to avoid knocking the IMC out of lock, but I can't get Foton to do shaped noise injections because it doesn't inherit the sample rate when launched from inside DTT/awggui - this is not a new issue; does anyone know the fix?

Note that we are using the gds2.15 install of foton, but the pre-packaged foton that comes with the SL7 installation doesn't work either.

Update:

The envelope feature for swept sine wasn't working because I had specified the frequency grid in the wrong order, apparently. Eric von Reis has been notified, so that a future DTT can include a sorting step and accept the grid in arbitrary order. Fixing that allows me to run a swept sine with an enveloped excitation amplitude and hence get the TF I want, but still no shaped noise injections via foton 😢 

15288 | Tue Mar 31 23:35:50 2020 | rana | Update | CDS | Foton for shaped noise injections

do you really mean awggui cannot make shaped noise injections via its foton text box ? That has always worked for me in the past.

If this is broken I'm suspicious there's been some package installs to the shared dirs by someone.

15289 | Tue Mar 31 23:54:57 2020 | gautam | Update | CDS | Foton for shaped noise injections

The problem is that foton does not inherit the model sample rate when launched from DTT/awggui. This is likely some shared/linked/dynamic library issue; the binaries we are running are precompiled, presumably for some other OS. I've never gotten this to work since we changed to SL7 (but I did use it successfully in 2017 with the Ubuntu12 install).

Quote:

do you really mean awggui cannot make shaped noise injections via its foton text box ? That has always worked for me in the past.

If this is broken I'm suspicious there's been some package installs to the shared dirs by someone.

3659 | Wed Oct 6 12:00:23 2010 | josephb, yuta, kiwamu | Update | CDS | Found and fixed filter sampling rate problem with suspensions

While we were using dtt and going over the basics of its operation, we discovered that the filter sample rates for the suspensions were still set to 2048 Hz, rather than the 16384 Hz of the new front ends.  This caused the filters loaded into the front ends to not behave as expected.

After correcting the sample rate, the transfer functions obtained from dtt are now looking like the bode plots from foton.

We fixed the C1SUS.txt and C1MCS.txt files in the /opt/rtcds/caltech/c1/chans/ directory, by changing the SAMPLING lines to have 16384 rather than 2048.
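
The edit amounts to the following (a throwaway sketch, assuming nothing else on a SAMPLING line contains '2048'):

    for path in ('/opt/rtcds/caltech/c1/chans/C1SUS.txt',
                 '/opt/rtcds/caltech/c1/chans/C1MCS.txt'):
        with open(path) as f:
            lines = f.readlines()
        with open(path, 'w') as f:
            for line in lines:
                if 'SAMPLING' in line:
                    line = line.replace('2048', '16384')
                f.write(line)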

14500 | Fri Mar 29 11:43:15 2019 | Jon | Update | Upgrade | Found c1susaux database bug

I found the current bias output channels, C1:SUS-<OPTIC>_<DOF>BiasAdj, were all pointed at C1:SUS-<OPTIC>_ULBiasSet for every degree of freedom. This same issue appeared in all eight database files (one per optic), so it looks like a copy-and-paste error. I fixed them to all reference the correct degree of freedom.

8653 | Thu May 30 01:02:41 2013 | Manasa | Update | Green Locking | Found it!

X green beat note found!

Key points
1. Near-field and far-field alignment on the PSL table. The near-field alignment was checked by looking at the camera, and the far-field alignment by allowing the beams to propagate after removing the DC PD.
2. Check the laser temperature and get a sense of how the offset translates to the actual laser temperature.
3. Get an idea of the expected laser temperature using the plot in the elog.

Data
PSL laser temperature = 31.45 deg C

X end laser temperature = 39.24 deg C
C1:ALS-X_SLOW_SERVO2_OFFSET = 4810
Amplitude of beat note = -40 dBm

I do not understand why
1. The amplitude of the beat note falls linearly with frequency (peak traced using the 'hold' option of the spectrum analyzer).
2. I found the beat note at the RF output of the PD. Earlier, while I was trying to search for the beat note from the RFmon output of the beatbox, there was a strong peak at 29.6MHz that existed even when the green shutters were closed. Its source has to be traced.

Next
Solve beatbox puzzle and lock arm using ALS.

IMG_0598.JPG  IMG_0599.JPG
