ID | Date | Author | Type | Category | Subject
  6675 | Thu May 24 14:49:59 2012 | Koji | Summary | General | Daily news idea

Top tab categories:

  • Summary
  • CDS
    • CDS Status
  • PEM
    • Seismic 24h trend
    • Acoustic 24h trend
    • Weather/Temp/Barometer/etc 24h trend
  • PSL/IOO
    • PSL summary trend / duty ratio
    • IOO summary (MC Health Check/IOO QPD trends / IFO QPD trends / Transmon QPD trends) duty ratio
  • SUS
    • Summary
    • OSEM PSD/trend
    • OPLEV PSD/trend
  • IFO
    • DC Mon
    • RF ports
    • OMC
  • Steve
    • Vacuum
  • Misc.

IFO

  • DC Monitors
    • Incident beam power trend (24h)
    • AS/REFL/POP/TRX/TRY beam power trend (24h)
    • AS/POP RF beam power trend (24h)
  • RF port
    • DARM sensitivity PSD (mean/min/max/reference) for an hour
    • DARM/CARM/PRCL/MICH/SRCL PSD
    • DARM/CARM/PRCL/MICH/SRCL (freq vs Gaussianity)
    • DARM/CARM/PRCL/MICH/SRCL calibration trend
  • OMC
    • TBD

 

  11319 | Fri May 22 11:59:54 2015 | ericq | Update | SUS | DampRestore script problem

PRM watchdog tripped, but the damprestore.py script wouldn't run. 

It turns out the script tries to import some ezca stuff from /users/yuta (angry), which had been moved to /users/OLD/yuta (crying). 

I've moved the yuta directory back to /users/ until I fix the damprestore script. 

  11320 | Fri May 22 12:09:57 2015 | rana | Update | SUS | DampRestore script problem

I will move it back. We need to fix our scripts to not use any users/ libraries ever again.

Quote:

PRM watchdog tripped, but the damprestore.py script wouldn't run. 

It turns out the script tries to import some ezca stuff from /users/yuta (angry), which had been moved to /users/OLD/yuta (crying). 

I've moved the yuta directory back to /users/ until I fix the damprestore script. 
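To illustrate the failure mode (and the fix being asked for here), a minimal sketch in Python; the module name ezcaepics and the error handling are hypothetical, not the actual damprestore.py code:

import os
import sys

USER_LIB = '/users/yuta'        # fragile: a personal directory that can move

if os.path.isdir(USER_LIB):
    sys.path.append(USER_LIB)   # this is the pattern to avoid

try:
    import ezcaepics            # hypothetical ezca wrapper
except ImportError as e:
    # fail with a useful message, and point at the real fix:
    # install the library in a system location, not under /users/
    sys.exit('damprestore: cannot import ezca wrapper (%s). '
             'Install it system-wide, not under /users/.' % e)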

 

  16748 | Tue Mar 29 17:35:54 2022 | Paco | Update | SUS | Damping fix on BS, AS4, PR2, and PR3

[Ian, Paco]

  • We removed the "cheby" filters from AS4, PR2 and PR3 which had been misplaced after copying from the old SUS models. After removing them, the new SOS damped fine. Note that because of the Input matrices, the filters have to be enabled all at once for the MIMO loop to make sense.
  • We also disabled the "Cheby" filter on BS and saw it damp better. We don't understand this yet, but perhaps it's just a consequence of the many changes in the BSC that have rendered this filter obsolete.
  • We also reduced the damping gains on PR2, PR3 and AS4 to prevent overflow values. After the adjustments, the optics were damping fine.
  12515 | Thu Sep 22 22:52:08 2016 | ericq | Update | General | Damping found to be on

Just a heads up, it looks like the damping came on at around 8:30pm. Not sure why. 

  10623 | Fri Oct 17 15:17:31 2014 | jamie | Update | CDS | Daqd "fixed"?

I very tentatively declare that this particular daqd crapfest is "resolved" after Jenne rebooted fb and daqd has been running for about 40 minutes now without crapping itself.  Wee hoo.

I spent a while yesterday trying to figure out what could have been going on.  I couldn't find anything.  I found an elog that said a previous daqd crapfest was finally only resolved by rebooting fb after a similar situation, i.e. there had been an issue that was resolved, daqd was still crapping itself, we couldn't figure out why so we just rebooted, daqd started working again.

So, in summary, totally unclear what the issue was, or why a reboot solved it, but there you go.

  10624 | Fri Oct 17 16:54:11 2014 | jamie | Update | CDS | Daqd "fixed"?

Quote:

I very tentatively declare that this particular daqd crapfest is "resolved" after Jenne rebooted fb and daqd has been running for about 40 minutes now without crapping itself.  Wee hoo.

I spent a while yesterday trying to figure out what could have been going on.  I couldn't find anything.  I found an elog that said a previous daqd crapfest was finally only resolved by rebooting fb after a similar situation, i.e. there had been an issue that was resolved, daqd was still crapping itself, we couldn't figure out why so we just rebooted, daqd started working again.

So, in summary, totally unclear what the issue was, or why a reboot solved it, but there you go.

Looks like I spoke too soon.  daqd seems to be crapping itself again:

controls@fb /opt/rtcds/caltech/c1/target/fb 0$ ls -ltr logs/old/ | tail -n 20
-rw-r--r-- 1 4294967294 4294967294    11244 Oct 17 11:34 daqd.log.1413570846
-rw-r--r-- 1 4294967294 4294967294    11086 Oct 17 11:36 daqd.log.1413570988
-rw-r--r-- 1 4294967294 4294967294    11244 Oct 17 11:38 daqd.log.1413571087
-rw-r--r-- 1 4294967294 4294967294    13377 Oct 17 11:43 daqd.log.1413571386
-rw-r--r-- 1 4294967294 4294967294    11481 Oct 17 11:45 daqd.log.1413571519
-rw-r--r-- 1 4294967294 4294967294    11985 Oct 17 11:47 daqd.log.1413571655
-rw-r--r-- 1 4294967294 4294967294    13219 Oct 17 13:00 daqd.log.1413576037
-rw-r--r-- 1 4294967294 4294967294    11150 Oct 17 14:00 daqd.log.1413579614
-rw-r--r-- 1 4294967294 4294967294     5127 Oct 17 14:07 daqd.log.1413580231
-rw-r--r-- 1 4294967294 4294967294    11165 Oct 17 14:13 daqd.log.1413580397
-rw-r--r-- 1 4294967294 4294967294     5440 Oct 17 14:20 daqd.log.1413580845
-rw-r--r-- 1 4294967294 4294967294    11352 Oct 17 14:25 daqd.log.1413581103
-rw-r--r-- 1 4294967294 4294967294    11359 Oct 17 14:28 daqd.log.1413581311
-rw-r--r-- 1 4294967294 4294967294    11195 Oct 17 14:31 daqd.log.1413581470
-rw-r--r-- 1 4294967294 4294967294    10852 Oct 17 15:45 daqd.log.1413585932
-rw-r--r-- 1 4294967294 4294967294    12696 Oct 17 16:00 daqd.log.1413586831
-rw-r--r-- 1 4294967294 4294967294    11086 Oct 17 16:02 daqd.log.1413586924
-rw-r--r-- 1 4294967294 4294967294    11165 Oct 17 16:05 daqd.log.1413587101
-rw-r--r-- 1 4294967294 4294967294    11086 Oct 17 16:21 daqd.log.1413588108
-rw-r--r-- 1 4294967294 4294967294    11097 Oct 17 16:25 daqd.log.1413588301
controls@fb /opt/rtcds/caltech/c1/target/fb 0$

The times all indicate when the daqd log was rotated, which happens every time the process restarts.  It doesn't seem to be happening so consistently, though.  It's been 30 minutes since the last one.  I wonder if it is somehow correlated with actual interaction with the NDS process.  Does some sort of data request cause it to crash?

 

  10633 | Thu Oct 23 01:39:34 2014 | Jenne | Update | CDS | Daqd "fixed"?

Merging of threads. 

ChrisW figured out that it looks like the problem with the frame builder is that it's having to wait for disk access.  He has tweaked some things, and life has been soooo much better for Q and me this evening!  See Chris' elog at elog 10632.

In the last few hours we've had 2 or maybe 3 times that I've had to reconnect Dataviewer to the framebuilder, which is a significant improvement over having to do it every few minutes.

Also, Rossa is having trouble with DTT today, starting sometime around dinnertime.  Ottavia and Pianosa can do DTT things, but Rossa keeps getting "test timed out". 

  10616 | Thu Oct 16 03:18:48 2014 | Jenne | Update | CDS | Daqd segfaulting again

 The daqd process on the frame builder looks like it is segfaulting again.  It restarts itself every few minutes.  

The symptoms remind me of elog 9530, but /frames is only 93% full, so the cause must be different.  

Did anyone do anything to the fb today?  If you did, please post an elog to help point us in a direction for diagnostics.

Q!!!!  Can you please help?  I looked at the log files, but they are kind of mysterious to me - I can't really tell the difference between a current (bad) log file and an old (presumably fine) log file.  (I looked at 3 or 4 random, old log files, and they're all different in some ways, so I don't know which errors and warnings are real, and which are to be ignored).

  10617 | Thu Oct 16 12:22:43 2014 | ericq | Update | CDS | Daqd segfaulting again

I've been trying to figure out why daqd keeps crashing, but nothing is fixed yet. 

I commented out the line in /etc/inittab that runs daqd automatically, so I could run it manually. Each time I run it (with ./daqd -c ./daqdrc while in c1/target/fb), it churns along fine for a little while, but eventually spits out something like:

[Thu Oct 16 12:07:23 2014] main profiler warning: 1 empty blocks in the buffer
[Thu Oct 16 12:07:24 2014] main profiler warning: 0 empty blocks in the buffer
[Thu Oct 16 12:07:25 2014] main profiler warning: 0 empty blocks in the buffer
GPS time jumped from 1097521658 to 1097521660
Segmentation fault
 
Or:
 
[Thu Oct 16 11:43:54 2014] main profiler warning: 1 empty blocks in the buffer
[Thu Oct 16 11:43:55 2014] main profiler warning: 0 empty blocks in the buffer
[Thu Oct 16 11:43:56 2014] main profiler warning: 0 empty blocks in the buffer
[Thu Oct 16 11:43:57 2014] main profiler warning: 0 empty blocks in the buffer
[Thu Oct 16 11:43:58 2014] main profiler warning: 0 empty blocks in the buffer
[Thu Oct 16 11:43:59 2014] main profiler warning: 0 empty blocks in the buffer
[Thu Oct 16 11:44:00 2014] main profiler warning: 0 empty blocks in the buffer
[Thu Oct 16 11:44:01 2014] main profiler warning: 0 empty blocks in the buffer
[Thu Oct 16 11:44:02 2014] main profiler warning: 0 empty blocks in the buffer
GPS time jumped from 1097520250 to 1097520257
FATAL: exception not rethrown
Aborted

I looked for time disagreements between the FB and the frontends, but they all seem fine. Running ntpdate only corrected things by 5ms. However, looking through /var/log/messages on FB, I found that ntp claims to have corrected the FB's time by ~111600 seconds (~31 hours) when I rebooted it on Monday.

Maybe this has something to do with the timing that the FB is getting? The FE IOPs seem happy with their sync status, but I'm not currently aware of how the FB timing is set up. 
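As a sketch of how one might script the time-disagreement check across machines (hostnames illustrative; ntpdate -q queries the offset without setting the clock):

import subprocess

HOSTS = ['fb', 'c1lsc', 'c1sus', 'c1ioo', 'c1iscex', 'c1iscey']
NTP_SERVER = 'pool.ntp.org'   # assumption: any reachable NTP server works

for host in HOSTS:
    # query the offset from the remote host over ssh; the last line of
    # the ntpdate output contains the measured offset in seconds
    cmd = ['ssh', host, 'ntpdate', '-q', NTP_SERVER]
    try:
        out = subprocess.check_output(cmd).decode().strip().splitlines()
        print('%-8s %s' % (host, out[-1]))
    except subprocess.CalledProcessError as err:
        print('%-8s query failed: %s' % (host, err))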


Addendum:

On Monday, Jamie suggested checking out the situation with FB's RAID. Searching the elog for "empty blocks in the buffer" also brought up posts that mentioned problems with the RAID. 

I went to the JetStor RAID web interface at http://192.168.113.119, and it reports everything as healthy; no major errors in the log. Looking at the SMART status of a few of the drives shows nothing out of the ordinary. The RAID is not mounted in read-only mode either, as was the problem mentioned in previous elogs. 

  4320 | Thu Feb 17 23:56:53 2011 | josephb | Update | CDS | Daqd was rebuilt, now reverted.

As one of the troubleshooting steps for the daqd (i.e. framebuilder) I rebuilt the daqd executable.  My guess is that somewhere in the build code there is some kind of GPS offset to make the time correct, due to our lack of an IRIG-B signal.

The actual daqdrc file was left untouched when I did the new install, so the symmetricom gps offset is still the same, which confuses me.

I'll take a look at the SVN diffs tomorrow to see what changed in that code that could cause a 300000000 or so offset to the GPS time.

 

 

  2095 | Thu Oct 15 02:38:10 2009 | rana, rob | Update | OMC | Dark Port Mode Scan using the OMC

Bottom trace is proportional to the OMC PZT voltage - top trace is the transmitted light through the OMC. The interferometer is locked (DARM-RF) with arm powers = 80 / 100. The peaks marked by the cursors are the +(- ?) 166 MHz sidebands.

Attachment 1: OMC-ModeScan_091015.png
OMC-ModeScan_091015.png
  519 | Wed Jun 4 16:57:12 2008 | josephb | Configuration | Cameras | Dark images from cameras (electronics noise measurement)
The attached pdfs are 1 second and 1 millisecond long integrations from the GC650 and GC750 cameras with a cap in place - i.e. no light.

They include the mean and standard deviation values.

The single bright pixel in the 1 second long exposure image for the GC650 seems to be a real effect. Multiple images taken show the same bright pixel (although with slightly varying amplitudes).

The last pdf is a zoom in on the z-axis of the first pdf (i.e. GC650 /w 1 sec exposure time).

I'm not really sure what to make of the mean remaining virtually fixed for the different integration times for both cameras. I guess the zero level is simply an offset, and doesn't result in any runaway integration in general, although there are certainly some stronger pixels in the long exposures when compared to the short exposures.

It's interesting to note that the standard deviation actually drops from the long exposure to the short exposure, possibly influenced by certain pixels which seem to grow with time.

The one with the least variation from its "zero" was the 1 millisecond GC750 dark image.
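For reference, the quoted statistics can be reproduced with something like the following (file names are placeholders for however the frames are saved):

import numpy as np

# stack of dark frames at one integration time
frames = np.array([np.loadtxt('gc650_1s_dark_%02d.txt' % i)
                   for i in range(10)])

print('mean = %.2f counts, std = %.2f counts'
      % (frames.mean(), frames.std()))

# the per-pixel mean over the stack picks out fixed bright pixels,
# like the single hot pixel seen in the GC650 1-second exposures
pixel_mean = frames.mean(axis=0)
y, x = np.unravel_index(pixel_mean.argmax(), pixel_mean.shape)
print('brightest pixel at (%d, %d): %.1f counts' % (x, y, pixel_mean[y, x]))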
Attachment 1: GC650_1sec_dark.pdf
GC650_1sec_dark.pdf
Attachment 2: GC650_1msec_dark.pdf
GC650_1msec_dark.pdf
Attachment 3: GC750_1sec_dark.pdf
GC750_1sec_dark.pdf
Attachment 4: GC750_1msec_dark.pdf
GC750_1msec_dark.pdf
Attachment 5: GC650_1sec_dark_zoom.pdf
GC650_1sec_dark_zoom.pdf
  7330 | Fri Aug 31 17:44:21 2012 | Manasa | Update | Ringdown | Data

Quote:

Ok, so the whole idea that mirror motion can explain the ripples is nonsense. At least, when you think of the ringdown with "pump off". The phase shifts that I tried to estimate from longitudinal and tilt mirror motion are defined against a non-existing reference. So I guess that I have to click on the link that Koji posted...

Just to mention, for the tilt phase shift (yes, there is one, but the exact expression has two more factors in the equation I posted), it does not matter, which mirror tilts. So even for a lower bound on the ripple time, my equation was incorrect. It should have the sum over all three initial tilt angles not only the two "shooting into the long arms" of the MC.

Quote:

Laser frequency shift = longitudinal motion of the mirrors

Ringing: http://www.opticsinfobase.org/ol/abstract.cfm?uri=ol-20-24-2463

Quote:

Hmm. I don't know what ringing really is. Ok, let's assume it has to do with the pump... I don't see how the pump laser could produce these ripples. They have large amplitudes and so I always suspected something happening to the intracavity field. Therefore I was looking for effects that would change resonance conditions of the intracavity field during ringdown. Tilt motion seemed to be one explanation to me, but it may be a bit too slow (not sure yet). Longitudinal mirror motion is certainly too slow. What else could there be?

 

 

It is essential that we take a look at the ringdown data for all measurements made so far, to figure out what must be done to track down the source of these notorious ripples. I've attached a plot of these, showing the decay time to be the same in all cases. About the ripples: it seems unlikely to both Jan and me that they are some kind of electronic noise, because they do not follow any common pattern or time constant. We have discussed with Koji monitoring the frequency shift and the input power to the MC, and also trying other methods of shutting down the pump to track down their source, as the next steps.

 

cum_plot.png 

  10274 | Sat Jul 26 10:12:19 2014 | Akhil | Update | General | Data Acquisition from FC into EPICS Channels

I succeeded in creating a new channel access server hosted on domenica (R Pi) for continuous data acquisition from the FC into accessible channels. For this I have written a ctypes interface between EPICS and the C interface code to write data into the channels. The channels which I created are:

C1:ALS-X-BEAT-NOTE-FREQ

C1:ALS-Y-BEAT-NOTE-FREQ

 

The scripts I have written for this can be found in:

db script in:     /users/akhil/fcreadoutIoc/fcreadoutApp/Db/fcreadout.db

 Python code:  /users/akhil/fcreadoutIoc/pycall

C code:          /users/akhil/fcreadoutIoc/FCinterfaceCcode.c

I will give the standard channel names (similar to the names on the channel root) once the testing is completed and I confirm that the data from the FC is consistent with the C code readout. Once ready, I will run the code forever, so that both the server and the data acquisition are always in process.

Yesterday, when I set out to test the channel, I faced a few serious issues in booting the Raspberry Pi. However, I have backed up the files on the Pi and will try to debug the issue very soon (I will test with Eric Q's R Pi).

To run these codes one must be root (sudo python pycall, sudo ./FCinterfaceCcode) because the HID devices can be written to only by root (we should look into solving this issue). 

Instructions for the installation of EPICS, and for how to create a channel server on the Pi, will be described in detail in the 40m Wiki (FOLL page).
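For reference, the ctypes bridge described above would look roughly like this; the shared-library name, its function signature, and the poll rate are guesses for illustration, not the actual fcreadoutIoc code:

import time
import ctypes
import epics    # pyepics

fc = ctypes.CDLL('./libfcreadout.so')       # hypothetical C readout library
fc.fc_read_freq.restype = ctypes.c_double   # returns beat frequency in Hz
fc.fc_read_freq.argtypes = [ctypes.c_int]   # argument: channel index

CHANNELS = {0: 'C1:ALS-X-BEAT-NOTE-FREQ',
            1: 'C1:ALS-Y-BEAT-NOTE-FREQ'}

while True:
    for idx, pv in CHANNELS.items():
        epics.caput(pv, fc.fc_read_freq(idx))   # push the reading to the PV
    time.sleep(0.1)                             # ~10 Hz polling, for illustration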

 

  10276 | Sat Jul 26 13:38:34 2014 | Jamie | Update | General | Data Acquisition from FC into EPICS Channels

Quote:

I succeeded in creating a new channel access server hosted on domenica (R Pi) for continuous data acquisition from the FC into accessible channels. For this I have written a ctypes interface between EPICS and the C interface code to write data into the channels. The channels which I created are:

C1:ALS-X-BEAT-NOTE-FREQ

C1:ALS-Y-BEAT-NOTE-FREQ

 

The scripts I have written for this can be found in:

db script in:     /users/akhil/fcreadoutIoc/fcreadoutApp/Db/fcreadout.db

 Python code:  /users/akhil/fcreadoutIoc/pycall

C code:          /users/akhil/fcreadoutIoc/FCinterfaceCcode.c

I will give the standard channel names (similar to the names on the channel root) once the testing is completed and I confirm that the data from the FC is consistent with the C code readout. Once ready, I will run the code forever, so that both the server and the data acquisition are always in process.

Yesterday, when I set out to test the channel, I faced a few serious issues in booting the Raspberry Pi. However, I have backed up the files on the Pi and will try to debug the issue very soon (I will test with Eric Q's R Pi).

To run these codes one must be root (sudo python pycall, sudo ./FCinterfaceCcode) because the HID devices can be written to only by root (we should look into solving this issue). 

Instructions for the installation of EPICS, and for how to create a channel server on the Pi, will be described in detail in the 40m Wiki (FOLL page).

 

controls@rossa|~ 2> ls /users/akhil/fcreadoutIoc
ls: cannot access /users/akhil/fcreadoutIoc: No such file or directory
controls@rossa|~ 2> 

This code should be in the 40m SVN somewhere, not just stored on the RPi.

I'm still confused why python is in the mix here at all.  It doesn't make any sense at all that a C program (EPICS IOC) would be calling out to a python program (pycall) that then calls out to a C program (FCinterfaceCcode).  That's bad programming.  Streamline the program and get rid of python.

You also definitely need to fix whatever the issue is that requires running the program as root.  We can't have programs like this run as root.

  10277 | Sat Jul 26 14:35:28 2014 | Akhil | Update | General | Data Acquisition from FC into EPICS Channels

Quote:

Quote:

I succeeded in creating a new channel access server hosted on domenica (R Pi) for continuous data acquisition from the FC into accessible channels. For this I have written a ctypes interface between EPICS and the C interface code to write data into the channels. The channels which I created are:

C1:ALS-X-BEAT-NOTE-FREQ

C1:ALS-Y-BEAT-NOTE-FREQ

 

The scripts I have written for this can be found in:

db script in:     /users/akhil/fcreadoutIoc/fcreadoutApp/Db/fcreadout.db

 Python code:  /users/akhil/fcreadoutIoc/pycall

C code:          /users/akhil/fcreadoutIoc/FCinterfaceCcode.c

I will give the standard channel names (similar to the names on the channel root) once the testing is completed and I confirm that the data from the FC is consistent with the C code readout. Once ready, I will run the code forever, so that both the server and the data acquisition are always in process.

Yesterday, when I set out to test the channel, I faced a few serious issues in booting the Raspberry Pi. However, I have backed up the files on the Pi and will try to debug the issue very soon (I will test with Eric Q's R Pi).

To run these codes one must be root (sudo python pycall, sudo ./FCinterfaceCcode) because the HID devices can be written to only by root (we should look into solving this issue). 

Instructions for the installation of EPICS, and for how to create a channel server on the Pi, will be described in detail in the 40m Wiki (FOLL page).

 

controls@rossa|~ 2> ls /users/akhil/fcreadoutIoc
ls: cannot access /users/akhil/fcreadoutIoc: No such file or directory
controls@rossa|~ 2> 

This code should be in the 40m SVN somewhere, not just stored on the RPi.

I'm still confused why python is in the mix here at all.  It doesn't make any sense at all that a C program (EPICS IOC) would be calling out to a python program (pycall) that then calls out to a C program (FCinterfaceCcode).  That's bad programming.  Streamline the program and get rid of python.

You also definitely need to fix whatever the issue is that requires running the program as root.  We can't have programs like this run as root.

I tried making these changes, but there was a problem with the R Pi boot again. I now know how to bypass the python code using the IOC. I will make these changes once the problem with the Pi is fixed.

  10200 | Tue Jul 15 01:41:43 2014 | Jenne | Update | LSC | Data for DARM on sqrtInv investigation

I took some data tonight for a quick look at what combinations of DC signals might be good to use for DARM, as an alternative to ALS before we're ready for RF.

I had the arms locked with ALS, PRMI with REFL33, and tried to move the CARM offset between plus and minus 1.  The PRMI wasn't holding lock closer than about -0.3 or +0.6, so that is also a problem.  Also, I realized just now that I have left the beam dumps in front of the transmission QPDs, so I had prevented any switching of the trans PD source.  This means that all of my data for C1:LSC-TR[x,y]_OUT_DQ was taken with the Thorlabs PDs, which is fine, although they saturate around arm powers of 4 ever since my analog gain increase on the whitening board.  Anyhow, the IFO didn't hold lock for much beyond that, so I didn't miss out on much.  I need to remember to remove the dumps though!!

Self:  Good stuff should be between 12:50am - 1:09am.  One set of data was ./getdata -s 1089445700 -d 30 -c C1:LSC-TRX_OUT_DQ C1:LSC-TRY_OUT_DQ C1:LSC-CARM_IN1_DQ C1:LSC-PRCL_IN1_DQ

  10208 | Wed Jul 16 01:04:09 2014 | Jenne | Update | LSC | Data for DARM on sqrtInv investigation

I realized while I was looking at last night's data that I had been doing CARM sweeps, when really I wanted to be doing DARM sweeps.  I took a few sets of data of DARM sweeps while locked on ALSdiff.  However, Rana pointed out that comparing ALSdiff to TRX-TRY isn't exactly a fair comparison while I'm locked on ALSdiff: since it's an in-loop signal, it looks artificially quiet. 

Anyhow, I may consider transitioning DARM over to AS55 temporarily so that I can look at both as out-of-loop sensors. 

Also, so that I can try locking DARM on DC transmission, I have added 2 more columns to the LSC input matrix (now we're at 32!), for TRX and TRY.  We already had sqrt-inverse versions of these signals, but the plain TRX and TRY were only available as normalization signals before.  Since Koji put in the facility to sqrt (or not) the normalization signals, I can now try:

Option 1:  ( TRX - TRY ) / (TRX + TRY)

Option 2:  ( TRX - TRY ) / sqrt( TRX + TRY )

DARM does not yet have the facility to normalize one signal (DC transmission) and not another (ALS diff), so I may need to include that soon.  For tonight, I'm going to try just changing matrix elements with ezcastep.
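Just to make the two options above concrete, a quick numerical comparison with made-up transmission values:

import numpy as np

TRX = np.array([1.0, 2.0, 4.0, 8.0])    # illustrative arm transmissions
TRY = np.array([1.1, 2.1, 3.9, 7.8])

opt1 = (TRX - TRY) / (TRX + TRY)           # Option 1: full normalization
opt2 = (TRX - TRY) / np.sqrt(TRX + TRY)    # Option 2: sqrt normalization

for p, a, b in zip(TRX + TRY, opt1, opt2):
    print('power %5.1f   opt1 % .4f   opt2 % .4f' % (p, a, b))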

Since I changed the c1lsc.mdl model, I compiled it, restarted the model, and checked the model in.  I have also added these 2 columns to the AUX_ERR sub-screen for the LSC input matrix.  I have not changed the LSC overview screen.

  2775 | Tue Apr 6 11:27:11 2010 | Alberto | Update | Computer Scripts / Programs | Data formats in the Agilent AG4395a Spectrum Analyzer

Lately I've been trying to sort out the problem of the discrepancy that I noticed between the values read on the spectrum analyzer's display and what we get with the GPIB interface.

It turns out that the discrepancy originates from the two data vectors that the display and the GPIB interface acquire. Whereas the display shows data in "RAW" format, the GPIB interface, because of the way the netgpibdata script is written, acquires the so-called "error-corrected data". That is, the GPIB-downloaded data is postprocessed and corrected for some internal calibration factors of the instrument.

Another problem that I noticed in the GPIB-downloaded data, when I was measuring a noise spectrum, is an unwanted factor of 2 in the amplitude spectral density.
For example, measuring the amplitude spectral density of the FSS RF PD's dark noise at its resonant frequency (~21.5 MHz), I would expect ~15nV/rtHz from the thermal noise - as Rana pointed out in elog entry 2759. However, the spectrum analyzer reads 30nV/rtHz, in both the display and the GPIB-downloaded data, except for the above mentioned little discrepancy between the two. (The discrepancy is about 0.5dBm/Hz in the power spectral density.)
 
My measurement, as I showed in elog entry 2760, is ~15nV/rtHz, but only because I divided by 2. Now I realize that that division was unjustified.
 
I'm trying to figure out the reason for that. For now, I'm not sure we can trust the netgpib package for spectrum measurements with the AG4395.
  2776 | Tue Apr 6 16:55:28 2010 | Alberto | Update | Computer Scripts / Programs | Data formats in the Agilent AG4395a Spectrum Analyzer

Quote:

Lately I've been trying to sort out the problem of the discrepancy that I noticed between the values read on the spectrum analyzer's display and what we get with the GPIB interface.

It turns out that the discrepancy originates from the two data vectors that the display and the GPIB interface acquire. Whereas the display shows data in "RAW" format, the GPIB interface, because of the way the netgpibdata script is written, acquires the so-called "error-corrected data". That is, the GPIB-downloaded data is postprocessed and corrected for some internal calibration factors of the instrument.

Another problem that I noticed in the GPIB-downloaded data, when I was measuring a noise spectrum, is an unwanted factor of 2 in the amplitude spectral density.
For example, measuring the amplitude spectral density of the FSS RF PD's dark noise at its resonant frequency (~21.5 MHz), I would expect ~15nV/rtHz from the thermal noise - as Rana pointed out in elog entry 2759. However, the spectrum analyzer reads 30nV/rtHz, in both the display and the GPIB-downloaded data, except for the above mentioned little discrepancy between the two. (The discrepancy is about 0.5dBm/Hz in the power spectral density.)
 
My measurement, as I showed in elog entry 2760, is ~15nV/rtHz, but only because I divided by 2. Now I realize that that division was unjustified.
 
I'm trying to figure out the reason for that. For now, I'm not sure we can trust the netgpib package for spectrum measurements with the AG4395.

 I noticed that someone, who wasn't me, has edited the wiki page about netgpibdata under my name, saying:

 " [...]

* A4395 Spectrum Units
Independently of which units are displayed by the A4395 spectrum analyzer on the screen, the data is saved in Watts/rtHz
"

That is not correct. The spectrum is just in Watts, since it gives the power over the bandwidth. The corresponding power spectral density is shown under the "Noise" measurement format and it's in Watts/Hz.
Watts/rtHz is not a correct unit.
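As a worked example of moving between these conventions (assuming the standard 50 ohm input impedance), with numbers chosen to match the ~30nV/rtHz reading discussed above:

import math

P_watts = 1.8e-15   # power in the resolution bandwidth, W (illustrative)
RBW     = 100.0     # resolution bandwidth, Hz (illustrative)
R       = 50.0      # input impedance assumption, ohms

psd = P_watts / RBW        # power spectral density, W/Hz ("Noise" format)
asd = math.sqrt(psd * R)   # amplitude spectral density, V/rtHz

print('PSD = %.3g W/Hz' % psd)      # 1.8e-17 W/Hz
print('ASD = %.3g V/rtHz' % asd)    # 3e-08 V/rtHz, i.e. 30 nV/rtHz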
  7746 | Mon Nov 26 18:56:34 2012 | Jenne | HowTo | Computers | Data logging suggestions

We've been talking for a while about how we want to store data.  I'm not in love with keeping it on the elog, although I think we should always be able to reference and go back and forth between the elogs and the data.

I have made a new folder: /data    EDIT: nevermind.  I want it to be on the file system just like /users, but I don't know how to do that.  Right now the folder is just on Ottavia. Jamie will help me tomorrow.

In this folder, we will save all of the data which goes into the elog. 

I propose that we should have a common format for the names of the data files, so that we can easily find things.

My proposal is that one begins one's elog regarding the data to be saved, and submits it immediately after putting in the first ~sentence or so. One should then make a new folder inside the data folder with a title "elog#####_Anything_Else_You_Want". Then the data (which was originally saved in one's own users folder) should be copied into the /data/elog#####_AnythingElse/ folder. Also in that folder should be any Matlab scripts used to create the plots that you post in the elog.  One should then edit the elog to continue making a regular, very thorough elog, including the path to the data.  The elog should include all of the information about the measurement, the state of the IFO (or whatever you were measuring), etc. 
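A sketch of the convention (the elog number and file names are just examples):

import os
import shutil

elog_num    = 7746                      # the elog being written
description = 'Anything_Else_You_Want'
dest = '/data/elog%05d_%s' % (elog_num, description)

os.makedirs(dest)   # new folder under /data
# copy in the data and the Matlab scripts that made the plots
for f in ['measurement.txt', 'make_plots.m']:
    shutil.copy(os.path.join(os.path.expanduser('~'), f), dest)
print('elog data archived in', dest)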

Riju will be alpha-testing this procedure tonight.  EDIT: nevermind...see previous edit.

  11444 | Fri Jul 24 18:12:52 2015 | Max Isi | Update | General | Data missing

For the past couple of days, the summary pages have shown minute-trend data disappearing at 12:00 UTC (05:00 AM local time). This seems to be the case for all channels that we plot, see e.g. https://nodus.ligo.caltech.edu:30889/detcharsummary/day/20150724/ioo/. Using Dataviewer, Koji has checked that the frames have indeed disappeared from disk. The data come back at 24:00 UTC (5pm local). Any ideas why this might be?

  11455 | Tue Jul 28 17:07:45 2015 | Jamie | Update | General | Data missing
Quote:

For the past couple of days, the summary pages have shown minute-trend data disappearing at 12:00 UTC (05:00 AM local time). This seems to be the case for all channels that we plot, see e.g. https://nodus.ligo.caltech.edu:30889/detcharsummary/day/20150724/ioo/. Using Dataviewer, Koji has checked that the frames have indeed disappeared from disk. The data come back at 24:00 UTC (5pm local). Any ideas why this might be?

Possible explanations:

  • The data transfers to LDAS had been shut off while we were doing the DAQ debugging. I don't know if they have been turned back on.  Unlikely this is the problem since you would probably see no data at all if this were the case.
  • wiper script parameters might have been changed to store less of the trend data for some reason.
  • Frame size is different and therefore wiper script parameters need to be adjusted.
  • Steve deleted it all.
  • ...
  10973 | Wed Feb 4 18:16:44 2015 | Koji | Update | LSC | Data transfer rate of c1lsc reduced from ~4MB/s to ~3MB/s

c1lsc had 60 full-rate (16kS/s) channels to record. This required the LSC-to-FB connection to handle a 4MB/s (megabytes per second) data rate.
This was almost at the data rate limit of the CDS, and we had frequent halts of the diagnostic systems (i.e. DTT and/or dataviewer).

Jenne and I reviewed the DAQ channel list and decided to remove some channels.  We also reviewed their recording rates
and reduced the rates of some channels. The c1lsc model was rebuilt, re-installed, and restarted. FB was also restarted. These are running as they were.
The data rate is now reduced to ~3MB/s nominal.


The following is the list of the channels removed from the DQ channels:

AS11_I_ERR
AS11_Q_ERR
AS165_I_ERR
AS165_Q_ERR
POP55_I_ERR
POP55_Q_ERR

The following is the list of the channels with the new recording rate:

TRX_SQRTINV_OUT 2048
TRY_SQRTINV_OUT 2048
DARM_A_ERR 2048
DARM_B_ERR 2048
MICH_A_ERR 2048
MICH_B_ERR 2048
PRCL_A_ERR 2048
PRCL_B_ERR 2048
CARM_A_ERR 2048
CARM_B_ERR 2048

  13801 | Mon Apr 30 23:13:12 2018 | Kevin | Update | Computer Scripts / Programs | DataViewer leapseconds

I was trying to plot trends (min, 10 min, and hour) in DataViewer and got the following error message

Connecting.... done
 mjd = 58235
leapsecs_read()
  Opening leapsecs.dat
  Open of leapsecs.dat failed
leapsecs_read() returning 0
frameMemRead - gpstimest = 1208844718

 

though the plots showed up fine afterwards. Do we need to fix something with the leapsecs.dat file?

  14781 | Fri Jul 19 19:44:03 2019 | gautam | Update | CDS | Database file test

Summary:

The database files for C1ISCAUX seem to work fine - the exception being the mbbo channels for the CM board.

Details:

This was just a software test - the actual functionality of the channels will have to be tested once the Acromag crate has been installed in the rack. One change I had to make on the MEDM screen for the LSC PD whitening gains was to get rid of the "NMS" suffix on the EPICS channel names for the whitening gain sliders/drop-down menus. I suspect this has to do with the EPICS version we are using, 7.0.1. Furthermore, AS165 and POP55 no longer exist - I am holding off on removing them from the MEDM screen for the moment.

Next steps:

From the software point of view, the major steps are:

  1. Fix the mbbo channel notation in the database files
  2. Write and test the latch enabling code
  3. Figure out what scripted tests can be done to test the functionality of the new Acromag box.

I am stopping the EPICS server on the new machine and restarting the old VME crate over the weekend.
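One possible scripted soft test (item 3 above) is a caget/caput loop-back over the new channels; the channel names here are placeholders, not the actual C1ISCAUX list:

import epics   # pyepics

TEST_CHANNELS = ['C1:LSC-REFL11_WhiteGain',    # hypothetical examples
                 'C1:LSC-AS55_WhiteGain']

for pv in TEST_CHANNELS:
    old = epics.caget(pv)
    epics.caput(pv, old, wait=True)    # write back the same value
    new = epics.caget(pv)
    status = 'OK' if new == old else 'MISMATCH'
    print('%-30s %s (read %s)' % (pv, status, new))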

Attachment 1: Whitening.png
Whitening.png
  14771 | Thu Jul 18 10:46:04 2019 | gautam | Update | CDS | Database files made

I completed the translation of the .db files for the EPICS database records from the VME notation to the Acromag/Modbus/Asyn notation. The channels are now organized into 5 database files, located in /cvs/cds/caltech/target/c1iscaux3/,  for convenience:

  1. C1_ISC-AUX_LSCPDs.db -------- This handles whitening gain, AA enable/bypass, Demodulator FE, and PD Interface Board channels for REFL11, REFL55, REFL33, REFL165, POP22, POP110, POX11, POY11, AS55 and AS110 photodiodes.
  2. C1_ISC-AUX_CM.db -------------- This handles all channels for the CM board. The mbbo addressing notation needs to be checked.
  3. C1_ISC-AUX_QPDs.db ----------- This handles all channels for the IPPOS QPD.
  4. C1_ISC-AUX_ALS.db ------------- This handles all channels for the IR ALS DFD LO and RF power monitoring.
  5. C1_ISC-AUX_SPARE.db ---------- This handles the unused channels for the various whitening, AA and PD interface boards.

For reasons unknown to me, the database files in the other Acromag system target directories (e.g. c1susaux, c1auxex) all had 755 level access permission - maybe this is required for systemctl to handle the EPICS serving? Anyways, I upgraded the permission level of the above 5 files using chmod.

There are almost certainly typos / other errors, and I may have missed copying over some soft/calibrated channels, but I hope that this way of grouping by subsystem will make the debugging less painful. Once Chub connects up the power lines to the Acromags, I will run the soft tests. For this purpose, I've also made a C1_ISC-AUX.cmd file and a C1_ISC-AUX.env file in the above target directory, and also made the modbusIOC.service file in /etc/systemd/system on the supermicro.

  11830 | Tue Dec 1 10:52:52 2015 | Steve | Update | General | Dataviewer

Dataviewer x axis end is not there.

On longer (2600 day) plots it is missing 8 months, and on shorter (100 day) plots it is missing 1 month.

Attachment 1: xAxisEndMissing.png
xAxisEndMissing.png
  9137 | Wed Sep 18 11:29:43 2013 | manasa | Update | CDS | Dataviewer cannot connect to fb

Masayuki pointed out that dataviewer wasn't connecting to the fb this morning.

When I started dataviewer from the terminal I obtained the following error:

controls@pianosa:~ 0$ dataviewer
Can't find hostname `fb:8088'
Can't find hostname `fb:8088'; gethostbyname(); error=1
Warning: Not all children have same parent in XtManageChildren
Warning: Not all children have same parent in XtManageChildren
Warning: Not all children have same parent in XtManageChildren
Warning: Not all children have same parent in XtManageChildren
Warning: Not all children have same parent in XtManageChildren
Error in obtaining chan info.
Can't find hostname `fb:8088'
Can't find hostname `fb:8088'; gethostbyname(); error=1

I checked the CDS FE status screen and it looks normal. I could ping the fb and ssh to it as well.

I restarted fb to see if it made any difference. telnet fb 8088

It hasn't helped. Anything else that can be done??

CDS_FE.png

  9138 | Wed Sep 18 11:52:53 2013 | Jamie | Update | CDS | Dataviewer cannot connect to fb

Quote:

Masayuki pointed out that dataviewer wasn't connecting to the fb this morning.

When I started dataviewer from the terminal I obtained the following error:

controls@pianosa:~ 0$ dataviewer
Can't find hostname `fb:8088'
Can't find hostname `fb:8088'; gethostbyname(); error=1
Warning: Not all children have same parent in XtManageChildren
Warning: Not all children have same parent in XtManageChildren
Warning: Not all children have same parent in XtManageChildren
Warning: Not all children have same parent in XtManageChildren
Warning: Not all children have same parent in XtManageChildren
Error in obtaining chan info.
Can't find hostname `fb:8088'
Can't find hostname `fb:8088'; gethostbyname(); error=1

I checked the CDS FE status screen and it looks normal. I could ping the fb and ssh to it as well.

I restarted fb to see if it made any difference. telnet fb 8088

It hasn't helped. Anything else that can be done??

I've fixed the problem.  This was due to a change I made in the NDSSERVER environment variable so that it would work with cdsutils.  I didn't realize there was an incompatibility with how dataviewer parses NDSSERVER.  Joe and I will have to figure it out.

In the mean time I've changed things back so that dataviewer should now work as expected.  You might have to log out and back in for it to work (or at least open a new terminal).

  14782 | Fri Jul 19 22:48:08 2019 | Kruthi | Update | Dataviewer error

I'm not able to get trends from dataviewer for the TM adjustment test that Rana had asked us to perform. It's throwing the following error:

Connecting to NDS Server fb (TCP port 8088)
Connecting.... done
Server error 7: connect() failed
datasrv: DataWrite failed: daq_send: Resource temporarily unavailable
T0=19-07-20-01-27-39; Length=600 (s)
No data output.

  14783 | Sat Jul 20 01:03:37 2019 | gautam | Update | Dataviewer error

What channels are you trying to read?

Quote:

I'm not able to get trends from dataviewer for the TM adjustment test that Rana had asked us to perform. It's throwing the following error:

Connecting to NDS Server fb (TCP port 8088)
Connecting.... done
Server error 7: connect() failed
datasrv: DataWrite failed: daq_send: Resource temporarily unavailable
T0=19-07-20-01-27-39; Length=600 (s)
No data output.

  10611 | Wed Oct 15 17:18:10 2014 | Jenne | Update | Computer Scripts / Programs | Dataviewer fix with Ubuntu 12.04

 

 I have modified the Dataviewer launcher (which runs when you either click the icon or type "dataviewer" in the terminal).  

A semi-old problem was that it would open in the folder /users/Templates, but our dataviewer templates start in /users/Templates/Dataviewer_Templates.  Now this is the folder that dataviewer opens into.  This was not related to the upgrade to Ubuntu 12, but will be overwritten any time someone does a checkout of the /ligo/apps/launchers folder.

A problem that is related to the Ubuntu 12 situation, which we had been seeing on Ottavia and Pianosa for a few weeks, was that the variable NDSSERVER was set to fb:8088, which is required for cdsutils to work.  However, dataviewer wants this variable to be set to just fb.  So, locally in the dataviewer launcher script, I set NDSSERVER=fb.  NB: I do not export this variable, because I don't want to screw up the cdsutils.  This may need to be undone if we ever upgrade our Dataviewer.
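For the record, the incompatibility boils down to host-vs-host:port parsing; a tolerant parser that accepts both forms would look something like this (a sketch, not the actual dataviewer code):

import os

def parse_ndsserver(default_port=8088):
    val = os.environ.get('NDSSERVER', 'fb')
    first = val.split(',')[0]            # take the first server if a list
    host, _, port = first.partition(':')
    return host, int(port) if port else default_port

print(parse_ndsserver())   # ('fb', 8088) for both 'fb' and 'fb:8088'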

  11832 | Tue Dec 1 16:55:04 2015 | Steve | Update | VAC | Dataviewer fixed

Q adjusted the Dataviewer so it is not chopping off data any more. Thanks.

 

Cold cathode gauge reading of 10 years.

 

Attachment 1: 10yrsCC1&4M.png
10yrsCC1&4M.png
  13655 | Sun Feb 25 00:03:12 2018 | gautam | Update | ALS | Daughter board prototyping

Using one of the prototype PCB boards given to me by Johannes, I put together v1 of this board and tested it. 

Attachment #1 - Schematic with stages grouped by function and labelled. 

Attachment #2 - Measured vs modelled Transfer function.

Attachment #3 - Measured vs modelled noise. Measurement shown only between positive output and ground; the other port is basically the same. I will update this attachment to reflect the expected signal level in comparison to the noise, but suffice it to say that given the measured input referred noise, we will have plenty of SNR between 0.1Hz and 10kHz. The single stage of whitening should also be sufficient to amplify the signal above ADC noise in the same frequency band.

Attachment #4 - Positive output as viewed on a fast (300 MHz) scope using a Tektronix x1 voltage probe.

Attachment #5 - Daughter board noise with measured ALS noise overlaid (the gain of x10 on the existing audio pre-amp has been divided out). 

Comments:

  • I may have overlooked the GBW of the OP27 in the design - specifically, the negative feedback is wired for gain x100 at high frequencies, and so the input signal should be filtered above 8MHz/100 ~ 80kHz. But the LC poles are at ~500kHz. I wonder if the small deviation seen between modelled and measured TFs is reflecting this. Practically, the easier fix is to add a feedback capacitor that rolls off the gain at high frequencies. 300pF WIMA should do the trick, and we have these in stock.
  • I don't understand why the modelled response starts to roll off around 5kHz, even though the poles of the LC filter at the input stage are at 500kHz. This happens because at low frequencies, the 1.5uH inductor is basically a short - so the RC divider at the input of the OP27 has a pole at 1/(2*pi*R*C) ~5kHz for R = 499 ohm, C = 68nF (checked numerically after this list).
  • I am not sure what to make of the peaky comb seen in Attachment #3, but I'm pretty sure it's electronic pickup from something. The GPIB adapter power supply is not to blame. The peaks are 10 Hz spaced.
  • From Attachment #4, I don't suspect any opamp oscillations given that the signal seen is tiny, but I don't know what amplitude is characteristic of an oscillating op amp, so I am not entirely confident about this conclusion. 
  • Initially while thinking about the design, I was trying to think of making the design generic enough that we could use these signals for high-bandwidth ALS control (a.k.a. Fast ALS) but in the current incarnation, no consideration was given to minimizing phase lag at high frequencies. 
  • Putting the PCB board together was more painful than I imagined as the board is configured for 4 single op amps whereas my design requires 5 - so I needed to do some trace cutting surgery. Rather than make 3 more of these, I'm just going to finish the characterization, and if the design looks good, we can get some custom PCBs printed.
  • Power decoupling caps (47nF) are added to all op amp power pins, but is not shown in the schematic.
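A quick numerical check of the two frequencies mentioned in the list above, using the values from the text (R = 499 ohm, C = 68 nF; OP27 GBW ~8 MHz with a noise gain of 100):

import math

R, C = 499.0, 68e-9
f_rc = 1.0 / (2 * math.pi * R * C)
print('input RC pole: %.2f kHz' % (f_rc / 1e3))   # ~4.69 kHz, i.e. ~5 kHz

gbw, noise_gain = 8e6, 100.0
print('loop bandwidth: %.0f kHz' % (gbw / noise_gain / 1e3))   # ~80 kHz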

Given the overall good agreement between model and measurement, I am going to test this with the actual RF beat. For this test, we will need a differential receiving AA board to interface the output of the daughter board with the ADC input.

Quote:

Next step is to actually make a prototype of this.

Attachment 1: schematic.pdf
schematic.pdf
Attachment 2: daughterBoard_TF.pdf
daughterBoard_TF.pdf
Attachment 3: daughterBoard_noise.pdf
daughterBoard_noise.pdf
Attachment 4: TEK00000.PNG
TEK00000.PNG
Attachment 5: daughterBoard_noise.pdf
daughterBoard_noise.pdf
  13657 | Mon Feb 26 20:55:56 2018 | rana | Update | ALS | Daughter board prototyping

Looks good.

* for bypass type applications, you don't have to use Wima caps (which are bigger and more expensive). You can just use any old ceramic SMD cap.

* This seems like a classic case to use the 3 op-amp instrumentation amplifier config. This is similar, but not quite.

* Ought to use output resistors of ~50 Ohms by default in the output of any circuit. Since this is a daughter board, maybe 10 Ohms is enough, but the eventual PCB should have pads for it.

  13658 | Tue Feb 27 21:10:45 2018 | gautam | Update | ALS | Daughter board testing

I thought a little bit about the next steps in testing the daughter board. The idea is to install this into the existing 1U chassis and tap the differential output from the FET Mixers as inputs to the daughter board. Looking at the D0902745 schematic, I think the best way to do this is to simply remove L3, L4, C10, C11, C15 and C16. I will then use the pads for L3 and L4 to pipe the differential output of the FET mixer to the differential input of the daughter board. 

The daughter board takes care of whitening the ALS signal.

Then we need to pipe the differential output of the daughter board into the differential input of a differential receiving AA board. Koji and Johannes surveyed the available stockpile from the WB workshop. The best option seems to be to use the available v5 of D070081 and install 4 of them into a 1U chassis unit (also available from WB EE shop). The v5s can be upgraded to v6 by replacing the set of input and output buffer OpAmps with AD8622, as per the revision history notes. Koji ordered 100pcs of these today. 

The input to the proposed 1U chassis housing these 8 AA boards (each with 8 channels) is a DB9 connector. The aLIGO demod board chassis that we use to demodulate the ALS signals has a nice DB25 output connector that supplies all the differential I and Q demodulated signals. But since we will install a daughter board, we will have to hack together some connector solution anyway. I propose using a DB9 connector to pipe the outputs of the daughter board to the inputs of the AA board. Space is tight in the LSC rack, but I think we have space for a 1U chassis (see Attachment #3).

Finally - how to interface the AA board with the ADC? Koji and I discussed options, and it seems like the least painful way will be to install a new ADC in the c1lsc expansion chassis in 1Y3. I checked the computer hardware cabinet and there seems to be 1 spare General Standards 16-bit ADC in there (see Attachment #1). Its health/provenance is unknown, but Koji and I will test it after the meeting tomorrow. I also have another ADC card that Jamie and I removed from c1ioo some time ago. I have labelled it as "GPIO0 LED RED", though I don't remember exactly what the problem was and can't find any elog about it. Incidentally, there are also 2 spare DAC cards available in the cabinet, although their health/provenance too is unknown. There are sufficient free slots in the c1lsc expansion chassis (see Attachment #2), though we will need a LIGO ADC adaptor card. Then we can just change the input ADC channels for the ALS signals in the c1lsc model.

In the short term, while the hardware for this plan is being put together, I can test the uncalibrated noise performance of the demod + daughter board combo (uncalibrated because I will make a measurement of voltage noise with an SR785 as opposed to frequency noise). A second daughter board will also need to be assembled - I'm just going to do it on another prototyping board as figuring out how to use Altium will probably take me longer. There is also the matter of fine tuning the polarization axes alignment of the input to the EX fiber coupler.

Attachment 1: IMG_6912.JPG
IMG_6912.JPG
Attachment 2: IMG_6913.JPG
IMG_6913.JPG
Attachment 3: IMG_6914.JPG
IMG_6914.JPG
  1341 | Thu Feb 26 19:59:23 2009 | Yoichi | Update | Locking | Daytime locking
Osamu, Yoichi

We tried locking today from about 2PM.
It took about 1000sec on average to acquire the initial lock.
After the initial lock is achieved, the hand-off/ramp-up steps were reasonably robust, although the AS beam sometimes fluctuates a lot (not good for mental health).

Like last night, the IFO loses lock at around arm-power=8.
We measured the CARM AO path loop gain at arm-power=4. We used the SR785 connected to the A-excitation channel of the common mode board through my TFSR785.py script.

The first attachment is the transfer function measured right after the arm power was ramped up to 4.
The overall bandwidth of the CM servo is only 400Hz. Note that since this is the loop gain of only the AO path, the low frequency gain is eaten by the MCL path.
The second attachment is the same transfer function measured after the AO path gain was increased by 6dB.
It is evident that the AO path is working.
We increased both the AO path and MCL gain by 18dB. The third attachment is the AO path TF in this state.
We then increased the arm power but lost lock at arm-power=6. We should have checked the DARM loop too.
BTW, these plots are automatically generated when you use TFSR785.py for transfer function measurements.


I added the -notickle option to c1_watch_dr_bang, since tickling seems not to be necessary during the daytime (actually the initial lock was easier with no tickling).

As the construction work in the next building has now calmed down, I think it is ok to do locking during the day time, though I still plan to come at night.
The improvement of my brain efficiency during the day time may compensate for the longer wait time for initial lock.
Attachment 1: CM1.png
CM1.png
Attachment 2: CM2.png
CM2.png
Attachment 3: CM3.png
CM3.png
  201 | Wed Dec 19 15:51:00 2007 | Andrey | Update | Computer Scripts / Programs | Daytime measurements in XARM and their results

I have been making measurements in XARM on three different nights. All the results agree with each other (I will put up the results from the last night soon).

Steve Vass recommended that I compare those results with daytime data, in order to see if there is a real necessity to run the scripts overnight or if daytime runs yield similar results.

XARM has been locked, and I am taking measurements today from 3.30PM till 11.30PM.

I will be changing the suspension damping gains in the ETMX and ITMX "position" degrees of freedom, in the interval from 1.0 to 3.75 in steps of 0.25.
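The sweep itself can be scripted along these lines (a sketch; the channel names and dwell time are illustrative, not the actual script):

import time
import numpy as np
import epics   # pyepics

gains = np.arange(1.0, 3.75 + 0.25, 0.25)    # 1.0 ... 3.75 in steps of 0.25
for g_etmx in gains:
    for g_itmx in gains:
        epics.caput('C1:SUS-ETMX_SUSPOS_GAIN', g_etmx)   # hypothetical channels
        epics.caput('C1:SUS-ITMX_SUSPOS_GAIN', g_itmx)
        time.sleep(300)   # dwell while the RMS data accumulate (a guess)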

BELOW: RESULTS OF MEASUREMENTS WERE ADDED ON THURSDAY, DEC. 20.

The meaning of attachments 1-3, 4-6, 7-9, and 10-11 is the same as in the previous ELOG entries # 195, # 199, # 202; see those entries for which graph corresponds to which coordinate-axis orientation.
Attachment 1: RMS-08Hz-Top_View.png
RMS-08Hz-Top_View.png
Attachment 2: RMS-3Hz-Top_View.png
RMS-3Hz-Top_View.png
Attachment 3: RMS-broadband-Top_View.png
RMS-broadband-Top_View.png
Attachment 4: RMS-08Hz-Side-View.png
RMS-08Hz-Side-View.png
Attachment 5: RMS-3Hz-Side_View.png
RMS-3Hz-Side_View.png
Attachment 6: RMS-broadband-Side_View.png
RMS-broadband-Side_View.png
Attachment 7: RMS-08Hz-Q_I-Q_E-Axes.png
RMS-08Hz-Q_I-Q_E-Axes.png
Attachment 8: RMS-3Hz-Side_View.png
RMS-3Hz-Side_View.png
Attachment 9: RMS-broadband-Side_View.png
RMS-broadband-Side_View.png
Attachment 10: Accelerometer_ETMX.png
Accelerometer_ETMX.png
Attachment 11: Accelerometer_ITMX.png
Accelerometer_ITMX.png
  14975 | Thu Oct 17 12:34:51 2019 | gautam | Update | General | Daytime wishlist

Some ideas that would help increase the locking duty-cycle in the short term. 

  1. Seismometer investigation - something is not quite right with the vertex seismometer. This is the one that is primarily used for feedforward, and can be really helpful.
  2. Drifting TTs - it is really annoying to have to re-set the input pointing into the interferometer every ~ hour. See Attachment #1.
  3. FSS - this isn't a scientific statement, but there were ~20-30 minute periods last night where the PC drive RMS was displaying sharp spikes repeating every 2-3 seconds, first with increasing and then decreasing height. This is a new feature to me in the long standing PC drive saga but it doesn't tell me exactly what is going on as I don't know in what frequency band the glitch is actually happening. See Attachment #2.
  4. ALS noise - while it is possible now to routinely transition the arm length control from the POX/POY to CARM/DARM basis, I see some sharp (<0.1 s) dives in the TRX/TRY levels when the arms are under ALS control. This wasn't present a week ago. Needs to be investigated - I defer this to the daytime tomorrow.
Attachment 1: DriftingTTs.png
DriftingTTs.png
Attachment 2: FSSweirdness.png
FSSweirdness.png
  13010 | Tue May 23 22:58:23 2017 | gautam | Update | General | De-Whitening board noises

Summary:

I wanted to match a noise model to noise measurement for the coil-driver de-whitening boards. The main objectives were:

  1. Make sure the various poles/zeros of the Bi-Quad stages and the output stage were as expected from the schematics
  2. Figure out which components are dominating the noise contribution, so that these can be prioritized while swapping out the existing thick-film resistors on the board for lower noise thin-film ones
  3. Compare the noise performance of the existing configuration, which uses an LT1128 op-amp (max output current ~20mA) to drive the input of the coil-driver board, with that when we use a TLE2027 (max output current ~50mA) instead. This last change is motivated by the fact that an earlier noise-simulation suggested that the Johnson noise of the 1kohm input resistor on the coil driver board was one of the major noise contributors in the de-whitening board + coil driver board signal chain. Since the TLE2027 can drive an output current of up to 300mA, we could reduce the input impedance of the coil-driver board to mitigate this noise source to some extent. 

Measurement:

  • The back-plane pin controlling the MAX333A that determines whether de-whitening is engaged or not (P1A) was pulled to ground (by means of one of the new extender boards given to us by Ben Abbott). So two de-whitening stages were engaged for subsequent tests.
  • I first measured the transfer function of the signal path with whitening engaged, and then fit my LISO model to the measurement to tweak the values of the various components. This fitted file is what I used for subsequent noise analysis. 
  • ​For the noise measurement, I shorted the input of the de-whitening board (10-pin IDE connector) directly to ground.
  • I then measured the voltage noise at the front-panel SMA connector with the SR785
  • The measurements were only done for 1 channel (CH1, which is the UL coil) for 4 de-whitening boards (2 ITMs, BS, and SRM). The 2 ITM boards are basically identical, and the BS and SRM boards are similar. Here, only results for the board labelled "ITMX" are presented.
  • For this board, I also measured the output voltage noise when the LT1128 was replaced with a TLE2027 (SOIC package, soldered onto a SOIC-to-DIP adaptor). Steve has found (ordered?) some DIP variants of this IC, so we can compare its noise performance when we get it.

Results:

  • Attachment #1 shows the modeled and measured noises, which are in fairly good agreement.
  • The transfer function measurement/fitting (not attached) also suggests that the poles/zeros in the signal path are where we expect as per the schematic. I had already verified the various resistances, but now we can be confident that the capacitance values on the schematic are also correct. 
  • The LT1128 and TLE2027 show pretty much identical noise performance.
  • The SR785 noise floor was low enough to allow this measurement without any pre-amp in between. 
  • I have identified 3 resistors from the LISO model that dominate the noise (all 3 are in the Bi-Quad stages), which should be the first to be replaced. 
  • There are some pretty large 60 Hz harmonics visible. I thought I was careful enough avoiding any ground loops in the measurement, and I have gotten some more tips from Koji about how to better set up the measurement. This was a real problem when trying to characterize the Coil Driver noise.

Next steps:

  • I have data from the other 3 boards I pulled out, to be updated shortly.
  • The last piece (?) in this puzzle is the coil driver noise - this needs to be modeled and measured.
  • Once the coil driver board has been characterized, we need to decide what changes to make to these boards. Some things that come to mind at the moment:
    • Replace critical resistors (from noise-performance point of view) with low noise thin film ones.
    • Remove the "fast analog" path on the coil driver boards - these have potentiometers in series with the coil, which we should remove since we are not using this path anyways.
    • Remove all AD797s from both de-whitening and coil driver boards - these are mostly employed as monitor points that go to the backplane connector, which we don't use, and so can be removed.
    • Increase the series resistor at the output of the coil driver (currently, these are either 100ohm or 400ohm depending on the optic/channel). I need to double check the limits on the various LSC servos to make sure we can live with the reduced range we will have if we up these resistances to 1 kohm (which serves to reduce the current noise to the coils, which is ultimately what matters).
Attachment 1: ITMX_deWhite_ch1_noise.pdf
ITMX_deWhite_ch1_noise.pdf
  15783 | Thu Jan 28 22:34:21 2021 | gautam | Update | SUS | De-whitening

Summary:

  1. We will need de-whitening filters for the BHD relay optics in order to meet the displacement noise requirements set out in the DRD. I think these need not be remotely switchable (this depends on the specifics of the LO phase control scheme). SR2, PR2 and PR3 can also have the same config, and probably MC1 and MC3 as well.
  2. We will need de-whitening filters for the non test mass core IFO optics (PRM, SRM, BS, and probably MC2).
  3. I am pretty sure we will not be able to have sufficient DAC range for the latter class of optics if we have to:
    1. Supply the DC bias.
    2. Do the LSC and ASC actuation in the presence of reasonable sensing noise levels.
    3. Engage de-whitening to low-pass-filter the DAC noise at ~200 Hz.

Details:

Attachment #1 shows the DAC noise models for the General Standards 16-bit and 18-bit DACs we are expecting to have.

  • The 16-bit model has been validated by me at the 40m a few years ago.
  • We have never used the 18-bit flavor at the 40m, and there are all manner of quirks apparently related to zero crossings and such, so the noise may be up to 2x higher (we won't necessarily have as much freedom as the sites do to bias the DAC on one side of the zero crossing, since the same DAC channel must also supply the DC bias current for alignment). A rough quantization-floor estimate is sketched after this list.
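
For orientation, here is a back-of-the-envelope quantization floor for an ideal DAC, assuming a +/-10V range and fs = 65536 Hz (both assumptions; the noise models in Attachment #1 include the electronics noise and sit well above this floor):

import numpy as np

def dac_quant_floor(nbits, vrange=20.0, fs=65536.0):
    """Ideal quantization noise ASD in V/rtHz: white noise of power
    LSB^2/12 spread over the Nyquist band [0, fs/2]."""
    lsb = vrange / 2**nbits
    return lsb / np.sqrt(6.0 * fs)

for n in (16, 18):
    print(f"{n}-bit: {1e9 * dac_quant_floor(n):6.1f} nV/rtHz")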

Attachment #2 shows the expected actuation range for DC optic alignment, assuming we use the entire DAC range for this purpose.

  • Clearly, we need to do other things with the same DAC channels as well, so this is very much an upper bound of what will be possible.
  • Let's assume we will not go lower than 100ohms.
  • For all new optics we are suspending, we should aim to get the pitch balancing to within 500urad. With a 2x2m = 4m effective optical lever arm, a 500urad pitch offset corresponds to a 2mm spot shift. Should be doable.
  • This could turn out to be a serious problem for PRM, BS and SRM if we hope to measure squeezing - the <AUX DOF>-->DARM coupling could be at the level of -40dB, and at 200 Hz, the DAC noise would result in PRCL/MICH/SRCL noise at the level of ~10^-15 m/rtHz, which would be ~10^-17 m/rtHz in DARM (see the order-of-magnitude sketch after this list). I don't think we can get 20dB of feedforward cancellation at these frequencies. For demonstrating locking using a BHD error signal, maybe this is not a big deal.
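
Here is the order-of-magnitude arithmetic behind the ~10^-15 m/rtHz number, as a hedged sketch. Every number below is an assumption for illustration: 1 uV/rtHz DAC noise, 100 ohm series resistance, a placeholder actuator coefficient, and a 250 g optic treated as a free mass well above the ~1 Hz pendulum resonance.

import numpy as np

v_dac = 1e-6    # V/rtHz, assumed DAC noise at 200 Hz
R_ser = 100.0   # ohm, series resistance
alpha = 0.016   # N/A per coil -- placeholder, not a measured value
m = 0.25        # kg, SOS optic mass
f = 200.0       # Hz

i_n = v_dac / R_ser                    # current noise per coil
F_n = 4 * alpha * i_n                  # force noise, 4 coils (worst case, coherent)
x_n = F_n / (m * (2 * np.pi * f)**2)   # displacement, free-mass response
print(f"x(200 Hz) ~ {x_n:.1e} m/rtHz")
# With these numbers this lands near 1.6e-15 m/rtHz; a -40 dB AUX->DARM
# coupling would then put it around 1.6e-17 m/rtHz in DARM.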

Attachment #3 shows the current and proposed (by me, just a rough first pass, not optimized in any way yet) de-whitening filter shapes. These shapes can be tweaked for sure.

  • The existing de-whitening filter is way too aggressive. FWIW, the DRD "models" a "4th order Chebyshev low pass filter" which doesn't exist anywhere as far as I know.
  • Since the DAC noise is below 1 uV/rtHz at all frequencies of interest, we never need to have >60dB de-whitening anywhere as the input referred noise of any circuit we build will exceed 1 nV/rtHz.
  • I propose 3 poles, 3 zeros. In the plot, the poles are located at 30Hz, 50Hz, 2kHz, and the zeros are at 300Hz, 300Hz, 800Hz (a quick sketch of this shape follows this list).
  • The de-whitening is less aggressive below 100 Hz, where we still need significant LSC actuation ability. Considering the sensing noise levels at the 40m, I don't know if we can have reasonable LSC and ASC loop shapes and still have the de-whitening engaged.
  • Once again, PRM, SRM and BS will be the most challenging.
  • For the BHD relay optics, once we have the de-whitening, we won't have the option of turning on a high-frequency (~kHz) dither line because of insufficient DAC range. 
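
A quick scipy sketch of the proposed shape (pole/zero values from the list above, normalized to unity DC gain; this is not the final design, just a way to read off the attenuation at a few frequencies):

import numpy as np
from scipy import signal

zeros = -2 * np.pi * np.array([300.0, 300.0, 800.0])  # rad/s
poles = -2 * np.pi * np.array([30.0, 50.0, 2000.0])
k = np.prod(np.abs(poles)) / np.prod(np.abs(zeros))   # sets |H(0)| = 1

f = np.logspace(0, 4, 1000)
_, h = signal.freqs_zpk(zeros, poles, k, worN=2 * np.pi * f)

for f0 in (10, 100, 200, 1000):
    idx = np.argmin(np.abs(f - f0))
    print(f"|H({f0:5d} Hz)| = {20 * np.log10(abs(h[idx])):6.1f} dB")

Since there are equal numbers of poles and zeros, the high-frequency attenuation asymptotes to (30*50*2000)/(300*300*800), i.e. about -28 dB, consistent with never needing >60dB of de-whitening.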

Attachment #4 puts everything into displacement noise units. The electronics noise of the coil driver / de-whitening circuit has not been included, so at high frequencies the projection is better than what will actually be realizable, but still well below the BHD requirement of 3e-17 m/rtHz.

Attachment 1: DACnoiseModels.pdf
Attachment 2: actuationRange.pdf
Attachment 3: deWhiteTFs.pdf
Attachment 4: dispNoiseModels.pdf
  1680   Tue Jun 16 17:38:35 2009 robUpdateLockingDeWhites ON

With the common mode servo bandwidth above 30kHz and the BOOST on (1), I was able to switch on the test mass dewhitening.  Finally.

  3216   Wed Jul 14 11:54:33 2010 josephbUpdateDAQDebugging Guralp and reboots

This is in regard to the zero signal being reported by the channels C1:PEM-SEIS_GUR1_X, C1:PEM-SEIS_GUR1_Y, and C1:PEM-SEIS_GUR1_Z.

I briefly swapped Guralp 1 EW and Guralp 2 EW to confirm to myself that the problem was not on the Guralp end (although the fact that it's a digital zero is highly indicative of a problem in the digital realm).  I then unplugged the 17-32, and then the 1-16, channel connections to the 110B.  I saw floating noise on the GUR2 channels, but still digital zero on the GUR1 channels, which means it's not the BNC breakout box.

There was a spare 110B, unconnected, in the crate, so to do a quick test of the 110B, I turned off the crate and swapped the 110Bs, after copying the switch configuration of the first 110B to the second one.  The original 110B was labeled ADC 1, while the second was labeled ADC 0.  The switch settings were identical except for the bank closest to the D-sub connectors on the front: all the switches in that bank were set to the right, when looking down at them with the D-sub connectors pointing towards you.

Unfortunately, c0dcu1 never seemed to come up with the new 110B (ADC 0).  So we put the original 110B back and turned the crate back on.

The fb then didn't seem to come back quite right.  We tried rebooting fb40m, but it's still red with status 1.  c0daqctrl is green, but c0dcu1 is red, although I'm not positive whether that's due to fb40m being in a strange state.  Jenne tried a telnet to port 8087 and a shutdown, but that didn't seem to help.  At this point, we're going to contact Alex when he gets in around 12:30.

 

  3220   Wed Jul 14 16:39:06 2010 JenneUpdateDAQDebugging Guralp and reboots

[Joe, Jenne]

Joe got on the phone with Alex, and Alex's magic intuition told him to ask about the RFM switch.  The C0DAQ_CTRL overload light was orange.  Alex suggested hitting the reset button on that RFM switch, which we did. That fixed everything -> c0dcu1 came back, as did the frame builder.  Rana had pointed out earlier that we could have brought back all of the other front ends and enabled the damping of the optics even though the FB was still down.  It's okay to leave the front ends & watchdogs on, and just reboot the FB, AWG, and DAQ_CTRL computers if that is necessary.

Anyhow, once the FB was back online, we got around to bringing back all of the front ends (as usual, except for the ones which are unplugged because they're in the middle of being upgraded).  Everything is back online now.

After all of this craziness, all of the Guralp channels are working happily again. It is still unknown why they started reading digital zero, but they're back again. Maybe I should have rebooted the frame builder in addition to c0dcu1 last night?

 

Quote:

This is in regard to the zero signal being reported by the channels C1:PEM-SEIS_GUR1_X, C1:PEM-SEIS_GUR1_Y, and C1:PEM-SEIS_GUR1_Z.

I briefly swapped Guralp 1 EW and Guralp 2 EW to confirm to myself that the problem was not on the Guralp end (although the fact that it's a digital zero is highly indicative of a problem in the digital realm).  I then unplugged the 17-32, and then the 1-16, channel connections to the 110B.  I saw floating noise on the GUR2 channels, but still digital zero on the GUR1 channels, which means it's not the BNC breakout box.

There was a spare 110B, unconnected, in the crate, so to do a quick test of the 110B, I turned off the crate and swapped the 110Bs, after copying the switch configuration of the first 110B to the second one.  The original 110B was labeled ADC 1, while the second was labeled ADC 0.  The switch settings were identical except for the bank closest to the D-sub connectors on the front: all the switches in that bank were set to the right, when looking down at them with the D-sub connectors pointing towards you.

Unfortunately, c0dcu1 never seemed to come up with the new 110B (ADC 0).  So we put the original 110B back and turned the crate back on.

The fb then didn't seem to come back quite right.  We tried rebooting fb40m, but it's still red with status 1.  c0daqctrl is green, but c0dcu1 is red, although I'm not positive whether that's due to fb40m being in a strange state.  Jenne tried a telnet to port 8087 and a shutdown, but that didn't seem to help.  At this point, we're going to contact Alex when he gets in around 12:30.

 

 

  7174   Tue Aug 14 11:39:13 2012 Jamie, Rolf, AlexUpdateCDSDebugging of c1sus machine and c1rfm models

Rolf and Alex came over this morning to see if they could help debug some issues we have been seeing with IPC transmission between the c1sus and c1lsc machines.

c1oaf, which runs on c1lsc, sees a lot of transmission errors on its dolphin receivers from c1rfm, which runs on c1sus.  Their speculation is that c1rfm is trying to process too many channels, and is not able to read all the RFM channels and retransmit them over dolphin to c1lsc before the end of the cycle.  To test this, they turned off all RFM reads in c1rfm, and the dolphin receiver errors on c1lsc all went away.  We ran into other problems before I had a chance to pester them about what the take-away is here.  We might just need to reduce the load on c1rfm, maybe by introducing a c1rfm2?

We then tried to debug an issue on the c1sus machine where models would occasionally run slow for a cycle, or run slow when a different model on the machine was loaded or unloaded.  The suspect was the BIOS settings.  Unfortunately, we ran into trouble when we tried to tweak the BIOS settings on c1sus.  We found that all the serial/COM ports were on, which is usually a big no-no for the RTS (the interrupts cause many cycle delays).  However, turning off the COM ports prevented the machine from booting at all, which was a big mystery.  The machine seemed to be acting flaky in general as well, since the (pre-kernel) boot would hang in various places after different reboots.  Alex went to grab us a spare machine, which we're going to try swapping in this afternoon.

  7176   Tue Aug 14 11:49:15 2012 DenUpdateCDSDebugging of c1sus machine and c1rfm models

Quote:

 

  We might just need to reduce the load on c1rfm, maybe by introducing a c1rfm2?

 

 A huge data flow goes from PEM to OAF through the RFM. I think we need to make PEM and OAF run on the same machine and transmit the signals through shared memory.

  4406   Fri Mar 11 18:32:45 2011 josephb, Chris, JamieUpdateCDSDebugging simplant damping

The FM1 filter module change for XXSEN was propagated to the ETMX suspension.  This was a change from a 30,100:3 filter with a DC gain of 1 to a 3:30 filter, which just compensates the hardware filter (see the sketch below).
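
To make the notation explicit (assuming the z:p shorthand means a zero at 3 Hz and a pole at 30 Hz, and assuming the hardware stage carries the mirrored pair - both are assumptions here), a quick sketch showing that the cascade is flat, which is what "just compensates the hardware filter" means:

import numpy as np
from scipy import signal

f = np.logspace(-1, 3, 500)
w = 2 * np.pi * f

# digital compensation: zero at 3 Hz, pole at 30 Hz, unity DC gain (k = 10)
_, dig = signal.freqs_zpk([-2 * np.pi * 3.0], [-2 * np.pi * 30.0], 10.0, worN=w)
# assumed hardware stage: zero at 30 Hz, pole at 3 Hz, unity DC gain (k = 0.1)
_, hw = signal.freqs_zpk([-2 * np.pi * 30.0], [-2 * np.pi * 3.0], 0.1, worN=w)

print(f"max deviation from flat: {np.max(np.abs(np.abs(dig * hw) - 1.0)):.1e}")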

The good gains for the Sim damping were found to be: 100 for ETMX_SUSPOS, 0.1 for ETMX_SUSPIT, 0.1 for ETMX_SUSYAW, and -70 for ETMX_SUSSIDE.  Much higher gains tended to saturate the simulated coils (i.e. hitting the 10V limit), after which the histories had to be cleared for the RESPONSE matrix.

These seem to work to damp the real ETMX as well.

  10775   Wed Dec 10 16:12:29 2014 manasaSummaryGeneralDec 10 - PSL table

Quote:

Attached is the timeline for Frequency Offset Locking related activities. All activities will be done mostly in morning and early afternoon hours.

I was working around the PSL table today.

I wanted to modify the telescope that couples PSL light into the fiber, now that I have the translation stages for the lenses. I could not finish, as the locking work started earlier than usual this afternoon. I measured the out-of-loop noise for the ALS error signals before I opened the PSL enclosure. The X and Y beat notes were at -18dBm at 49.3MHz and -29.56dBm at 62.2MHz for this measurement. DTT data can be found in /users/manasa/data/141210/ALSoutLoop.xml, so there is a reference to go back to in case the work on the PSL table causes any damage.

Also, I received the front and back panels for the Fiber chassis and put it together. Find photos of the chassis (front panel and inside) in the attachments. This will go inside the PSL enclosure tomorrow.

FiberMod_front.jpg    FOL_fiber.jpg

 
