ID | Date | Author | Type | Category | Subject
  13423 | Fri Nov 10 08:52:21 2017 | Steve | Update | VAC | TP3 drypump replaced

PSL shutter closed at 6e-6 Torr (IT gauge).

The foreline pressure of the drypump is 850 mTorr at 8,446 hrs of seal life.

V1 will be closed for ~20 minutes for drypump replacement.

9:30am: dry pump replaced, PSL shutter opened at 7.7E-6 Torr (IT gauge).

Valve configuration: Vacuum Normal, as TP3 is the forepump of the Maglev; the annuli are not pumped.

Quote:

TP3 drypump replaced at 655 mTorr, no load, tp3 0.3A 

This seal lasted only 33 days, at 123,840 hrs.

The replacement is performing well: TP3 foreline pressure is 55 mTorr, no load, tp3 0.15A at 15 min  [ 13.1 mTorr at d5 ]

 

Valve configuration: Vacuum Normal, ITcc 8.5E-6 Torr

Quote:

Dry pump of TP3 replaced after 9.5 months of operation.[ 45 mTorr d3 ]

The annuli are pumped.

Valve configuration: vac normal, IFO pressure 4.5E-5 Torr [1.6E-5 Torr d3 ] on new ITcc gauge, RGA is not installed yet.

Note how fast the pressure is dropping when the vent is short.

Quote:

IFO pressure 1.7E-4 Torr on the new (not yet logged) cold cathode gauge. P1 <7E-4 Torr.

Valve configuration: vac. normal with annuli closed off.

TP3 was turned off with a failing drypump. It will be replaced tomorrow.

All time stamps are blank on the MEDM screens.

 

 

  13422 | Thu Nov 9 15:33:08 2017 | johannes | Update | CDS | revisiting Acromag
Quote:

We probably want to get a dedicated machine that will handle the EPICS channel serving for the Acromag system

http://www.supermicro.com/products/system/1U/5015/SYS-5015A-H.cfm?typ=H

This is the machine that Larry suggested when I asked him for his opinion on a low-workload rack-mount unit. It only has an Atom processor, but I don't think it needs anything particularly powerful under the hood. He said that he will likely be able to let us borrow one of his for a couple of days to see if it's up to the task. The dual ethernet is a nice touch; maybe we can keep the communication between the server and the DAQ units on their own separate local network.

  13421 | Thu Nov 9 10:51:37 2017 | gautam | Summary | LSC | current procedure for compiling and installing c1dnn code

Jamie pointed out that the compile and install instructions are different for c1dnn:

cd /opt/rtcds/caltech/c1/rtbuild/test/nn-test
make c1dnn
make install-c1dnn

See also: https://nodus.ligo.caltech.edu:8081/40m/13383.

I think these build instructions have to be run on the c1lsc frontend - in the past, I have been able to compile and install models on any computer with the shared drive mounted (including the control room workstations), but I'm guessing that something has changed since the RCG upgrade. Jamie can correct me on this if I'm wrong.

  13420 | Wed Nov 8 17:04:21 2017 | gautam | Update | CDS | gds-2.17.15 [not] installed

I wanted to use the foton.py utility for my NB tool, and I remember Chris telling me that it was shipping as standard with the newer versions of gds. It wasn't available in the versions of gds available on our workstations - the default version is 2.15.1. So I downloaded gds-2.17.15 from http://software.ligo.org/lscsoft/source/, and installed it to /ligo/apps/linux-x86_64/gds-2.17.15/gds-2.17.15. In it, there is a file at GUI/foton/foton.py.in - this is the one I needed. 


Turns out this was more complicated than I expected. Building the newer version of gds throws up a bunch of compilation errors. Chris had pointed me to some pre-built binaries for ubuntu12 on the llo cds wiki, but those versions of gds do not have foton.py. I am dropping this for now.

  13419 | Wed Nov 8 16:39:24 2017 | Kira | Update | PEM | ADC noise measurement

Gautam and I measured the noise of the ADC for channels 17, 18, and 19. We plan to use those channels for measuring the noise of the temperature sensors, and we need to figure out whether or not we will need whitening and, if so, how much. The figure below shows the actual measurements (red, green and blue lines), and a rough fit. Following Gautam's elog here, I used the same function, \sqrt{a^{2}(\frac{b}{f^{2}})+c^{2}} (with units of nV/sqrt(Hz)), to fit our results, with a = 1, b = 1e6, c = 2000. Since we are interested in measuring at lower frequencies, we must whiten the signal from the temperature sensors enough to make the ADC noise negligible.

We want to be able to measure to 1mK/\sqrt{Hz} accuracy at 1Hz, which translates to about 1nA/\sqrt{Hz} current from the AD590 (because it gives 1\mu A/K). Since we have a 10K resistor and V=IR, the voltage accuracy we want to measure will be 10^{-5}V/\sqrt{Hz}. We would need whitening for lower frequencies to see such fluctuations.
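As a sanity check of these numbers, here is a minimal sketch (assuming the fit parameters quoted above) that evaluates the noise model at 1 Hz and compares it to the 1e-5 V/rtHz signal target:

import numpy as np

# ADC noise model from the fit above: sqrt(a^2*(b/f^2) + c^2), in nV/rtHz
a, b, c = 1.0, 1e6, 2000.0

def adc_noise_nv(f):
    return np.sqrt(a**2 * (b / f**2) + c**2)

signal_nv = 1e4                          # 10 uV/rtHz: the 1 mK/rtHz target at 1 Hz
margin = signal_nv / adc_noise_nv(1.0)   # ~4.5x; whitening gain buys more margin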

To do the measurements, we put a 50\Omega BNC end cap on the channels we wanted to measure, then took measurements from 0-900Hz with a bandwidth of 0.001Hz. This setup is shown in the last two attachments. We used the ADC in 1X7.

Attachment 1: ADC-fit.png
ADC-fit.png
Attachment 2: IMG_20171108_162532.jpg
IMG_20171108_162532.jpg
Attachment 3: IMG_20171108_162556.jpg
IMG_20171108_162556.jpg
  13418 | Wed Nov 8 14:28:35 2017 | gautam | Update | General | MC1 glitches return

There hadn't been a glitch that misaligned MC1 so badly that the autolocker couldn't lock for at least 3 months; it seems like there was one ~an hour ago.

I disabled the autolocker and feedback to the PSL, manually aligned MC1 till the MC_REFL spot looked right on the CCD to me, and then re-engaged the autolocker; all seems to have gone smoothly.

 

Attachment 1: MC1_glitchy.png
MC1_glitchy.png
Attachment 2: 6AFDA67D-79B1-469C-A58A-9EC5F8F01D32.jpeg
6AFDA67D-79B1-469C-A58A-9EC5F8F01D32.jpeg
  13417 | Wed Nov 8 12:19:55 2017 | gautam | Update | SUS | coil driver series resistance

We've been talking about increasing the series resistance for the coil driver path for the test masses. One consequence of this will be that we have reduced actuation range.

This may not be a big deal since for almost all of the LSC loops, we currently operate with a limiter on the output of the control filter bank. The value of the limit varies, but to get an idea of what sort of "threshold" velocities we are looking at, I calculated this for our Finesse 400 arm cavities. The calculation is rather simplistic (see Attachment #1), but I think we can still draw some useful conclusions from it:

  • In Attachment #1, I've indicated with dashed vertical lines some series resistances that are either currently in use, or are values we are considering.
  • The table below tabulates the fraction of passages through a resonance we will be able to catch, assuming velocities sampled from a Gaussian of width ~3um/s, which a recent ALS study suggests describes our SOS optic velocity distribution pretty well (with local damping on).
  • I've assumed that the maximum DAC output voltage available for length control is 8V.
  • Presumably, this Gaussian velocity distribution will be modified because of the LSC actuation exerting impulses on the optic on failed attempts to catch lock. I don't have a good model right now for how this modification will look like, but I have some ideas.
  • It would be interesting to compare the computed success rates below with what is actually observed.
  • The implications of different series resistances on DAC noise are computed here (although the non-linear nature of the DAC noise has not been taken into account).
Series resistance [ohms]   Predicted success rate [%]   Optics with this resistance
100                        >90                          BS, PRM, SRM
400                        62                           ITMX, ITMY, ETMX, ETMY
1000                       45                           -
2000                       30                           -

So, from this rough calculation, it seems like we would lose ~25% efficiency in locking the arm cavity if we up the series resistance from 400 ohm to 1 kohm. Doesn't seem like a big deal, because currently, the single arm locking...
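For reference, a minimal sketch of the success-rate estimate tabulated above (assuming a crossing is caught whenever |v| is below some threshold v_th set by the actuator calculation in Attachment #1, and velocities drawn from the ~3 um/s Gaussian):

import numpy as np
from scipy.special import erf

sigma = 3e-6   # m/s, width of the SOS velocity distribution (from the ALS study)

def success_rate(v_th):
    # Fraction of resonance crossings with |v| < v_th, for zero-mean
    # Gaussian-distributed velocities of width sigma
    return erf(v_th / (sigma * np.sqrt(2)))

# The tabulated 62% for the 400 ohm case corresponds to v_th ~ 2.6 um/s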

Attachment 1: vthresh.pdf
vthresh.pdf
  13416 | Wed Nov 8 09:59:12 2017 | gautam | Update | LSC | DRMI Noise Budget v3.1

The Oplev trace is missing for now, as I have not re-measured the A2L coupling since modifying the Oplev loop shape (specifically the low-pass filter and overall gain) to allow engaging the coil de-whitening.

The averaging for the white noise TFs plotted is computed using median averaging - I have used a python transcription of Sujan's matlab code. I use scipy.signal.spectrogram to compute the fft bins (I've set some defaults like 8s fft length and a tukey window), and then take the median average using np.median(). I've also incorporated the ln(2) correction factor.

It seems like GwPy has some in-built capability to compute median (and indeed various other percentile) averages, but since we aren't using it, I just coded this up. 
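For concreteness, a minimal sketch of the median-averaged ASD described above (assuming the 8s FFT length and Tukey window defaults; the ln(2) factor removes the bias between the median and the mean of an exponentially distributed periodogram):

import numpy as np
from scipy import signal

def median_asd(x, fs, fftlen=8):
    # Spectrogram of 8 s segments with a Tukey window
    f, t, Sxx = signal.spectrogram(x, fs=fs, window=('tukey', 0.25),
                                   nperseg=int(fftlen * fs))
    # Median over the time bins, with the ln(2) bias correction
    psd = np.median(Sxx, axis=-1) / np.log(2)
    return f, np.sqrt(psd)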

Quote:

why no oplev trace in the NB ?

also, this method would work better if we had a median averaging python PSD instead of mean averaging as in Welch's method.

 

  13415 | Wed Nov 8 09:37:45 2017 | rana | Update | LSC | DRMI Noise Budget v3.1

why no oplev trace in the NB ?

Quote:

#4 shows the noise budget from the October 8 DRMI lock with the updated SRCL->MICH and PRCL->MICH couplings (assumed flat, extrapolated from Attachment #2 in the 120-180Hz band). If these updated coupling numbers are to be believed, then there is still some unexplained noise around 100Hz before we hit the PD dark noise. To be investigated. But if Attachment #4 is to be believed, it is not surprising that there isn't significant coherence between SRCL/PRCL and MICH around 100Hz.

also, this method would work better if we had a median averaging python PSD instead of mean averaging as in Welch's method.

  13414 | Wed Nov 8 00:28:16 2017 | gautam | Update | LSC | Laser intensity coupling measurement attempt

I tried measuring the coupling of PSL intensity noise by driving some broadband noise bandpassed between 80-300Hz using the spare DAC channel at 1Y3 that I had set up for this purpose a couple of weeks ago (via a battery powered SR560 buffer set to low-noise operation mode because I'm not sure if the DAC output can drive a ~20m long cable). I was monitoring the MC2 TRANS QPD Sum channel spectrum while driving this broadband noise - the "nominal" spectrum isn't very clean, there are a bunch of notches from a 60Hz comb and a forest of peaks over a broad hump from 300Hz-1kHz (see Attachment #1).

I was able to increase the drive to the AOM till the RIN in the band being driven increased by ~10x, and saw no change in the MICH error signal spectrum [see Attachment #1] - during this measurement, the RFPD whitening was turned on for REFL11, REFL55 and AS55, and the ITM coil drivers were de-whitened, so as to get a MICH spectrum that is about as "low-noise" as I've gotten it so far.

I tried increasing the drive further, but at this point, started seeing frequent MC locklosses - I'm not convinced this is entirely correlated to my AOM activities, so I will try some more, but at the very least, this places an upper bound on the coupling from intensity noise to MICH.

Attachment 1: PSL_RIN.pdf
PSL_RIN.pdf
  13413 | Tue Nov 7 22:56:21 2017 | gautam | Update | LSC | DRMI locking recovered

I hadn't re-locked the DRMI after the work on the AS55 demod board. Tonight, I was able to recover the DRMI locking with the old settings.

The feature in the PRCL spectrum (uncalibrated, y-axis is cts/rtHz) at ~1.6kHz is mysterious, I wonder what that's about.

Wasted some time tonight futzing around with various settings because I couldn't catch a DRMI lock, thinking I may have to re-tune demod phases etc given that I've been mucking around the LSC rack a fair bit. But fortunately, the problem turned out to be that the correct feedforward filters were not enabled in the angular feedforward path (seems like these are not SDF monitored). Clue was that there was more angular motion of the POP spot on the CCD than I'm used to seeing, even in the PRMI carrier lock.

After fixing this, lock was acquired within seconds, and the locks are as robust as I remember them - I just broke one after ~20mins locked because I went into the lab. I've been putting off looking at this angular feedforward stuff and trying out some ideas rana suggested, seems like it can be really useful.

As part of the pre-lock work, I dither-aligned the arms, and then ran the PRCL/MICH dithers as well, following which I re-centered the ITM, PRM and BS Oplev spots onto their respective QPDs - they have not been centered for a couple of months now.

I'm now going to try and measure some other couplings like PSL RIN->MICH, Marconi phase noise->MICH etc.

 

Attachment 1: DRMI_7Nov20178.png
DRMI_7Nov20178.png
  13412 | Tue Nov 7 17:45:05 2017 | gautam | Update | LSC | DRMI Noise Budget v3.1

Some days ago, I had tried to measure the SRCL->MICH and PRCL->MICH cross couplings using broadband noise injected between 120-180 Hz, a frequency band chosen arbitrarily; in hindsight, I could have done a more broadband test. I've spent some time including the infrastructure to calculate "White-Noise TFs" in the noise budgeting code, where a transfer function is estimated by injecting a "broadband" excitation into a channel of interest, and looking at the resulting response in MICH. I figured this would be useful to estimate other couplings as well, e.g. laser intensity noise, oscillator noise etc.

I estimate the transfer function of the coupling using the relation (MICH is the median ASD of the MICH error signal in the below expression, and similarly for PRCL)

|H_{cpl}| = \sqrt{\frac{|\mathrm{MICH}^{2}_{\mathrm{exc}} - \mathrm{MICH}^{2}_{\mathrm{quiet}}|}{|\mathrm{PRCL}^{2}_{\mathrm{exc}} - \mathrm{PRCL}^{2}_{\mathrm{quiet}}|}}

Attachments #1 and #2 show the spectra of the MICH, PRCL and SRCL signals during 'quiet' times and during the injection, while Attachment #3 shows the calculated coupling TFs using the above relation. These are significantly different (more than 10dB lower) than the numbers I reported in elog 13367, where the measurement was made using swept sine. As can be seen in the attached plots, the injected broadband excitation is visible above the nominal noise level, and I calculated the white noise TFs using ~5mins of data which should be plenty, so I'm not sure atm what to make of the answers from swept-sine and broadband injections being so different.
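A minimal sketch of this estimate (assuming the inputs are median ASDs of each signal on a common frequency axis, e.g. from the median-averaging code described in elog 13416 above):

import numpy as np

def coupling_tf(mich_exc, mich_quiet, prcl_exc, prcl_quiet):
    # |H_cpl| per the expression above: quadrature-subtract the quiet
    # spectra from the driven ones, then take the ratio
    num = np.abs(mich_exc**2 - mich_quiet**2)
    den = np.abs(prcl_exc**2 - prcl_quiet**2)
    return np.sqrt(num / den)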

Attachment #4 shows the noise budget from the October 8 DRMI lock with the updated SRCL->MICH and PRCL->MICH couplings (assumed flat, extrapolated from Attachment #2 in the 120-180Hz band). If these updated coupling numbers are to be believed, then there is still some unexplained noise around 100Hz before we hit the PD dark noise. To be investigated. But if Attachment #4 is to be believed, it is not surprising that there isn't significant coherence between SRCL/PRCL and MICH around 100Hz.

Nov 8 1600: Updating NB to include estimated Oplev A2L.

Quote:
 

AUX coupling

This is the other find.

  • While chatting with Gabriele, he suggested measuring the SRCL->MICH and PRCL->MICH cross couplings.
  • I injected a signal in SRCL servo EXC channel, and adjusted amplitude till coherence in MICH_IN1 was good.
  • The actual TF measured was MICH_IN1 / SRCL_IN1 (so units of cts/ct).
  • By multiplying the in-lock PRCL and SRCL IN1 signals by these coupling coefficients (assumed flat in frequency for now; note that the measurement was only made between 100Hz and 1kHz), I get the trace labelled "AUX coupling" in Attachment #1 (this is the quadrature sum of the SRCL and PRCL couplings).
  • Also repeated for PRCL -> MICH coupling in the same way.
  • Measurements of these TFs and coherence are shown in Attachment #5 (again png screenshot because of DTT).
  • However, there is no significant coherence in MICH/SRCL or MICH/PRCL in this frequency range.

This coupling seems to be what prevents us from reaching the dark noise once the coil de-whitening is engaged. But the lack of coherence means the mechanism is not re-injection of SRCL/PRCL sensing noise? Need to think about what this means / how we can mitigate it.


Attachment 1: SRCL_MICH_whitenoise_tf.pdf
SRCL_MICH_whitenoise_tf.pdf
Attachment 2: PRCL_MICH_whitenoise_tf.pdf
PRCL_MICH_whitenoise_tf.pdf
Attachment 3: MICH_aux.pdf
MICH_aux.pdf
Attachment 4: C1NB_disp_40m_MICH_NB_2017-10-08.pdf
C1NB_disp_40m_MICH_NB_2017-10-08.pdf
  13411 | Mon Nov 6 18:22:48 2017 | jamie | Summary | LSC | current procedure for running c1dnn code

This is the current procedure to start the c1dnn model:

$ ssh c1lsc
$ sudo systemctl start rts-epics@c1dnn
$ sudo systemctl start rts-awgtpman@c1dnn
$ sudo /usr/bin/cset proc -s rts-c1dnn --exec /opt/rtcds/caltech/c1/target/c1dnn/bin/c1dnn -- -m c1dnn
...

Then to shutdown:

...
Ctrl-C
$ sudo systemctl stop rts-awgtpman@c1dnn
$ sudo systemctl stop rts-epics@c1dnn

The daqd already knows about this model, so nothing should need to be done to the daqd to make the dnn channels available.

  13410 | Mon Nov 6 11:15:43 2017 | gautam | Update | CDS | slow machine bootfest + IFO re-alignment

Eurocrate key-turning reboots this morning for c1susaux, c1auxex and c1auxey. The usual precautions were taken to minimize the risk of ITMX getting stuck.

The IFO hasn't been aligned in ~1week, so I recovered arm and PRM alignment by locking individual arms and also PRMI on carrier. I will try recovering DRMI locking in the evening.

As far as MC1 glitching is concerned, there hasn't been any major one (i.e. one in which MC1 is kicked by such a large amount that the autolocker can't lock the IMC) for the past 2 months - but the MC WFS offsets are an indication of when smaller glitches have taken place, and there were large DC offsets on the MC WFS servo outputs, which I offloaded to the DC MC suspension sliders using the MC WFS relief script.

I'd like for the save-restore routine that runs when the slow machines reboot to set the watchdog state default to OFF (currently, after a key-turning reboot, the watchdogs are enabled by default), but I'm not really sure how this whole system works. The relevant files seem to be in the directory /cvs/cds/caltech/target/c1susaux. There is a script in there called startup.cmd, which seems to be the initialization script that runs when the slow machine is rebooted. But looking at this file, it is not clear to me where the default values are loaded from. There are a few "saverestore" files in this directory as well:

  • saverestore.sav
  • saverestore.savB
  • saverestore.sav.bu
  • saverestore.req

Are the "default" channel values loaded from one of these?

  13409 | Mon Nov 6 09:09:43 2017 | Steve | Update | VAC | setting up new TP2 turbo

Our new Agilent Technologies TwisTorr 84FS AG rack controller (English manual pages 195-297), RS232/485, product number X3508-64001, s/n IT1737C383.

This controller, turbo and its drypump need to be set up in our existing vacuum system. The intake valve of this turbo (V4) has to have a hardwired interlock that closes V4 when the rotation speed is less than 20% of the preset RPM.

The unit has an analog 10 Vdc output that is proportional to rotation speed. This can be used with a comparator to drive the interlock, or there may be a software option in the controller to close the valve if the turbo slows down by more than 20%.
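In the meantime, a software-side guard could mirror the comparator logic. A minimal sketch (the EPICS PV names here are placeholders I made up for illustration, not real 40m channels; this could only back up, never replace, the hardwired interlock):

import epics  # pyepics

SPEED_PV = 'C1:VAC-TP2_SPEED_MON'   # hypothetical readback of the 10 Vdc speed monitor
V4_PV = 'C1:VAC-V4_POS'             # hypothetical valve command channel

def tp2_interlock(full_scale=10.0, threshold_frac=0.2):
    # Close V4 if the speed monitor drops below 20% of full scale
    if epics.caget(SPEED_PV) < threshold_frac * full_scale:
        epics.caput(V4_PV, 0)  # 0 = closed (assumed convention)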

The last upgrade of the 40m vacuum system (1/2/2000) is described in LIGO-T000054-00-R.

There, the LabVIEW / Metrabus controls were replaced by a VME processor and an EPICS interface.

We do not have schematics of the hardware wiring.

We need help with this.

 

  13408 | Mon Oct 30 11:15:02 2017 | gautam | Update | CDS | slow machine bootfest + vacuum snafu

Eurocrate key-turning reboots this morning for c1psl and c1aux. c1auxex and c1auxey are also down, but I didn't bother keying them for now. The PSL FSS slow loop is now active again (its inactivity was what prompted me to check the status of the slow machines).

Note that the EPICS channels for the PSL shutter are hosted on c1aux. But it looks like the slow machine became unresponsive at some point during the weekend, so plotting the trend data for the PSL shutter channel would have you believe that the PSL shutter was open all the time. But the MC_REFL DC channel tells a different story - it suggests that the PSL shutter was closed at ~4AM on Sunday, presumably by the vacuum interlock system. I wonder:

  1. How does the vacuum interlock close the PSL shutter? Is there a non-EPICS channel path? Because if the slow machine happens to be unresponsive when the interlock wants to close the PSL shutter via EPICS commands, it will be unable to. The fact that the PSL shutter did close suggests that there is indeed another path.
  2. We should add some feature to the vacuum interlock (if it doesn't already exist) such that the PSL shutter isn't accidentally re-opened until any vacuum related issues are resolved. Steve was immediately able to identify that the problem was vacuum related, but I think I would have just re-opened the PSL shutter thinking that the issue was slow computer related.
  13407 | Mon Oct 30 10:09:41 2017 | Steve | Update | VAC | TP2 failed

 IFO pressure 1.2e-5 Torr at 9:30am

Quote:

Valve configuration: Vacuum normal

Note: TP2 running at 75 krpm, 0.25 A, 26 C has a loud, high-pitched sound today. Its foreline pressure is 78 mTorr. Room temp 20 C.

 

Atm. 1,  This was the vacuum condition  this morning.

               IFO P1 9.7 mTorr, V1 open, V4 in closed position; ~37 C warm Maglev at normal 560 Hz rotation speed, with foreline pressure 3.9 Torr because V4 closed 2 days ago when TP2 failed... see Atm. 3.

               The error message at the TP2 controller was: fault overtemp.

I did the following to restore IFO pumping: stopped pumping the annuli with TP3, and configured the valves so that TP3 can be the forepump of the Maglev:

closed VM1 to protect the RGA, closed the PSL shutter... see Gautam's entry,

aux fan on to cool down Maglev-TP1, room temp 20 C,

aux drypump turned on and opened to the TP3 foreline to gain pumping speed,

closed PAN to isolate annulus pumping,

opened V7 to pump the Maglev foreline with TP3 running at 50 krpm; it took 10 minutes for P2 to reach 1 mTorr from 3.9 Torr,

aux drypump closed off at P2 1 mTorr, TP3 foreline pressure 362 mTorr... see Atm. 2.

As we are running now:

IFO pressure 7e-6 Torr on the Hornet cold cathode gauge at 15:50. We have no IFO CC1 logging now. The annuli are in the 3-5 mTorr range and are not pumped.

TP3 as foreline pump of TP1 at 50 krpm, 0.24 A, 24 C; its drypump foreline pressure is 324 mTorr.

V4 valve cable is disconnected.

I need help with wiring up the logging of the Hornet cold cathode gauge.

 

 

 

Attachment 1: tp2failed.png
tp2failed.png
Attachment 2: ifo_1.0E-5_Torrit.png
ifo_1.0E-5_Torrit.png
Attachment 3: tp2failed2dago.png
tp2failed2dago.png
Attachment 4: 4days.png
4days.png
  13406 | Mon Oct 30 08:08:06 2017 | Steve | Update | safety | safety training

Udit Kahndelwal received 40m-specific basic safety training on Friday, Oct. 27.

  13405 | Sun Oct 29 16:40:17 2017 | rana | Summary | Computers | disk cleanup

Backed up all the wikis. They're in wiki_backups/*.tar.xz (because xz -9e gives better compression than gzip or bzip2).

Moved old user directories into /users/OLD/.

  13404 | Sat Oct 28 00:36:26 2017 | gautam | Update | CDS | 40m files backup situation - ddrescue

None of the 3 dd backups I made were bootable - at boot, selecting the drive put me into grub rescue mode, which seemed to suggest that the /boot partition did not exist on the backed up disk, despite the fact that I was able to mount this partition on a booted computer. Perhaps for the same reason, but maybe not.

After going through various StackOverflow posts / blogs / other googling, I decided to try cloning the drives using ddrescue instead of dd.

This seems to have worked for nodus - I was able to boot to console on the machine called rosalba which was lying around under my desk. I deliberately did not have this machine connected to the martian network during the boot process for fear of some issues because of having multiple "nodus"-es on the network, so it complained a bit about starting the elog and other network related issues, but seems like we have a plug-and-play version of the nodus root filesystem now.

chiara and fb1 rootfs backups (made using ddrescue) are still not bootable - I'm working on it.

Nov 6 2017: I am now able to boot the chiara backup as well - although mysteriously, I cannot boot it from the machine called rosalba, but can boot it from ottavia. Anyways, seems like we have usable backups of the rootfs of nodus and chiara now. FB1 is still a no-go, working on it.

Quote:

Looks to have worked this time around.

controls@fb1:~ 0$ sudo dd if=/dev/sda of=/dev/sdc bs=64K conv=noerror,sync
33554416+0 records in
33554416+0 records out
2199022206976 bytes (2.2 TB) copied, 55910.3 s, 39.3 MB/s
You have new mail in /var/mail/controls

I was able to mount all the partitions on the cloned disk. Will now try booting from this disk on the spare machine I am testing in the office area now. That'd be a "real" test of if this backup is useful in the event of a disk failure.

 

 

Attachment 1: 415E2F09-3962-432C-B901-DBCB5CE1F6B6.jpeg
415E2F09-3962-432C-B901-DBCB5CE1F6B6.jpeg
Attachment 2: BFF8F8B5-1836-4188-BDF1-DDC0F5B45B41.jpeg
BFF8F8B5-1836-4188-BDF1-DDC0F5B45B41.jpeg
  13403 | Fri Oct 27 10:14:11 2017 | Steve | Update | VAC | RGA scan at day 372

Valve configuration: Vacuum normal

Note: TP2 running at 75 krpm, 0.25 A, 26 C has a loud, high-pitched sound today. Its foreline pressure is 78 mTorr. Room temp 20 C.

 

Attachment 1: RGA_scan_d372.png
RGA_scan_d372.png
  13402 | Fri Oct 27 09:34:20 2017 | Steve | Update | PEM | earthquakes

Lompoc 4.3M and Avalon 3.7M.

 

Attachment 1: recentEQs.png
recentEQs.png
  13401 | Wed Oct 25 09:32:14 2017 | Gabriele | Summary | LSC | further testing of c1dnn integration; plugged in to DAQ
Quote:

 

 

We'll need to set the phase rotation of the demodulated RF PD signals (REFL11, REFL55, AS55, POP22) to match them with what the NN expects...

Here are the demodulation phases and rotation matrices tuned for the network. For the matrices, I am assuming that the input is [I, Q] and the output is [I,Q].

POP22
phi = 153 degrees
[[-0.894, 0.447],
 [-0.447, -0.894]]

REFL11
phi = 93 degrees
[[-0.058, 0.998],
 [-0.998, -0.058]]

REFL55
phi = -90 degrees
[[0.000, -1.000],
 [1.000, 0.000]]

AS55
phi = 7 degrees
[[0.993, 0.122],
 [-0.122, 0.993]]
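
These all follow a single convention; a minimal sketch (inferring the convention from the tabulated values) that regenerates the matrices from the phases:

import numpy as np

def demod_rotation(phi_deg):
    # Rotation applied to the [I, Q] vector; reproduces the matrices above
    c, s = np.cos(np.radians(phi_deg)), np.sin(np.radians(phi_deg))
    return np.array([[c, s],
                     [-s, c]])

# e.g. np.round(demod_rotation(7), 3) gives [[0.993, 0.122], [-0.122, 0.993]] (AS55)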

  13400 | Tue Oct 24 20:14:21 2017 | jamie | Summary | LSC | further testing of c1dnn integration; plugged in to DAQ

In order to try to isolate CPU6 for the c1dnn neural network reconstruction model, I set the CPUAffinity in /etc/systemd/system.conf to "0" for the front end machines.  This sets the cpu affinity for the init process, so that init and all child processes are run on CPU0.  Unfortunately, this does not affect the kernel threads.  So after reboot all user space processes were on CPU0, but the kernel threads were still spread around.  Will continue trying to isolate the kernel as well...

In any event, this amount of isolation was still good enough to get the c1dnn user space model running fairly stably.  It's been running for the last hour without issue.

I added the c1dnn channel and testpoint files to the daqd master file, and restarted daqd_dc on fb1, so now the c1dnn channels and test points are available through dataviewer etc.  We were then able to observe the reconstructed signals:

We'll need to set the phase rotation of the demodulated RF PD signals (REFL11, REFL55, AS55, POP22) to match them with what the NN expects...

  13399 | Tue Oct 24 16:43:11 2017 | Steve | Update | CDS | slow machine bootfest

[ Gautam , Steve ]

c1susaux & c1iscaux were rebooted manually.

Quote:

Had to reboot c1psl, c1susaux, c1auxex, c1auxey and c1iscaux today. PMC has been relocked. ITMX didn't get stuck. According to this thread, there have been two instances in the last 10 days in which c1psl and c1susaux have failed. Since we seem to be doing this often lately, I've made a little script that uses the netcat utility to check which slow machines respond to telnet, it is located at /opt/rtcds/caltech/c1/scripts/cds/testSlowMachines.bash.

The script can be executed by ./testSlowMachines.bash.
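
For reference, a minimal Python equivalent of the check the script performs (assuming a responsive slow machine is one that accepts a telnet connection; hostnames taken from the entries above):

import socket

SLOW_MACHINES = ['c1psl', 'c1susaux', 'c1auxex', 'c1auxey', 'c1iscaux']

def responds(host, port=23, timeout=2):
    # A slow machine that accepts a TCP connection on the telnet port is alive
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

for m in SLOW_MACHINES:
    print(m, 'OK' if responds(m) else 'unresponsive')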

 

  13398 | Tue Oct 24 16:22:53 2017 | gautam | Update | CDS | Toy DARM model setup in c1tst

[alex, gautam]

Alex is going to have an undergrad work on a calibration optimization project on the 40m RTCDS system. For this purpose, we wanted to set up a "Simulated DARM loop". Today, Alex and I set this up. I figured we can use the c1tst model for this purpose. We basically copied the topology from Figure 2 of the h(t) paper. Attached are screenshots of the MEDM screens of the system we set up, and the Simulink block diagram - the main screen can be accessed from the "SIM PLANT" tab in the sitemap.

It remains to set up the appropriate filters in the filter banks, and an EPICS channel monitor for the single excitation testpoint in the model. We also did not set up any DQ channels for the time being, as it is not even clear to me which channels need to be DQ-ed.

Attachment 1: TOY_DARM.png
TOY_DARM.png
Attachment 2: TOY_DARM_SIMULINK.png
TOY_DARM_SIMULINK.png
  13397 | Mon Oct 23 09:17:41 2017 | Steve | Update | PEM | ants alert

Do not leave organic trash or food boxes in the 40m to attract ants!

 

Attachment 1: ants.jpg
ants.jpg
Attachment 2: ants2.jpg
ants2.jpg
  13396 | Fri Oct 20 16:30:17 2017 | gautam | Update | CDS | FB1 installed on shelves

[steve, jamie, gautam]

The machine that now serves as our frame builder, FB1, was sitting on top of megatron. I decided that this wasn't ideal, and asked Steve to get some alternative mounting solution. Today, he procured some shelves to put FB1 on. Jamie suggested looking for the slider rail that came with the machine, and using that instead, as it would allow us to slide FB1 out of the rack as we do megatron and the old FB. But as luck would have it, the distance between the rack's vertical posts is 26 inches, while the rail is 27 inches. So we had to accept the less ideal solution of putting FB1 on two shelves, with no sliding option. Photo to be uploaded shortly.

For this work, I had to shutdown FB1 for about 1 hour between 3pm and 4pm. It seems to have come back up fine now.

  13395 | Thu Oct 19 15:42:03 2017 | jamie | Summary | LSC | MICH/PRCL reconstruction neural network running on c1lsc

Gabriele's PRCL/MICH reconstruction neural network is now running on c1lsc.  Summary:

  • the front-end model is called c1dnn, and is running as an experimental user-space process
  • c1dnn is getting most of its needed inputs from existing SHMEM IPC outputs from c1lsc
  • none of the output signals from the network are being sent anywhere yet (grounded)
  • c1dnn has not been integrated in any way into the DAQ etc.; it is being run manually, and will be completely shut down after this test

Simple MEDM screen I made to monitor the input/output signals:

The RTS process seems to run fine, but there is quite a bit of jitter in the CPU_METER, at the 50% level:

It's not running over the limit, but it is jumping around more than I think it should be.  Will look into that...

cpuset for cpu isolation for user-space model

The c1dnn model is running on CPU6 on c1lsc.  CPU6 was isolated from the rest of the system using cpuset.  The "cset" utility was used to create a "system" CPU set that was assigned to CPU0, and the kernel was instructed to move all running processes to that set:

controls@c1lsc:~ 2$ sudo cset set
cset:
         Name       CPUs-X    MEMs-X Tasks Subs Path
 ------------ ---------- - ------- - ----- ---- ----------
         root        0,6 y       0 y   343    0 /
controls@c1lsc:~ 0$ sudo cset set -c 0 -s system --cpu_exclusive
cset: --> created cpuset "system"
controls@c1lsc:~ 0$ sudo cset set
cset:
         Name       CPUs-X    MEMs-X Tasks Subs Path
 ------------ ---------- - ------- - ----- ---- ----------
         root        0,6 y       0 y   342    1 /
       system          0 y       0 n     0    0 /system
controls@c1lsc:~ 0$ sudo cset proc --move -f root -t system -k
cset: moving all tasks from root to /system
cset: moving 292 userspace tasks to /system
cset: moving 0 kernel threads to: /system
cset: --> not moving 50 threads (not unbound, use --force)
[==================================================]%
cset: done
controls@c1lsc:~ 0$ sudo cset set
cset:
         Name       CPUs-X    MEMs-X Tasks Subs Path
 ------------ ---------- - ------- - ----- ---- ----------
         root        0,6 y       0 y    50    1 /
       system          0 y       0 n   292    0 /system
controls@c1lsc:~ 0$ sudo cset proc --move -f root -t system -k --force
cset: moving all tasks from root to /system
cset: moving 50 kernel threads to: /system
[==================================================]%
cset: **> 29 tasks are not movable, impossible to move
cset: done
controls@c1lsc:~ 0$ sudo cset set
cset:
         Name       CPUs-X    MEMs-X Tasks Subs Path
 ------------ ---------- - ------- - ----- ---- ----------
         root        0,6 y       0 y    29    1 /
       system          0 y       0 n   313    0 /system
controls@c1lsc:~ 0$

I then created a set for the RTS process ("rts-c1dnn") on CPU6, and executed the c1dnn model in that set:

controls@c1lsc:~ 0$ sudo cset set -c 6 -s rts-c1dnn --cpu_exclusive
cset: --> created cpuset "rts-c1dnn"
controls@c1lsc:~ 0$ sudo cset set
cset:
         Name       CPUs-X    MEMs-X Tasks Subs Path
 ------------ ---------- - ------- - ----- ---- ----------
         root        0,6 y       0 y    24    2 /
    rts-c1dnn          6 y       0 n     0    0 /rts-c1dnn
       system          0 y       0 n   340    0 /system
controls@c1lsc:~ 0$ sudo cset proc -s rts-c1dnn --exec /opt/rtcds/caltech/c1/target/c1dnn/bin/c1dnn -- -m c1dnn
cset: --> last message, executed args into cpuset "/rts-c1dnn", new pid is: 27572
sysname = c1dnn
....

When done I just hit Ctrl-C.

I left the cpusets as they are, with all system processes in the "system" set.  This should not pose any problems, since it's identical to the configuration we would have if a normal kernel-level model were running on CPU6.

The c1dnn process and its EPICS sequencer were shut down after this test.

  13394 | Wed Oct 18 23:11:53 2017 | gautam | Update | CDS | FEs unresponsive

This happened again just now - it was roughly this time when this happened last night as well.

There was certainly an EPICS freeze of the kind we were used to seeing prior to replacing the martian wireless router sometime in late 2015 (or early 2016?). I was trying to run the dither alignment servos on the Y-arm at this time, and all the StripTool traces flatlined.

I took the opportunity to try accessing testpoints from the iscey ADCs - specifically C1:SUS-TRY_OUT, and it seemed to work just fine. However, I couldn't ssh into c1iscey.

Looking at dmesg once I was able to ssh in eventually (~2 minutes of deadtime tonight; I feel like it was longer yesterday, but can't quantify), I see the following. I'm not sure if there are any clues in here, or whether this is even the correct log to check, but there are many instances of the NFS-server-related message in the log. Note that the system timestamp corresponds to when this freeze happened.

[5461308.784018] nfs: server 192.168.113.201 not responding, still trying
[5461412.936284] nfs: server 192.168.113.201 OK
[5461412.937130] systemd[1]: Starting Journal Service...
[5461412.947947] systemd-journald[20281]: Received SIGTERM from PID 1 (systemd).
[5461412.996063] systemd[1]: Unit systemd-journald.service entered failed state.
[5461413.002627] systemd[1]: systemd-journald.service has no holdoff time, scheduling restart.
[5461413.008983] systemd[1]: Stopping Journal Service...
[5461413.014664] systemd[1]: Starting Journal Service...
[5461413.044262] systemd[1]: Started Journal Service.
[5461413.694838] systemd-journald[400]: Received request to flush runtime journal from PID 1

 

  13393 | Wed Oct 18 19:17:42 2017 | gautam | Update | General | PRC angular feedforward

Last night, I collected ~30mins of data for the vertex seismometer channels and the POP QPD PIT/YAW signals with the PRMI locked on carrier (angular FF OFF). The ITM Oplev loops weren't DC coupled, as they are in the full IFO locking sequence, but I feel like the angular FF filters can be improved - there are frequent sharp dives in the AS110 signal level which are correlated with large amplitude motion of the POP spot on the control room CCD monitor.

Repeating the frequency domain multicoherence analysis using BS_X and BS_Y seismometer channels as witnesses suggest that we can win significantly (see Attachment #1).

I've never really implemented feedforward filters - I was planning on using ericq's latest entry on this subject as a guide. From what I gather, the procedure is as follows:

  1. Pre-filter the target (POP QPD PIT or YAW) and witness (BS_X, BS_Y) channels
    • Downsample the 2k target data and 256Hz witness data to 32 Hz (how to choose this?)
    • Detrend (linear?)
    • Apply an elliptic low-pass filter (previously, a 3rd-order elliptic low-pass with 3 dB ripple, 40 dB stopband attenuation, and corner at 5 Hz was used; see the sketch after this list).
  2. Filter the target signal (i.e. POP QPD PIT/YAW) by the inverse actuator TF.
    • This "actuator TF" is a measurement of how actuating on the angular DoFs of the PRM affects the POP QPD spot.
    • So by pre-filtering the target signal through the inverse actuator TF, we get a measure of how much the PRM angular motion is.
    • The reason we want to do this is to give the FIR filter that produces optic motion (output) given ground motion sensed by the seismometer (input) fewer poles/zeros to fit (?).
    • The actual actuator TF has to be measured using DTT, and fit - is there anything critical about this fitting? Seems like this should be just a simple pendulum transfer function so a pair of complex poles should be sufficient?
  3. The actual Wiener filter is calculated by the function miso_firlev.m. There are many versions of this floating around from what I can gather.
    • This function requires 3 input parameters.
      • Order of filter to be fit
      • Witness channels (can be multiple)
      • Target channel (has to be single, hence the "miso" in the function name).
    • Today, at the meeting, we talked about weighting the cost function that the optimal Wiener filter calculator minimizes.
    • The canonical wiener filter minimizes the mean squared error between the output of the filter and the desired signal profile (which for this particular problem is the angular motion of the PRM, calculated by dividing the target signal by the actuator TF, knowing which we can cancel it out).
    • But as seen in Attachment #1, the main reduction in RMS comes below f=5Hz.
    • So can we weight the cost function more heavily at lower frequencies? From what I can find in previous calculations, it looks like this weighting happens in the pre-filtering stage, which is not the same thing as including the frequency-dependent weighting in the calculation of the Wiener filter? The PSD and acf are F.T. pairs per the Wiener-Khinchin theorem, so intuitively I would think that weighting in the frequency domain corresponds to weighting on the lags at which the acf is calculated, but I need to think about this.
    • What kind of low-pass filter do we use to prevent noise injection at higher frequencies? Does the optimal filter calculation automatically roll-off the filter response at high frequencies?
  4. As I write this, seems like there is another level of optimization of "meta-parameters" possible in this whole process - e.g. what is the optimal order of filter to fit? what is the optimal pre-filtering of training data? Not sure how much we can gain from this though.
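
A minimal sketch of the pre-filtering in step 1 (assuming the filter parameters quoted above; decimation by simple slicing is safe here because the data has already been low-passed well below the output Nyquist frequency):

from scipy import signal

def prefilter(x, fs_in, fs_out=32.0, corner=5.0):
    # Detrend, 3rd-order elliptic low-pass (3 dB ripple, 40 dB stopband),
    # then downsample to fs_out
    x = signal.detrend(x, type='linear')
    sos = signal.ellip(3, 3, 40, corner / (fs_in / 2), output='sos')
    x = signal.sosfiltfilt(sos, x)
    return x[::int(round(fs_in / fs_out))]

# e.g. prefilter(pop_pit, 2048) for the target, prefilter(bs_x, 256) for a witness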

Some notes from Rana from some years ago: https://nodus.ligo.caltech.edu:8081/40m/11519

If anyone has pointers / other considerations I should take into account, please post here.

Attachment 1: pop_feedforward_potential.pdf
pop_feedforward_potential.pdf
  13392 | Wed Oct 18 17:34:09 2017 | gautam | Update | SUS | ASDC

Summary:

The signal path for the ASDC signal is AS55 PD --> D990543 (interface board) --> D990694 (whitening board) --> D000076 (AA board) --> ADC Ch 31. Everything in this signal chain should be able to handle signals in the range +/- 10V, which should correspond to the full range of our +/-10V, 16bit ADCs. But the ASDC signal seems to saturate at ~2000 counts (i.e. turning up the analog whitening gain doesn't make the signal get any bigger than this). I investigated this a little more today.

Details:

  • The ASDC signal is derived from the AS55 photodiode. According to the schematic, the Op27 that supplies this voltage is powered by +/- 15V, so the output should be able to swing between at least +/- 12V.
  • The DC signal goes from the DB15 connector on the side of the PD to the LSC electronics rack, 1Y2, where it is interfaced with an LSC PD Interface Card, D990543. Again, per the schematic, the Op27 driving this voltage is powered by +/- 15V, and so the available output voltage swing should be greater than +/-12V.
  • The D990543 output is to its backplane connector. There is an adaptor board hooked up to the backplane that makes these outputs available to a LEMO connector. A LEMO-SMA cable then pipes this output to a D990694.
    • I decided to test the functionality of this board.
    • Disconnected the SMA ASDC input signal (CH8 on the board).
    • Drove that channel with an SR function generator and gradually turned up the Vpp of the input signal (sine wave at 145Hz).
    • Monitored the ASDC channel on dataviewer while doing this.
    • Saw that the ASDC signal saturated at ~2000 counts. Turning up the signal amplitude did not have any effect.
  • From the whitening board, the signal goes through an anti-aliasing module (D000076). The final stage LT1125s on these boards should also be supplied with +/-15V.

So the problem lies somewhere downstream of the D990694. There are other anomalous behaviours of this channel - e.g. engaging the analog whitening filters changes the DC offset of the signal. I am going to pull out this board to check it out.

Why does this matter? I want to calibrate the ASDC level (and eventually the other PD DC signals as well) into Watts. This is useful for IFO diagnostics, noise budgeting the shot noise level etc.

According to the AS55 schematic, the DC transimpedance is 66.7 ohms. I claim that the DC power on the AS55 photodiode during a DRMI (no arms) lock is ~1mW. The C30642 photodiode (InGaAs) responsivity is ~0.8 A/W. So I'd expect ~50mV to be the signal level into the ADC (assuming gain of all the other electronics in the signal chain at the start of this elog is unity). This corresponds to ~163 counts (since the ADC conversion factor is 2^16 counts over 20volts). The DC signal level I observed is ~200 counts. So things seem roughly consistent.
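The arithmetic, for reference (a quick check of the numbers quoted above):

# Expected ASDC level in a DRMI lock
P = 1e-3           # W, estimated DC power on AS55
R = 0.8            # A/W, C30642 responsivity
Z = 66.7           # ohm, AS55 DC transimpedance
adc = 2**16 / 20   # counts/V for the +/-10 V, 16-bit ADC
counts = P * R * Z * adc   # ~175 counts (~163 if rounded to 50 mV), vs ~200 observed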

*Note: Despite my above statement, I don't think it is true that the AS110 PD has more light on it - the BS splitting the light between AS55 and AS110 PDs is a 50-50 BS, and using the crude method of putting an Ophir power meter in front of both PDs and monitoring the power while the Michelson was swinging around freely showed roughly the same maximum value.

  13391 | Wed Oct 18 15:26:58 2017 | johannes | HowTo | Cameras | Revision: CCD calibration

The units were still off in my previous post. Here's the corrected, sanity-checked version:

Camera IP          Calibration Factor
192.168.113.152    85.8 +/- 4.3 pW*μs
192.168.113.153    78.3 +/- 3.9 pW*μs

I estimated the uncertainties based on a linear fit to the data I recorded with 75nW incident on the CCD and assumed a 5% uncertainty in that number. This is just an upper limit, to be safe. I had calibrated the power reading placing the Ophir power meter where the CCD would otherwise be and comparing it to the PD voltage of a picked off beam. In my previous figures the axes were mislabeled, so I reproduce them here:

Using the current camera position I recorded 50 exposures both with and without beam (XARM locked vs PSL shutter closed) and averaged the images to see how much the reading fluctuates. The exposure time was 10 ms, which left the maximum reported pixel value in all exposures below 3800 out of 4096. The gain setting was 100, which is what I used to calibrate the CCDs.

Counts with XARM locked       2.799 +/- 0.027 x 10^7
Counts with shutter closed    3.220 +/- 0.047 x 10^6
Power on CCD                  193.9 +/- 2.2 nW
Power scattered into 2π (*)   254 +/- 39 μW
ETMX scatter loss (**)        25.4 +/- 3.9 ppm

(*) I calculated the lens positions to focus at a plane 65cm from the front lens. We're pretty close to that, but I can't confirm the actual distance easily, so I assumed a 5cm error on the distance, which is where most of the error is coming from. This is also assuming uniform scatter.

(**) This is assuming 10W of circulating power
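
These numbers hang together; a quick check (calibration factor in pW*μs per count, with the 10 ms exposure):

# Power on the CCD from the calibration factor (camera .153)
CF = 78.3                     # pW*us per count
t_exp = 10e3                  # us (10 ms exposure)
counts = 2.799e7 - 3.220e6    # beam counts minus dark counts
P_pW = counts * CF / t_exp    # ~1.94e5 pW = 194 nW, matching the table above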

Attachment 1: calib_20170930_152.pdf
calib_20170930_152.pdf
Attachment 2: calib_20170930_153.pdf
calib_20170930_153.pdf
  13390 | Wed Oct 18 12:14:08 2017 | jamie | Summary | LSC | prep for tests of Gabriele's neural network cavity length reconstruction
Quote:

I tried a manual test of the new user space model.  Since this is a user space process, running it should have no effect on the rest of the front end system (which it didn't):

  • Manually started the c1dnn EPICS IOC:
    • $ (cd /opt/rtcds/caltech/c1/target/c1dnn/c1dnnepics && ./startupC1)
  • Tried running the model user-space process directly:
    • $ taskset -c 6 /opt/rtcds/caltech/c1/target/c1dnn/bin/c1dnn -m  c1dnn

Unfortunately, the process died with an "ADC TIMEOUT" error.  I'm investigating why.

Once we confirm the model runs, we'll add the appropriate SHMEM IPC connections to connect it to the c1lsc model.

I tried moving the model to c1ioo, where there are plenty of free cores sitting idle, and the model seems to run fine.  I think the problem was just CPU contention on the c1lsc machine, where there were only two free cores and the kernel was using both for all the rest of the normal user space processes.

So there are two options:

  • Use cpuset on c1lsc to tell the kernel to remove all other processes from CPU6 and save it just for the c1dnn model.  This should not have any impact on the running of c1lsc, since that's exactly what would be happening if we were running the model in kernel space (e.g. isolating the core for the front end model).  The auxiliary support user space processes (epics seq/ioc, awgtpman) should all run fine on CPU0, since that's what usually happens.  Linux is only using the additional core since it's there.  We don't have much experience with cpuset yet, though, so more offline testing will be required first.
  • Run the model on c1ioo and ship the needed signals to/from c1lsc via PCIe dolphin.  This is potentially slightly more invasive of a change, and would put more work on the dolphin network, but it should be able to handle it.

I'm going to start testing cpuset offline to figure out exactly what would need to be done.

  13389 | Wed Oct 18 11:37:58 2017 | johannes | HowTo | Cameras | ETMX GigE side view at 50 deg
Quote:

 Telescope front lens to wall distance 25 cm, GigE camera length 6 cm and cat6 cable 2 cm.

 Atm3, The existing short camera can is 16 cm long to the lexan guard on the viewport. The available 2" OD periscope tube length is 8 cm. The one in use is 16 cm long.

             Note: we can fabricate a light cover with a tube that would accommodate a longer telescope.

             Can we calibrate the AR-coated M5018-SW and compare its performance against the 2" periscope?

             Look at the Edmund Optics 3" OD camera lens with AR.

Atm1, Now I can see dust. This is much better. The focus is not right yet.

Atm2, Chamber viewport wiped and image refocused. Actually I was focusing on the dust.

We don't really have to calibrate the lens, just the CCD, which we've done. It's more about knowing the true aperture size to know how much solid angle you're capturing to infer the total amount of scatter. For our custom lens tubes this is the ID of the retaining ring.

The Edmund Optics lens tube looks tempting, but it comes at a price. Thorlabs sells lens tubes that offer more flexibility than what we have right now, so I bought a few different ones, and also more 150mm 2" lenses. This will allow for more compact solutions and offer some in-situ focusing ability that doesn't require detaching the lens tube like now. They should be here in a couple of days; then we'll be able to enclose the GigE camera in the viewport can with a similar field of view to what we have now.

I also bought a collimation package for the AS port fiber stuff so we can move ahead with the ringdown measurements and also mode spectroscopy.

  13388 | Wed Oct 18 09:21:22 2017 | jamie | Update | CDS | FEs unresponsive
Quote:

I was looking at the ASDC channel on dataviewer, and toggling various settings like whitening gain. At some point, the signal just froze. So I quit dataviewer and tried restarting it, at which point it complained about not being able to connect to FB. This is when I brought up the CDS_OVERVIEW medm screen, and noticed the frozen 1pps indicator lights. There was certainly something going on with the end FEs, because I was able to ping the machine, but not ssh into it. Once the 1pps lights came back, I was able to ssh into c1iscex and c1iscey, no problems.

Could it be that some of the mx processes stalled, but the systemctl routine automatically restarted them after some time?

An mx_stream glitch would have interrupted data flowing from the front end to the DAQ, but it wouldn't have affected the heartbeat.  The heartbeat stop could mean either that the front end process froze, or the EPICS communication stopped.  The fact that everything came back fine after a couple of minutes indicates to me that the front end processes all kept running fine.  If they hadn't I'm sure the machines would have locked up.  The fact that you couldn't connect to the FE machine is also suspicious.

My best guess is that there was a network glitch on the martian network.  I don't know how to account for the fact that pings still worked, though.

  13387 | Wed Oct 18 02:09:32 2017 | gautam | Update | CDS | FEs unresponsive

I was looking at the ASDC channel on dataviewer, and toggling various settings like whitening gain. At some point, the signal just froze. So I quit dataviewer and tried restarting it, at which point it complained about not being able to connect to FB. This is when I brought up the CDS_OVERVIEW medm screen, and noticed the frozen 1pps indicator lights. There was certainly something going on with the end FEs, because I was able to ping the machine, but not ssh into it. Once the 1pps lights came back, I was able to ssh into c1iscex and c1iscey, no problems.

Could it be that some of the mx processes stalled, but the systemctl routine automatically restarted them after some time? 

Quote:

So this wasn't just an EPICS freeze?  I don't see how this had anything to do with any of the work I did earlier today.  I didn't modify any of the running front ends, didn't touch either of the end station machines or the DAQ, and didn't modify the network in any way.  I didn't leave anything running.

If you couldn't access test points then it sounds like it was more than just EPICS.  It sounds like maybe the end machines somehow fell off the network momentarily.  Was there anything else going on at the time?

 

  13386 | Wed Oct 18 01:41:32 2017 | jamie | Update | CDS | FEs unresponsive
Quote:

While working on the IFO tonight, I noticed that the blinky status lights on c1iscex and c1iscey were frozen (but those on the other 3 FEs seemed fine). But all other lights on the CDS overview screen were green. I couldn't access testpoints from these machines, and the EPICS readbacks for models on these FEs (e.g. Oplev servo inputs, outputs etc.) were frozen at some fixed value. This lasted for a good 5 minutes at least. But the blinky lights started blinking again without me doing anything. Not sure what to make of this. I am also not sure how to diagnose this problem, as trending the slow EPICS records of the CPU execution cycle time (for example) doesn't show any irregularity.

So this wasn't just an EPICS freeze?  I don't see how this had anything to do with any of the work I did earlier today.  I didn't modify any of the running front ends, didn't touch either of the end station machines or the DAQ, and didn't modify the network in any way.  I didn't leave anything running.

If you couldn't access test points then it sounds like it was more than just EPICS.  It sounds like maybe the end machines somehow fell off the network momentarily.  Was there anything else going on at the time?

  13385 | Tue Oct 17 23:07:52 2017 | gautam | Update | CDS | FEs unresponsive

While working on the IFO tonight, I noticed that the blinky status lights on c1iscex and c1iscey were frozen (but those on the other 3 FEs seemed fine). But all other lights on the CDS overview screen were green. I couldn't access testpoints from these machines, and the EPICS readbacks for models on these FEs (e.g. Oplev servo inputs, outputs etc.) were frozen at some fixed value. This lasted for a good 5 minutes at least. But the blinky lights started blinking again without me doing anything. Not sure what to make of this. I am also not sure how to diagnose this problem, as trending the slow EPICS records of the CPU execution cycle time (for example) doesn't show any irregularity.

  13384 | Tue Oct 17 19:31:53 2017 | gautam | Update | LSC | AS55Q Dark Noise

[Koji, gautam]

We took a closer look at the AS55 demod board today. The procedure was to just be as thorough as possible, and check the behaviour of the circuit (both Transfer Function and Noise) stage by stage. Checking the transfer function was the key.

During this process, we found that the reason why the Q channels had lower noise than the I channels was that the gain of the AD829 stage of the circuit was 0dB rather than 4dB (which is what it should be according to the component values used). Specifically, resistor R12, which is supposed to be 1.30kohm, was measured to be 1.03kohm. Replacing this resistor, the transfer functions (see Attachment #1) and noise levels (see Attachment #2) match the expectations from LISO. Some notes:

  1. The daughter board essentially consists of 2 stages
    • OP27 stage, which has a design gain of 16dB (=316ohm/50ohm) (flat at frequencies <100kHz).
    • AD829 stage, which has a design gain of 4dB (=1.3kohm/768ohm), and is a 2nd order Butterworth LPF with corner @ 1MHz.
    • So the overall gain of the daughter board is 20dB (i.e. x10) at audio frequencies.
  2. The output noise of D040179 is expected to be ~35nV/rtHz at 100Hz, and the measurement (made with inputs soldered together) is consistent with this value.
  3. The measured voltage noise at the input to D040179 (i.e. the output of the minicircuits mixer + SCLF-5 LPF) is ~9nV/rtHz.
  4. The output voltage noise of the demod board with RFPD input terminated then is expected to be the quadrature sum of the noise due to the D040179 electronics (i.e. 40nV/rtHz) and the input noise to the D040179 (i.e. 9nV/rtHz) multiplied by the gain of the daughter board (i.e. x10) == \sqrt{40^2 + 90^2} \approx 98nV/\sqrt{\mathrm{Hz}}.
  5. To calculate the "dark noise" contribution of AS55 to MICH displacement noise, we have to further add the photodiode dark noise contribution: this gets us up to \sqrt{98^2 + 80^2} \approx 130nV/\sqrt{\mathrm{Hz}}. This is consistent with the measurement (see Attachment #2).
  6. Assuming the whitened ADC noise level is much below this (should only be ~10nV/rtHz), and given the measured sensing element of 6.2e8 V/m, this means that the dark noise sets a maximum achievable sensitivity of 2e-16m/rtHz (see the arithmetic sketch after this list).
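
The quadrature sums above, as a quick arithmetic sketch (numbers taken from the notes; the final line is the dark-noise-limited MICH sensitivity):

import numpy as np

board = 40e-9      # V/rtHz, D040179 output noise at 100 Hz
mixer = 9e-9       # V/rtHz at the daughter-board input (x10 audio gain follows)
pd_dark = 80e-9    # V/rtHz, photodiode dark-noise contribution

demod = np.hypot(board, 10 * mixer)   # ~98 nV/rtHz with the RFPD input terminated
total = np.hypot(demod, pd_dark)      # ~130 nV/rtHz including PD dark noise
sensitivity = total / 6.2e8           # ~2e-16 m/rtHz, given 6.2e8 V/m optical gain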

To figure out what (if anything) is to be done next, we need to first figure out what is the goal. In the end, we care about DARM and not MICH. The optical gain for the former is ~300x the latter, so the dark noise contribution gets scaled by this factor (giving us a number of 7e-19 m/rtHz). There are certainly many noises above that level which have to be handled first. Indeed, looking at the DARM spectrum from DRFPMI lock back in March 2016, it looks like the current 1f DRMI (with coils de-whitened) Michelson sensitivity is within a factor of 2 of DARM in the full lock (albeit with vertex DoFs on 3f signals, and no coil de-whitening). Koji pointed out that we need to consider the photodiode resonant circuit itself too.

TODO: Upload all this onto the DCC

Attachment 1: D040179_TFs.pdf
D040179_TFs.pdf
Attachment 2: AS55_DemodNoises.pdf
AS55_DemodNoises.pdf
  13383 | Tue Oct 17 17:53:25 2017 | jamie | Summary | LSC | prep for tests of Gabriele's neural network cavity length reconstruction

I've been preparing for testing Gabriele's deep neural network MICH/PRCL reconstruction.  No changes to the front end have been made yet, this is all just prep/testing work.

Background:

We have been unable to get Gabriele's nn.c code running in kernel space for reasons unknown (see tests described in previous post).  However, Rolf recently added functionality to the RCG that allows front end models to be run in user space, without needing to be loaded into the kernel.  Surprisingly, this seems to work very well, and is much more stable for the overall system (starting/stopping the user space models will not ever crash the front end machine).  The nn.c code has been running fine on a test machine in this configuration.  The RCG version that supports user space models is not that much newer than what the 40m is running now, so we should be able to run user space models on the existing system without upgrading anything at the 40m.  Again, I've tested this on a test machine and it seems to work fine.

The new RCG with user space support compiles and installs both kernel and user-space versions of the model.

Work done:

  • Create 'c1dnn' model for the nn.c code.  This will run on the c1lsc front end machine (on core 6 which is currently empty), and will communicate with the c1lsc model via SHMEM IPC.  It lives at:
    • /opt/rtcds/userapps/release/isc/c1/models/c1dnn.mdl
  • Got latest copy of nn.c code from Gabriele's git, and put it at:
    • /opt/rtcds/userapps/release/isc/c1/src/nn/
  • Checked out the latest version of the RCG (currently SVN trunk r4532):
    • /opt/rtcds/rtscore/test/nn-test
  • Set up the appropriate build area:
    • /opt/rtcds/caltech/c1/rtbuild/test/nn-test
  • Built the model in the new nn-test build directory ("make c1dnn")
  • Installed the model from the nn-test build dir ("make install-c1dnn")

Test:

I tried a manual test of the new user space model. Since this is a user-space process, running it should have no effect on the rest of the front end system (and indeed it didn't):

  • Manually started the c1dnn EPICS IOC:
    • $ (cd /opt/rtcds/caltech/c1/target/c1dnn/c1dnnepics && ./startupC1)
  • Tried running the model user-space process directly:
    • $ taskset -c 6 /opt/rtcds/caltech/c1/target/c1dnn/bin/c1dnn -m  c1dnn

Unfortunately, the process died with an "ADC TIMEOUT" error.  I'm investigating why.

Once we confirm the model runs, we'll add the appropriate SHMEM IPC connections to connect it to the c1lsc model.

Attachment 1: c1dnn.png
c1dnn.png
  13382   Mon Oct 16 16:01:04 2017 gautamUpdateLSCAS55Q Dark Noise

Koji suggested looking at the output of the AS55 demod board on a fast oscilloscope to look for differences in the two channel outputs (if there are high-frequency oscillations, for example, we could miss that information in the SR785 spectra). Besides, I was only looking at spectra out to a few kHz on the SR785. I grabbed this data with a 300MHz BW Tektronix oscilloscope (battery mode) today. The input impedance of both channels was set to 1Mohm, and the measurement was made with the RFPD input terminated; the output of the daughter board is what was measured. The vertical scaling of the channels was set to the minimum allowed, 1mV/div.

Attachment #1 shows that there is indeed a visible difference between the two channels - the (noisier) I channel has a much larger DC offset of ~5mV compared to the Q channel (I tried switching channels on the o'scope and the larger DC offset remained on the I channel, so it seems real). There is also some kind of oscillation going on in the I channel, although the frequency is pretty low, with the peaks spaced ~50us apart. Indeed, in the ASD of the acquired data, the excess power in the I channel at 20kHz and its higher harmonics is evident (see Attachment #2). Anyway, all of this points to something anomalous in the daughter board I channel signal path - I will pull it out and monitor the outputs at various points along the signal path with the fast scope to see if I can narrow down what's going on where.
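
For future reference, here is a minimal sketch of how an ASD like the one in Attachment #2 can be computed from the scope traces. The filename and CSV column layout below are assumptions for illustration, not the actual data format:

import numpy as np
from scipy.signal import welch

# Assumed file layout: time [s], I channel [V], Q channel [V]
t, ch_I, ch_Q = np.loadtxt('demod_scope_trace.csv', delimiter=',', unpack=True)
fs = 1.0 / np.median(np.diff(t))   # sample rate inferred from the time column

for label, x in [('I', ch_I), ('Q', ch_Q)]:
    f, psd = welch(x, fs=fs, nperseg=4096)   # PSD in V^2/Hz
    asd = np.sqrt(psd)                       # ASD in V/rtHz
    # peaks spaced ~50us apart should show up as a ~20kHz comb in channel I
    print(label, asd[np.argmin(np.abs(f - 20e3))], 'V/rtHz at 20kHz')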

Quote:

Both channels should be identical - I don't understand why the I channels are noisier than their Q counterparts. This is almost certainly a problem on the daughter board, as the orange traces are pretty much identical for both channels.

 

Attachment 1: DemodBoardwOscope.pdf
DemodBoardwOscope.pdf
Attachment 2: DemodBoardwOscope_ASD.pdf
DemodBoardwOscope_ASD.pdf
  13381   Mon Oct 16 12:13:38 2017 gautamUpdateCDSMegatron maintenance

Wall StripTool traces showed that the IMC had not been locked for at least 8 hours when I came in this morning. Going to the IMC autolocker log, it looks like the last timestamp was at ~6pm yesterday. Megatron was responding to ping, but I couldn't ssh into it. So I went over to the machine and did a hard reboot via the front panel power switch. The computer took ~10 mins to come back online and respond to ping. Once it did, I was able to ssh into it. However, the usual commands to restart the IMC autolocker and FSS Slow loops didn't work. Specifically, monitoring the logfile with tail -f Autolocker.log, I could see that the autolocker seemed to get stuck after starting the "blinky" script. When I tried restarting the process using sudo initctl restart MCautolocker, init reported to the shell that the restart had worked and printed the PID, but the logfile wouldn't update "live" as it should when tail is used with the -f option. All very strange frown.

Anyways, as a last resort, I kill -9'ed the PID for the init instance, and init automatically restarted the Autolocker - this did the trick, IMC is locked now and logfile seems to be getting updated normally yes.

I also cleared a bunch of matlab crash dump files in the home directory.

  13380   Fri Oct 13 12:26:12 2017 gautamUpdateLSCAS55Q Dark Noise

Attachment #1 - Measured / modelled noises for the AS55 demod board. I've plotted the quadrature sum of the LISO trace with the SR785 noise floor (input terminated to ground via 50ohm). Note that these measurements were made after all the changes in the marked-up schematic in the previous elog were implemented.

Both channels should be identical - I don't understand why the I channels are noisier than their Q counterparts. This is almost certainly a problem on the daughter board, as the orange traces are pretty much identical for both channels.

The dark red curves were measured by shorting the inputs to D040179 to ground via 50ohms using some Pomona minigrabbers - I wanted to avoid ripping the daughter board out, but this probably explains the excess noise compared to the green trace at low frequencies. All other measurements were made with the board installed in the LSC rack eurocrate, with the LO input driven at the nominal level (I didn't measure this yesterday but a measurement from ~6months ago says that this level is 1.5dBm).
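
The quadrature-sum trace in Attachment #1 was put together along these lines (a python sketch; the file names and column layouts are placeholders, not the actual data files):

import numpy as np

# Assumed layouts: LISO .out with (freq, noise) columns; SR785 export as (freq, noise)
f_liso, n_liso = np.loadtxt('as55_daughter_liso.out', usecols=(0, 1), unpack=True)
f_785, n_785 = np.loadtxt('sr785_floor_50ohm.txt', unpack=True)

# Interpolate the analyzer floor onto the LISO frequency grid, then quad-sum
n_floor = np.interp(f_liso, f_785, n_785)
n_total = np.sqrt(n_liso**2 + n_floor**2)   # expected measured level [V/rtHz]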

Attachment 1: AS55_DemodNoises.pdf
AS55_DemodNoises.pdf
  13379   Thu Oct 12 14:42:45 2017 gautamUpdateCDSslow machine bootfest

Steve reported problems getting the X arm locked. Alignment sliders were inaccessible. Eurocrate key-turning reboots were performed for c1susaux, c1auxex, c1auxey, c1iscaux and c1aux. The usual precautions were taken for ITMX.

This is becoming a once-a-week thing sad.

  13378   Thu Oct 12 12:17:28 2017 gautamUpdateLSCAS55Q Dark Noise

Here is the marked-up schematic with the board as it is stuffed. Annoyingly, there is a capacitor (C1) which according to the schematic is supposed to be open, but is stuffed in our board. I can't find any elog about this, and it's a pain to measure the value of this capacitance. I will upload all of this + LISO + noise model/measurements to a 40m AS55 daughter board DCC page.

 

Attachment 1: D040179_AS55_40m.pdf
D040179_AS55_40m.pdf D040179_AS55_40m.pdf
  13377   Thu Oct 12 07:56:33 2017 SteveHowToCamerasETMX GigE side view at 50 deg of IR scattering

 Telescope front lens to wall distance is 25 cm; the GigE camera length is 6 cm and the Cat6 cable adds 2 cm.

 Atm3,  the existing short camera can has 16 cm of length to the lexan guard on the viewport. The available 2" od periscope tube length is 8 cm; the one in use is 16 cm long.

             Note: we can fabricate a lightweight cover with a tube that would accommodate a longer telescope.

             Can we calibrate the AR coated M5018-SW and compare its performance against the 2" periscope?

             Look at the Edmund Optics 3" od camera lens with AR.

This lower priced 1" aperture Navitar lens can be an option too.

 

 Atm1,   Now I can see dust. This is much better. The focus is not right yet.

Atm2,   Chamber viewport wiped and image refocused. Actually I was focusing on the dust.

Quote:

I calculated a better lens solution for the ETMX side view with the simple python script that's attached. The camera is still not as close to the viewport as we would like, and now the front lens is almost all the way up to the end of the tube. With a little more playing around there may be a better way, especially if we expand the repertoire of focal lengths. Using Steve's wonderful camera fixture I put the beam spot in focus. I turned the camera sideways for better use of the field of view, and now the beam spot actually fills the center area of the image, to the point where we probably don't want more magnification or else we start losing the tails of the Gaussian.

We'll take a series of images tomorrow, and will have an estimate of the scatter loss by the end of tomorrow.

 

 

Attachment 1: Image__2017-10-11__15-29-52_15k400g.png
Image__2017-10-11__15-29-52_15k400g.png
Attachment 2: Image__2017-10-12__15-50-18wipedRefocud2.png
Image__2017-10-12__15-50-18wipedRefocud2.png
Attachment 3: camCan16cm.jpg
camCan16cm.jpg
  13376   Thu Oct 12 01:50:11 2017 gautamUpdateLSCAS55Q Dark Noise

I worked on the daughter board a little more in the evening. I have somehow managed to make the dark noise ~25% worse [Attachment #1].

  • Earlier in the day, I had switched out both on-board AD797s for OP27s. The latter has ~3x the input voltage noise, and LISO modeling suggests that this should be the dominant contribution to the output voltage noise.
  • There are some differences between the actual components with which the board is stuffed and the schematic.
  • After updating the LISO model, I expect to get an output voltage noise of ~50nV/rtHz. But I measured ~2x this value (measured with LO input of demod board driven, RFPD input terminated).
  • While I had the board out, I replaced most of the installed thick-film resistors with thin film ones. For good measure, I also changed the AD829s.

After making all these changes, I re-installed the card in the eurocrate and repeated the measurement. The Q channel noise was close to the expected value (~50nV/rtHz), but the I channel is twice as noisy. I will continue this investigation tomorrow.

Attachment 1: AS55_dark.png
AS55_dark.png
  13375   Thu Oct 12 01:03:49 2017 johannesHowToCamerasETMX GigE side view

I calculated a better lens solution for the ETMX side view with the simple python script that's attached. The camera is still not as close to the viewport as we would like, and now the front lens is almost all the way up to the end of the tube. With a little more playing around there may be a better way, especially if we expand the repertoire of focal lengths. Using Steve's wonderful camera fixture I put the beam spot in focus. I turned the camera sideways for better use of the field of view, and now the beam spot actually fills the center area of the image, to the point where we probably don't want more magnification or else we start losing the tails of the Gaussian.
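
For anyone repeating this, the core of such a lens-position calculation is just the thin-lens equation. Here is a minimal python sketch; the focal length and distances below are illustrative placeholders, not the values actually used:

def image_distance(f, obj):
    """Thin lens: 1/f = 1/o + 1/i, solved for the image distance i."""
    return 1.0 / (1.0 / f - 1.0 / obj)

f_mm = 150.0     # front lens focal length [mm] (placeholder)
obj_mm = 900.0   # beam spot to lens distance [mm] (placeholder)

img_mm = image_distance(f_mm, obj_mm)   # ~180 mm behind the lens
mag = img_mm / obj_mm                   # magnification ~0.2x
print(img_mm, mag)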

We'll take a series of images tomorrow, and will have an estimate of the scatter loss by the end of tomorrow.

 

Attachment 1: IMG_20171011_164549698.jpg
IMG_20171011_164549698.jpg
Attachment 2: Image__2017-10-11__16-52-01.png
Image__2017-10-11__16-52-01.png
Attachment 3: GigE_lens_position_helper.py.zip
  13374   Wed Oct 11 19:31:32 2017 gautamUpdateLSCAS55Q Dark Noise

I tried replacing the AD797s on the daughter board with OP27s, and saw no significant improvement in the electronics noise of the demod board. Note that according to LISO, in this configuration the voltage noise of the OP27 is expected to dominate the total noise of the daughter board. The measurement condition was that the RFPD input was terminated, but the LO input was still being driven (SR785 input range was -50dBVpk for all traces, and the input ranging was set to "UpOnly"). Need to do a more systematic investigation to figure out where this excess noise is coming from. I will upload photos of the board later.
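
To see why the OP27 voltage noise should dominate, here is a rough python estimate using typical datasheet noise numbers and the schematic component values, treating the stage signal gains as noise gains for simplicity (the real LISO model is more careful):

import numpy as np

en = {'AD797': 0.9e-9, 'OP27': 3.2e-9}   # typical input voltage noise [V/rtHz]
g_total = (316 / 50) * (1.3e3 / 768)     # both stage gains, ~10.7x

for name, e in en.items():
    # first-stage voltage noise appears at the output multiplied by both gains
    print(name, e * g_total * 1e9, 'nV/rtHz at the daughter board output')
# ~10 nV/rtHz with the AD797 vs ~34 nV/rtHz with the OP27, so after the swap
# the OP27 voltage noise term dominates, consistent with the LISO prediction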

Quote:

This supports the hypothesis that something is wonky on the daughter board, because the purple trace should only be the quad sum of the orange and green traces. I will pull it out and have a look.

 

Attachment 1: AS55Q_darkNoises.pdf
AS55Q_darkNoises.pdf