ID   Date   Author   Type   Category   Subject
  14698   Tue Jun 25 23:52:37 2019   Milind   Update   Cameras   Simulation enhancements

Yesterday, Rana asked me to look at Hiro Yamamoto's docs on the DCC to improve the simulation. I'm doing a first pass (just skimming to see if they're relevant; I will go through them more carefully soon) and putting up stuff here for future reference. @Kruthi's help much appreciated!

  14699   Wed Jun 26 10:55:13 2019   aaron   Update   IOO   PMC and IMC locked again, some MEDM maintenance

The PMC was locking again after Gautam's steps above. However, after I added the directional coupler between the mixer I and the servo card (coupled to the Agilent analyzer), the PMC was again not locking, except occasionally with gain of -10 dB.

I removed the coupler (so the mixer I goes directly to the PMC servo card, as Gautam had it), and the PMC was still not locking. While checking connections, I noticed that one of the SMA cables between the LO and the mixer was not even finger tight, so I tightened them to approximately the right torque with a non-torque wrench.

This did not lead to the PMC locking, so Milind helped me key the c1psl VME crate. I burt restored the latest snapshot. Now the PMC locks, but only up to a gain of -5 dB. I then burt restored the previous snapshot, which was from when the PMC was locking, and now it locks. Adding in the directional coupler again leads to the PMC not locking, though this time removing the coupler restores the normal behavior. I also tried using the coupler with the coupling port connected to a 50 Ohm terminator, and this configuration also did not lock.

I had been using a ZFDC-20-5-S+ (0.1-2000 MHz) with SMA ports and SMA-to-BNC on the input and output ports (since the mixer has BNC connectors). To reduce the number of potentially flaky connections, I am trying the ZFDC-20-4 (1-1000 MHz) that I found with BNC ports. The PMC still doesn't lock.

To get some spectrum, I've connected the PMC servo card's 'mixer out' to the Agilent's A channel, and collected spectra over [10 Hz, 75 kHz], [75 kHz, 750 kHz], and [750 kHz, 2 MHz].


Wed Jun 26 15:23:37 2019

After the lab cleaning, I added a BNC T on the mixer I port, so now the configuration is:

Mixer I -> BNC T
    -> PMC servo card FP1TEST
    -> directional coupler -> coupled port to the spectrum analyzer; out port terminated with 50 Ohms.

I thought maybe the issue was that the TF from in->out on the directional coupler is not what I expect (and Gautam suggested the in-out port might block DC), but the PMC still does not lock in the above configuration, in which the coupler is not between the mixer and the servo board--so only reflections from the coupler should matter, I think.

However, even when I plug the mixer directly into the servo board, the PMC is not locking (again) with gain above -8 dB or so. I did a burt restore again, and this fixed the problem. I wasn't sure why this burt restore is working, because all I am changing is the DC output adjust voltage and the gain, and switching on/off FP1TEST. However, I observed that after running the PMC autolocking script, observing that the autolocker did not achieve lock as it swept through resonance, and cancelling the autolocker, the PMC again cannot be locked for high gains. When I let the autolocker complete, this doesn't happen, so probably I'm just not letting some channel return to its nominal value after being changed by the autolocker.

Now after another burt restore, I'm avoiding using the autolocker and am still having trouble locking with the BNC T + directional coupler configuration above. However, now I'm noticing that the PZT control mon is always railed, as long as FP1TEST is in the loop (and independent of the output adjust voltage). I try returning to the 'baseline' configuration (mixer -> PMC servo card directly), and the PMC locks but with only 0.68 V transmission (was >0.7 V before).


Per Gautam's earlier suggestion, I switched to using the Agilent 41800A probe instead of the directional coupler. I was able to lock the PMC with this probe on a BNC T coming out of the mixer (transmission is 0.71 V). I recorded the spectra of the PMC servo board's "Mixer Out" channel, and the mixer's I as seen by the probe. I recorded spectra from 10 Hz to 100 MHz. The soft linked netgpibdata folder I had in my users directory is no longer soft linked--presumably intentional so I don't tamper with it?

I'm a bit skeptical that I've used the probe correctly, so I'm checking out the manual.

Indeed, I needed to pull back the sheath; I also noticed that the GPIB script I've been using doesn't save the data from both channels when I take a spectrum in dual mode, so I'm taking the spectra again one at a time (lights are on, IMC is locked).

  14700   Wed Jun 26 11:11:40 2019   Milind   Update   IOO   PMC and IMC locked again, some MEDM maintenance

After helping Aaron key the crate and do a burt restore, I realized that it would probably be best to record the steps that Koji showed me for doing a burt restore, as a reference for anyone in the future.

Commands (in terminal):

  1. burttoday: changes to the directory with snapshots for the day (/opt/rtcds/caltech/c1/burt/autoburt/today)
  2. burtgooey: opens a new window with several buttons of which "Restore" needs to be selected. This opens up a second window as shown in Attachment #1. Click on Snapshot files and navigate to the snapshot you wish to restore (these are present at /opt/rtcds/caltech/c1/burt/autoburt/snapshots) and select that. A green "OK" button indicates if the Restore can be performed without a hitch. Hit "Restore" to perform the burtrestore.

 

Also, Gautam explained today that the sticky slider problem is a hardware issue: it basically means that the signal (a voltage output, for instance) that you request from the MEDM screen is not what the hardware delivers. Twice now, we have got around that with a burt restore. My understanding of a burt restore is that it is a restoration of values from a certain time to the EPICS channels. Therefore, I don't understand why a restoration at the software level should fix how the hardware responds? Why does this happen?

Attachment 1: burtgooey.pdf
  14701   Wed Jun 26 18:28:24 2019   rana   Update   IOO   PMC and IMC locked again, some MEDM maintenance

a useful piece of code that we should ask one of this summer's SURFs to write:

  1. read in a BURT .req file which is usually used to make the autosnap / restore.
  2. change ALL of the values to some value (slightly different from its current value)
  3. restore it to its current value

this will solve the sticky slider problem and do it in a systematic way. We can run it from the command line: e.g. 'unsticky.py c1psl c1ioo c1lsc'

Quote:

Aaron complained to me earlier that the PMC could not be locked. Turned out to be a classic sticky slider problem,

  14702   Wed Jun 26 19:12:00 2019   Kruthi   Update   Cameras   GigE

The GigE is focused now (judged by eye) and I have closed the lid. I'm attaching a picture of the MC2 beam spot, captured using GigE at an exposure time of 400µs.

What was the solution to the flaky video streaming during the alignment process????

-> I think the issue was with either the poor wireless network connection or the GigE-PoE ethernet cable.

Quote:

Turns out, focusing the GigE is actually a bit tricky. With pylon, every time I change the exposure or the focus, I'm running into the error I had mentioned earlier in one of my elogs; so I tried using the python scripts to interact with the GigE. But whenever I try to change the focal plane distance by rotating the lens coupler, the ethernet cable connection becomes loose and the camera server needs to be relaunched every now and then. Also, every time we want to change the distance between the lenses, the telescope needs to be dismantled and refocused again. I'll try to come up with a better telescope design for this.

Yesterday, I had focused the GigE using a low exposure time and small aperture of iris, to make sure that we are actually seeing a sharp image of the beam spot. I'm attaching a picture of the beam spot I had clicked while focusing it, unfortunately, I forgot to take a picture after I had focused it completely. I'm also attaching a picture of the final setup for future reference. 


Yesterday night, Rana asked me to lock the MC2. I figured that the PSL shutter was closed; I just opened it and was able to see the beam spot on the analog camera screen.

Attachment 1: MC2_GigE_image.pdf
  14703   Wed Jun 26 20:45:03 2019   gautam   Update   Cameras   Field of view options

For the beam spot position tracking, I am wondering if there is any benefit to going for a wider field of view and getting the OSEMs in the frame? It may provide some "anchor points" against which whatever algorithm can calibrate the spot position. But there are also several point scatterers visible in the current view, and perhaps the Gaussian beam profile moving over them and tracking the scattered intensity from these point scatterers serves the same function? I don't know of a good solution to have a "switchable" field of view configuration in the already cramped camera enclosure though.

Also, I think it may be useful to have a cron job take a picture of MC2 and archive it (once a week? or daily?) to have some long term diagnostic of how the scattered light received by the camera changes over several months.
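A minimal sketch of what such an archiving job could look like, intended to be run from cron; the script path, archive directory and capture command below are hypothetical placeholders (the real capture would go through whatever GigE camera-server / pylon tooling we already use):

#!/usr/bin/env python
# Hypothetical weekly MC2 snapshot archiver, e.g. from a crontab line like
#   0 3 * * 0  /users/scripts/archive_mc2.py        (placeholder path)
import os
import subprocess
import time

ARCHIVE_DIR = "/users/archive/MC2_snapshots"        # placeholder location
CAPTURE_CMD = ["python", "camera_capture.py",       # placeholder capture script
               "--exposure_us", "400", "--out"]

def main():
    os.makedirs(ARCHIVE_DIR, exist_ok=True)
    fname = os.path.join(ARCHIVE_DIR, time.strftime("MC2_%Y%m%d_%H%M%S.png"))
    subprocess.run(CAPTURE_CMD + [fname], check=True)

if __name__ == "__main__":
    main()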

Quote:

The GigE is focused now and I have closed the lid. I'm attaching a picture of the MC2 beam spot, captured using GigE at an exposure time of 400µs

  14704   Wed Jun 26 21:01:26 2019   gautam   Update   LSC   POX and POY locking

Now that the IMC is remaining locked for extended periods of time, the next problem to attack is the ASS dither alignment system. For a start, I decided to try and get the POX and POY locking working again, as we have not fully recovered the interferometer alignment after the most recent pumpdown. I spent a couple of hours tweaking the alignment of the arm cavity mirrors, BS, and TTs to try and recover the maximum possible TRX and TRY - however, my best efforts only yielded TRX~0.8, TRY~0.75. Moreover, the beam axis is such that the spot is significantly off in YAW on both ETMs, as evidenced by the camera views (also true but less obvious on the ITMs). However, trying to bring the beam back to the center of the optics yields TRY and TRX values lower than the above reported maxima. The EX green beam is currently unavailable to verify the arm cavity alignment because of my hijacking the EX NPRO's PZT control for PLL investigations, but with the Y arm, I'm able to lock a TEM00 mode. Probably just needs more careful systematic alignment, but I'm not pursuing this tonight.

  14705   Thu Jun 27 14:28:12 2019   gautam   Update   LSC   POX and POY locking

After a more systematic alignment effort, I was able to get the spots better centered on the optics (judged by eye from the analog camera views). TRY ~0.7, TRX~1.15. The X-arm dither alignment system seems to work out-of-the-box with the existing settings, I was able to run it and maximize the X-arm transmission.

Other work: I also cleaned up the area around MC2 a little - the laptop on top of the vacuum chamber was removed and a rogue ethernet cable was also removed. This resulted in some misalignment of the IMC, which I corrected by manual alignment. Now the IMC is locked again with nominal transmission levels.

On the PSL table, I re-routed the RF output from the BeatMouth to the regular IR-ALS electronics chain (it was hijacked for PLL investigations). At EX, I disconnected the cable running from the LB1005 to the EX NPRO laser PZT (again was being used for PLL locking), and re-connected the output from the Green uPDH box to allow for some ALS tests to be done. I could then lock the EX green beam to the X-arm, and achieved GTRY ~ 0.35 using the ASX system. More to follow on ALS tests later today.

  14706   Thu Jun 27 20:48:22 2019   Milind   Update   Cameras   Convolutional neural networks for beam tracking

And finally, a network is trained!

Result summary (TLDR :-P) : No memory was used. Model trained. Results were garbage. Will tune hyperparameters now. Code pushed to github.

 

More details of the experiment:

Aim:

  1. To train a network to check that training occurs and get a feel for what the learning might be like.
  2. To set up the necessary framework to perform multiple experiments and record results in a manner facilitating comparison.
  3. To track beam spot motion.

What I did:

  1. Set up a network that learns a framewise mapping as described here (a rough sketch of this kind of model is included at the end of this entry).
  2. Training data: 0.9 x 1791 frames. Validation data: 0.1 x 1791 frames. Test data (only prediction): all the 1791 frames
  3. Hyperparameters: Attachment #1
  4. Did no tuning of hyperparameters.
  5. Compiled and fit the models and saved the results.

 

What I saw

  1. Attachment #2: data fed to the network after pre-processing - median blur + crop
  2. Attachment #3: learning curves.
  3. Attachment #4: true and predicted motion. Nothing great.

What I think is going wrong-

  1. No hyperparameter tuning. This was only a first pass but is being reported as it will form the basis of all future experiments.
  2. Too little data.
  3. Maybe wrong architecture.

Well, what now?

  1. Tune hyperparameters (try to get the network to overfit on the data and then test on that; we'll then know for sure that all we probably need is more data?)
  2. Currently the network has around 200k parameters. Maybe reduce that.
  3. Set up a network that takes as input (one example corresponding to one forward pass) a bunch of frames and predicts a vector of position values that can be used as continuous data.
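For concreteness, here is a minimal Keras sketch of the kind of framewise model described above, wired up with the hyperparameters listed in Attachment #1 (batch size 32, Adam with eta = 1e-4, relu, 64 dense units, dropout 0.8, Xavier/glorot initialization); the convolutional layer sizes and the 64x64 single-channel input shape are my own placeholder assumptions, not the actual train.py:

from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Conv2D, MaxPooling2D, Flatten, Dense, Dropout
from tensorflow.keras.optimizers import Adam

model = Sequential([
    Conv2D(8, (3, 3), activation='relu', kernel_initializer='glorot_uniform',
           input_shape=(64, 64, 1)),          # cropped, median-blurred frame (placeholder size)
    MaxPooling2D((2, 2)),
    Conv2D(16, (3, 3), activation='relu', kernel_initializer='glorot_uniform'),
    MaxPooling2D((2, 2)),
    Flatten(),
    Dense(64, activation='relu'),
    Dropout(0.8),                             # dropout_probability from the readme
    Dense(1)                                  # framewise beam position estimate
])
model.compile(optimizer=Adam(learning_rate=1e-4), loss='mse')
# model.fit(frames, positions, batch_size=32, epochs=50, validation_split=0.1)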
Quote:

I got to speak to Gabriele about the project today and he suggested that if I am using Rana's memory based approach, then I had better be careful to ensure that the network does not falsely learn to predict a sinusoid at all points in time and that if I use the frame wise approach I try to somehow incorporate the fact that certain magnitudes and frequencies of motion are simply not physically possible. Something that Rana and Gautam emphasized as well.

 
Quote:
 
  1. Network training for beam spot tracking: I will begin training the convolutional network with the data pre-processed as described above. I will also simultaneously prepare data acquired from the GigE and train networks on that. Note: I planned to experiment with framewise predictions and hence did some of the work described above. However, I will restrict the number of experiments on that and perform more of those that use 3D convolution. Rana also pointed out that it would be interesting to have the network output uncertainty in the predictions. I am not sure how this can be done, but I will look into it.

 

 

Attachment 1: readme.txt
Experiment file: train.py
batch_size: 32
dropout_probability: 0.8
eta: 0.0001
filter_size: 19
filter_type: median
initializer: xavier
num_epochs: 50
activation_function: relu
dense_layer_units: 64
... 10 more lines ...
Attachment 2: frame0.pdf
Attachment 3: Learning_curves.png
Attachment 4: Motion.png
  14708   Sat Jun 29 03:11:18 2019   Kruthi   Update   Cameras   CCD Calibration

Finding the gain of the photodiode: The three-position rotary switch of the photodiode being used (PDA520) wasn't working, so I determined its gain by making a comparative measurement between the Ophir power meter and the photodiode. The photodiode has a responsivity of 0.34 A/W at 1064 nm (obtained from the responsivity curve given in the spec sheet using a curve digitizing software). Using the following equation, I determined the gain setting, which turned out to be 20 dB.

Transimpedance\ gain\ (V/A) = \frac{Photodiode\ reading\ (V)}{Ophir\ reading\ (W) \times Responsivity\ (A/W)}

Setup: Here a 1050 nm LED (the closest we have to 1064 nm) is used as the light source instead of a laser, to eliminate coherence effects of a laser source which might affect our radiometric calibration. The LED is placed in a box with a hole of diameter 5 mm (aperture angle ~ 40 degrees). Suitable lenses are used to focus the light onto a white paper, which is fixed at an arbitrary angle and serves as a Lambertian scatterer. To make a comparative measurement between the photodiode (PDA520) and the GigE, we need to account for their different sensor areas, 8.8 mm (aperture diameter) and 3.7 mm x 2.8 mm respectively. This can be done either by using an iris with a common aperture so that both the photodiode and the GigE receive the same amount of light, or by calculating the power incident on the GigE using the ratio of sensor areas and the power incident on the photodiode (here we are using the fact that the power scattered by a Lambertian scatterer per unit solid angle is constant).

Calibration of GigE 152 unit: I took around 50 images, starting at an exposure time of 2000 µs and increasing in steps of 2000 µs, using the exposure_variation.py code. But the code doesn't allow us to take images with an exposure time greater than 100 ms, so I took a few more images at higher exposures manually. From each image I subtracted a dark image (not in the sense of the usual CCD calibration, but just an image with the same exposure time and no LED light). These dark images do the job of the usual dark frame + bias frame and also account for stray light. A plot of pixel sum vs exposure time is attached. From a linear fit to the unsaturated region, I obtained the slope and calculated the calibration factor (a short numerical sketch of this calculation follows the equations below).

Equations:      Power\ (P) = \frac{Photodiode\ reading\ (V)}{Transimpedance\ gain\ (V/A) \times Responsivity\ (A/W)}                    Calibration\ factor\ (CF) = \frac{P}{slope}
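A minimal numpy sketch of this calculation, just to make the bookkeeping explicit (the file names and photodiode numbers below are placeholders, not the actual measured values; P should also be scaled by the GigE/photodiode sensor-area ratio described above before dividing by the slope):

import numpy as np

# placeholder inputs: exposure times (s) and dark-subtracted pixel sums (counts)
exposure = np.loadtxt("exposure_s.txt")
pixel_sum = np.loadtxt("pixel_sum_counts.txt")

# power seen by the photodiode, P = V_PD / (G_TI * R)
V_pd = 1.0      # photodiode reading in V (placeholder)
G_ti = 1.5e4    # transimpedance gain in V/A at the 20 dB setting (placeholder)
R = 0.34        # responsivity in A/W at 1064 nm, from the spec sheet
P = V_pd / (G_ti * R)

# linear fit of the unsaturated region: pixel_sum = slope * exposure + offset
unsat = pixel_sum < 0.9 * pixel_sum.max()   # crude saturation cut (placeholder)
slope, offset = np.polyfit(exposure[unsat], pixel_sum[unsat], 1)

CF = P / slope   # calibration factor in W-sec/count
print("CF = %.3g W-sec/count" % CF)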

Result: CF = 1.91 x 10^-16 W-sec/counts. Update: I had used a wrong value for the area of the photodiode. On using 61.36 mm^2 as the area, I got CF = 2.04 x 10^-15 W-sec/counts.

I'll put up the uncertainties soon. I'm also attaching the GigE spectral response curve for future reference.

Attachment 1: calibration_setup.jpg
Attachment 2: CCD_calibration_2.jpeg
Attachment 3: GigE_spectral_response_curve.png
Attachment 4: 152_calibration_plot.png
  14709   Sun Jun 30 19:47:09 2019   rana   Update   IOO   IMC WFS agenda

we are thinking of doing a sprucing up of the input mode cleaner WFS (sensors + electronics + feedback loops)

  1. WFS Heads
    1. it has been known since ~2002 that the RF circuits in the heads oscillate. 
    2. in the attached PDF you can see that 2 opamps (U3 & U4; MAX4106) are used to amplify the tank circuit made up of the photodiode capacitance and L6.
    3. due to poor PCB layout (the output of U4 runs close to the input of U3) the opamps oscillate if the Reed relay (RY2) is left open (not attenuating)
    4. we need to remove/disable the relay
    5. also remove U3 for each quadrant so that it has a fixed gain of (TBD) and a 50 Ohm output
    6. also check that all the resonances are tuned to 1f, 2f, & 3f respectively
  2. Demod boards
  3. DC quadrant readouts
  4. Whitening
  5. Noise budget of sensors, including electronics chain
  6. diagonalization of sensors / actuators
  7. Requirements -
  8. Optical Layout
  9. What does the future hold ?

  1. what is our preferred pin-for-pin replacement for the MAX4106/MAX4107? internet suggests AD9632. Anyone have any experience with it? The Rabbott uses LMH6642 in the aLIGO WFSs. It has a lower slew rate than 9632, but they both have the same distortion of ~ -60 dB for 29.5 MHz.
  2. the whole DC current readout is weird. Should have a load resistor and go into the + input of the opamp, so as to decouple it from the RF stuff. Also why such a fast part? Should have used an OP27 equivalent or LT1124.
  3. LEMO connectors for RF are bad. Wonder if we could remove them and put SMA panel mount on there.
  4. as usual, makes me feel like replacing with better heads...and downstream electronics...
Attachment 1: WFS-Head.pdf
  14710   Sun Jun 30 22:02:26 2019   Milind   Update   Cameras   Keyed c1aux crate

I wanted to try out the unstick.py script on c1aux but kept running into timeout errors. I was also confronted by a blank GigE screen. Further, I couldn't telnet into c1aux using telnet c1aux as described here. Therefore, I went in and keyed the c1aux crate (1Y1).

  14711   Sun Jun 30 22:21:07 2019   Milind   Update   IOO   PMC and IMC locked again, some MEDM maintenance

Wrote the script. It currently lives at /users/milind/NonlinearControl/milind/unstick/unstick.py. Also pushed it to the repo here. It does the following:

  1. When run as python unstick.py c1aux (for instance) from the terminal, it parses the autoBurt.req file at /cvs/cds/caltech/target/c1aux/autoBurt.req and obtains the channels.
  2. Iterates through the channels and changes it to "some value"
    1. For channels corresponding to buttons on MEDM screen (Enable/Disable etc.) toggles between 0 and 1
    2. For channels corresponding to continuous values (such as say exposure time or the like) changes to abs(1+current_value)
  3. Resets to original value and then moves to the next channel

I used print statements instead of actually writing to the channels, since Gautam asked me to do the actual channel writes only under supervision. Is this good enough? (A stripped-down sketch of the logic is included below.)
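For reference, a stripped-down sketch of the logic described above (this is not the actual unstick.py: the .req parsing and the button/continuous heuristic are simplified, the path follows the convention quoted above, and the nudge uses abs(1+value) as currently implemented - see Koji's comment in the next entry about why that choice is questionable for negative values):

import sys
import time
from epics import caget, caput   # pyepics

SLEEP = 0.001   # settling time between channel writes, in seconds

def read_req_channels(req_path):
    """Pull channel names out of an autoBurt.req file; assumes the first
    whitespace-separated token of each non-comment line is the PV name."""
    chans = []
    with open(req_path) as f:
        for line in f:
            line = line.strip()
            if line and not line.startswith("#"):
                chans.append(line.split()[0])
    return chans

def unstick(target):
    req = "/cvs/cds/caltech/target/%s/autoBurt.req" % target
    for ch in read_req_channels(req):
        val = caget(ch)
        if val is None:
            continue                     # unreachable channel, skip
        if val in (0, 1):                # button-like channel: toggle it
            jiggle = 1 - val
        else:                            # continuous channel: nudge it
            jiggle = abs(1 + val)
        caput(ch, jiggle)
        time.sleep(SLEEP)
        caput(ch, val)                   # restore the original value
        time.sleep(SLEEP)

if __name__ == "__main__":
    for tgt in sys.argv[1:]:             # e.g. python unstick.py c1psl
        unstick(tgt)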

Quote:

a useful piece of code that we should ask one of this summer's SURFs to write:

  1. read in a BURT .req file which is usually used to make the autosnap / restore.
  2. change ALL of the values to some value (slightly different from its current value)
  3. restore it to its current value

this will solve the sticky slider problem and do it in a systematic way. We can run it from the command line: e.g. 'unsticky.py c1psl c1ioo c1lsc'

Quote:

Aaron complained to me earlier that the PMC could not be locked. Turned out to be a classic sticky slider problem,

  14712   Sun Jun 30 23:52:09 2019   Koji   Update   IOO   PMC and IMC locked again, some MEDM maintenance

> For channels corresponding to continuous values (such as say exposure time or the like) changes to abs(1+current_value)

Why abs? If the current_value is something like -5.4321 (for example, for the alignment slider), this returns +4.4321 and the suspension will suffer from huge motion (well, it will be returned to the original value soon though).

  14713   Mon Jul 1 11:02:05 2019   Milind   Update   IOO   PMC and IMC locked again, some MEDM maintenance

Made changes as discussed in this issue. Further, I need to add signal handling capabilities to the code. I believe Jon has pointed me to some code. I will take a look at that ASAP.

Quote:

> For channels corresponding to continuous values (such as say exposure time or the like) changes to abs(1+current_value)

Why abs? If the current_value is something like -5.4321 (for example, for the alignment slider), this returns +4.4321 and the suspension will suffer from huge motion (well, it will be returned to the original value soon though).

  14714   Mon Jul 1 20:11:34 2019   Milind   Update   Cameras   Simulation enhancements

Today, I read a lot more about BRDF and modelling but could not make much headway regarding the implementation in the simulation. I've stopped for now and I'll take a crack at it tomorrow again.

Quote:

Yesterday, Rana asked me to look at Hiro Yamamoto's docs on the DCC to improve the simulation. I'm performing a first pass (=> Just skimming through to see if they're relevant, I will go through them more carefully soon!) and putting up stuff here for future reference. @Kruthi's help much appreciated!

  14715   Mon Jul 1 20:18:01 2019   Milind   Update   Computer Scripts / Programs   PMC autolocker

I've begun working on this. Steps to complete:

  1. Convert the autolocker to python. Test that it works.
  2. Run the script with different settings of the servo gain adjust and DC output adjust parameters and obtain a plot of the average time of lock to determine what the best settings of the aforementioned parameters are.
Quote:

As Rana asked me to in the last meeting, I dug through the elogs to determine what had become of the previous autolockers. I stumbled upon this elog by Rana from before Gautam cleaned up the medm screen. Out of curiosity, I ran the autolocker script using the instructions in Rana's elog. I did this a total of 5 times and could lock the PMC 3 times fairly quickly. I attempted to decipher the details of the code but did not make much headway owing to my unfamiliarity with the language. From what I could make out from the medm screen while the autolocker was running, it appeared to be the same method as that in this elog. I will take a look at it again tomorrow. However, I intend to spend most of tomorrow working on preprocessing the data, developing the CNN script and then the simulation. 

Quote:
 
  1.  I shall also begin working on a script to autolock the PMC based on what Rana showed me on Monday. I will also take a look at the the contents of this elog and try to pick up from there. I hope to make significant progress by the next lab meeting.
  14716   Mon Jul 1 20:27:44 2019   gautam   Update   ASC   ASX tuning

Summary:

To practise the dither alignment servo tuning, I decided to make the ASX system work again (mainly because it has fewer DoFs and so I thought it'd be easier to manage). Setup is: dither PZT mirrors on EX table-->demodulate green transmission at the dither frequencies-->Servo the error signals to 0 by an integrator.

Details:

  1. Started by checking the dither lines are showing up with good SNR in GTRX. They are, see Attachment #1. The dither lines are at 18.23 Hz, 27.13 Hz, 53.49 Hz and 41.68 Hz, and all of them show up with SNR ~100.
  2. Hand-aligned the beam till I got a maximum of GTRX ~ 0.35. This is lower than the usual ~0.5 I am used to - possibilities are (i) in the process of plugging in the BNC cable to the rear of the EX laser for my PLL investigations, I disturbed the alignment into the SHG crystal ever so slightly and I now have less green light going into the cavity or (ii) there is an iris on the EX table just before the green beam goes into the vacuum on which it is getting clipped. IIRC, I had centered the GTRX camera view such that the spot was well centered in the field of view, but now I see substantial mis-centering in pitch. So the cavity alignment for IR could also be sub-optimal (although I saw TRX ~1.15). Anyways, I decided to push on.
  3. Introduced a deliberate offset in a given DoF, e.g. M1 PIT. Then I looked at the demodulated error signals (filtered through an RLP0.5 filter post demodulation, so the 2f component should be attenuated by 100 dB at least), and tuned the demod phase until most of the signal appeared in the I-phase, which is what is used for servoing. The Q-phase signals were ~x10 lower than their I-phase counterparts after the tuning.
  4. Checked the linearity of the error signal in response to misalignment of a given DoF. I judged it to be sufficiently linear for all four DoFs about the quadratic part of the GTRX variation.
  5. Tweaked the overall servo gains to have the error signals be driven to 0 in ~10 seconds.
  6. There was quite significant cross-coupling between the DoFs - why should this be? I can understand the PIT->YAW coupling because of imperfect mounting of the PZT mounted mirror in a rotational sense, but I don't really understand the M1->M2 coupling.
  7. Nevertheless, the servo appears to work - see Attachment #2.

The adjusted demod phases, servo gains were saved to the .snap file which gets called when we run the "DITHER ON" script. Also updated the StripTool template.

I plan to repeat similar characterization on the IR dither alignment servos. I think the tuning of the ASS settings can be done independently of figuring out the mystery of why the TRY level is so low.

Attachment 1: ASX_ditherlines.pdf
Attachment 2: ASX.png
  14717   Tue Jul 2 12:30:44 2019   Milind   Update   Computer Scripts / Programs   PMC autolocker

Just finished a raw version of the autolocker!! Tested it once and was able to achieve lock! This is a python version of the code at /opt/rtcds/caltech/c1/scripts/PSL/PMC/AutoLock.sh.

The current code lives in my users directory. Gautam asked me to put the completed autolocker at /opt/rtcds/caltech/c1/scripts/PSL/PMC/ and that I needn't necessarily put it on git. However, I had previously added it to my Non-linear control repo. Not sure if I should take it off? The current script still lacks some checks like those that enable it to stop after a certain time of attempting to lock or those that handle interrupt signals. Will do that in some time.

P.S. As Koji says, Victory! :-P

P.P.S. Rana pointed out that this is not the objective and what we actually wanna do is run a search over the parameter space of the locking process. I will document my ideas about this process as soon as I do a little more reading. He also said that it would not do to have command line arguments as the main source from which parameters are procured and that .yml files ought to be used instead. I will make that change asap.

 

Quote:

I've begun working on this. Steps to complete:

  1. Convert the autolocker to python. Test that it works.
  2. Run the script with different settings of the servo gain adjust and DC output adjust parameters and obtain a plot of the average time of lock to determine what the best settings of the aforementioned parameters are.
Quote:

As Rana asked me to in the last meeting, I dug through the elogs to determine what had become of the previous autolockers. I stumbled upon this elog by Rana from before Gautam cleaned up the medm screen. Out of curiosity, I ran the autolocker script using the instructions in Rana's elog. I did this a total of 5 times and could lock the PMC 3 times fairly quickly. I attempted to decipher the details of the code but did not make much headway owing to my unfamiliarity with the language. From what I could make out from the medm screen while the autolocker was running, it appeared to be the same method as that in this elog. I will take a look at it again tomorrow. However, I intend to spend most of tomorrow working on preprocessing the data, developing the CNN script and then the simulation. 

Quote:
 
  1.  I shall also begin working on a script to autolock the PMC based on what Rana showed me on Monday. I will also take a look at the the contents of this elog and try to pick up from there. I hope to make significant progress by the next lab meeting.
  14718   Tue Jul 2 12:30:53 2019   gautam   Update   Electronics   Acromag crate switched to Sorensens

[chub, gautam]

We crossed off another couple of bullets today.

It took me ~1 hour to realize that sudo /sbin/ifup eth0 needs to be run on c1susaux in order for it to see the martian network - why???

Activity:

  1. Stopped the c1susaux machine:
    • Moved alignment sliders of ITMX and ITMY to 0 as a precaution.
    • Shutdown the c1susaux machine so that it doesn't become unhappy with the missing Acromags when we power the unit down.
  2. Dialled down supply voltages on the +/- 15 V and +/- 20 V DC Sorensens. Current draw became 0 A on the front panel indicators.
  3. Chub tapped some new terminal blocks for +15 V DC and +20 V DC
    • This required some additional daisy chaining, which is why we dialled down the Sorensens.
    • New cables were made using the "standard" LIGO color scheme, which isn't really applicable in this case because we are using +15 V DC (orange sheath wire) and + 20 V DC (yellow sheath wire) whereas the closest LIGO standard voltages are +18 V DC and +24 V DC.
    • A test cable, presumably meant to be used in the electronics area (orange for +15 V DC) was destroyed for this work as we opted for speed rather than making a new cable.
  4. Disconnected bench power supplies that were powering the Acromags, and connected the new cables.
    • I opted to use 5 A fuses in the terminal blocks for these supplies as the current draw is pretty significant.
  5. Dialled the Sorensens back up to the nominal voltages:
    • Attachment #1 shows the front panels of the Sorensens before and after this work.
    • The current limit on the +20 V DC Sorensen had to be raised, because the Acromag box draws ~2.3 A on its own, whereas the previous current draw was 2.8 A.
  6. Brought the c1susaux machine back online. Took me a while to get to the bottom of why I wasn't able to see c1susaux on the martian, but eventually, I figured out the whole sbin/ifup thingy. 

I don't understand the exact chain of causation, but during this work, the fast c1sus model crashed. I had to go through a few iterations of the scripted vertex machine rebooting, but things seem to be back in a normal state now, see Attachment #2. Should probably run the IFO test suite to make sure everything is a-okay, but for now, I am able to lock the IMC so I'm moving on.

The main task remaining here is to take new pictures of everything and upload to the wiki. Also, need to update the Sorensen labels to reflect their current values, some of them are outdated.

Quote:
  • Take photos of the new setup, cabling.
  • Remove the old c1susaux crate from the rack to free up space, possibly put the PSL monitoring acromag chassis there.
  • Test that the OSEM PD whitening switching is working for all 8 vertex optics.(verified as of 5/3/19 5pm)
  • New 15V and 24V power cables with standard LIGO connectors need to be run from the Sorensen supplies in 1X5. The chassis is currently powered by bench supplies sitting on a cart behind the rack.
  • All 24 new DB-37 signal cables need to be labeled.
  • New 96-pin DIN connectors need to be put on two ribbon cables (1Y5_80 B, 1Y5_81) in the 1X4 rack. We had to break these connectors to remove them from the back of the eurocrates.
  • General cleanup of any cables, etc. left around the rack. We cleaned up most things this evening.
  • Rename the host computer c1susaux2 --> c1susaux, and update the DNS lookup tables on chiara.
Attachment 1: 1X5Sorensens.pdf
Attachment 2: CDS_20190702.png
  14719   Tue Jul 2 16:57:09 2019   gautam   Update   CDS   c1sus is flaky

Since the work earlier this morning, the fast c1sus model has crashed ~5 times. Tried rebooting vertex FEs using the reboot script a few times, but the problem is persisting. I'm opting to do the full hard reboot of the 3 vertex FEs to resolve this problem.

Judging by Attachment #1, the processes have been stable overnight.

Attachment 1: c1sus_timing.png
  14720   Tue Jul 2 17:34:54 2019   gautam   Update   LSC   Irides opened up on EY table

In preparation for the ASS debugging, I decided to check out the beam path on the EY table. In order to be able to do this, I had to setup the POY locking to trigger on AS110 instead of TRY (as is usual for this kind of debugging). Then I could poke an IR card in the beam path without destroying the lock.

There are two irides in the beam path immediately between the vacuum window and the harmonic separator that splits off the IR and green beams. I found that the beam was in fact getting clipped on both of them. It was also somewhat off center on a 2" beamsplitter that sends half of the light to the QPD (currently decommissioned). The purpose of these irides is (I think) to eliminate some ghost reflections of the green beam and also the Oplev beam. I opened up the irides until I felt that there wasn't any more clipping of the IR beam, but the appropriate ghost beams were still getting caught.

I also re-aligned the beam onto the TRY Thorlabs PD so as to better center it on the active area. In summary, the result of this work was that the TRY level went from ~0.6 to ~0.93. There may still be some scope for optimizing this - I tried running the Y-arm ASS scripts, and already, the loops don't run away any more. I'll do the systematic analysis of the servo anyways. But given that the IMC Trans level used to be ~15,500 counts and is now ~14,500 counts, I think a ~7% drop in the TRY level is in line with what we "expect" (assuming the pre-power-degradation TRY level was 1.000).

Note that these irides were installed (I think) by Yuki, and so cannot explain the ASS anomalies of July 2018 (i.e. it does not exonerate in-vacuum clipping of the beam, as Koji had already verified that the in-air path was clean back then).

  14721   Tue Jul 2 19:36:18 2019   aaron   Update   IOO   IMC diagnostics

The latest in my fling with the PMC. Though PMC trans is back to nominal levels (~0.713 V), we'd still like to understand the PMC noise.

Last time, I took some spectra with the RF probe (Agilent 41800A). I had already measured the PDH error signal by sweeping the PZT at ~1 Hz. The notebook I used for analysis has been updated in /users/aaron/analysis/PDH_calibrate.ipynb. The analysis was the following:

  • fit the PDH error signal, assuming a 35.5MHz modulation frequency. Here are the (approximate) fit parameters:
    • Mapping of PZT mon voltage to Hz: 5.92 Hz/V_{PZT_mon}
    • P_carrier*P_signal: 0.193 W^2
    • HV mon voltage on resonance: 0.910 V
    • Error signal far off resonance: 0.249 V
    • Transmission: 0.00238
      • yikes. The nominal transmission is T=0.003. I let this parameter be free as a check, and to avoid overconstraining the data; is this consistent with measurements of the PMC optics' transmission?
    • Length: 0.0210 m
      • This is consistent with the nominal PMC length
  • Using the fit of the full PDH error signal, I am able to plot error signal vs frequency, and fit the linear portion of the carrier PDH signal. The results of this fit are:
    • -9.75e-7 V_PDH per Hz
    • 0.105 V error signal at DC
  • I then divide the power spectra by the squared slope of the linear fit above (V_PDH^2/Hz^2) to get the spectra in frequency noise units (a short sketch of this conversion follows the list below).
    • I've plotted both the spectrum I took directly at the mixer I using the agilent probe, as well as the spectrum taken by sending the PMC servo card's mixer mon to an SR560 (G=2) then to the spectrum analyzer
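As a units sanity check, the conversion from the measured voltage spectrum to frequency noise is just a division by the PDH slope; a short sketch (the data file name is a placeholder):

import numpy as np

slope = 9.75e-7   # |V_PDH per Hz| from the linear fit above
f, asd_V = np.loadtxt("pmc_mixer_spectrum.txt", unpack=True)   # V/rtHz (placeholder)
asd_Hz = asd_V / slope        # frequency noise ASD in Hz/rtHz
psd_Hz2 = asd_Hz**2           # corresponding PSD in Hz^2/Hz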

There are a few problems remaining:

  • There should be a gain of 100 between the mixer I and the servo board's mixer out. It's not clear to me that this is reflected in the spectra. Moreover, the header files on the spectra I grabbed from the Agilent say that the R (mixer I) channel has 20dB of input attenuation, which is also not reflected. If I have swapped the two spectra and not accounted for either the gain of the servo card or the attenuation of the spectrum analyzer, these two gains would cancel, but I'm not confident that's what's going on.
Attachment 1: PDH_error.pdf
Attachment 2: PMC_Error_Spectrum.pdf
  14722   Wed Jul 3 11:47:36 2019   gautam   Update   BHD   PRC filtering

A question was raised as to how much passive filtering we benefit from if we pick off the local oscillator beam for BHD from the PRC. I did some simplified modeling of this. For the expected range of arm cavity round trip losses (20-50 ppm), I think that the 40m CARM pole will be between 75-85 Hz. The corresponding recycling gain will be 40-50, with the current PRM. I assumed 1000 ppm loss inside the PRC. The net result is that, assuming the single pole coupled cavity response, we will get ~8-9 dB of filtering at ~200 Hz of the intensity noise of the input laser field to the interferometer if we pick the LO beam off from the PRC (e.g. PR2 transmission), instead of picking it off before.
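As a quick numerical check of that number, treating the coupled cavity as a single pole (a back-of-the-envelope sketch, not the full model used for the attachment):

import numpy as np

f_pole = np.array([75.0, 85.0])   # expected CARM pole range in Hz
f = 200.0                          # frequency of interest in Hz
atten_dB = 20 * np.log10(np.sqrt(1 + (f / f_pole)**2))
print(atten_dB)                    # ~[9.1, 8.2] dB, i.e. the ~8-9 dB quoted above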

The next questions are: (i) can we do a sufficiently good job of achieving the required RIN stability on the LO field for BHD without relying on the passive filtering action of the PRC? and (ii) is the benefit of the PRC filtering ruined in the process of routing the LO field from wherever the pickoff happens to the BHD setup?

Attachment 1: PRCfiltering.pdf
  14723   Wed Jul 3 23:53:38 2019   Milind   Update   Cameras   data for nns

Tried collecting data today. Was unable to keep the camera_server code running for any length of time as it threw segfaults. Will take a shot again tomorrow.

Quote:

The GigE is focused now (judged by eye) and I have closed the lid. I'm attaching a picture of the MC2 beam spot, captured using GigE at an exposure time of 400µs.

What was the solution to the flaky video streaming during the alignment process????

-> I think the issue was with either the poor wireless network connection or the GigE-PoE ethernet cable.

Quote:

Turns out, focusing the GigE is actually a bit tricky. With pylon, every time I change the exposure or the focus, I'm running into the error I had mentioned earlier in one of my elogs; so I tried using the python scripts to interact with the GigE. But whenever I try to change the focal plane distance by rotating the lens coupler, the ethernet cable connection becomes loose and the camera server needs to be relaunched every now and then. Also, every time we want to change the distance between the lenses, the telescope needs to be dismantled and refocused again. I'll try to come up with a better telescope design for this.

Yesterday, I had focused the GigE using a low exposure time and small aperture of iris, to make sure that we are actually seeing a sharp image of the beam spot. I'm attaching a picture of the beam spot I had clicked while focusing it, unfortunately, I forgot to take a picture after I had focused it completely. I'm also attaching a picture of the final setup for future reference. 


Yesterday night, Rana asked me to lock the MC2. I figured that the PSL shutter was closed; I just opened it and was able to see the beam spot on the analog camera screen.

  14724   Thu Jul 4 10:47:37 2019   Milind   Update   General   Earthquake now

There was a magnitude 6.6 earthquake just a few minutes ago. I am attaching photographs of the monitor feeds for reference here. Is there a standard protocol to be followed in this situation? I'm looking through the wiki now.

Further, the IMC seems to be misaligned and is not locking! As Koji has let me know, I really hope this is not too serious and can be fixed easily.

Attachment 1: after_earthquake2.jpg
Attachment 2: after_earthquake.jpg
  14726   Thu Jul 4 18:19:08 2019   Milind   Update   Cameras   Convolutional neural networks for beam tracking

The quoted elog has figures which indicate that the network did not learn (train or generalize) on the data used. This is a scary thing as (in my experience) it indicates that something is fundamentally wrong with either the data or the model, and learning will not happen regardless of how the hyperparameters are tuned. To check this, I ran the training experiment for nearly 25 hyperparameter settings (results here) with the old data and was able to successfully overfit the data. Why is this progress? Well, we know that we are on the right track and the task is to reduce overfitting. Whether that will happen through more hyperparameter tuning, data collection or augmentation remains to be seen. See attachments for more details.

Why is the fit so perfect at the start and bad later? Well, that's because the first 90% of the test data is the training data I overfit to, and the last 10% is the validation data that the network has not generalized well to.

Quote:

And finally, a network is trained!

Result summary (TLDR :-P) : No memory was used. Model trained. Results were garbage. Will tune hyperparameters now. Code pushed to github.

 

More details of the experiment:

Aim:

  1. To train a network to check that training occurs and get a feel for what the learning might be like.
  2. To set up the necessary framework to perform multiple experiments and record results in a manner facilitating comparison.
  3. To track beam spot motion.

What I did:

  1. Set up a network that learns a framewise mapping as described here.
  2. Training data: 0.9 x 1791 frames. Validation data: 0.1 x 1791 frames. Test data (only prediction): all the 1791 frames
  3. Hyperparameters: Attachment #1
  4. Did no tuning of hyperparameters.
  5. Compiled and fit the models and saved the results.

 

What I saw

  1. Attachment #2: data fed to the network after pre-processing - median blur + crop
  2. Attachment #3: learning curves.
  3. Attachment #4: true and predicted motion. Nothing great.

What I think is going wrong-

  1. No hyperparameter tuning. This was only a first pass but is being reported as it will form the basis of all future experiments.
  2. Too little data.
  3. Maybe wrong architecture.

Well, what now?

  1. Tune hyperparameters (try to get the network to overfit on the data and then test on that; we'll then know for sure that all we probably need is more data?)
  2. Currently the network has around 200k parameters. Maybe reduce that.
  3. Set up a network that takes as input (one example corresponding to one forward pass) a bunch of frames and predicts a vector of position values that can be used as continuous data.
Quote:

I got to speak to Gabriele about the project today and he suggested that if I am using Rana's memory based approach, then I had better be careful to ensure that the network does not falsely learn to predict a sinusoid at all points in time and that if I use the frame wise approach I try to somehow incorporate the fact that certain magnitudes and frequencies of motion are simply not physically possible. Something that Rana and Gautam emphasized as well.

Quote:
 
  1. Network training for beam spot tracking: I will begin training the convolutional network with the data pre-processed as described above. I will also simultaneously prepare data acquired from the GigE and train networks on that. Note: I planned to experiment with framewise predictions and hence did some of the work described above. However, I will restrict the number of experiments on that and perform more of those that use 3D convolution. Rana also pointed out that it would be interesting to have the network output uncertainty in the predictions. I am not sure how this can be done, but I will look into it.
Attachment 1: Motion.pdf
Attachment 2: Error.pdf
Attachment 3: Learning_curves.pdf
  14727   Fri Jul 5 20:57:04 2019   Koji   Update   SUS   Another M7.1 EQ

[Kruthi, Koji]

Koji came to the lab to align the IMC/IFO, but found the mirrors dancing around. Kruthi told me that there was an M7.1 EQ at Ridgecrest. It looks like aftershocks of this EQ are still going on, so we need to wait for an hour before starting the alignment work.

ITMX and ETMX are stuck.

Attachment 1: Screenshot_from_2019-07-05_21-03-06.png
  14728   Fri Jul 5 21:53:10 2019   Koji   Update   SUS   Another M7.1 EQ

- ITM unstuck now
- IMC briefly locked at TEM00

A series of aftershocks came. I could unstick ITMX by turning on the damping during one of the aftershocks.
Between the aftershocks, MC1-3 were aligned to their previous DOF values. This allowed the IMC to flash. Once I got a low-order TEM mode to lock, it was easy to recover the alignment and obtain a weak TEM00.
Now at least temporarily the full alignment of the IMC was recovered.

  14729   Fri Jul 5 22:21:13 2019   Koji   Update   SUS   Another M7.1 EQ

In fact, ETMX was not stuck until the M7.1 EQ today. After that it got stuck, but during the aftershocks, all the OSEMs occasionally showed the full swing of the light levels. So I believe the magnets are OK.

Attachment 1: Screenshot_from_2019-07-05_22-19-57.png
  14731   Sun Jul 7 17:54:34 2019   Milind   Update   Computer Scripts / Programs   PMC autolocker

I modified the autolocker code I wrote to read from a .yaml configuration file instead of command-line arguments (that option still exists if one wishes to override what the .yaml file contains; a sketch of the override logic is below). I have pushed the code to github. I started reading about MCMC and will put up details of the remaining part of the work ASAP.
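A minimal sketch of the config-plus-override pattern described above (the parameter names are illustrative, taken from the servo gain and DC output adjust knobs discussed earlier; this is not a copy of the actual pmc_autolocker.py):

import argparse
import yaml

def load_params(config_path, cli_args=None):
    """Read autolocker parameters from a .yaml file; any value passed on the
    command line overrides the value in the file."""
    with open(config_path) as f:
        params = yaml.safe_load(f)
    parser = argparse.ArgumentParser()
    parser.add_argument("--servo_gain", type=float, default=None)
    parser.add_argument("--dc_output_adjust", type=float, default=None)
    args = parser.parse_args(cli_args)
    for key, val in vars(args).items():
        if val is not None:
            params[key] = val
    return params

# params = load_params("autolocker_config.yaml")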

Quote:
 

P.P.S.  He also said that it would not do to have command line arguments as the main source from which parameters are procured and that .yml files ought to be used instead. I will make that change asap.

  14732   Sun Jul 7 21:59:28 2019   Kruthi   Update   Cameras   Ghost image due to beamsplitter

The beam splitter (BS1-1064-33-2037-45S) that is currently being used has an antireflection coating on the second surface and a wedge of less than 5 arcmin; yet it leads to ghosting as shown in the figure attached (courtesy: Thorlabs). I'm also attaching its spec sheet I dug up on internet for future reference.

I came across pellicle beamsplitters, which are primarily used to eliminate ghost images. Pellicle beamsplitters have a nitrocellulose membrane a few microns thick, so the secondary reflection is superimposed on the first one and the ghost image is eliminated.

Should we go ahead and order them? (https://www.thorlabs.com/newgrouppage9.cfm?objectgroup_id=898

https://www.edmundoptics.eu/c/beamsplitters/622/#28438=28438_s%3AUGVsbGljbGU1&27614=27614_d%3A%5B46.18%20TO%2077.73%5D)

Attachment 1: ghosting_schematic.png
Attachment 2: Beamsplitter_spec.pdf
  14733   Mon Jul 8 17:33:10 2019   Kruthi   Update   Loss Measurement   Optical scattering measurements

I came across a paper (see reference) where they have used DAOPHOT, an astronomical software tool developed by NOAO, to study the point scatterers in LIGO test masses using images of varying exposure times. I'm going through the paper now. I think using this we can analyze the MC2 images and make some interesting observations.

Reference: L. Glover et al., "Optical scattering measurements and implications on thermal noise in Gravitational Wave detectors test-mass coatings," Physics Letters A 382 (2018).

  14734   Mon Jul 8 17:52:30 2019   Milind   Update   Cameras   Convolutional neural networks for beam tracking

After the two earthquakes, I collected some data by dithering the optic and recording the QPD readings. Today, I set up scripts to process the data and then train networks on this data. I have pushed all the code to github. I attempted to train a bunch of networks on the new data to test if the code was alright, but quickly realised that training on my local machine is not feasible at all, as training for 10 epochs took roughly 6 minutes. Therefore, I have placed a request for access to the cluster and am waiting for a reply. I will now set up a bunch of experiments to tune hyperparameters for this data and see what the results are.

Training networks with memory

I set up a network to handle input volumes (stacks of frames) instead of individual frames. It still uses 2D convolution and not 3D convolution. I am currently training on the new data. However, I was curious to see if it would provide any improved performance over the results I put up in the previous elog. After a bit of hyperparameter tuning, I did get some decent results which I have attached below. However, this is for Pooja's old data which makes them, ah, not so relevant. Also, this testing isn't truly representative because the test data isn't entirely new to the network. I am going to train this network on the new data now with the following objectives (in the following steps):

  1. Train on data recorded at one frequency, generalize/ test on unseen data of the same frequency, large amplitude of motion
  2. Train on data recorded at one frequency, generallize/ test on unseen data of a different frequency, large amplitude of motion
  3. Train on data recorded at one frequency, generalize/ test on unseen data of  same/ different frequency, small amplitude of motion
  4. Train on data at different frequencies and generalize/ test on data with a mixture of frequencies at small amplitudes - Gautam pointed out that the network would truly be superb (good?) if we can just predict the QPD output from the video of the beam spot when nothing is being shaken.

I hope this looks alright? Rana also suggested I try LSTMs today. I'll maybe code it up tomorrow. What I have in mind: a conv-layer encoder, flatten, followed by an LSTM layer (why not plain RNNs? Well, LSTMs handle vanishing gradients, so why the hassle). A rough sketch of this architecture is below.
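A minimal Keras sketch of that architecture (the layer sizes, the 10-frame window and the single-output head are placeholder assumptions, not the final design):

from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import TimeDistributed, Conv2D, MaxPooling2D, Flatten, LSTM, Dense

# input: (timesteps, height, width, channels) -- a short stack of frames per example
model = Sequential([
    TimeDistributed(Conv2D(8, (3, 3), activation='relu'),
                    input_shape=(10, 64, 64, 1)),   # conv-layer encoder applied per frame
    TimeDistributed(MaxPooling2D((2, 2))),
    TimeDistributed(Flatten()),
    LSTM(32),                                       # memory across the frame sequence
    Dense(1)                                        # beam-spot position estimate
])
model.compile(optimizer='adam', loss='mse')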

Quote:

The quoted elog has figures which indicate that the network did not learn (train or generalize) on the used data. This is a scary thing as (in my experience) it indicates that something is fundamentally wrong with either the data or model and learning will not happen despite how hyperparameters are tuned. To check this, I ran the training experiment for nearly 25 hyperparameter settings (results here)with the old data and was able to successfully overfit the data. Why is this progress? Well, we know that we are on the right track and the task is to reduce overfitting. Whether, that will happen through more hyperparameter tuning, data collection or augmentation remains to be seen. See attachments for more details. 

Why is the fit so perfect at the start and bad later? Well, that's because the first 90% of the test data is  the training data I overfit to and the latter the validation data that the network has not generalized well to.

Attachment 1: Motion.pdf
  14735   Mon Jul 8 21:42:39 2019   rana   Update   Cameras   Ghost image due to beamsplitter

you have to use a BS with a larger wedge angle (5 arcmin ~ 1 mrad) so that the beams don't overlap on the camera

  14737   Tue Jul 9 10:37:42 2019   Milind   Update   IOO   keyed psl crate, unstick.py, pmc autolocker code - working

Today, Gautam keyed the C1PSL crate and we got to test my unstick.py code. It seems to be working fine. Remarks:

  1. Gautam moved the unstick.py code to /opt/rtcds/caltech/c1/scripts/cds. Therefore, the steps to run this code are now:
    1. cd /opt/rtcds/caltech/c1/scripts/cds
    2. python unstick.py c1psl (for the c1psl machine)
  2. There is now a sleepTime global variable in the code which defines the amount of delay between successive channel toggles. We set this to 1ms and it took the code around 3s to run.
  3. Gautam was curious to see if this would work even if we set the sleepTime parameter to 0 but decided that that could be tested the next time something was keyed.
  4. I still need to add the signal handling thing to this code.

Following this, we tested my PMC autolocker code. The code ran for about a minute before achieving lock. Remarks:

  1. Gautam moved my code (pmc_autolocker.py and autolocker_config.yaml) to /cvs/cds/rtcds/caltech/c1/scripts/PSL/PMC/ . Therefore, the steps to run this code are now:
    1. cd /cvs/cds/rtcds/caltech/c1/scripts/PSL/PMC/
    2. python pmc_autolocker.py (check code or use --help to see what the command line arguments do which is only for when you wanna override the details in the .yaml file)
  2. Gautam suggested that I add some delay between successive steps of the DC output adjust so that it locks quickly. I'll do that ASAP. For now, it works.
  14738   Tue Jul 9 18:06:05 2019   gautam   Update   LSC   Y-arm ASS in a workable state

The Y-arm ASS was tuned to be in a workable state. Basically, I followed Koji's recipe.

The SNR of the dither lines in the TRY and YARM control signals was checked - Attachment #1. The dither frequencies are marked with vertical dashed lines (can't figure out how to add 4 cursors in DTT so there's two in each row for a total of 4). A couple of days ago, when I was doing some preliminary checks, I found that the oscillator at 24.91 Hz caused a broadband increase in the TRY noise between DC and ~100 Hz. But today I saw no evidence of such behaviour. So I decided against changing the frequency.

The linearity of the demodulated error signals around the quadratic maxima of the TRY level was checked. I did not, however, investigate in detail the frequency-dependent offset Koji has reported in his elog. 

After this work, the TRY level is at 0.95. This is commensurate with the MC trans level being lower by ~7% relative to July 2018. Furthermore, the ASS servo is able to return to TRY~0.95 with a time-constant of ~5 seconds in response to misalignment of the cavity optics. After I investigate the X-arm ASS, I will reset the normalization for TRX and TRY.

Update 6:45 pm: In the spirit of general IFO recovery, I re-centered the ITM and ETM oplev spots, and also the IR beam on the IPPOS QPD to mark the new input pointing alignment (the spot is slightly lower on the AS camera than what I remember). I then tweaked the XARM alignment to maximize the transmission, and re-set the TransMon normalization. I edited the normalization script to comment out the normalizing of the TransMon QPD gains as the QPDs are in some kind of indeterminate state now. Attachment #2 shows the current status; you can also see the normalization being reset. LSC mode disabled for overnight.

Once the XARM ASS is also checked out, I propose moving back to locking the DRMI / PRFPMI configs. 

Attachment 1: ditherFreqs.pdf
Attachment 2: transRenorm.png
  14739   Tue Jul 9 18:17:48 2019 gautamUpdateGeneralProjector lightbulb blown out

Last documented replacement in Nov 2018, so ~7 months, which I believe is par for the course. I am disconnecting its power supply cable.

  14740   Tue Jul 9 18:42:15 2019 gautamUpdateALSEX green doubling oven temperature controller power was disconnected

There was no green light even though the EX NPRO was on. I checked the doubling oven temperature controller and found that its power cable was loose on the rear. I reconnected it, and now there is green light again. 

  14741   Tue Jul 9 22:13:26 2019 MilindUpdateCamerasConvolutional neural networks for beam tracking

I received access today. After some incredible hassle, I was able to set up my repository and code on the remote system. Following this, Gautam wrote to Gabriele to ask him which GPUs to use and whether there was a previously set up environment I could use directly. Gabriele suggested that I use pcdev2 / pcdev3 / pcdev11 as they have good GPUs. He also said that I could use source ~gabriele.vajente/virtualenv/bin/activate to get a virtualenv with tensorflow, numpy etc. preinstalled. However, I could not get that working, so I created my own virtual environment with the necessary tensorflow, keras, scipy, numpy etc. libraries at suitable versions. On ssh-ing into the cluster, it can be activated using source /home/millind.vaddiraju/beamtrack/bin/activate. How do I know everything works? Well, I trained a network on it! With the new data. Attached (see Attachment #1) is the prediction on completely new test data. Yeah, it's not great, but I got to observe the time it takes for the network to train for 50 epochs:

  1. On the pcdev5 CPU: one epoch took ~1500s, which is roughly 25 minutes (see Attachment #2). Gautam suggested that I try to train my networks on Optimus. I think this evidence should be sufficient to decide against that idea.
  2. On my GTX 1060: one epoch took ~30s, which is ~25 minutes (for 50 epochs) to train a network.
  3. On the pcdev11 GPU (Titan X, I think): each epoch took ~16s, which is a far more reasonable time.

Therefore, I will carry out all training only on this machine from now on.
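
For the record, the networks I am training are of roughly the following form. This is only a sketch of the idea; the layer sizes, input shape, and dummy data below are placeholders, not the actual architecture or dataset.

import numpy as np
from tensorflow.keras import layers, models

def build_cnn(input_shape=(128, 128, 1)):
    """Small CNN that regresses the beam-spot (x, y) position from a grayscale frame."""
    model = models.Sequential([
        layers.Conv2D(8, (3, 3), activation='relu', input_shape=input_shape),
        layers.MaxPooling2D((2, 2)),
        layers.Conv2D(16, (3, 3), activation='relu'),
        layers.MaxPooling2D((2, 2)),
        layers.Flatten(),
        layers.Dense(64, activation='relu'),
        layers.Dense(2),                      # predicted (x, y)
    ])
    model.compile(optimizer='adam', loss='mse')
    return model

# dummy data just to time an epoch; Keras prints the s/epoch figures quoted above
frames    = np.random.rand(1000, 128, 128, 1).astype('float32')
positions = np.random.rand(1000, 2).astype('float32')
build_cnn().fit(frames, positions, epochs=1, batch_size=32)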

 


Note to self:

Steps to repeat what you did are:

  1. ssh into the cluster using ssh albert.einstein@ssh.ligo.org as described here.
  2. activate the virtualenv as described above
  3. navigate to the code and run it.
Quote:

 I attempted to train a bunch of networks on the new data to test if the code was alright, but realised quickly that training on my local machine is not feasible at all, as training for 10 epochs took roughly 6 minutes. Therefore, I have placed a request for access to the cluster and am waiting for a reply. I will now set up a bunch of experiments to tune hyperparameters for this data and see what the results are.

Attachment 1: predicted_motion_first.pdf
Attachment 2: pcdev5_time.png
  14742   Wed Jul 10 10:04:09 2019 gautamUpdateSUSTip-Tilt moved from South clean cabinet to bake lab cleanroom

Arnaud and I moved one of the two spare TT suspensions from the south clean cabinet to the bake lab clean room. The main purpose was to inspect the contents of the packaging. According to the label, this suspension was cleaned to Class A standards, so we tried to be clean while handling it (frocks, gloves, masks etc). We found that the foil wrapping contained one suspension cage, with what looked like all the parts in a semi-assembled state. There were no OSEMs or electronics together with the suspension cage. Pictures were taken and uploaded to gPhoto. Arnaud is going to plan his tests, so in the meantime, this unit has been stored in Cabinet #6 in the bake lab cleanroom.

  14743   Wed Jul 10 14:55:32 2019 KojiUpdateGeneralProjector lightbulb blown out

In fact, the projector is still working. The lamp timer showed ~8200 hrs. I reset the timer, but I am not sure that was the cause of the shutdown. I also set the fan mode to "High Altitude" to help with cooling.

  14745   Wed Jul 10 16:53:22 2019 gautamUpdateSUSPRM watchdog condition modified

[koji, gautam]

We noticed that the PRM watchdog was tripping frequently. This is a period of enhanced seismic activity. The reason PRM in particular trips often is that its SIDE OSEM has 5x increased transimpedance. We implemented a workaround by modifying the watchdog tripping condition to scale the SD channel RMS by a factor of 0.2 (relative to the UL and LL channels). We restarted the modbus process on c1susaux and tested that the new logic works. Here is the relevant snippet of code:

# Disable fast DAC if variation tests too high
# PRM Side is special, see elog 14745
record(calc,"C1:SUS-PRM_LOGIC")
{
    field(DESC,"Tests whether RMS too high")
    field(SCAN,"1 second")
    field(PHAS,"1")
    field(PREC,"0")
    field(HOPR,"1")
    field(LOPR,"0")
    field(CALC,"(A<B)&(C<B)&(0.2*D<B)")
    field(INPA,"C1:SUS-PRM_ULPD_VAR  NPP  NMS")
    field(INPB,"C1:SUS-PRM_PD_MAX_VAR  NPP  NMS")
    field(INPC,"C1:SUS-PRM_LLPD_VAR  NPP  NMS")
    field(INPD,"C1:SUS-PRM_SDPD_VAR  NPP  NMS")
}

The db file has a note about this as well so that future debuggers aren't mystified by a factor of 0.2.
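
For reference, the modified CALC expression is equivalent to the following check (a sketch using pyepics, just for illustration; the real logic runs in the EPICS database record above):

import epics  # pyepics

def prm_rms_ok():
    """True if the PRM OSEM RMS values are below the trip threshold."""
    ul   = epics.caget('C1:SUS-PRM_ULPD_VAR')     # A
    vmax = epics.caget('C1:SUS-PRM_PD_MAX_VAR')   # B
    ll   = epics.caget('C1:SUS-PRM_LLPD_VAR')     # C
    sd   = epics.caget('C1:SUS-PRM_SDPD_VAR')     # D
    # SD is scaled by 0.2 because its OSEM has 5x the transimpedance
    return (ul < vmax) and (ll < vmax) and (0.2 * sd < vmax)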

  14746   Wed Jul 10 22:32:38 2019 MilindUpdateCamerasConvolutional neural networks for beam tracking

I trained a bunch of networks today (around 25 or so, to tune hyperparameters). They were all CNNs. They all produced garbage. I also looked at LSTM networks with CNN encoders (see this very useful link) and gave some thought to what kind of architecture we want to use and how to go about programming it (in Keras; I will use TensorFlow if I feel I need more control). I will code it up tomorrow after some thought and discussion. I am not sure if abandoning CNNs is the right thing to do, or if I should continue probing this with more architectures and tuning attempts. Any thoughts?

Right now, after speaking to Stuart (ldas_admin), I've decided to code up the LSTM approach and run that on one machine while probing the CNN approach on another.
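
The architecture I have in mind is roughly the following: a TimeDistributed CNN encodes each frame of a short clip, and an LSTM regresses the beam position from the encoded sequence. This is only a sketch; the layer sizes, sequence length, and frame shape are placeholders.

from tensorflow.keras import layers, models

def build_cnn_lstm(seq_len=10, frame_shape=(64, 64, 1)):
    # per-frame CNN encoder
    encoder = models.Sequential([
        layers.Conv2D(8, (3, 3), activation='relu', input_shape=frame_shape),
        layers.MaxPooling2D((2, 2)),
        layers.Conv2D(16, (3, 3), activation='relu'),
        layers.MaxPooling2D((2, 2)),
        layers.Flatten(),
    ])
    # apply the encoder to every frame, then let the LSTM exploit the temporal structure
    model = models.Sequential([
        layers.TimeDistributed(encoder, input_shape=(seq_len,) + frame_shape),
        layers.LSTM(32),
        layers.Dense(2),   # (x, y) beam position
    ])
    model.compile(optimizer='adam', loss='mse')
    return model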

 


Update on 10 July, 2019: I'm attaching all the results of training here in case anyone is interested in the future.

Quote:

I received access today. After some incredible hassle, I was able to set up my repository and code on the remote system. Following this, Gautam wrote to Gabriele to ask him which GPUs to use and whether there was a previously set up environment I could use directly. Gabriele suggested that I use pcdev2 / pcdev3 / pcdev11 as they have good GPUs. He also said that I could use source ~gabriele.vajente/virtualenv/bin/activate to get a virtualenv with tensorflow, numpy etc. preinstalled. However, I could not get that working, so I created my own virtual environment with the necessary tensorflow, keras, scipy, numpy etc. libraries at suitable versions. On ssh-ing into the cluster, it can be activated using source /home/millind.vaddiraju/beamtrack/bin/activate. How do I know everything works? Well, I trained a network on it! With the new data. Attached (see Attachment #1) is the prediction on completely new test data. Yeah, it's not great, but I got to observe the time it takes for the network to train for 50 epochs:

  1. On the pcdev5 CPU: one epoch took ~1500s, which is roughly 25 minutes (see Attachment #2). Gautam suggested that I try to train my networks on Optimus. I think this evidence should be sufficient to decide against that idea.
  2. On my GTX 1060: one epoch took ~30s, which is ~25 minutes (for 50 epochs) to train a network.
  3. On the pcdev11 GPU (Titan X, I think): each epoch took ~16s, which is a far more reasonable time.

Therefore, I will carry out all training only on this machine from now on.

 


Note to self:

Steps to repeat what you did are:

  1. ssh into the cluster using ssh albert.einstein@ssh.ligo.org as described here.
  2. activate the virtualenv as described above
  3. navigate to the code and run it.
Quote:

 I attempted to train a bunch of networks on the new data to test if the code was alright, but realised quickly that training on my local machine is not feasible at all, as training for 10 epochs took roughly 6 minutes. Therefore, I have placed a request for access to the cluster and am waiting for a reply. I will now set up a bunch of experiments to tune hyperparameters for this data and see what the results are.

  14752   Thu Jul 11 16:22:54 2019 KruthiUpdateGeneralProjector lightbulb blown out

I heard a popping sound in the control room; the projector lightbulb has blown out.

  14753   Thu Jul 11 17:58:38 2019 gautamUpdateEquipment loanTT suspension --> Downs

Arnaud has taken 1 TT suspension from the 40m clean lab to Downs for modal testing. Estimated time of return is tomorrow evening.

  14755   Fri Jul 12 07:37:48 2019 gautamUpdateSUSM4.9 EQ in Ridgecrest

All suspension watchdogs were tripped ~90mins ago. I restored the damping. IMC is locked.

ITMX was stuck. I set it free. But notice that the UL sensor RMS is higher than the other 4? I thought ITMY UL was problematic, but maybe ITMX has also failed, or maybe it's a coincidence? Something for IFOtest to figure out, I guess. I don't think there is a cable swap between ITMX/ITMY, since when I move the ITMX actuators, the ITMX sensors respond and I can also see the optic moving on the camera.

It took me a while to figure out what was going on because we don't have the seismic BLRMS - I moved the usual projector StripTool traces to the TV screen for better diagnostic ability.

Update 16 July 1515: Even though the RMS is computed from the slow readback channels, for diagnosis I looked at the spectra of the fast PD monitoring channels (i.e. *_SENSOR_*) for ITMX - it looks like the increased UL RMS is coming from enhanced BR-mode coupling and not from any issue with the whitening switching (which seems to work as advertised, see Attachment #3, where the LL traces are meant to be representative of the LL, LR, SD and UR channels).

Attachment 1: 56.png
Attachment 2: ITMXunstick.png
Attachment 3: ITMX_UL.pdf
  14756   Fri Jul 12 18:54:47 2019 KojiUpdateGeneralItem loan: optical chopper from Cryo Lab

Optical chopper borrowed from CryoLab to 40m

https://nodus.ligo.caltech.edu:8081/Cryo_Lab/2458

  14757   Sun Jul 14 00:24:29 2019 KruthiUpdateCamerasCCD Calibration

On Friday, I took images for different power outputs of the LED. I calculated the calibration factor as explained in my previous elog (plots attached).

Vcc (V) | Photodiode reading (V) | Power incident on photodiode (W) | Power incident on GigE (W) | Slope (counts/μs) | Uncertainty in slope (counts/μs) | CF (W-sec/count)
16      | 0.784                  | 2.31E-06                         | 3.89E-07                   | 180.4029          | 1.02882                          | 2.16E-15
18      | 0.854                  | 2.51E-06                         | 4.24E-07                   | 207.7314          | 0.7656                           | 2.04E-15
20      | 0.92                   | 2.71E-06                         | 4.57E-07                   | 209.8902          | 1.358                            | 2.18E-15
22      | 0.969                  | 2.85E-06                         | 4.81E-07                   | 222.3862          | 1.456                            | 2.16E-15
25      | 1.026                  | 3.02E-06                         | 5.09E-07                   | 235.2349          | 1.53118                          | 2.17E-15
Average CF: 2.14E-15 W-sec/count

To estimate the uncertainty, I assumed an error of at most 20 mV (due to stray light or a difference in orientation between the GigE and the photodiode) for the photodiode reading. Combined with the uncertainty in the slope from the linear fit, I expect an uncertainty of at most 4%. Note: I haven't accounted for the error in the responsivity value of the photodiode.

GigE sensor area: 10.36 sq. mm
PDA area: 61.364 sq. mm
Responsivity: 0.34 A/W
Transimpedance gain (at gain = 20 dB): 10^6 V/A +/- 0.1%
Pixel format used: Mono 8-bit

Johannes had reported a CF of 0.0858E-15 W-sec/count for 12-bit images, measured with a laser source. That value and the one I got differ by a factor of ~25. The difference in pixel formats and the coherence of the light used are possible reasons.
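
Spelled out, the arithmetic behind one row of the table (the Vcc = 16 V row) is roughly the following sketch, using the values listed above:

R_pd   = 0.34      # photodiode responsivity [A/W]
G_ti   = 1e6       # transimpedance gain at 20 dB [V/A]
A_gige = 10.36     # GigE sensor area [sq. mm]
A_pda  = 61.364    # photodiode area [sq. mm]

V_pd   = 0.784                      # photodiode reading [V]
P_pd   = V_pd / (G_ti * R_pd)       # power on the photodiode, ~2.31E-06 W
P_gige = P_pd * (A_gige / A_pda)    # power on the GigE sensor, ~3.89E-07 W

slope  = 180.4029                   # fitted slope [counts/μs]
CF     = P_gige / (slope * 1e6)     # [W-sec/count], ~2.16E-15
print(CF)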

Attachment 1: CCD_calibration.png
  14758   Mon Jul 15 03:15:24 2019 KruthiUpdateLoss MeasurementImaging scatterometer

On Friday, Koji helped me find the various components required for the scatterometer setup. As he suggested, I'll first set it up on the SP table and try it out with an ordinary mirror. Later on, once I know it's working, I'll move the setup to the flow bench near the south arm and measure the BRDF of a spare end test mass.
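
For reference, the quantity I will eventually extract from the scatterometer images is the standard BRDF; a minimal sketch of that bookkeeping (the powers, solid angle and scattering angle below are placeholders):

import numpy as np

def brdf(P_scattered, P_incident, solid_angle, theta_s):
    """BRDF = P_s / (P_i * Omega * cos(theta_s)), in 1/sr."""
    return P_scattered / (P_incident * solid_angle * np.cos(theta_s))

# e.g. a lens of aperture area A at distance d from the spot subtends Omega ~ A / d**2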
