Today, I stuck the sensors onto a metal block using a flag, rubber bands, and some thermal paste (1st attachment). I then wrapped the whole thing in about 4 layers of insulation and a lot of tape (2nd attachment). The only things leading out of the box were the three connections to the sensors and a thermometer. I then connected the wires to their respective places on the sensor board. To get the readings out, we will need to use an ADC. Gautam and I checked that the ADC we have in the lab spans -10V to 10V, so it will be able to measure the ~3V the sensor typically outputs. We then tried to connect all three sensors to a DC source simultaneously, but unfortunately one of them seems to have disconnected somewhere in the process, as it showed only 1.2V instead of 3V. I plan to fix this tomorrow morning so that we can hopefully set this up soon.
I took off the AD590 and attached it to two long wires leading out from the board. This will allow us to attach the sensor to a metal block without having to stick the whole board to it. I have also completed three identical copies of this, and they're pretty much ready to be tested. According to Craig and Andrew's elog here, the sensor is very noisy, and they added a low pass filter to fix that; that's something to consider for the final version of the circuit. I'll test what I have so far and see how that goes. We still need to figure out how to get readings from the sensors.
To attach the sensor to the metal block, I'll use some thermal paste and fasteners. I'll also put a thermometer on the block to record the actual temperature. I'll then wrap it in some insulation we have in the lab, with only the wires leading out of it for measurements. I'll leave this setup overnight and record the outputs for about a full day. The fluctuations between the sensors will then indicate the noise of each individual sensor.
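If we want per-sensor noise numbers rather than just pairwise differences, one option (my suggestion, not necessarily the analysis we'll end up using) is the classic three-corner-hat combination, since the common block temperature cancels in each pairwise difference. A minimal sketch in Python, assuming the three recorded time series are numpy arrays:

import numpy as np

def three_corner_hat(x1, x2, x3):
    # Common-mode block temperature cancels in the differences, so for
    # independent sensor noises n_i: var(x_i - x_j) = var(n_i) + var(n_j)
    v12 = np.var(x1 - x2)
    v13 = np.var(x1 - x3)
    v23 = np.var(x2 - x3)
    # Solve the three equations for the individual variances
    var1 = 0.5 * (v12 + v13 - v23)
    var2 = 0.5 * (v12 + v23 - v13)
    var3 = 0.5 * (v13 + v23 - v12)
    return np.sqrt([var1, var2, var3])  # RMS noise of each sensor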
This is an update on the results already presented earlier (refer to elog 13274 for more introductory details). I significantly improved the results with the following tricks:
An example of time domain reconstruction is visible below. It already looks better than the old results:
As before, to better evaluate the performance, I plotted averaged values of the real signals as a function of the reconstructed MICH and PRCL positions. The results are compared with simulation below; they match quite well (left: real data, right: simulation expectation).
One thing to understand better is that MICH seems to be somewhat compressed: most of the output values are between -100 and +100 nm, instead of the expected [-lambda/4, +lambda/4]. The reason is still unclear to me. It might be a bug that I haven't been able to track down yet.
I was unable to download data using nds2. Gabriele had reported similar problems a week ago but I hadn't followed up on this.
I repeated steps 5-7 from elog 13161, and now it seems that I can get data from the nds2 servers again. Unclear why the nds2 server had to be restarted. I wonder if this is somehow related to the mysterious acromag EPICS server tmux session dropout.
I decided to calculate the fluctuation in power that we will have in the heater circuit. The resistors we ordered have 50 ppm/C, and it would be useful to know what kind of fluctuation to expect. For this, I assumed that the heater itself is an ideal resistor with no temperature variation. The circuit diagram is found in Kevin's elog here. At saturation, the total resistance (we will have a 1 Ω sense resistor for our new design) will be 24 Ω + 1 Ω = 25 Ω. Therefore, with a 24V input, the saturation current should be 24 V / 25 Ω = 0.96 A. Therefore, the power in the heater should be (in the ideal case) P = I^2 R_heater = (0.96 A)^2 x 24 Ω ≈ 22.1184 W.
Now, in the case where the resistor is not ideal, let's assume the temperature of the resistor changes by 10C (which is about how much we would like to heat the whole thing). Therefore, the resistor will have a new value of 1 Ω x (1 + 50 ppm/C x 10 C) = 1.0005 Ω. The new current will then be 24 V / 25.0005 Ω ≈ 0.95998 A, and the new power will be ≈ 22.1175 W. So the difference in power going through the heater is about 0.00088W.
We can use this power difference to calculate how much the temperature of the metal can we wish to heat up will change: ΔT = ΔP x / (k A), where k is the thermal conductivity, A the surface area, and x the thickness of the material. For our seismometer, I calculated it to be 0.012K.
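For bookkeeping, here is the same calculation spelled out in a few lines of Python (the 1 Ω sense resistor value is the one reconstructed above):

R_heater = 24.0      # ohm, assumed ideal (no tempco)
R_sense = 1.0        # ohm, the 50 ppm/C part
V_in = 24.0          # V supply

I0 = V_in / (R_heater + R_sense)      # 0.96 A saturation current
P0 = I0**2 * R_heater                 # ~22.1 W in the heater

dR = R_sense * 50e-6 * 10             # 50 ppm/C x 10 C = 0.0005 ohm
I1 = V_in / (R_heater + R_sense + dR)
P1 = I1**2 * R_heater
print(P0 - P1)                        # ~0.00088 W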
Tonight, I was able to lock the DRMI, turn on the whitening filters for the sensing PDs, and also turn on the coil de-whitening filters for ITMX, ITMY and BS. However, I didn't see the expected improvement in the MICH spectrum between ~50-300 Hz. Sad.
I basically went through the list of tasks I made in the previous elog. Some notes:
Attachment #1: Comparison of MICH_ERR with and without the BS de-whitening. Note that the two ITMs have their coils de-whitened in both sets of traces.
Attachment #2: Spectra of MICH output and one of the BS coil outputs in both states. The DAC RMS increases by ~30x when the de-whitening is engaged, but is still well within limits.
So it looks like the switching of paths is happening correctly. The "CDS BIO STATUS" MEDM screen also shows the appropriate bits toggling when I turn the de-whitening on/off. There is no broadband coherence with MCF between 50-300 Hz so it seems unlikely that this could be frequency noise.
Clearly I am missing something. But anyway, I have a good amount of data, which may be useful for putting together the post-CDS/electronics-modification DRMI noise budget. More analysis to follow.
not immediately necessary, since you have already got it sort of working, but one of these days we should optimize this for real. In the past, we used to do this by putting an o'scope on the coil Vmon during the switching, to catch the transient with triggering. We'd download the data/picture via ethernet, and run a for loop on the tolerance to see what's what.
Now that the DRMI locking seems to be repeatable again, I want to see if I can improve the measured MICH noise. Recall that the two dominant sources of noise were
In preparation for some locking attempts today evening, I did the following:
Hopefully, I can successfully engage a similar transition tonight with the DRMI locked. The main difference compared to this daytime test is going to be that the MICH control signal is also going to be routed to the BS.
Tasks for tonight, if all goes well:
Unrelated to this work: the PMC was locked near the upper rail of the PZT, so I re-locked it closer to the middle of the range.
Surprisingly, there was no evidence of REFL55 behaving weirdly tonight, and I was able to easily lock the DRMI on 1f error signals using the recipe I've been using in the last few months.
I did some work today to see if I could use the IR beat for ALS control. Initial tests were encouraging.
I will now embark on the noise budgeting.
I am leaving the green beat electronics on the PSL table in the switched state for further testing...
I haven't done an exhaustive check just yet, but I have loaded a few testpoints in dataviewer, and ran a script that uses testpoint channels (specifically the ALS phase tracker UGF setting script); all seems good.
So if I remember correctly, the major CDS fix now required is to solve the model unloading issue.
Thanks to Jamie/Jonathan Hanks/KT for getting us back to this point! Here are the details:
After reading the logs and the code, it turned out to be a simple daqdrc config change.
The daqdrc should read something like this:
configure channels begin end;
What had happened was that tpconfig was put before the configure channels begin end. So when daqd_rcv went to configure its test points, it did not have the channel list configured and could not match test points to the right model & machine. Dave and I suspect that this is so that it can make a request directly to the correct front end instead of a general broadcast to all awgtpman instances.
Simply reordering the config fixes it.
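In other words, the daqdrc ordering should be something like this (a sketch only; the tpconfig arguments are elided since I don't have the actual file in front of me):

configure channels begin end;
tpconfig ...;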
I tested by opening a test point in dataviewer and verifying that testpoints had opened/closed by using diag -l. Xmgr/grace didn't seem to be able to keep up with the test point data over a remote connection.
You can find this in the logs by looking for entries like the following while the daqd is starting up. When we looked, we saw that there was an entry for every model.
Unable to find GDS node 35 system c1daf in INI fiels
I re-enabled the MC SUS damping and IMC locking for some IFO work just now.
MC1, MC2 and MC3 damping turned off to see glitching action at 9:57am
There was a pretty large glitch in MC1 about an hour ago. The misalignment was so large that the autolocker wasn't able to lock the IMC. I manually re-aligned MC1 using the bias sliders, and now IMC locks fine. Attached is a 90 second plot of 2K data from the OSEMs showing the glitch. Judging from the wall StripTool, the IMC was well behaved for ~4 hours before this glitch - there is no evidence of any sort of misalignment building up, judging from the WFS control signals.
After our Demod/Whitening electronics investigations suggested nothing obviously wrong, I decided to give DRMI locking another go tonight.
Not sure what to make of all this.
I got in a ~15 minute lock, but I wasn't prepared to do any sort of characterization/ sensing / attempt to turn on coil-dewhitening, and I'm too tired to try again tonight. I was however able to whiten the error signals, as I have been able to do in the past. There is a ~45Hz bump in MICH that I haven't seen in the past.
I'll try and do some characterization tomorrow eve, but it's encouraging to at least get back to the pre-FB-failure state of locking.
We did an ingenious checkup of the whitening board tonight.
I've restored all the connections that we messed with at the LSC rack to their original positions.
The TT alignment seems to be drifting around more than usual after we disconnected one of the channels - when I came in today afternoon, the spot on the AS camera had drifted by ~1 spot diameter so I had to manually re-align TT1.
Based on my tests, everything on the Demod board seems to work as expected. I need to think more about what else could be happening here - specifically do a more direct test on the whitening board.
sudo dd if=SL-7.3-x86_64-2017-01-20-LiveCD.iso of=/dev/sdf
Debian doesn't like EPICS. Or our XY plots of beam spots...Sad!
No, not confused on that point. We just will not be testing OS versions at the 40m or running multiple OS's on our workstations. As I've said before, we will only move to so-called 'reference' systems once they've been in use for a long time.
Ubuntu16 is not to my knowledge used for any CDS system anywhere. I'm not sure how you expect to have better support for that. There are no pre-compiled packages of any kind available for Ubuntu16. Good luck, you big smelly doofuses. Nyah, nyah, nyah.
K Thorne recommends that we use SL7.3 with the 'xfce' window manager instead of the Debian family of products, so we'll try it out on allegra and rossa to see how it works for us. Hopefully the LLO CDS team will be the tip of the spear on solving the usual software problems we have when we "~up" grade.
nominal changed from 22 to 23 dB to minimize PC drive RMS
previous loop gain measurement is sort of bogus (made on SR785); need some 4395 loop measurements and checking of crossover and error point spectrum
I have moved the USB flash drives from the electronics bench back into the middle drawer of the cabinet next to the AC, which is west of the fridge. Drawer re-labeled.
Today I tried debugging the mysterious increase in REFL55 signal levels in the DRMI configuration. I focused on the demod board, because last week, I had tried routing these signals through different channels on the whitening board, and saw the same effect.
I did a quick check by switching the output of the REFL55 demod board to the inputs normally used by AS55 signals on the whitening board. Setting the whitening gain to +18dB for these channels had the same effect - ADC overflow galore. So looks like the whitening board isn't to blame. I will have to check the demod board out.
All connections have been restored until further debugging later in the evening.
A couple of minutes ago, Larry W swapped the fibers to our 40m Edgeswitch (BROCADE FWS 648G) to a faster connection. This is the switch to which our gateway machine, NODUS, is connected. The actual swap itself happened at the core router in Bridge, and took only a few seconds. After the switch, I double checked that I was able to ssh into nodus from my laptop, and Larry informed me that everything is working as expected on his end.
Larry also tells us that the other edgeswitch at the 40m (Foundry Networks), to which most of our GC network machines are connected, is a 100 Mbps switch, so we should re-route the connections from this switch to the BROCADE switch at our convenience to take advantage of the faster connection.
I trained a deep neural network (DNN) to reconstruct MICH and PRCL degrees of freedom in the PRMI configuration. For details on the DNN architecture please refer to G1701455 or G1701589. Or if you really want all the details you can look at the code. I used the following signals as input to the DNN: POPDC, POP22_Q, ASDC, REFL11_I/Q, REFL55_I/Q, AS55_I/Q.
Gautam took some PRMI data in free swinging and driven configuration:
In contrast to the Fabry-Perot cavity case, we don't have a direct measurement of the real PRCL/MICH degrees of freedom, so it's more difficult to assess if the DNN is working well.
All MICH and PRCL values are wrapped into the unique region [-lambda/4, lambda/4]^2. It's even a bit more complicated than simply wrapping. Indeed, MICH is periodic over [-lambda/2, lambda/2]. However, the Michelson interferometer reflectivity (as seen from the PRC) in the first half of the segment is the same as in the second half, except for a change in sign. This change of sign in the Michelson reflectivity can be compensated by moving PRCL by lambda/4, thus generating a pi phase shift in the PRC round trip propagation that compensates for the MICH sign change. Therefore, the unit cell of unique values for all signals can be taken as [-lambda/4, lambda/4] x [-lambda/4, lambda/4] for MICH x PRCL. But when we hit the border of the MICH region, PRCL is also affected by the addition of lambda/4. Graphically, the square regions A, B, C below are all equivalent, as well as more that are not highlighted:
This makes it a bit hard to un-wrap the reconstructed signal, especially when you add in the fact that in the reconstruction the wrapping is "soft".
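For concreteness, here is my reading of the folding rule as a short Python sketch (the "hard" version; the network itself only does this softly):

import numpy as np

LAM = 1064e-9  # laser wavelength [m]

def wrap_cell(mich, prcl, lam=LAM):
    # Reduce MICH modulo lam into [-lam/2, lam/2)
    mich = (mich + lam/2) % lam - lam/2
    # If MICH falls outside [-lam/4, lam/4], fold it back by lam/2;
    # the Michelson sign flip is compensated by shifting PRCL by lam/4
    outside = np.abs(mich) > lam/4
    mich = np.where(outside, mich - np.sign(mich) * lam/2, mich)
    prcl = np.where(outside, prcl + lam/4, prcl)
    # Finally reduce PRCL modulo its lam/2 period into [-lam/4, lam/4)
    prcl = (prcl + lam/4) % (lam/2) - lam/4
    return mich, prcl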
The plot below shows an example of the time domain reconstruction of MICH/PRCL during the free swinging period.
It's hard to tell if the positions look reasonable, with all the wrapping going on.
Here's an attempt at validating the DNN reconstruction. Using the reconstructed MICH/PRCL signal, I can create a 2d map of the values of the optical signals. I binned the reconstructed MICH/PRCL in a 51x51 grid, and computed the mean value of all optical signals for each bin. The result is shown in the plot below, directly compared with the expectation from a simulation.
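The binning itself is one call in scipy; a sketch, with placeholder arrays standing in for the reconstructed positions and one optical signal:

import numpy as np
from scipy.stats import binned_statistic_2d

rng = np.random.default_rng(0)
lam4 = 1064e-9 / 4
mich_rec = rng.uniform(-lam4, lam4, 100000)     # reconstructed MICH [m]
prcl_rec = rng.uniform(-lam4, lam4, 100000)     # reconstructed PRCL [m]
signal = np.cos(2*np.pi*prcl_rec/(2*lam4))**2   # stand-in for e.g. POP_DC

# Mean of the optical signal in each of the 51x51 (MICH, PRCL) bins
mean_map, mich_edges, prcl_edges, _ = binned_statistic_2d(
    mich_rec, prcl_rec, signal, statistic='mean', bins=51)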
The power signals (POP_DC, AS_DC, POP22_Q) look reasonably good. REFL11_I/Q also looks good (please note that due to an early mistake in my code, I reversed the convention for I/Q, so the PRCL signal is maximized in Q instead of in I). The 55 MHz signals look a bit less clear...
MC autolocker and FSS loops were stuck because c1psl was unresponsive. I rebooted it and did a burtrestore to enable PSL locking. Then the IMC locked fine.
c1susaux and c1iscaux were also unresponsive so I keyed those crates as well, after taking the usual steps to avoid ITMX getting stuck - but it still got stuck when the Sat. Box. connectors were reconnected after the reboot, so I had to shake it loose with bias slider jiggling. This is annoying and also not very robust. I am afraid we are going to knock the ITMX magnets off at some point. Is this problem indicative of the fact that the ITMX magnets were somehow glued on in a skewed way? Or can we make the situation better by just tweaking the OSEM-holding fixtures on the cage?
In any case, I've started listing stuff down here for things we may want to do when we vent next.
I changed the heater circuit described in this elog to a current sink. The new and old circuits are shown in the attachment. The heater resistance R_h is currently 24Ω; the sense resistor R_s is currently 6Ω. The op-amp is still an OP27 and the MOSFET is still an IRF630.
The current through the old circuit was saturating because the gate voltage on the MOSFET was saturating at the op-amp supply rails. This is because the source voltage is relatively high: V_S = I (R_h + R_s).
In the new circuit the source voltage is lower, and the op-amp can thus drive a large enough V_GS to draw more current (until the power supply saturates at 25V/30Ω ≈ 0.8A in this case). The source and DAC voltages are equal in this case, and so the current is I = V_DAC / R_s. Since this is the same current that flows through the heater, the drain voltage is V_D = V_supply - I R_h. I observed this behavior in this circuit until the power supply saturated at 0.8A. Note that when this happens the source voltage can no longer follow V_DAC, and the gate voltage saturates at the supply rails in an attempt to supply the necessary current.
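A few lines of Python to spell out the operating point (ideal op-amp and MOSFET assumed; component values from above):

R_heater = 24.0   # ohm
R_sense = 6.0     # ohm
V_supply = 25.0   # V

def operating_point(V_dac):
    # The op-amp forces the source (top of the sense resistor) to V_dac...
    I = V_dac / R_sense
    # ...until the supply saturates at V_supply / (R_heater + R_sense)
    I = min(I, V_supply / (R_heater + R_sense))
    V_drain = V_supply - I * R_heater
    return I, V_drain

print(operating_point(3.0))   # (0.5 A, 13.0 V)
print(operating_point(6.0))   # saturated: (~0.83 A, ~5.0 V)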
there is a large (~1 cm) aperture Pockels cell that Frank Siefert was using for making pulses to damage photo diodes. There is a DEI Pulser unit near the entrance to the QIL in Bridge which can drive it.
I'll look for it tomorrow, but I haven't given up on the AOMs yet. I swapped in the ISOMET modulator today and saw the same behavior, both in 0th and 1st order. The fall time is pretty much identical. Gautam saw no such thing in the PSL AOM using the same photodetector.
In the meantime I prepared the fiber mode-matching but realized in the process that I had mixed up some lenses. As a result the beam did not have a waist at the AOM location and thus didn't have the intended size, although I doubt that this would cause the slower decay. I'll fix it tomorrow, along with setting up the fiber injection, beat note with the PSL, and routing the fiber if possible.
I don't understand why the 1st order diffracted beam doesn't go to zero when you shut off the drive. My guess is that the standing acoustic wave in the AO crystal needs some time to decay: f = 40 MHz, tau = 1 usec... Q ~ 100. Perhaps the crystal is damped by the PZT, and the output impedance of the mini-circuits switch is different from that of the AO driver.
In any case, if you need a faster shut off, or want something that more cleanly goes to zero, there is a large (~1 cm) aperture Pockels cell that Frank Siefert was using for making pulses to damage photo diodes. There is a DEI Pulser unit near the entrance to the QIL in Bridge which can drive it.
I worked with Kevin and Gautam to create a heater circuit. The first attachment is Kevin's schematic of the circuit. The op-amp connects to the gate of the power MOSFET, the power supply connects to the drain, and the source goes into the heater. We set the power supply voltage to 22V and varied the input voltage to the op-amp. At 6V to the op-amp, we got a current of 0.35A flowing through the heater and resistor. This was the peak current we could get, due to the op-amp being saturated (an increase in either of the power supplies did not change the current), but when we increased the op-amp supply rails from 15V to 20V, we got a current of 0.5A. We want a higher current than this, so we will need a different op-amp with a higher max voltage rating, and a resistor that can take more power than this one (it is currently rated for 5W, and is the best one we could find).
Kevin and I created a simulation of this circuit in CircuitLab to understand why the current was so low (second attachment). The horizontal axis is the voltage we supply to the op-amp. The blue line is the voltage between the op-amp output and the gate of the MOSFET. The orange line is the voltage between the source of the MOSFET and the heater. The brown line is the voltage between the heater and the resistor. We can see that saturation occurs at about 2.1V. At that point, the gate-source voltage is the difference between the blue and orange curves, about 4V, which is what we measured. Likewise, the voltage across the heater is the difference between the orange and brown curves, around 8V, which is also what we measured. Lastly, the voltage across the resistor is the brown curve, about 2V, which matches our observations. The circuit works as it should, but saturates too soon to get a high enough current out of it.
Gautam noted that it is important to measure the current correctly. We can't just place an ammeter across the resistor or heater, because the internal resistance of the ammeter (~0.5 ohm) is comparable to the resistance we want to measure, so the current splits between the circuit and the ammeter and we get an equivalent resistance of 1/R = 1/R0 + 1/Ra, where R0 is the resistance of the part we want to measure the current through, and Ra is the ammeter resistance. The combined resistance is thus lower, and the ammeter shows a higher current than is actually flowing through the part. So, to accurately measure the current, we must place the ammeter in series with the part we want to measure. We initially got a 1A reading on the heater, which was not correct, and our setup basically did not heat up at all. When we placed the ammeter in series with the heater, we got only 0.35A.
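As a numerical illustration (the element resistance here is illustrative only):

R0 = 1.0   # ohm, element we want the current through
Ra = 0.5   # ohm, ammeter internal resistance
R_parallel = 1 / (1/R0 + 1/Ra)   # 0.33 ohm in parallel
# The ammeter branch carries R0/Ra = 2x the element current,
# so the parallel reading badly over-estimates the real current
print(R_parallel)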
The last two images show the setup for testing the heater. We wrapped it around an aluminum piece and covered it with a few layers of insulating material. We can stick a thermometer between the insulation and the heater to see the temperature change. In later tests, we may insulate the whole piece so that less heat is dissipated. In addition, we secured the MOSFET to a heat sink with thermal paste, as it got very hot.
Our next steps will be to get a resistor and an op-amp that are better suited for our purposes. We will also run simulations with the components we choose, to make sure they can provide the desired current of 1A (the maximum output of the power supply is 24V, and the heater is 24 ohm, so the max current is 1A). Kevin is working on that now.
RP1 and RP3 roughing pumps: manual of Leybold D30A oily rotary pump
Fore pump of TP2 & TP3 Varian SH-100 Dry Scroll
TP2 and TP3 small turbo drag pump Varian 969-9361
TP2 and TP3 turbo controller Varian 969-9505
TP1 magnetically suspended turbo pump: Osaka TG390MCAB, sn 360, and controller TC010M. Note: this pump runs on 208 VAC single phase. It is not on the UPS!
Osaka Maglev Manual and Osaka Controller Communication Wiring
VC1 cryo pump: CTI-Cryogenics Cryo-Torr 8, sn 8g23925. SAFETY note: the compressor is single-phase 208 VAC and the head driver is 3-phase 208 VAC. The compressor and the driver each have their own power cord!
These are also posted on the 40m wiki.
The V1 gate valve specs are posted on the 40m wiki page. VAT model number 10846-UE44-0007. Our main volume pumping goes through this 8" id gate valve V1 to the Maglev turbo, or through VC1 to the Cryo pump.
The ion pumps have 6" id gate valves: VAT 10844-UE44-AAY1, pneumatic actuator with position indicator and double-acting solenoid valve, 115 V 60 Hz. Purchased 1999 Dec 22.
UHV gate valves, 2.5" id: VAT 10836-UE44, pneumatic actuator with position indicator and double-acting solenoid valve, 115 V 60 Hz. IFO to RGA is VM1, and RGA to Maglev is VM2.
Mini UHV gate valve, 1.5" id: VAT 01032-UE01 (2016 catalog, page 14), manual, no position indicator. This is VM4, next to the manually adjustable fine leak valve to the RGA.
UHV angle valve, 1.5" id, model VAT 28432-GE41: Viton plate seal, pneumatic actuator with position indicator & solenoid valve (115 V), & single-acting closing spring. MEDM screen: VM3, VC2, V3, V4, V5, V6, VA6, V7 & the annuli. Each chamber annulus has 2 valves.
UHV angle valve, 1.5" id, model VAT 57132-GE05 (catalog page 208): metal tip seal, manual actuation only, with position indicator. MEDM screen: roughing RV1 and venting VV1. A hand wheel is needed to close these to torque spec.
UHV angle valve, 1.5" id, model VAT 28432-GE01: Viton plate seal, manual operation only. These are at the IT gauges (Hornet & Super Bee) and at the ion pump roughing ports. They are not labeled.
The Cryo pump interlock wiring was added too
Note: all moving valve plate seals are single.
Last night, while we were working on the ALS, I noticed the GTRY spot moving around (in PITCH) on the CCD monitor in the control room at ~1-2 Hz. The operating condition was that the arm was locked to the IR and the PSL green shutter was closed, so that only the arm transmissions were visible on the CCD screens. There was no such noticeable movement of the GTRX spot. When looking at the out-of-loop ALS noise in this configuration (but now with the PSL green shutter open, of course), the Y arm ALS noise at low frequencies was much worse than the X arm.
Today, I looked into this a little more. I first checked that the Y-endtable enclosure was closed off as usual (as I had done some tweaking to the green input pointing some days ago). There are various green ghost beams on the Y-endtable. When time permits, we should make an effort to cleanly dump these. But the enclosure was closed as usual.
Then I looked at the in-loop Oplev error signal spectra for the ITMY and ETMY Oplev loops. There was high coherence between ETMYP Oplev error signal and GTRY. So I took a loop transfer function measurement - the upper UGF was around 3.5Hz. I increased the loop gain such that the upper UGF was around 4.5Hz, with phase margin ~30degrees. Doing so visibly reduced the angular movement of the GTRY spot on the CCD. Attachment #1 shows the Oplev loop TF after the gain increase, while Attachment #2 compares the GTRX and GTRY spectra (DC value is approximately the same for both, around 0.4). GTRY still seems a bit noisier at low frequencies, but the out-of-loop ALS noise for the Y arm now lines up much more closely with its reference trace from a known good time.
Y-arm ALS wasn't so stellar tonight, especially at low frequencies. I can see the GTRY spot moving on the CCD monitor, so something is wonky. To be investigated.
I was having a chat with EricQ about this today, just noting some points from our discussion down here so that I remember to look into this tomorrow.
Can we make use of the Jetstor raid array for some kind of consolidated 40m CDS backup system? Once we've gotten everything of interest out of it...
It is unclear when this was last done, and since I recently modified the coil driver electronics for the ITMs and BS, I figured it would be useful to get this calibration done. The primary motivation was to see if we could resolve the discrepancy between the current ALS noise (using POX as a sensor) and the Izumi et al. plot.
Because we are planning to change the coil driver electronics further soon anyways, we decided to do the calibration at a single frequency for tonight. For future reference, the extension of this method to calibrate the actuator over a wider range of frequencies is here. The procedure followed, and the relevant numbers from tonight, are as follows.
Once these calibrations were updated, we decided to control the arms with ALS, and look at the POX spectrum. Y-arm ALS wasn't so stellar tonight, especially at low frequencies. I can see the GTRY spot moving on the CCD monitor, so something is wonky. To be investigated. But the X arm ALS noise looked pretty good.
Seems like updating the calibration did the job; see the attached comparison plot.
I moved the auxiliary NPRO to the PSL table today and started setting up the optics.
The Faraday Isolator was showing a pretty unclean mode at the output so I took the polarizers off to take a look through them, and found that the front polarizer is either out of place or damaged (there is a straight edge visible right in the middle of the aperture, but the way the polarizer is packaged prevents me from inspecting it closer). I proceeded without it but left space so an FI can be added in the future. The same goes for the broadband EOM.
There are two spare AOMs (ISOMET and Intraaction, both resonant at 40MHz) available before we have to resort to the one currently installed in the PSL.
I installed the Intraaction AOM first and looked at the switching speed of its first order diffracted beam, using both its commercial driver and a combination of Mini-Circuits components. Both show similar behavior. The fall time of the initial step is ~110 ns in both cases, but the light doesn't decay rapidly to zero; instead there is a slower exponential tail. Need to check the 0th order beam and also the other AOM.
In addition to bootable full disk backups, it would be wise to make sure the important service configuration files from each machine are version controlled in the 40m SVN. Things like apache files on nodus, martian hosts and DHCP files on chiara, nds2 configuration and init scripts on megatron, etc. This can make future OS/hardware upgrades easier too.
What are the critical filesystems? I've also indicated the size of these disks and the volume currently used, and the current backup situation.
Not backed up
LDAS pulls files from nodus daily via rsync, so there's no cron job for us to manage. We just allow incoming rsync.
Local backup on /media/40mBackup on chiara via daily cronjob
Remote backup to ldas-cit.ligo.caltech.edu::40m/cvs via daily cronjob on nodus
Currently mounted on Megatron, not backed up.
Then there is Optimus, but I don't think there is anything critical on it.
So, based on my understanding, we need to back up a whole bunch of stuff, particularly the boot disks and root filesystems for Chiara, Megatron and Nodus. We should also test that the backups we make are useful (i.e. we can recover current operating state in the event of a disk failure).
Please edit this elog if I have made a mistake. I also don't have any idea about whether there is any sort of backup for the slow computing system code.
The RGA was turned on 7 days ago. It's at 46 C now. The X-arm room temperature is ~20 C.
IFO pressure is 6.5e-6 Torr at the IT-Hornet gauge. Valve configuration: vacuum normal.
It seems like the main contribution to the RMS comes from the high frequency bump. When using the ALS loop to lock the arm to the beat, only the stuff below ~100 Hz will matter. Interesting to see what that noise budget will show. Perhaps the discrepancy between inloop and out of loop will go down.
I included the 55 MHz sideband and higher order modes in my training examples. To keep things simple, I just assumed there are higher order modes up to n+m=4 in the input beam. The power in each HOM is randomly chosen from a gaussian distribution with width determined from experimental cavity scans. I used a value of 0.913 ± 0.01 rad for the Gouy phase (again estimated from cavity scans, but in reasonable agreement with the nominal radius of curvature of ETMX).
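A sketch of the randomization in Python (the mean power and widths here are placeholders; the real numbers came from the cavity scans):

import numpy as np

rng = np.random.default_rng()

def sample_input_beam():
    # HOMs up to n + m = 4; relative power of each drawn from a gaussian
    modes = [(n, m) for n in range(5) for m in range(5) if 0 < n + m <= 4]
    powers = {nm: abs(rng.normal(0.01, 0.005)) for nm in modes}
    # One-way Gouy phase, 0.913 +/- 0.01 rad as estimated from the scans
    gouy = rng.normal(0.913, 0.01)
    return powers, gouy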
Results are improved. The plots below show the performance of the neural network on 100 s of experimental data.
For reference, the plots below show the performance of the same network on simulated data (that includes sensing noise but no higher order modes)
Is it better to mount the box in the PSL under the existing shelf, or in a nearby PSL rack?
Further characterization needs to be done, but the results of this test are encouraging. If we are able to get this kind of out of loop ALS noise with the IR beat, perhaps we can avoid having to frequently fine-tune the green beat alignment on the PSL table. It would also be ideal to mount this whole 1U setup in an electronics rack instead of leaving it on the PSL table
Attachment #1 - Photo of the revamped beat setup. The top panel has to be installed. New features include:
Attachment #2 - Power budget inside the box. Some of these FC/APC connectors seem to not offer good coupling between the two fibers. Specifically, the one on the front panel meant to accept the PSL light input fiber seems particularly bad. Right now, the PSL light is entering the box through one of the front panel connectors marked "PSL + X out". I've also indicated the beat amplitude measured with an RF analyzer. Need to do the math now to confirm if these match the expected amplitudes based on the power levels measured.
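For the check, the textbook heterodyne estimate is something like the following sketch (the responsivity and mode-overlap numbers are placeholders, not measured values):

import numpy as np

def beat_dbm(P1, P2, responsivity=0.75, overlap=0.8, Z=50.0):
    # i_beat = 2 * R * sqrt(eta * P1 * P2);  P_RF = i^2 * Z / 2
    i_beat = 2 * responsivity * np.sqrt(overlap * P1 * P2)
    P_rf = i_beat**2 * Z / 2
    return 10 * np.log10(P_rf / 1e-3)

print(beat_dbm(100e-6, 100e-6))   # e.g. 100 uW in each beam -> ~ -33 dBm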
Attachment #3 - We repeated the measurement detailed here. The X arm (locked to IR) was used for this test. The "X" delay line electronics were connected to the X green beat PD, while the "Y" delay line electronics were connected to the X IR beat PD. I divided the phase tracker Hz calibration factor by 2 to get IR Hz for the Y arm channels. The IR beat was at ~38 MHz, and the green beat was at ~76 MHz. The broadband excess noise seen in the previous test is no longer present. Indeed, below ~20 Hz, the IR beat seems less noisy. So it seems like the cleaning / electronics revamp did some good.
Further characterization needs to be done, but the results of this test are encouraging. If we are able to get this kind of out of loop ALS noise with the IR beat, perhaps we can avoid having to frequently fine-tune the green beat alignment on the PSL table. It would also be ideal to mount this whole 1U setup in an electronics rack instead of leaving it on the PSL table.
Photos + power budget + plan of action for using this box to characterize the green PDH locking to follow.
GV Edit: I've added better photos to the 40m Google Photos page. I've also started a wiki page for this box / the proposed IR ALS system. For the moment, all that is there is the datasheet to the Fiber Couplers used, I will populate this more as I further characterize the setup.
Looks like MC1 got another big kick just under 4 hours ago. None of the other optics show any evidence of a glitch, so it seems unlikely that this was some sort of global event. It had been well behaved for ~2 weeks now. The IMC was unlocked. I manually re-aligned MC1, at which point the autolocker was able to lock the IMC.
Looking at this plot, it seems that the LR and UL coils saw the largest kicks. UR barely saw it. Not sure what (if anything) to make of this - apparently the optic moved by ~20 urad, with the UR magnet acting approximately as the pivot.
I tried some DRMI locking again tonight, but had no success. Here is the story.
Looks like I will have to embark on the REFL55 LSC electronics investigation. I was able to successfully lock the PRC on carrier and sideband, and the Michelson lock also seems to work fine, all of which seem to point to a hardware problem with the REFL55 signal chain.
In brief, I trained a deep neural network (DNN) to reconstruct the cavity length, using as input only the transmitted power and the reflection PDH signals. The training was performed with simulated data, computed along 0.25 s long trajectories sampled at 8 kHz, with random ending point in the [-lambda/4, lambda/4] unique region and with random velocity.
The goal of this work is to validate the whole approach of length reconstruction with a DNN in the Fabry-Perot case, by comparing the DNN reconstruction with the ALS cavity length measurement. The final target is to deploy a system to lock PRMI and DRMI. Actually, the Fabry-Perot cavity problem is harder for a DNN: the cavity linewidth is quite narrow, forcing me to use a very high sampling frequency (8 kHz) to be able to capture a few samples at each resonance crossing. I'm using a recurrent neural network (RNN) in the input layers of the DNN, and this is trained using truncated backpropagation through time (TBPTT): during training, each layer of the RNN is unrolled into as many copies as there are input time samples (8192 * 0.25 = 2048). So in practice I'm training a DNN with >2000 layers! The limit here is computational, mostly the GPU memory. That's why I'm not able to use longer data stretches.
But in brief, the DNN reconstruction is performing well for the first attempt.
In the results shown below, I'm using a pre-trained network with parameters that do not match very well the actual data, in particular for the distribution of mirror velocity and the sensing noises. I'm working on improving the training.
I used the following parameters for the Fabry-Perot cavity:
The uncertainty is assumed to be the 90% confidence level of a gaussian distribution. The DNN is trained on 100000 examples, each one a 0.25 s long trajectory sampled at 8 kHz, with random velocity between 0.1 and 5 um/s, and ending point distributed as follows: 33% uniform on the [-lambda/4, lambda/4] region, plus 33% from a gaussian distribution peaked at the center with 5 nm width. In addition there are 33% more static examples, distributed near the center.
For each point along the trajectory, the signals TRA, POX11_I and POX11_Q are computed and used as input to the DNN.
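For the record, a minimal stand-in for the network in PyTorch (the actual architecture is in the slides linked above; the layer sizes here are placeholders). Input: 2048 samples (0.25 s at 8 kHz) of [TRA, POX11_I, POX11_Q]; output: the end-of-trajectory cavity displacement, soft-wrapped into [-lambda/4, lambda/4] with a tanh:

import torch
import torch.nn as nn

LAM = 1064e-9

class LengthReconstructor(nn.Module):
    def __init__(self, hidden=64):
        super().__init__()
        self.rnn = nn.GRU(input_size=3, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):                  # x: (batch, 2048, 3)
        out, _ = self.rnn(x)
        z = self.head(out[:, -1, :])       # state at the last time step
        return (LAM / 4) * torch.tanh(z)   # soft wrap into [-lam/4, lam/4]

model = LengthReconstructor()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
mse = nn.MSELoss()
# training step, given a batch of simulated trajectories (x, z_true):
#   opt.zero_grad(); mse(model(x), z_true).backward(); opt.step()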
Gautam collected about 10 minutes of data with the free swinging cavity, with ALS locked on the arm. Some more data were collected with the cavity driven, to increase the motion. I used the driven dataset in the analysis below.
The ALS signal is calibrated in green Hz. After converting it to meters, I checked the calibration by measuring the distance between carrier peaks. It turned out that the ALS signal is undercalibrated by about 26%. After correcting for this, I found that there is a small non-linearity in the ALS response over multiple FSR. So I binned the ALS signal over the entire range and averaged the TRA power in each bin, to get the transmission signals as a function of ALS (in nm) below:
I used a peak detection algorithm to extract the carrier and 11 MHz sideband peaks, and compared them with the nominal positions. The difference between the expected and measured peak positions as a function of the ALS signal is shown below, with a quadratic fit that I used to improve the ALS calibration
The resulting ALS-calibrated length error from the peak positions is of the order of 3 nm (one sigma).
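The correction itself is a couple of lines with numpy; the placeholder arrays below stand in for the detected/nominal peak positions (in nm):

import numpy as np

peaks_nominal = np.array([-532.0, -266.0, 0.0, 266.0, 532.0])   # nm
peaks_meas = peaks_nominal * 1.01 + 2e-5 * peaks_nominal**2     # fake residuals

# Quadratic fit of the residual, then apply it to the full ALS series
coef = np.polyfit(peaks_meas, peaks_nominal - peaks_meas, 2)
als = np.linspace(-600, 600, 1000)     # stand-in for the calibrated ALS [nm]
als_corrected = als + np.polyval(coef, als)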
Using the calibrated ALS signal, I computed the cavity length velocity. The histogram below shows that it is well described by a gaussian with a width of about 3 um/s. In my DNN training I used a different velocity distribution, but this shouldn't have a big impact. I'm retraining with a different distribution.
The plot below shows a stretch of time domain DNN reconstruction, compared with the ALS calibrated signal. The DNN output is limited to the [-lambda/4, lambda/4] region, so the ALS signal is also wrapped into the same region. In general the DNN reconstruction follows the real motion reasonably well, mostly failing when the velocity is small and the cavity is simultaneously out of resonance. This is a limitation that I also see in simulation, and it is due to the short training time of 0.25 s.
I did not hand-pick a good period, this is representative of the average performance. To get a better understanding of the performance, here's a histogram of the error for 100 seconds of data:
The central peak was fitted with a gaussian, just to give a rough idea of its width, although the tails are much wider. A more interesting plot is the histogram below of the reconstructed position as a function of the ALS position. Ideally one would expect a perfect diagonal. The result isn't too far from the expectation:
The largest off-diagonal peak is at (-27, 125) and marked with the red cross. Its origin is clearer in the plot below, which shows the mean, RMS and maximum error as a function of the cavity length. The second peak corresponds to where the 55 MHz sidebands resonate. In my training model, there were no 55 MHz sidebands or higher order modes.
The DNN reconstruction performance is already quite good, considering that the DNN couldn't be trained optimally because of computational power limitations. This is a validation of the whole idea of training the DNN offline on a simulation and then deploying the system online.
I'm working to improve the results by
However I won't spend too much time on this, since I think the idea has been already validated.
A couple of weeks ago, I was trying to modernize the python version of the FSS Slow temperature control loops when I accidentally ended up deleting it. There was no svn backup. So the old Perl PID script has been running for the last few days.
Today, I checked out the latest version that Andrew and co. have running in the PSL lab. I had to make some important modifications for the script to work for the 40m setup.
python FSSSlow.py -i FSSSlowPy.ini
Then I stopped the Perl process on megatron by running
sudo initctl stop FSSslow
and started the Python process by running
sudo initctl start FSSslowPy
I have now committed the files FSSSlow.py and FSSSlowPy.ini to the 40m svn. Things seem to be stable for the last 20 mins or so, let's keep an eye on this though - although we had been running the Python PID loop for some months, this version is a slightly modified one.
The initctl stuff still isn't very robust - I think both the Autolocker and the FSS slow servos have to be manually restarted if megatron is shutdown/restarted for whatever reason. It doesn't seem to be a problem with the initctl routine itself - looking at the logs, I can see that init is trying to start both processes, but is failing to do so each time. To be investigated. The wiki procedure to restart this process is up to date.
GV Edit 0000 25 Aug 2017: I had to add a line to the script that checks MC transmission before enabling the PID loop. Change has been committed to svn. Now, when the MC loses lock or if the PSL shutter is kept closed for an extended period of time, the temperature loop doesn't rail.
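The check is just a guard around each PID iteration; a sketch (the channel name and threshold here are illustrative, not the values actually in the script):

import time
import epics   # pyepics

MC_TRANS_CH = 'C1:IOO-MC_TRANS_SUM'   # assumed channel name
TRANS_THRESHOLD = 1000.0              # illustrative lock threshold

def mc_locked():
    trans = epics.caget(MC_TRANS_CH)
    return trans is not None and trans > TRANS_THRESHOLD

# inside the PID loop:
#   if not mc_locked():
#       time.sleep(1)   # hold the output; don't integrate on junk
#       continue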
Since the single arm locking and dither alignment seemed to work alright after the CDS overhaul, I decided to try some recycling cavity locking tonight.
Why should this have changed? I was just on the AS table and did re-center the beam onto the REFL55 RFPD, but I had also done this in April/May when I was last doing DRMI locking. But I can't explain the apparent factor of ~4 increase in light level. I think I have some measurements of the light levels at various PDs from April 2017; I will see how the present levels line up.
Of course dataviewer won't cooperate when I am trying to monitor testpoints.
I may be missing something obvious, but I am quitting for tonight, will look into this more tomorrow.
Unrelated to this work: looking at the GTRY spot on the CCD monitor, there seems to be some excess angular motion. Not sure where this is coming from. In the past, this sort of problem has been symptomatic of something going wonky with the Oplev loops. But I took loop measurements for ITMY and ETMY PIT and YAW, they look normal. I will investigate further when I am doing some more ALS work.
I completed the revamp of the box, and re-installed the box on the PSL table today. I think it would be ideal to install this on one of the electronic racks, perhaps 1X2 would be best. We would have to re-route the fibers from the PSL table to 1X2, but I think they have sufficient length, and this way, the whole arrangement is much cleaner.
Did a quick check to make sure I could see beat notes for both arms. I will now attempt to measure the ALS noise with this revamped box, to see if the improved power supply and grounding arrangement, as well as fiber cleaning, has had any effect.
For quick reference: here is the AM/PM measurement done when we re-installed the repaired Innolight NPRO on the new X endtable.
Didn't someone look at what the OLG req. should be for these servos at some point? I wonder if we can make a parallel digital path that we switch on after green lock. Then we could make this a simple 1/f box and just add in the digital path (take analog control signal into ADC, filter, and then sum into the control point further down the path to the laser) for the low frequency boost.