In brief, I trained a deep neural network (DNN) to reconstruct the cavity length, using as input only the transmitted power and the reflection PDH signals. The training was performed with simulated data, computed along 0.25 s long trajectories sampled at 8 kHz, with random ending point in the [-lambda/4, lambda/4] unique region and with random velocity.
The goal of this work is to validate the whole approach of length reconstruction with a DNN in the Fabry-Perot case, by comparing the DNN reconstruction with the ALS cavity length measurement. The final target is to deploy a system to lock PRMI and DRMI. Actually, the Fabry-Perot cavity problem is harder for a DNN: the cavity linewidth is quite narrow, forcing me to use a very high sampling frequency (8 kHz) to be able to capture a few samples at each resonance crossing. I'm using a recurrent neural network (RNN) in the input layers of the DNN, and this is trained using truncated backpropagation through time (TBPTT): during training each RNN layer is unrolled into as many copies as there are input time samples (8192 * 0.25 = 2048). So in practice I'm training a DNN with >2000 layers! The limit here is computational, mostly the GPU memory. That's why I'm not able to use longer data stretches.
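For the record, here is a minimal sketch of the kind of training loop this implies, assuming a PyTorch LSTM front end; the actual architecture, layer sizes and framework are not specified in this entry, so everything below is an assumption for illustration only.

import torch
import torch.nn as nn

FS = 8192            # sampling frequency [Hz] (from the text)
T  = 0.25            # trajectory length [s]
N  = int(FS * T)     # 2048 time samples per example

class LengthEstimator(nn.Module):
    # RNN front end + dense head mapping (TRA, POX11_I, POX11_Q) -> cavity length
    def __init__(self, n_hidden=64):
        super().__init__()
        self.rnn  = nn.LSTM(input_size=3, hidden_size=n_hidden, batch_first=True)
        self.head = nn.Linear(n_hidden, 1)
    def forward(self, x):                  # x: (batch, N, 3)
        y, _ = self.rnn(x)                 # unrolled over all N time samples
        return self.head(y)                # (batch, N, 1) length estimate

model   = LengthEstimator()
opt     = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# one (hypothetical) training step on simulated signals and true lengths
signals = torch.randn(16, N, 3)            # placeholder for simulated TRA / POX11_I / POX11_Q
lengths = torch.randn(16, N, 1)            # placeholder for the true cavity length
opt.zero_grad()
loss = loss_fn(model(signals), lengths)
loss.backward()                            # gradients flow through all 2048 unrolled steps
opt.step()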
But in brief, the DNN reconstruction is performing well for a first attempt.
In the results shown below, I'm using a pre-trained network with parameters that do not match the actual data very well, in particular the distribution of mirror velocities and the sensing noises. I'm working on improving the training.
I used the following parameters for the Fabry-Perot cavity:
The uncertainty is assumed to be the 90% confidence level of a gaussian distribution. The DNN is trained on 100000 examples, each one a 0.25 s long trajectory sampled at 8 kHz, with random velocity between 0.1 and 5 um/s, and ending point distributed as follows: 33% uniform on the [-lambda/4, lambda/4] region, plus 33% from a gaussian distribution peaked at the center with 5 nm width. In addition there are 33% more static examples, distributed near the center.
For each point along the trajectory, the signals TRA, POX11_I and POX11_Q are computed and used as input to the DNN.
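For illustration, here is a minimal sketch of how the trajectory endpoints and velocities could be drawn under the distribution described above. The optical model that turns a trajectory into TRA / POX11_I / POX11_Q is not reproduced; fabry_perot_signals is a hypothetical placeholder for it.

import numpy as np

FS, T  = 8192, 0.25
N      = int(FS * T)
LAMBDA = 1064e-9                        # PSL wavelength [m]
t      = np.arange(N) / FS

def make_trajectory(rng):
    kind = rng.choice(["uniform", "gaussian", "static"])    # roughly 33% each
    if kind == "uniform":
        end = rng.uniform(-LAMBDA / 4, LAMBDA / 4)
    else:
        end = rng.normal(0.0, 5e-9)                          # 5 nm width near the center
    if kind == "static":
        v = 0.0
    else:
        v = rng.uniform(0.1e-6, 5e-6) * rng.choice([-1, 1])  # 0.1 to 5 um/s
    return end + v * (t - t[-1])                             # ends at 'end' with velocity v

rng = np.random.default_rng(0)
z = make_trajectory(rng)
# signals = fabry_perot_signals(z)   # hypothetical: returns TRA, POX11_I, POX11_Q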
Gautam collected about 10 minutes of data with the free swinging cavity, with ALS locked on the arm. Some more data were collected with the cavity driven, to increase the motion. I used the driven dataset in the analysis below.
The ALS signal is calibrated in green Hz. After converting it to meters, I checked the calibration by measuring the distance between carrier peaks. It turned out that the ALS signal is undercalibrated by about 26%. After correcting for this, I found that there is a small non-linearity in the ALS response over multiple FSR. So I binned the ALS signal over the entire range and averaged the TRA power in each bin, to get the transmission signals as a function of ALS (in nm) below:
I used a peak detection algorithm to extract the carrier and 11 MHz sideband peaks, and compared them with the nominal positions. The difference between the expected and measured peak positions as a function of the ALS signal is shown below, with a quadratic fit that I used to improve the ALS calibration.
The result is
The ALS calibrated z error from the peak position is of the order of 3 nm (one sigma)
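For reference, here is a minimal sketch of this binning/peak-fit calibration check; the numbers, peak-finding thresholds and nominal peak positions below are placeholders, not the actual analysis code.

import numpy as np
from scipy.signal import find_peaks

# placeholders for the binned data: ALS position [nm] and averaged TRA power per bin
als_nm = np.linspace(-800, 800, 2000)
tra    = sum(np.exp(-0.5 * ((als_nm - p) / 2.0) ** 2) for p in (-532.0, 0.0, 532.0))
nominal_positions = np.array([-532.0, 0.0, 532.0])           # assumed nominal peak positions [nm]

peaks, _ = find_peaks(tra, height=0.1 * tra.max())            # carrier + sideband peaks
residual = als_nm[peaks] - nominal_positions                  # measured minus expected position
coeffs   = np.polyfit(als_nm[peaks], residual, 2)             # quadratic fit to the residual
als_corr = als_nm - np.polyval(coeffs, als_nm)                # corrected ALS calibration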
Using the calibrated ALS signal, I computed the cavity length velocity. The histogram below shows that it is well described by a gaussian with a width of about 3 um/s. In my DNN training I used a different velocity distribution, but this shouldn't have a big impact. I'm retraining with a different distribution.
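A minimal sketch of the velocity estimate and gaussian fit; als_m below is a synthetic placeholder for the calibrated ALS length record.

import numpy as np
from scipy.stats import norm

FS    = 8192
als_m = np.cumsum(np.random.default_rng(1).normal(0, 3e-6 / FS, FS * 100))  # placeholder record
vel   = np.gradient(als_m) * FS                  # cavity length velocity [m/s]
mu, sigma = norm.fit(vel)                        # gaussian fit to the velocity distribution
counts, edges = np.histogram(vel, bins=200)      # histogram as in the plot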
The plot below shows a stretch of time domain DNN reconstruction, compared with the ALS calibrated signal. The DNN output is limited to the [-lambda/4, lambda/4] region, so the ALS signal is also wrapped into the same region. In general the DNN reconstruction follows the real motion reasonably well, mostly failing when the velocity is small and the cavity is simultaneously out of resonance. This is a limitation that I also see in simulation, and it is due to the short training time of 0.25 s.
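For reference, a minimal sketch of the wrapping into the unique region and of the error computation used for the comparison, assuming lambda = 1064 nm.

import numpy as np

LAM = 1064e-9

def wrap(x):
    # map any length into the unique [-lambda/4, lambda/4) region
    return (x + LAM / 4) % (LAM / 2) - LAM / 4

def wrapped_error(z_dnn, z_als):
    # reconstruction error, accounting for the wrap-around at the region edges
    return wrap(z_dnn - z_als)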
I did not hand-pick a good period, this is representative of the average performance. To get a better understanding of the performance, here's a histogram of the error for 100 seconds of data:
The central peak was fitted with a gaussian, just to give a rough idea of its width, although the tails are much wider. A more interesting plot is the histogram below of the reconstructed position as a function of the ALS position. Ideally one would expect a perfect diagonal. The result isn't too far from the expectation:
The largest off-diagonal peak is at (-27, 125) and is marked with the red cross. Its origin is clearer in the plot below, which shows the mean, RMS and maximum error as a function of the cavity length. The second peak corresponds to where the 55 MHz sideband resonates. My training model included neither 55 MHz sidebands nor higher order modes.
The DNN reconstruction performance is already quite good, considering that the DNN couldn't be trained optimally because of computational power limitations. This validates the whole idea of training the DNN offline on a simulation and then deploying the system online.
I'm working to improve the results by
However I won't spend too much time on this, since I think the idea has been already validated.
A couple of weeks ago, I was trying to modernize the Python version of the FSS Slow temperature control loops, when I accidentally ended up deleting it. There was no svn backup. So the old Perl PID script has been running for the last few days.
Today, I checked out the latest version that Andrew and co. have running in the PSL lab. I had to make some important modifications for the script to work for the 40m setup.
python FSSSlow.py -i FSSSlowPy.ini
Then I stopped the Perl process on megatron by running
sudo initctl stop FSSslow
and started the Python process by running
sudo initctl start FSSslowPy
I have now committed the files FSSSlow.py and FSSSlowPy.ini to the 40m svn. Things seem to have been stable for the last 20 mins or so; let's keep an eye on this though - although we had been running the Python PID loop for some months, this version is a slightly modified one.
The initctl stuff still isn't very robust - I think both the Autolocker and the FSS slow servos have to be manually restarted if megatron is shutdown/restarted for whatever reason. It doesn't seem to be a problem with the initctl routine itself - looking at the logs, I can see that init is trying to start both processes, but is failing to do so each time. To be investigated. The wiki procedure to restart this process is up to date.
GV Edit 0000 25 Aug 2017: I had to add a line to the script that checks MC transmission before enabling the PID loop. Change has been committed to svn. Now, when the MC loses lock or if the PSL shutter is kept closed for an extended period of time, the temperature loop doesn't rail.
Since the single arm locking and dither alignment seemed to work alright after the CDS overhaul, I decided to try some recycling cavity locking tonight.
Why should this have changed? I was just on the AS table and did re-center the beam onto the REFL 55 RFPD, but I had also done this in April/May when I was last doing DRMI locking. But I can't explain the apparent factor of ~4 increase in light level. I think I have some measurements of the light levels at various PDs from April 2017, I will see how the present levels line up.
Of course dataviewer won't cooperate when I am trying to monitor testpoints.
I may be missing something obvious, but I am quitting for tonight, will look into this more tomorrow.
Unrelated to this work: looking at the GTRY spot on the CCD monitor, there seems to be some excess angular motion. Not sure where this is coming from. In the past, this sort of problem has been symptomatic of something going wonky with the Oplev loops. But I took loop measurements for ITMY and ETMY PIT and YAW, they look normal. I will investigate further when I am doing some more ALS work.
I completed the revamp of the box, and re-installed the box on the PSL table today. I think it would be ideal to install this on one of the electronic racks, perhaps 1X2 would be best. We would have to re-route the fibers from the PSL table to 1X2, but I think they have sufficient length, and this way, the whole arrangement is much cleaner.
Did a quick check to make sure I could see beat notes for both arms. I will now attempt to measure the ALS noise with this revamped box, to see if the improved power supply and grounding arrangement, as well as fiber cleaning, has had any effect.
Photos + power budget + plan of action for using this box to characterize the green PDH locking to follow.
For quick reference: here is the AM/PM measurement done when we re-installed the repaired Innolight NPRO on the new X endtable.
The V1 gate valve specs are posted on the 40m wiki page. VAT model number 10846-UE44-0007. Our main volume pumping goes through this 8" id gate valve V1 to the Maglev turbo, or, via VC1, to the Cryo pump.
The ion pumps have 6" id gate valves: VAT 10844-UE44-AAY1, pneumatic actuator with position indicator and double-acting solenoid valve, 115 V 60 Hz. Purchased 1999 Dec 22.
UHV gate valves, 2.5" id: VAT 10836-UE44, pneumatic actuator with position indicator and double-acting solenoid valve, 115 V 60 Hz. These are VM1 (IFO to RGA) and VM2 (RGA to Maglev).
Mini UHV gate valve, 1.5" id: VAT 01032-UE01 (2016 catalogue page 14), manual, with no position indicator. This is VM4, next to the manually adjustable fine leak valve to the RGA.
UHV angle valve, 1.5" id, model VAT 28432-GE41: Viton plate seal, pneumatic actuator with position indicator and solenoid valve (115 V), single-acting with closing spring. MEDM screen: VM3, VC2, V3, V4, V5, V6, VA6, V7 and the annuli. Each chamber annulus has 2 valves.
UHV angle valve, 1.5" id, model VAT 57132-GE05 (catalogue page 208): metal tip seal, manual actuation only, with position indicator. MEDM screen: roughing RV1 and venting VV1. A hand wheel is needed to close these to torque spec.
UHV angle valve, 1.5" id, model VAT 28432-GE01: Viton plate seal, manual operation only. These are at the IT gauges (Hornet and Super Bee) and at the ion pump roughing ports. They are not labeled.
The Cryo pump interlock wiring was added too.
Note: all moving valve plate seals are single.
Didn't someone look at what the OLG req. should be for these servos at some point? I wonder if we can make a parallel digital path that we switch on after green lock. Then we could make this a simple 1/f box and just add in the digital path (take analog control signal into ADC, filter, and then sum into the control point further down the path to the laser) for the low frequency boost.
After getting the go ahead from Jamie, I recompiled all the FE models against the same version of RCG that we tested on the c1iscex models.
To do so:
IFO alignment needs to be redone, but at least we now have an (admittedly roundabout) way of getting testpoints. Did a quick check for "nan-s" on the ASC screen, saw none. So I am re-enabling watchdogs for all optics.
GV 23 August 9am: Last night, I re-aligned the TMs for single arm locks. Before the model restarts, I had saved the good alignment on the EPICS sliders, but the gain of x3 on the coil driver filter banks has to be manually turned on at the moment (i.e. the safe.snap file has them off). ALS noise looked good for both arms, so just for fun, I tried transitioning control of both arms to ALS (in the CARM/DARM basis as we do when we lock DRFPMI, using the Transition_IR_ALS.py script), and was successful.
Here is what was done (Jamie will correct me if I am mistaken).
So while we are in a better state now, the problem isn't fully solved.
Comment: seems like there is an in-built timeout for testpoints opened with DTT - if the measurement is inactive for some time (unsure how much exactly but something like 5mins), the testpoint is automatically closed.
I surveyed the lab today to see what we may need to buy for the AS laser setup.
NPRO 200 mW + Driver
Faraday Isolator from cabinet
ISOMET Model 1201E: This is a free space AOM I found in the modulator cabinet. It needs to be driven at 40 MHz (to be confirmed) with ~6 W of electrical power. For a 500 micron beam it can allegedly achieve rise times of '93' [units not specified, could this be nanoseconds?]. I did not find a dedicated driver for it; however, there was a 5 W Mini-Circuits amplifier ZHL-5W-1 in the RF cabinet and a switch ZSDR-230, which has a typical switch time of 2 microseconds, though I'm not sure how this translates to rise/fall times of the deflected power. It seems we have everything to set this up, so we'll know by the end of the week if we can use a combination of these things or if we need to buy additional driver electronics.
New Focus model 4004 broadband phase modulator which is labeled as dusty, and in fact quite dirty when looking through. We should attempt to clean this thing and maybe we can use it here or at the ends.
Probably all the optics we need for the PSL table setup.
Beat PD: How about one of these: EOT ET-3000A? I didn't find a broadband PD for the beat with the PSL.
Fiber Stuff: coupler & polarization maintaining fiber 20m & collimator. There are a couple options here, which we can discuss in the meeting.
Faraday Isolator: If we want to inject P-polarization. If S is okay we can use a polarizing plate beamsplitter instead.
Possibly some large lenses for mode-matching to IFO (TBD)
I had some trouble getting the daqd processes up and running again using Jamie's instructions.
With Jamie's help however, they are back up and running now. The problem was that the mx infrastructure didn't come back up on its own. So prior to running sudo systemctl restart daqd_*, Jamie ran sudo systemctl start mx. This seems to have done the trick.
c1iscey was still showing red fields on the CDS overview screen so Jamie did a soft reboot. The machine came back up cleanly, so I restarted all the models. But the indicator lights were still red. Apparently the mx processes weren't running on c1iscey. The way to fix this is to run sudo systemctl start mx_stream. Now everything is green.
Now we are going to work on trying the fix Rolf suggested on c1iscex.
It turns out the problem was just a bent pin on the SCSI cable, likely from having to stretch things a bit to reach optimus from the RAID unit.
I hooked it up to megatron, and it was automatically recognized and mounted.
I had to turn off the new FB machine and remove it from the rack to be able to access megatron though, since it was just sitting on top. FB needs a rail to sit on!
At a cursory glance, the filesystem appears intact. I have copied over the archived DRFPMI frame files to my user directory for now, and Gautam is going to look into getting those permanently stored on the LDAS copy of 40m frames, so that we can have some redundancy.
Also, during this time, one of the HDDs in the RAID unit failed its SMART tests, so the RAID unit wanted it replaced. There were some spare drives in a little box directly under the unit, so I've installed one and am currently incorporating it back into the RAID.
There are two more backup drives in the box. We're running a RAID 5 configuration, so we can only lose one drive at a time before data is lost.
Attachment #1 shows the results of my measurements tonight (SR785 data in Attachment #2). Both loops have a UGF of ~10kHz, with ~55 degrees of phase margin.
Excitation was injected via SR560 at the PDH error point, amplitude was 35mV. According to the LED indicators on these boxes, the low frequency boost stages were ON. Gain knob of the X end PDH box was at 6.5, that of the Y end PDH box was at 4.9. I need to check the schematics to interpret these numbers. GV Edit: According to this elog, these numbers mean that the overall gain of the X end PDH box is approx. 25dB, while that of the Y end PDH box is approx. 15dB. I believe the Y end Lightwave NPRO has an actuator discriminant ~5MHz/V, while the X end Innolight is more like 1MHz/V.
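For reference, here is a minimal sketch of how the UGF and phase margin can be read off a measured OLTF; the array below is a synthetic placeholder, not the SR785 data.

import numpy as np

# placeholder OLTF: 1/f-like loop with ~10 kHz UGF and a fixed phase, just for illustration
freq = np.logspace(2, 5, 500)
oltf = (1e4 / freq) * np.exp(1j * np.deg2rad(-125))

mag = np.abs(oltf)
idx = np.argmin(np.abs(mag - 1.0))                 # point closest to unity gain
ugf = freq[idx]
pm  = 180.0 + np.rad2deg(np.angle(oltf[idx]))      # phase margin [deg]
print(f"UGF ~ {ugf:.0f} Hz, phase margin ~ {pm:.0f} deg")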
Not sure what to make of the X PDH loop measurement being so much noisier than the Y end, I need to think about this.
More detailed analysis to follow.
I am now going to measure the OLTFs of both green PDH loops to check that the overall loop gain is okay, and also check the measurement against EricQ's LISO model of the (modified) AUX green PDH servos. Results to follow.
I worked a little bit on the Y arm ALS today.
Some weeks ago, I had moved some of the Green steering optics on the PSL table around, in order to flip some mirror mounts and try and get angles of incidence closer to ~45deg on some of the steering mirrors. As a result of this work, I can see some light on the GTRY CCD when the X green shutter is open. It is unclear if there is also some scattered light on the RFPDs. I will post pictures + a more detailed investigation of the situation on the PSL table later, there are multiple stray green beams on the PSL table which should probably be dumped.
As I was writing this elog, I saw the X green lock drop abruptly. During this time, the X arm stayed locked to the IR, and the Y arm beat on the control room network analyzer did not jump (at least not by an amount visible to the eye). Toggling the X end shutter a few times, the green TEM00 lock was re-acquired, but the beatnote has moved on the control room analyzer by ~40MHz. On Friday evening however, the X green lock held for >1 hour. Need to keep an eye on this.
In case you want to use it, I had profiled the Lightwave NPRO sometime back, and we were even using it as the AUX X laser for a short period of time.
As for using the AS laser for mode spectroscopy: don't we want to match the beam into the cavity as best as possible, and then use some technique to disturb the input mode (like the dental tooth scraper technique from Chris Mueller's thesis)?
Johannes and I did an arm scan of the X arm today (arm controlled with ALS, monitoring IR transmission) - only 2 IR FSRs were scanned, but there should be sufficient information in there to extract the modulation depth and mode matching - can we use Kaustubh's/Naomi's code? The Y arm ALS needs to be touched up, so I don't have a Y arm scan yet. Note that to get a good arm scan measurement, the High Gain Thorlabs PD should be used as the transmission PD.
There are three methods we (will soon) have available to evaluate the round-trip dissipative losses in the arms that do not suffer from the ITM loss dominance:
The DC method comparing reflectivities has been used in the past and is relatively easy to do. After the recent vacuum troubles the first step should be to re-perform these as CDS permits (needs some ASS functionality and of course the MC to behave). It wouldn't hurt to know the parameters this depends on, aka mode overlap and modulation depths with better certainty. Maybe the SURF scripts for mode-spectroscopy can be applied?
With the new CCD cameras calibrated, pre-vent we can determine the magnitude of the large-angle scatter loss (assuming isotropic scatter) of ETMX and possibly ETMY. Can we look past ETMX/ETMY from the viewports? Then we can probably also look at the small angle scatter of ITMX and ITMY. If not, once we open one of the chambers there's the option of installing mirrors as close as possible to the main beam path. The easiest is probably to look at ITMX, since there is plenty of space in the XEND chamber, and the camera is already installed.
This requires a lot of up-front work. We decided to use the spare 200 mW NPRO. It will be placed on the PSL table and injected into an optical fiber, which terminates on the AS table. The free-space beam there then needs to be sort-of mode-matched into the SRC ("sort-of" because mode-spectroscopy). We want to be able to phaselock this secondary beam to the PSL with at least a couple of kHz bandwidth and also completely extinguish the beam on time-scales of a few microseconds. We will likely need to purchase a few components that we cannot salvage from other labs; I'm still going through the inventory and will know more soon (more detailed post to follow). We need to settle on the polarization we want to send in from the back.
At Rolf/Rich Abbott's request, we performed a check of the UPS today.
Steve believed that the UPS was functioning as it should, and the recent accidental vent was because the UPS batteries were insufficiently charged when the test was performed. Today, we decided to try testing the UPS.
We first closed V1, VM1 and VA6 using the MEDM screen. We prepared to pull power on all these valves by loosening the power connections (but not detaching them). [During this process, I lost the screw holding the power cord fixed to the gate valve V1 - we are looking for a replacement right now but it seems to be an odd size. It is cable tied for now.]
The battery charge indicator LEDs on the UPS indicated that the batteries were fully charged.
Next, we hit the "Test" button on the UPS - it has to be held down for ~3 seconds for the test to be actually initiated, seems to be a safety feature of the UPS. Once the test is underway, the LED indicators on the UPS will indicate that the loading is on the UPS batteries. The test itself lasts for ~5seconds, after which the UPS automatically reverts to the nominal configuration of supplying power from the main line (no additional user input is required).
In this test, one of the five battery charge indicator LEDs went off (5 ON LEDs indicate full charge).
So on the basis of this test, it would seem that the UPS is functioning as expected. It remains to be investigated if the various hardware/software interlocks in place will initiate the right sequence of valve closures when required.
Never hit O on the Vacuum UPS!
Note: the " all off " configuration should be all valves closed ! This should be fixed now.
In case of emergency you can close V1 by disconnecting its actuating power as shown in Atm3, provided you have pneumatic pressure of 60 PSI.
In the aftermath of the accidental vent, it looks like the RGA was shutdown.
We followed the instructions in this elog to restart the RGA.
Seems to be working now, Steve says we just need to wait for it to warm up before we can collect a reliable scan.
We have a good RGA scan now. There had been no scan for 3 months.
On Friday, I cleaned up the circuit so that there are only three connections needed (+15V, -15V, GND) and a BNC connector for reading the output. Today, I added in bypass capacitors. The small yellow ones are 0.1 microF ceramic, and the large ones are 100 microF electrolytic. They are used to stabilize the +15V and -15V inputs to the OP amp and minimize fluctuations, since it doesn't have a regulator for stability. I have also attached the circuit diagram for the OP amp only, where 1 are the electrolytic and 2 are the ceramic. The temperature is still about 2 degrees off, but if that difference is constant for all temperatures in our range we can just calibrate it later.
Here is a helpful link on bypass capacitors (thanks to Kevin for sending it to me).
As a note, the electrolytic capacitors do have a polarity, so it is important to place them correctly (the negative side is towards the lower voltage potential, and not always towards ground).
Got it to work. One of the connections was faulty. I decided to check the temperature measured against a thermometer. The sensor showed 26.1 C, but the thermometer showed 25.8 C after I let them both cool down after heating them up. The temperature of the thermometer was dropping at the time of measurement, but the temperature of the sensor was not. This is still a rough version of the final sensor, so I'm not sure what exactly causes this discrepancy.
Tried taking the circuit from the breadboard to the PCB. I attached all the components to adapters that would allow them to be connected to the PCB. From the first picture, the first component is AD586, the second is AD590, and the third is LT1012, along with a resistor across it. I then soldered the connections between the components, as can be seen in the second picture. When I tested out this version of the circuit by hooking it up to the DC source, I got a reading of ~-15V. I will have to check all the connections to make sure there is contact where there should be one, and no contact where there shouldn't be. I had issues attaching the tiny AD590 and LT1012 to its adaptor, so the issue may lie there as well. I'll also check that each component is in working order as well.
Once I figure out where my error is, my plan is to build two more of these and place a metal object such that it contacts only the surface of the AD590s. This would allow me to compare the three values to the actual temperature of the metal, which would then tell me how accurate this setup is.
Note on the resistor: I measured all the resistors and chose three that had exactly 10.00k Ohm. The voltage detected is dependent on the resistor, so if we are to take three identical copies, I ensured that there would be no error due to the resistors being a little different.
They are synchronised tiny glitches. They are not mechanical.
My motivation tonight was to get an up-to-date spectrum of a calibrated measurement of the out-of-loop displacement of an arm locked on ALS (using the PDH signal as the out-of-loop sensor) to compare the performance of ALS control noise with the Izumi et al green locking paper.
I was able to fish out the PSD from the paper from the 40m svn, but the comparison as plotted looks kind of fishy. I don't see why the noise from 10-60Hz should be so different/worse. We updated the POX counts to meters conversion by looking at the Hz-calibrated ALSX signal and a ~800Hz line injected on ETMX.
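For reference, a minimal sketch of the line-ratio calibration described above; channel names, sampling rate and data are placeholders/assumptions, not the actual channels or code.

import numpy as np

FS = 16384                                   # assumed DAQ rate for these channels
# placeholders: simultaneous records of POX [counts] and ALS-calibrated length [m]
# containing a common ~800 Hz line plus noise
t = np.arange(FS * 10) / FS
alsx_m  = 1e-12 * np.sin(2 * np.pi * 800 * t) + 1e-13 * np.random.randn(t.size)
pox_cts = 5.0    * np.sin(2 * np.pi * 800 * t) + 0.5   * np.random.randn(t.size)

f    = np.fft.rfftfreq(t.size, 1 / FS)
line = np.argmin(np.abs(f - 800))            # FFT bin at the injected line
amp_als = np.abs(np.fft.rfft(alsx_m))[line]
amp_pox = np.abs(np.fft.rfft(pox_cts))[line]
cal = amp_als / amp_pox                      # POX calibration in meters per count
print(f"POX calibration ~ {cal:.2e} m/count")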
Leaving LSC mode OFF for now while CDS is still under investigation
Not really related to this work: We saw that the safe.snap file for c1oaf seems to have gotten overwritten at some point. I restored the EPICS values from a known good time, and over-wrote the safe.snap file.
I spent some time today trying to debug this issue.
Jamie and I had opened up the c1sus frontend to try and replace the RFM card before we realized that the problem was in the RCG code generator. During this process, we had disconnected all of the back-panel cabling to this machine (2 ethernet cables, dolphin cable, and RFM cables/fibers). I thought I may have accidentally returned the cables to the wrong positions - but all the status indicator lights indicate that everything is working as it should, and I also confirmed that the cabling is as it is in the pictures of the rack on the wiki page.
Looking at the SimuLink model diagram (see Attachment #1 for example), it looks like (at least some of) these channels are actually on the dolphin network, and not the RFM network (with which we were experiencing problems). This suggests that the problem is something deeper. Although I did see nans in some of the ETMX ASC channels as well, for which the channels are piped over the RFM network. Even more puzzling is that the ASC MEDM screen (Attachment #3) and the SimuLink diagram (Attachment #2) suggest that there is an output matrix in between the input signals and the output angular control signals to the suspensions. As Attachment #4 shows, the rows corresponding to ITMX PIT and YAW are zero (I confirmed using z read <matrixElement>). Attachment #3 shows that the output of all the servo banks except CARM_YAW is zero, but CARM_YAW has no matrix element going to the ITMs (also confirmed with z read <servoOutputChannel>). So 0 x 0 should be 0, but for some reason the model doesn't give this output?
GV Edit: As EricQ just pointed out to me, nan x 0 is still nan, which probably explains the whole issue. Poking a little further, it seems like this is an SDF issue - the SDF table isn't able to catch differences for this hold output channel.
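As a quick sanity check of the nan x 0 point:

python -c "print(float('nan') * 0.0)"    # prints nan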
As I was writing this elog, I noticed that, as mentioned above, the CARM_YAW output was "nan". When I restart the model (thankfully this didn't crash c1lsc!), it seems to default to this state. Opening up the filter module, I saw that the "hold output" was enabled.
All the points above stand - CARM_YAW output shouldn't have been going anywhere as per the output matrix, but it seems to have been responsible? Seems like a bug in any case if a model restarts with a field as "nan".
Anyways the problem seems to have been resolved so I'm going to try locking and dither aligning the arms now.
Rolf mentioned that a simple update could fix several of the CDS issues we are facing (e.g. inability to open up testpoints), but he didn't seem to have any insight into this particular issue. Jamie will try and recompile all the models and then we have to see if that fixes the remaining problems.
I have to check where this signal is coming from, but for now I just turned the "ASC Input" switch off. More investigation to be done, but in the meantime, ASS dither alignment may not be possible.
After consulting with Jamie, I have just disabled all outputs to the suspensions other than local damping loop outputs. I need to figure out how to get this configuration into the safe.snap file such that until we are sure of what is going on, the models start up in this safer configuration.
The JetStor RAID unit that we had been using for frame writing before the fb meltdown has some archived frames from DRFPMI locks that I want to get at. I spent some time today trying to mount it on optimus, with no success.
The unit was connected to fb via a SCSI cable to a SCSI-to-PCI card inside of fb. I moved the card to optimus, and attached the cable. However, no mountable device corresponding to the RAID seems to show up anywhere.
The RAID unit can tell that it's hooked up to a computer, because when optimus restarts, the RAID event log says "Host Channel 0 - SCSI Bus Reset."
The computer is able to get some sort of signals from the RAID unit, because when I change the SCSI ID, the syslog will say 'detected non-optimal RAID status'.
The PCI card is ID'd fine in lspci as "06:01.0 SCSI storage controller: LSI Logic / Symbios Logic 53c1030 PCI-X Fusion-MPT Dual Ultra320 SCSI (rev c1)"
'lsscsi' does not list anything related to the unit
Using 'mpt-status -p', which is somehow associated with this kind of thing, returns the disheartening output:
Checking for SCSI ID:0
Checking for SCSI ID:1
Checking for SCSI ID:2
Checking for SCSI ID:3
Checking for SCSI ID:4
Checking for SCSI ID:5
Checking for SCSI ID:6
Checking for SCSI ID:7
Checking for SCSI ID:8
Checking for SCSI ID:9
Checking for SCSI ID:10
Checking for SCSI ID:11
Checking for SCSI ID:12
Checking for SCSI ID:13
Checking for SCSI ID:14
Checking for SCSI ID:15
Nothing found, contact the author
That's why the Autolocker clears the outputs; we don't want to be holding the offsets from the last ms of lock when it was all messed up. Instead, it would be best to have a slow (~mHz) relief script that takes the WFS controls and puts them onto the MC SUS sliders. This would then re-align the MC to the input beam rather than the input to the MC, which is not the best idea.
Seems like this modification didn't really work. There were several large MC1 glitches, and one of them misaligned MC1 so much that the IMC didn't relock for the last ~6 hours. I re-aligned MC1 manually, and now it is locked fine.
Now that all the CDS overview lights are green, I decided to switch back the coil driver outputs to their original state so that the MC optics could be damped and the IMC relocked. I also restored the static PIT/YAW bias values to their original values.
MC1 has been quiet over the last couple of days, lets see how it behaves in the next few days. In all the glitches I have observed, if the IMC is locked and WFS loops are enabled, the loops are able to correct for the DC misalignment caused by the glitch. But the mcwfs off script is currently set up in such a way that the output history is cleared between IMC locks. I made two copies of the mcwfson/mcwfsoff scripts, called mcwfsunhold/mcwfshold respectively. They live in /opt/rtcds/caltech/c1/scripts/MC/WFS. I've also modified the autolocker script to call these modified scripts, such that when the IMC loses lock, the WFS servo outputs are held, while the input is turned off. The hope is that in this configuration, the autolocker can catch a lock even if there is a glitch on MC1.
I haven't tried locking the arms yet, but I think other IFO work discussed at the meeting (like arm loss estimation / cavity scans etc) can proceed.
The PSL HEPA was running noisily at 100 V; the bearing is wearing out. I turned it down to 30 V, where it is quiet.
Today, with Johannes' help, I cleaned the fiber tips of the photodiodes. The effect of the cleaning was dramatic - see Attachments #1-4, which are X Beat PD, axial illumination, X Beat PD, oblique illumination, Y beat PD, axial illumination, Y beat PD, oblique illumination. They look much cleaner now, and the feature that looked like a scratch has vanished.
The cleaning procedure followed was:
I will repeat this procedure for all fiber connections once I start putting the box back together - I'm almost done with the new box, just waiting on some hardware to arrive.
Today, I borrowed the fiber microscope from Johannes and took a look at the fibers coupled to the PDs. The PD labelled "BEAT PD AUX Y" has an end that seems scratched (Attachments #1 and #2). The scratch seems to be on (or at least very close to) the core. The other PD (Attachments #3 and #4) doesn't look very clean either, but at least the area near the core seems undamaged. The two attachments for each PD corresponds to the two available lighting settings on the fiber microscope.
I have not attempted to clean them yet, though I have also borrowed the cleaning supplies to facilitate this from Johannes. I also plan to inspect the ends of all other fiber connections before re-installing them.
I'm not sure if this has something to do with the model restarts / new RCG, but while I was re-enabling the MC watchdogs, I noticed the RMS sensor voltage channels on ITMX hovering around ~100mV, even though local damping was on (in which configuration I would expect <1mV if everything is working normally). I was confused by this behaviour, and after staring at the ITMX suspension screen for a while, I noticed that the input to the "ASCP" and "ASCY" servos were "-nan", and the outputs were 10^20 cts (see Attachment #1).
Digging a little deeper, I found that the same problem existed on ITMY, ETMX, ETMY, PRM (but not BS or SRM) - reasons unknown for now.
gedit 28 Oct 0026: Seems like this problem is seen at the sites as well. I wonder if the problem is related.
In order to further narrow down the cause of the glitch, we switched the Coil Driver Board --> Satellite box DB(15?) connectors on the coil drivers between MC1 and MC3 coil driver boards. I also changed the static PIT/YAW bias voltages to MC1 and MC3 such that MC-REFL is now approximately back to the center of the CCD monitor.
The remaining issues are:
RFM network is back! Everything green again.
Use of RFM has been turned off in advLigoRTS trunk in favor of the new long-range PCIe networking being developed for the sites. Rolf provided a single-line patch that re-enables it:
controls@c1sus:/opt/rtcds/rtscore/trunk 0$ svn diff
--- src/epics/util/feCodeGen.pl (revision 4447)
+++ src/epics/util/feCodeGen.pl (working copy)
@@ -122,7 +122,7 @@
$diagTest = -1;
$flipSignals = 0;
$virtualiop = 0;
-$rfm_via_pcie = 1;
+$rfm_via_pcie = 0;
$edcu = 0;
$casdf = 0;
$globalsdf = 0;
This patch was applied to the RTS source checkout we're using for the FE builds (/opt/rtcds/rtscore/trunk, which is r4447, and is linked to /opt/rtcds/rtscore/release). The following models that use RFM were re-compiled, re-installed, and re-started:
The re-compiled models now see the RFM cards (dmesg log from c1ioo):
[24052.203469] c1x03: Total of 4 I/O modules found and mapped
[24052.203471] c1x03: ***************************************************************************
[24052.203473] c1x03: 1 RFM cards found
[24052.203474] c1x03: RFM 0 is a VMIC_5565 module with Node ID 180
[24052.203476] c1x03: address is 0xffffc90021000000
[24052.203478] c1x03: ***************************************************************************
This cleared up all RFM transmission error messages.
CDS upstream are working to make this RFM usage switchable in a reasonable way.
We also need to copy chiara's root. What is the best way to get the full image of the root FS?
We may need to restore these root images to a different disk with a different capacity.
Is the dump command good for this?
What's the current backup situation?
Good question. We need to figure something out. fb1 root is on a RAID1, so there is one layer of safety. But we absolutely need a full backup of the fb1 root filesystem. I don't have any great suggestions, other than just getting an external disk, 1T or so, and just copying all of root (minus NFS mounts).
The CDS system has now been moved to a supposedly more stable real-time-patched linux kernel (3.2.88-csp) and RCG r4447 (roughly the head of trunk, intended to be release 3.4). With one major and one minor exception, everything seems to be working:
Issues that have been fixed:
The last entry I found relating to ref cavity was 2011 Aug 19
[johannes, gautam, jamie]
The setting when we found it was "GPS", which seems logical enough. However, when we switched it to "UTC" the time as shown on the front panel was correct, now with "UTC" vertically to the right of the time, and fb1 was then showing the correct GPS time.
From Keith Thorne:
Soooo, "UTC" is the correct mode for the GPS receiver.
Tested to make sure that even when only the AD586 was heated, there was no change in the reading. I did so by placing the AD586 away from the rest of the circuit and blowing hot air only on it. There was, in fact, no change.
I didn't realize that the LT1012 needed an additional input to function. I added +15V and -15V to pins 7 and 4, respectively, and placed a 10k resistor, and the numbers make more sense now. The voltage showed a negative value, but it became more negative as I heated it up (it's negative because of how a transimpedance amplifier works).
I have attached the new setup and the value it shows (~-3V). It became more negative by about 0.4V, which translates to about a 40K increase in temperature, which makes sense.
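For reference, these numbers are consistent with the AD590's nominal 1 uA/K scale factor into the transimpedance resistor:

I_PER_K = 1e-6          # AD590 nominal scale factor [A/K]
R       = 10e3          # transimpedance resistor [ohm]
T       = 300.0         # approximate room temperature [K]
print(I_PER_K * T * R)       # ~3.0 V output magnitude (sign set by the inverting stage)
print(I_PER_K * 40 * R)      # ~0.4 V change for a 40 K temperature rise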
In addition, I have attached an updated sketch of the circuit. I will need to do more testing to determine how accurate this is. The next step would be to calculate how much noise there is currently and figure out how to remove this circuit from the breadboard and use a PCB or something like that for final testing in an insulated container.
The reason I chose AD743 initially for the OP amp is because at low frequencies (which is what we are working with), a FET amp such as AD743 will have a low current noise at high impedance, which is what we have in this case. While a FET amp has high voltage noise compared to other OP amps, the current noise becomes more important at high impedance, so it will work better. According to Zach's graphs, the AD743 is best at high impedances, followed by LT1012.
Decided to try adding in an OP amp just to see if it would work. Added an LT1012 and a 100k resistor to the circuit (I originally wanted to use the AD743 as it seems to be the best choice according to Zach's elog here, but it said they are very precious, so I went with the LT1012 for testing purposes). When heating it with a heat gun, the output voltage went down by a few 0.01 V. The maximum voltage was 0.686 V. A similar thing happened when I switched to a 10k resistor, where the maximum was 0.705 V and it also went down by a few 0.01 V upon heating.
I've attached a few pictures showing the circuit.
For the final packaging/mounting of the sensor to the seismometer, I have thought of two options.
1. Attach circuit to a PCB board and place it inside the can, while leaving the AD590 open to the air inside the can.
2. Attach the AD590 to a copper plate with thermal paste and put it into a pomona box.
If anyone has input on which method is preferred or any additional options that we may have, I would appreciate it.
q = k A dT / x

For copper, k = 401 W/(m K), x = 1.27 mm, A = 2.66x10^-3 m^2 (for the particular copper plate I measured), and dT = 1 K (assumed). Thus the heat transfer will be about 839 J/s.
I'm not completely sure what to do with this yet, but it could help us decide whether the copper plate option will be useful for us.
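For reference, the arithmetic behind the number above:

k  = 401        # thermal conductivity of copper [W/(m K)]
A  = 2.66e-3    # plate area [m^2]
dT = 1.0        # temperature difference [K]
x  = 1.27e-3    # plate thickness [m]
print(k * A * dT / x)   # ~840 W conducted across the plate per kelvin of difference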
The BL-FS300C-PH-LE bulb was replaced after 2,904 hrs. It did not explode this time. The 4 month life period is actually pretty good because this is a cheap $73 bulb. The best, high-priced warranty is 5 months.
PS: a future option is a bulb-less laser projector:
HITACHI LP-WU3500 PROJECTOR $2,549.00
DLP, WUXGA, 3500 LUMENS, HDMI, 20K HR LASER, 5YR WARRANTY, 1.36-2.34:1 LENS, 24/7 DUTY CYCLE
45x72 IMAGE FROM 8' TO 13'10" LENS TO SCREEN, AND AT 10' APPROX. 154FL OF BRIGHTNESS ON A 1.0 GAIN SCREEN
This would give you everything you are requesting, plus a lamp-less design and 5yr warranty. Ground shipping would be free anywhere in the lower 48, and we would not charge sales tax on orders billing/shipping outside of AZ. If you have any questions or if you would like to order... just let me know!
Today we saw a weird issue with the GPS receiver (EndRun Technologies Tempus LX). GPS timing on fb1 (which is handled via IRIG-B connection to the receiver with a spectracom card) was off by +18 seconds. We tried resetting the GPS receiver and it still came up with +18 second offset. To be clear, the GPS receiver unit itself was showing a time on it's front panel that looked close enough to 24-hour UTC, but was off by +18s. The time also said "GPS" vertically to the right of the time.
We started exploring the settings on the GPS receiver and found this menu item:
Clock -> "Time Mode" -> "UTC"/"GPS"/"Local"
From the manual:
The fact that moving to "UTC" fixed the problem, even though that is supposed to remove the leap second correction, might indicate that there's another bug in the symmetricom driver...
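For context, GPS time is ahead of UTC by 18 leap seconds as of 2017, which matches the +18 s offset we observed; a trivial conversion for reference (the constant is the 2017 value and will change with future leap seconds).

GPS_UTC_OFFSET_2017 = 18          # leap seconds between GPS time and UTC as of 2017

def gps_to_utc_seconds(gps_seconds):
    # GPS time runs ahead of UTC, so subtract the accumulated leap seconds
    return gps_seconds - GPS_UTC_OFFSET_2017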
I don't think we can say for sure. I was just talking to EricQ about this, he said the glitches were often seen when changing the alignment offsets when aligning the arm. I am pretty sure I have seen the ETMX alignment change abruptly since the Ruby Standoff replacement (the Oplev spot just slides across the MEDM display rapidly), but I can't find an elog where I've put in details. I also haven't done a whole lot of work with the arm cavities where I would have noticed this problem. There is this test that Eric did, and it didn't throw up any red flags. But the suspension can be well behaved for weeks at a time before this problem pops up again.
There was also the flaky power connection to the timing card on the ETMX expansion chassis which was fixed only recently, after which there has been no systematic investigation of the status of ETMX.
If it is true that these events are caused by strain building up in the suspension wire, I wonder how we can take systematic steps to avoid it. From what I remember of the SOS assembly procedure, the (unglued) standoff is slid along the optic with the wire under slight tension until the wire slips into the groove on the standoff. Then the tension in the wire is adjusted till the optic is pitch balanced and at the desired height. But it is easy to imagine imprinting some torsional stresses in the (40 um?) wire during this process of looping it around under the optic and placing it in the groove. But perhaps this mechanism makes a negligible contribution to the effect we are seeing, and some other mechanism is responsible in this case.
We used to have similar suspension excursion at ETMX. This was the motivation to replace the stand-offs from Al ones to ruby ones. Did the replacement solve the issue at ETMX?
I'm upgrading the linux kernel for all the front ends to one that is supposedly more stable and won't freeze when we unload RTS models (linux-image-3.2.88-csp). Since it's a different kernel version it requires rebuilds of all kernel-related support stuff (mbuf, symmetricom, mx, open-mx, dolphin) and all the front end models. All the support stuff has been upgraded, but we're now waiting on the front end rebuilds, which takes a while.
Initial testing indicates that the kernel is more stable; we're mostly able to unload/reload RTS modules without the kernel freezing. However, the c1iscey host seems to be oddly problematic and has frozen twice so far on module unloads. None of the other hosts have frozen on unload (yet), though, so still not clear.
We're now seeing some timing errors between the front ends and daqd, resulting in a "0x4000" status message in the 'C1:DAQ-DC0_*_STATUS' channels. Part of the problem was an issue with the IRIG-B/GPS receiver timing unit, which I'll log in a separate post. Another part of the problem was a bug in the symmetricom driver, which has been resolved. That wasn't the whole problem, though, since we're still seeing timing errors. Working with Jonathan to resolve.
Last week, we were talking about reviving the Fiber ALS box. Right now, it's not in great shape. Some changes to be made:
Previous elog thread about work done on this box: elog11650