ID   Date   Author   Type   Category   Subject
  13291   Tue Sep 5 02:07:49 2017   gautam   Update   LSC   Low Noise DRMI attempt

Summary:

Tonight, I was able to lock the DRMI, turn on the whitening filters for the sensing PDs, and also turn on the coil de-whitening filters for ITMX, ITMY and BS. However, I didn't see the expected improvement in the MICH spectrum between ~50-300 Hz.

Details:

I basically went through the list of tasks I made in the previous elog. Some notes:

  • The UGF servos suggested that I had to lower the SRCL gain. I lowered it from -0.055 to -0.025. OLTF measurement using In1/In2 method suggested UGF ~120Hz. I don't know why this should be. Plot to be uploaded later.
  • Since we aren't actuating on the ITMs, I was able to leave their coils de-whitened all the time.
  • For the BS, it was trickier - I had to play around a little with the "Tolerance" setting in Foton while looking at transients (using DTT, not a scope for now) while switching the filters.
  • This transition isn't so robust yet - but eventually I found a setting that worked, and I was able to successfully turn on the de-whitening thrice tonight (but also failed about the same number of times). [GV Oct 6 2017: Remember that the PD whitening has to be turned on for this transition to be successful - otherwise the RMS from the high frequencies saturates the DAC.]
  • The locks were pretty stable. One was ~10mins, one was ~15mins, and I broke both deliberately because I was out of ideas as to why the part of the MICH error signal spectrum that I thought was due to DAC noise didn't improve.
  • I've made a bunch of shell scripts to help with the various switching steps - but now that I think of it, I should make these Python scripts (a rough sketch of what that might look like follows below).
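
A minimal sketch of what such a Python switching script might look like - this assumes the ezca package (the one Guardian uses) is available on the workstations, and the channel and filter-slot names below are placeholders, not the real ones:

# Sketch: turn on PD whitening first, then engage the BS coil de-whitening path.
# All channel/filter-module names are made up for illustration.
import time
from ezca import Ezca

ez = Ezca(ifo='C1')

# PD whitening must be on first - otherwise the high-frequency RMS saturates the DAC
for pd in ['AS55_I', 'AS55_Q', 'REFL55_I', 'REFL55_Q']:
    ez.switch('LSC-' + pd, 'FM1', 'ON')      # hypothetical whitening filter slot

time.sleep(5)

# then switch the BS coils to the de-whitened analog path
for coil in ['UL', 'UR', 'LL', 'LR']:
    ez.switch('SUS-BS_%sCOIL' % coil, 'FM9', 'OFF')   # FM9 off engages the analog de-whitening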

Attachment #1: Comparison of MICH_ERR with and without the BS de-whitening. Note that the two ITMs have their coils de-whitened in both sets of traces.

Attachment #2: Spectra of MICH output and one of the BS coil outputs in both states. The DAC RMS increases by ~30x when the de-whitening is engaged, but is still well within limits.

So it looks like the switching of paths is happening correctly. The "CDS BIO STATUS" MEDM screen also shows the appropriate bits toggling when I turn the de-whitening on/off. There is no broadband coherence with MCF between 50-300 Hz so it seems unlikely that this could be frequency noise.

Clearly I am missing something. But in any case I have a good amount of data, which may be useful for putting together the post CDS/electronics modification DRMI noise budget. More analysis to follow.

 

Attachment 1: MICH_err_comp.pdf
MICH_err_comp.pdf
Attachment 2: deWhitenedCoil.pdf
deWhitenedCoil.pdf
  13292   Tue Sep 5 09:47:34 2017   Kira   Summary   PEM   heater circuit calculations

I decided to calculate the fluctuation in power that we will have in the heater circuit. The resistors we ordered have a 50 ppm/°C temperature coefficient, and it would be useful to know what kind of fluctuation we would expect. For this, I assumed that the heater itself is an ideal resistor that has no temperature variation. The circuit diagram is found in Kevin's elog here. At saturation, the total resistance (we will have a 1\Omega resistor instead of 6\Omega for our new design) will be R_{tot}=R+R_{h}=1\Omega +24\Omega =25\Omega. Therefore, with a 24V input, the saturation current should be I=\frac{V_{in}}{R_{tot}}=\frac{24V}{25\Omega}=0.96A. The power in the heater should then be (in the ideal case) P=I^{2}R_{h}=22.1184W

Now, in the case where the resistor is not ideal, let's assume the temperature of the resistor changes by 10°C (which is about how much we would like to heat the whole thing). The resistor will then have a new value of R_{new}=R(1+50\times 10^{-6}/^{\circ}C\times 10^{\circ}C)=1.0005\Omega. The new current will then be I_{new}=\frac{V_{in}}{R_{new}+R_{h}}=0.95998A and the new power will be P_{new}=I_{new}^{2}R_{h}=22.1175W. So the difference in power going through the heater is about 0.00088W.

We can use this power difference to calculate how much the temperature of the metal can we wish to heat up will change. \Delta T=\Delta P\times (1/\kappa) /x where \kappa is the thermal conductivity and x is the thickness of the material. For our seismometer, I calculated it to be 0.012K.
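
A quick numerical check of the power part of the above (plain Python, numbers taken straight from this entry):

# Power dissipated in the heater with an ideal vs. temperature-dependent series resistor
V_in = 24.0      # input voltage, V
R_h  = 24.0      # heater resistance, ohm
R    = 1.0       # series resistor, ohm
tc   = 50e-6     # resistor tempco, 1/degC (50 ppm/degC)
dT   = 10.0      # assumed temperature change, degC

I0 = V_in / (R + R_h)        # 0.96 A
P0 = I0**2 * R_h             # 22.1184 W

R_new = R * (1 + tc * dT)    # 1.0005 ohm
I1 = V_in / (R_new + R_h)    # 0.95998 A
P1 = I1**2 * R_h             # 22.1175 W

print(P0 - P1)               # ~8.8e-4 W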

  13293   Tue Sep 5 14:41:58 2017   gautam   Update   CDS   NDS2 server restarted on megatron

I was unable to download data using nds2. Gabriele had reported similar problems a week ago but I hadn't followed up on this.

I repeated steps 5-7 from elog 13161, and now it seems that I can get data from the nds2 servers again. Unclear why the nds2 server had to be restarted. I wonder if this is somehow related to the mysterious acromag EPICS server tmux session dropout.

  13294   Tue Sep 5 16:37:47 2017   Gabriele   Summary   LSC   Improved PRMI deep learning reconstruction

This is an update on the results already presented earlier (refer to elog 13274 for more introductory details). I significantly improved the results with the following tricks:

  • I retuned the demodulation phase of AS55, this time ensuring that the (alleged) MICH motion is visible mostly in Q when crossing a carrier resonance. Further fine tuning of the phases will be possible once we have a measurement of the length optical matrix.

  • I fine tuned the network by training it again using the real data. The idea is the following. I started with the network trained on the simulated data, and froze the parameters of the input recurrent layers. I fed the real signals to the network, computed the reconstructed PRCL/MICH, and fed them to my PRMI model to compute simulated signals. I allowed some of the parameters of the model to vary (especially demodulation phases). I then trained the network again by trying to match the model-predicted signals with the real input signals. I allowed only the parameters of the fully connected layers to vary (mostly for technical reasons; I'm working on re-training the recurrent layers as well).

An example of time domain reconstruction is visible below. It already looks better than the old results:

As before, to better evaluate the performance I plotted averaged values of the real signals as a function of the reconstructed MICH and PRCL positions. The results are compared with simulation below. They match quite well (left: real data, right: simulation expectation).

One thing to better understand is that MICH seems to be somewhat compressed: most of the output values are between -100 and +100 nm, instead of the expected range of -lambda/4 to +lambda/4. The reason is still unclear to me. It might be a bug that I haven't been able to track down yet.

  13295   Tue Sep 5 17:18:17 2017   Kira   Update   PEM   temp sensor update

Today, I stuck on the sensors to a metal block using a flag, rubber bands, and some thermal paste (1st attachment). I then wrapped the whole thing in about 4 layers of insulation and a lot of tape (2nd attachment). The only things leading out of the box were the three connections to the sensors and a thermometer. I then connected the wires to their respective places on the board of the sensor. To get the readings out we would need to use an ADC. Gautam and I checked to make sure the ADC we have inside the lab goes from -10V to 10V so that it would be able to measure the 3V value the sensor typically measures. We then tried to connect all three sensors to a DC source simultaneously, but unfortunately one of them seems to have disconnected somewhere during the process, as it only showed 1.2V instead of 3V. I plan to fix this tomorrow morning so that we can hopefully set this up soon.

Quote:

I took off the AD590 and attached it to two long wires leading out from the board. This will allow us to attach the sensor to a metal block and not have to stick the whole board to it. I have also completed three identical copies of this and it's pretty much ready to be tested. According to Craig and Andrew's elog here, the sensor is very noisy and they added in a low pass filter to fix that, so that's something to consider for the final version of the circuit. I'll test what I have so far and see how that goes. We still need to figure out how to get readings from the sensors.

To attach the sensor to the metal block, I'll use some thermal paste and fasteners. I'll also put a thermometer on the block to record the actual temperature. I'll then wrap it in some insulation we have in the lab and have only some wires leading out of it to make measurements. I'll leave this setup overnight and record the outputs for about a full day. The fluctuations between the sensors will then indicate the noise of each individual sensor.

 

Attachment 1: IMG_20170905_144924.jpg
IMG_20170905_144924.jpg
Attachment 2: IMG_20170905_165042.jpg
IMG_20170905_165042.jpg
  13296   Tue Sep 5 17:52:06 2017   Kira   Update   PEM   temp sensor update

to get the sensors to read the same values they have to be in direct thermal contact with the metal block - there can't be any adapter board in-between

for the 2nd attempt, I also recommend encasing it in a metal block rather than just one side. You can drill some 7-10 mm diameter holes in an aluminum or copper block. Then put the sensors in there and plug it up with some thermal paste.

  13297   Tue Sep 5 23:02:37 2017   gautam   Update   CDS   slow machine bootfest

MC autolocker was not working - PCdrive was railed at its upper rail for ~2 hours judging by the wall StripTool trace. I tried restarting the init processes on megatron, but that didn't fix the problem. The reason seems to have been related to c1iool0 failing - after keying the crate, autolocker came back fine and MC caught lock almost immediately.

Additionally, c1susaux, c1auxex, c1auxey and c1iscaux are also down. I'm not planning on using the IFO tonight so I am not going to reboot these now.

 

  13298   Tue Sep 5 23:13:44 2017   johannes   Update   PSL   PSL table auxiliary NPRO

I used Gautam's mode measurement of the auxiliary NPRO (w=127.3um, z=82mm) for the spacing of the optics on the PSL table for the fiber injection and light modulation. As mentioned in previous posts, for the time being there is no Faraday isolator and no broadband EOM installed, but they're accounted for in the mode propagation and they have space reserved if desired/required/available.

The coupler used for the injection is a Thorlabs F220APC-1064, which allegedly collimates the beam from the fiber type we use to 2.4mm diameter, which I used as the target for the mode calculations. I coupled the first order diffracted beam to a ~60m fiber, which is a tad long, but it was the only fiber I could locate that was long enough. The coupling efficiency from free-space to fiber is 47.5%, and we can currently get up to 63 mW out of the fiber.
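
For reference, a minimal sketch of the Gaussian beam numbers that go into this kind of layout calculation, using the measured waist quoted above and 1064 nm; the 2.4 mm collimated-beam target comes from the Thorlabs spec mentioned in this entry:

import numpy as np

lam = 1064e-9                       # wavelength, m
w0  = 127.3e-6                      # measured waist, m
zR  = np.pi * w0**2 / lam           # Rayleigh range, ~48 mm

def w(z):
    # 1/e^2 beam radius a distance z from the waist
    return w0 * np.sqrt(1 + (z / zR)**2)

theta = lam / (np.pi * w0)          # far-field half-angle divergence, ~2.7 mrad
target_radius = 2.4e-3 / 2          # 2.4 mm diameter collimated beam at the coupler

print(w(0.082), theta, target_radius)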

Tomorrow Steve and I are going to pull the fiber through protective tubing and bring it to the AS port. The next step is then characterizing the beam out of the collimator to match it into the interferometer.

As far as the switching itself is concerned: I confirmed that the exponential decay is still present when looking at the fiber output. I located the DEI Pulser unit in the QIL lab, and also found several more AOMs, including a 200MHz Crystal Technologies one, the same brand as the PSL's, where the ringdown was not observed. According to past elogs, with good polarizers we can expect an extinction ratio of ~200 from the Pockels cell, which should be fine, but it's going to be a tradeoff between switching speed and extinction (if the alternate AOM doesn't show this ringdown behavior).

Attachment 1: PSL_IR.pdf
PSL_IR.pdf
Attachment 2: psl_aux_laser.pdf
psl_aux_laser.pdf
  13299   Wed Sep 6 01:09:11 2017   johannes   Update   Computer Scripts / Programs   New set of loss measurements

I stumbled upon a faster way to stream data from the TDS3014 oscilloscopes to disk, which speeds the loss measurements up by a lot:  ftp://sprite.ssl.berkeley.edu/pub/sharris/MAVEN_LPW_Preamp/109_TDS3014B_control/tds3014b.py
This convenient(!) set of scripts contains a function that parses the scope's native binary format into readable data; acquiring one screenful of data this way takes <1s as opposed to ~20s. I tested it for a bit and concluded that it does what it claims to do, but there's one weirdness: it gets the channel offset wrong. However, this doesn't matter in our measurement because we're subtracting the dark level, which sees the same (wrong) offset. Other than that it seems okay.

So I started a new set of armloss measurements, and since the data acquisition is now much faster, I was able to squeeze a set of 20 individual measurements for each arm into ~30 minutes. This is the procedure I follow when I take these measurements for the XARM (symmetric under XARM <-> YARM):

  1. Dither-align the interferometer with both arms locked. Freeze outputs when done.
  2. Misalign ETMY + ITMY.
  3. ITMY needs to be misaligned further. Moving the slider by at least +0.2 is more than enough to keep the other beam from interfering with the measurement.
  4. Start the script, which does the following:
    1. Resume dithering of the XARM
    2. Check XARM dither error signal rms with CDS. If they're calm enough, proceed.
    3. Freeze dithering
    4. Start a new set of averages on the scope, wait T_WAIT (5 seconds)
    5. Read data (= ASDC power and MC2 trans) from scope and save
    6. Misalign ETMX and wait 5s
    7. Read data from scope and save
    8. Repeat desired amount of times
  5. Close the PSL shutter and measure the PD dark levels

I will write a more comprehensive post describing the data acquisition and processing; for now, let's just look at the results. The "uncertainties" reported by the individual measurements are on the order of 1-2 ppm (~1.9 for the XARM, ~1.3 for the YARM). This accounts for fluctuations of the data read from the scope and uncertainties in mode-matching and modulation depths in the EOM. I made histograms of the 20 datapoints taken for each arm: the standard deviation of the spread is a little over 2ppm. We end up with something like:

XARM: 49.3 +/- 2.1 ppm
YARM: 20.3 +/- 2.3 ppm
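
The numbers above are just the mean and scatter of the 20 individual estimates per arm; as a minimal sketch (with synthetic numbers standing in for the real 20 datapoints):

import numpy as np

# placeholder: 20 individual XARM loss estimates in ppm (the real values come from the scope data)
loss_x = 49.3 + 2.1 * np.random.randn(20)

mean = loss_x.mean()
std  = loss_x.std(ddof=1)            # sample standard deviation of the spread
print("XARM: %.1f +/- %.1f ppm" % (mean, std))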

 

    

Attachment 1: XARM_20170905.pdf
XARM_20170905.pdf
Attachment 2: YARM_20170905.pdf
YARM_20170905.pdf
  13300   Wed Sep 6 23:06:30 2017   gautam   Update   LSC   Coil de-whitening switching investigation

Summary:

Rana suggested checking if the coil de-whitening switching is actually happening in the analog path. I repeated the test detailed here. Attachments #1 and #2 suggest that all the coils for the BS and ITMs are indeed switching.

Details:

  • The motivation behind this test was the following - the analog path switching is done by applying a logic voltage to a switch; since this voltage is common among many switches, the hypothesis was that perhaps individual switches were not getting the required voltage to engage the switching.
  • This time FM9 (simulated de-whitening) and FM10 (inverse de-whitening) in the coil output filter modules were turned off, so as to maintain a flat TF in the digital domain but engage the de-whitened analog path (turning off FM9 is supposed to do this).
  • There is poor coherence in the measurement above 40Hz so the data there should be neglected. It is hard to get a good measurement at higher frequencies because of the pendulum TF + heavy low pass filtering from the analog de-whitening path.
  • But between 10-40Hz, we already see the analog de-whitening TF in the measurement.
  • For comparison, I have plotted the measured pendulum TFs for one of the coils from an earlier test (all the coils were roughly at the same level).

So it would seem that there is some other noise which has a 1/f^2 shape and is at the same level we expected the DAC noise to be at. Rana suggested checking coherence with MC transmission to see if this could be laser intensity noise.

I also want to re-do the actuator calibrations for the vertex optics again before re-posting the revised noise budget.

Attachment 1: BScoils.pdf
BScoils.pdf
Attachment 2: ITMcoils.pdf
ITMcoils.pdf
  13301   Thu Sep 7 23:09:00 2017   johannes   Update   PSL   PSL table auxiliary NPRO

I brought the DEI Pulser unit and a suitable Pockels cell over from Bridge today (I also found an identical Pockels cell already at the 40m on the SP table, now that I knew what to look for).

I also brought the 200MHz AOM (Crystal Technology 3200-1113) along, which can achieve rise times of 10 ns(!). Before I start setting up the Pockels cell I wanted to try this different AOM and look at its switching behavior. It asks for a much smaller beam (<65 um diam.) than what's currently in the path to the fiber (500 um diam.), although its clear aperture is technically big enough (~1mm diam.). So I still tried, and the result was a somewhat elliptical deflected beam, and the slower decay was again visible after switching the RF input.

I was using the big Fluke function generator for the 200MHz seed signal, a Mini Circuits ZASWA-2-50 switch and a Mini Circuits ZHL-5W-1 amplifier. For the last two I moved two power supplies (+/-5V for the switch and +24V for the amplifier) into the PSL enclosure. I started at low seed power on the Fluke, routing the amplified signal into a 20dB attenuator before measuring it with an RF power meter. The AOM saturates at 2.5W (34 dBm), which I determined is achieved with a power setting on the Fluke of -4 dBm. As expected, this AOM performed faster (~80ns fall time) but I again observed the slower decay.

This struck me as weird and I started swapping components other than the AOM, which I probably should have done before. It turned out that it was the PD I was using (the same PDA10CF Gautam had used for his MC ringdown investigations). When I changed it to a PDA10A (Si diode, 150MHz bandwidth) the slow decay vanished! One last round of crappy screenshots:

   

Rather than proceeding with the Pockels cell, tomorrow I will make the beam in the AOM smaller and hope that that takes care of the ellipticity. If it does: the AOM can theoretically switch on a ~10ns timescale, and the same goes for the switch (5-15ns typical); the amplifier is non-resonant and works up to 500MHz, so it shouldn't be a limiting factor either. If this doesn't work out, we can still have ~100ns switching times with the other AOMs.

  13302   Fri Sep 8 07:54:04 2017   Steve   Update   PEM   M8.1 eq

Nothing tripped. No obvious damage.

Attachment 1: M8.1.png
M8.1.png
  13303   Fri Sep 8 10:22:30 2017   Steve   Update   PEM   temp sensor update

The weight of the SS can with copper liner is 12.2 kg.

Is 1 Amp for the heating jacket going to be enough? We should have some headroom.

Quote:

to get the sensors to read the same values they have to be in direct thermal contact with the metal block - there can't be any adapter board in-between

for the 2nd attempt, I also recommend encasing it in a metal block rather than just one side. You can drill some 7-10 mm diameter holes in an aluminum or copper block. Then put the sensors in there and plug it up with some thermal paste.

 

  13304   Fri Sep 8 12:08:32 2017   Gabriele   Summary   LSC   Good reconstruction of PRMI degrees of freedom with deep learning

Introduction

This is an update of my previous reports on applications of deep learning to the reconstruction of PRMI degrees of freedom (MICH/PRCL) from real free swinging data. The results shown here are improved with respect to elog 13274 and 13294. The training is performed in two steps, the first one using simulated data, and the second one fine tuning the parameters on real data.

First step: training with simulation

This step is exactly the same already described in the previous entries and in my talks at the CSWG and LVC. For details on the DNN architecture please refer to G1701455 or G1701589. Or if you really want all the details you can look at the code. I used the following signals as input to the DNN: POPDC, POP22_Q, ASDC, REFL11_I/Q, REFL55_I/Q, AS55_I/Q. The network is trained using linear trajectories in the PRCL/MICH space, and signals obtained from a model that simulates the PRMI behavior in the plane wave approximation. A total of 150000 trajectories are used. The model includes uncertainties in all the optical parameters of the 40m PRMI configuration, so that the optical signals for each trajectory are actually computed using random optical parameters, drawn from Gaussian distributions with proper mean and width. Also, white random Gaussian sensing noise is added to all signals with levels comparable to the measured sensing noise.

The typical performance on real data of a network pre-trained in this way was already described in elog 13274, and although reasonable, it was not very good.

Second step: training with real data

Real free swinging data is used in this step. I fine tuned the demodulation phases of the real signals. Please note that due to an old mistake, my convention for phases is 90 degrees off, so for example REFL11 is tuned such that PRCL is maximized in Q instead of I. Regardless of this convention confusion, here's how I tuned the phases:

  • REFL11: PRCL is all in Q when crossing the carrier resonance
  • REFL55: PRCL is all in Q when crossing the carrier resonance
  • AS55: MICH is all in Q when crossing the PRCL carrier resonance
  • POP22: signal peaking in Q when crossing carrier or sideband resonances. Carrier resonance crossing gives positive sign

Then I built the following training architecture. The neural network takes the real signals and produces estimates of PRCL and MICH for each time sample. Those estimates are used as the input for the PRMI model, to produce the corresponding simulated optical signals. My cost function is the squared difference of the simulated versus real signals. The training data is generated from the real signals, by selecting 100000 random 0.25s-long chunks: the history of the real signals over the whole 0.25s is used as input, and only the last sample is used for the cost function computation. The weights and biases of the neural network, as well as the model parameters, are allowed to change during the learning process. The model parameters are regularized to suppress large deviations from the nominal values.

One side note here. At first sight it might seem weird that I'm feeding the last sample as input and at the same time using it as the reference for the loss function. However, you have to remember that there is no "direct" path from input to output: instead everything goes through the estimated MICH/PRCL degrees of freedom and the optical model. So this actually forces the network to tune the reconstruction to the model. This approach is very similar to the auto-encoder architectures used in unsupervised feature learning in image recognition.
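
A minimal sketch of how such a training architecture could be wired up - this is not Gabriele's actual code; it assumes a Keras/TensorFlow setup, and the toy_prmi_model below is a trivial, fixed stand-in for the differentiable plane-wave PRMI model, just to show how the loss goes from the input back to the input's last sample:

import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, Model

n_sig, n_hist = 9, 512               # 9 optical signals, 0.25 s of (decimated) history

def toy_prmi_model(dof):
    # Stand-in for the PRMI model: maps (PRCL, MICH) to the 9 optical signals.
    # The real model has tunable optical parameters (demod phases etc.); this one is fixed.
    prcl, mich = dof[:, 0:1], dof[:, 1:2]
    return tf.concat([tf.cos(prcl), tf.sin(prcl), tf.cos(mich), tf.sin(mich),
                      tf.cos(prcl + mich), tf.sin(prcl + mich),
                      tf.cos(prcl - mich), tf.sin(prcl - mich), prcl * 0], axis=1)

x   = layers.Input(shape=(n_hist, n_sig))
h   = layers.LSTM(64, trainable=False)(x)     # recurrent layers frozen (pre-trained on simulation)
h   = layers.Dense(64, activation='relu')(h)  # fully connected layers: these get re-trained
dof = layers.Dense(2)(h)                      # reconstructed (PRCL, MICH)
y   = layers.Lambda(toy_prmi_model)(dof)      # model-predicted optical signals

net = Model(x, y)
net.compile(optimizer='adam', loss='mse')

# training data: random 0.25 s chunks of the *real* signals; the target is the last sample of each chunk
chunks  = np.random.randn(1000, n_hist, n_sig).astype('float32')   # placeholder for real data
targets = chunks[:, -1, :]
net.fit(chunks, targets, epochs=1, batch_size=64)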

Results

After training the network with the two previous steps, I can produce time domain plots like the one below, which show MICH and PRCL signals behaving reasonably well:

To get a feeling of how good the reconstruction is, I produced the 2d maps shown below. I divided the MICH/PRCL plane into 51x51 bins, and averaged the real optical signals with binning determined by the reconstructed MICH and PRCL degrees of freedom. For comparison, the expected simulation results are shown. I would say that reconstructed and simulated results match quite well. It looks like the MICH reconstruction is still a bit "compressed", but this should not be a big issue, since it should still work for lock acquisition.
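
A sketch of how such binned 2D maps can be made (assuming numpy arrays of the reconstructed degrees of freedom and one of the real optical signals; scipy's binned_statistic_2d does the binning and averaging - the arrays below are placeholders):

import numpy as np
from scipy.stats import binned_statistic_2d
import matplotlib.pyplot as plt

# placeholders for the reconstructed DoFs (nm) and one real optical signal, sample by sample
mich_rec = np.random.uniform(-100, 100, 100000)
prcl_rec = np.random.uniform(-300, 300, 100000)
popdc    = np.random.randn(100000)

avg, xedges, yedges, _ = binned_statistic_2d(mich_rec, prcl_rec, popdc,
                                             statistic='mean', bins=51)

plt.pcolormesh(xedges, yedges, avg.T)
plt.xlabel('reconstructed MICH [nm]')
plt.ylabel('reconstructed PRCL [nm]')
plt.colorbar(label='POPDC (binned average)')
plt.show()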

Next steps

There are a few things that can be done to further tune the network. Those are mostly details, and I don't expect significant improvements. However, I think the results are good enough to move on to the next step, which is the on-line implementation of the neural network in the real time system.

  13305   Mon Sep 11 09:47:53 2017   Steve   Update   General   WIMA caps refilled

In-stock WIMA caps refilled to a minimum of 50 pieces each.

Attachment 1: WIMA.png
WIMA.png
  13306   Mon Sep 11 12:40:32 2017   johannes   Update   PSL   PSL table auxiliary NPRO

I changed the PSL table auxiliary laser setup to the 200 MHz AOM and put the light back in the fiber. Coupling efficiency is again ~50%, giving us up to about 75 mW of auxiliary laser light on the AS table. The 90% to 10% fall time of the light power out of the fiber when switched off is 16.5 ns with this AOM on the PDA10A, which will be sufficient for the ringdown measurements.

  13307   Mon Sep 11 12:56:40 2017   johannes   Update   Computer Scripts / Programs   lossmap attempts

I was trying to get a lossmap measurement over the weekend but had some trouble first with the IMC and then with the PMC.

For the IMC: It was a bit too misaligned to catch and maintain lock, but I had a hard time improving the alignment by hand. Fortunately, turning on the WFS quickly once it was locked restored the transmission to nominal levels and made it maintain the lock for longer, but only for several minutes, not enough for a lossmap scan (can take up to an hour). Using the WFS information I manually realigned the IMC, which made locking easier but wouldn't help with staying locked.

For the PMC: The PZT feedback signal had railed and the PMC had been unlocked for 8+ hours. The PMC medm screen controls were generally responsive (I could see the modes on the CCDs changing) but I just couldn't get it locked. c1psl was responding to ping but refusing telnet so I keyed the crate, followed by a burt restore and finally it worked.

After the PMC came back the IMC has already maintained lock for more than an hour, so I'm now running the first lossmap measurements.

  13308   Mon Sep 11 15:58:02 2017   Steve   Update   General   NPRO for repair

This NPRO has a tripping power output.

 

" Hi Eric,

I checked with the Engineer as Vincent is travelling.

“The lasers have serial number below 2000 which we cannot repair them, we only can repair NPRO laser has serial number 2000 or later.”

Thanks,

Betty-Ann Watt

Customer Service Professional
Global Customer Service/Communication & Commercial Optical Products "

www.lumentum.com

 

 

Attachment 1: NPRO_tripping.jpg
NPRO_tripping.jpg
  13309   Mon Sep 11 16:46:09 2017   Steve   Update   PEM   earthquakes
Quote:

I was trying to get a lossmap measurement over the weekend but had some trouble first with the IMC and then with the PMC.

For the IMC: It was a bit too misaligned to catch and maintain lock, but I had a hard time improving the alignment by hand. Fortunately, turning on the WFS quickly once it was locked restored the transmission to nominal levels and made it maintain the lock for longer, but only for several minutes, not enough for a lossmap scan (can take up to an hour). Using the WFS information I manually realigned the IMC, which made locking easier but wouldn't help with staying locked.

For the PMC: The PZT feedback signal had railed and the PMC had been unlocked for 8+ hours. The PMC medm screen controls were generally responsive (I could see the modes on the CCDs changing) but I just couldn't get it locked. c1psl was responding to ping but refusing telnet so I keyed the crate, followed by a burt restore and finally it worked.

After the PMC came back the IMC has already maintained lock for more than an hour, so I'm now running the first lossmap measurements.

Southern Mexico is still shaking... and so are we.

Attachment 1: M5.4eq.png
M5.4eq.png
  13310   Mon Sep 11 23:31:50 2017   johannes   Update   Cameras   post-vent camera capture comparison

The latest pre-vent captures of the test mass face cameras (from before the unintended vent) were taken on June 2nd, 2017. Only exposures for ITMYF, ETMYF, and ETMXF exist in /users/sensoray/SensorayCaptures/. I took new captures for those three after locking the arms and having the dither-alignment on for 5+ minutes (exposures were taken after turning the dithering off). The capture script is choking on ITMXF, saying the channel can't lock on. Maybe that's why there's also no reference image for it. Capturing QUAD3, which shows ITMXF in the lower right corner, works, but we don't have a capture for reference. I also recorded dark fields after closing the PSL shutter. Naturally, these don't subtract out as well for the three-month-old pictures, but it's actually not terrible, and qualitatively one can still compare the subtracted images.

Visually, ITMYF and ETMYF do not show a dramatic difference between then and now. ETMXF, however, does. To get a numerical estimate for the difference in counts, I worked with the subtracted images and placed an aperture about 1.5x the size of the visible beam blob. I summed up the pixel values inside and subtracted the sum of the pixel values of an equally sized area from the upper left corner of the respective image, which looks free of subtraction artifacts and looks qualitatively similar to the background in the central region.
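
A minimal sketch of this estimate (plain numpy on a dark-subtracted image; the aperture center and radius are of course set by hand on the real images, and the array below is a placeholder):

import numpy as np

def aperture_sum(img, yc, xc, r):
    # sum of pixel values inside a circular aperture of radius r centered at (yc, xc)
    yy, xx = np.indices(img.shape)
    mask = (yy - yc)**2 + (xx - xc)**2 <= r**2
    return img[mask].sum()

img = np.random.randn(480, 640)                # placeholder for a dark-subtracted capture

beam = aperture_sum(img, 240, 320, 60)         # aperture ~1.5x the size of the beam blob
bg   = aperture_sum(img,  60,  60, 60)         # equal-area patch from the upper-left corner
print(beam - bg)                               # background-subtracted pixel sum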

The pixel sum has gone up by about 50% between the exposures. I still have to do the same for the YARM optics but don't expect such a large discrepancy. Unfortunately we're missing those ITMXF exposures...

All pictures are organized in this format:

  Pre-vent exposure   | Post-vent exposure
  Pre-vent subtracted | Post-vent subtracted

 

ITMYF

   

   

ETMYF

   

   

ETMXF

   

   

Attachment 11: ETMXF_pre_sub.bmp
  13311   Tue Sep 12 11:44:16 2017   Kira   Update   PEM   temp sensor update

Got it to work. A cable was broken and the AD586 also broke at the same time, so it took a while to find the problem. I had to create a makeshift cable out of three parts, so once I replace it with an actual cable, it will be good to go for a test.

Quote:

Today, I stuck on the sensors to a metal block using a flag, rubber bands, and some thermal paste (1st attachment). I then wrapped the whole thing in about 4 layers of insulation and a lot of tape (2nd attachment). The only things leading out of the box were the three connections to the sensors and a thermometer. I then connected the wires to their respective places on the board of the sensor. To get the readings out we would need to use an ADC. Gautam and I checked to make sure the ADC we have inside the lab goes from -10V to 10V so that it would be able to measure the 3V value the sensor typically measures. We then tried to connect all three sensors to a DC source simultaneously, but unfortunately one of them seems to have disconnected somewhere during the process, as it only showed 1.2V instead of 3V. I plan to fix this tomorrow morning so that we can hopefully set this up soon.

Quote:

I took off the AD590 and attached it to two long wires leading out from the board. This will allow us to attach the sensor to a metal block and not have to stick the whole board to it. I have also completed three identical copies of this and it's pretty much ready to be tested. According to Craig and Andrew's elog here, the sensor is very noisy and they added in a low pass filter to fix that, so that's something to consider for the final version of the circuit. I'll test what I have so far and see how that goes. We still need to figure out how to get readings from the sensors.

To attach the sensor to the metal block, I'll use some thermal paste and fasteners. I'll also put a thermometer on the block to record the actual temperature. I'll then wrap it in some insulation we have in the lab and have only some wires leading out of it to make measurements. I'll leave this setup overnight and record the outputs for about a full day. The fluctuations between the sensors will then indicate the noise of each individual sensor.

 

 

  13312   Fri Sep 15 15:54:28 2017   gautam   Update   CDS   FB wiper script

A wiper script is not yet set up for our new Frame-Builder. The disk usage is ~80% now, so I think we should start running a wiper script that manages overall disk usage by deleting old frame files.

From what I could find on the elog, the way this was done was by running a cron job on FB. There is a perl script, /opt/rtcds/caltech/c1/target/fb/wiper.pl, which, from what I could understand, runs a bunch of du commands on different directories to determine if there is a need to delete any files.

I copied this script over to /opt/rtcds/caltech/c1/target/daqd/wiper.pl. This is the directory in which all the new FB stuff resides. Conveniently, the script has a "dry-run" option, which I tried running on FB1. However, I get the following error message:

Fri Sep 15 15:44:45 PDT 2017
Dry run, will not remove any files!!!
You need to rerun this with --delete argument to really delete frame files
Directory disk usage:
 /frames/trend/minute_rawk
Combined 0k or 0m or 0Gb
Illegal division by zero at ./wiper.pl line 98.


So it would seem that for some reason, the du commands aren't working. From what I could tell, there aren't any directory paths specific to the old FB machine that need to be changed. I believe the script was working prior to the FB disk crash - unfortunately it doesn't look like this script was under version control but I don't think any changes have been made to this script.

Before I go down a Perl rabbit hole, has anyone seen such an error or is aware of some reason why this might not work on the new FB? Am I even using the correct scripts?

  13313   Fri Sep 15 16:00:33 2017   gautam   Update   LSC   Sensing measurement

I've been working on analyzing the data from the DRMI locks last week.

Here are the results of the sensing measurement.

Details:

  1. The sensing measurement is done by using the existing sensing matrix infrastructure to drive the actuators for the various DoFs at specific frequencies (notches at these frequencies are turned on in the control loops during the measurement).
  2. All the analysis is done offline - I just note down the times at which the sensing lines are turned on and then download the data later. The amplitudes of the oscillators are chosen by looking at the LSC PD error signal spectra "live" in DTT, and by increasing the amplitude until the peak height is ~10x above the nominal level around that frequency. This analysis was done on ~600 seconds of data.
  3. The actual sensing elements in the various PDs are calculated as follows:
    • Calculate the Fourier coefficients at the excitation frequency using the definition of the complex DFT in both the LSC PD signal and the actuator signal (both are in counts; see the sketch after this list). Windowing is "Tukey", and the FFT length used is 1 second.
    • Take their ratio
    • Convert to suitable units (in this case V/m) knowing (i) The actuator discriminant in cts/m and (ii) the cts/V ADC calibration factor. Any whitening gain on the PD is taken into account as well.
    • If required, we can convert this to W/m as well, knowing (i) the PD responsivity and (ii) the demodulation chain gain.
    • Most of this stuff has been scripted by EricQ and is maintained in the pynoisesub git repo.
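
A rough sketch of the single-frequency demodulation described above - this is not the actual pynoisesub code, and the calibration numbers at the end are placeholders:

import numpy as np
from scipy.signal import windows

fs, f_line = 16384, 311.1            # sample rate and excitation frequency (example numbers)
nfft = fs                            # 1-second FFT length

def line_coeff(x, f0, fs):
    # complex Fourier coefficient of x at frequency f0, with a Tukey window
    t = np.arange(len(x)) / fs
    w = windows.tukey(len(x))
    return np.sum(w * x * np.exp(-2j * np.pi * f0 * t)) / np.sum(w)

# pd and act are the LSC PD and actuator signals in counts (placeholders here)
pd, act = np.random.randn(60 * fs), np.random.randn(60 * fs)

ratios = np.array([line_coeff(pd[i:i+nfft], f_line, fs) / line_coeff(act[i:i+nfft], f_line, fs)
                   for i in range(0, len(pd) - nfft, nfft)])   # cts/cts, one estimate per segment

act_cal  = 1e-9        # actuator discriminant, m/ct   (placeholder)
adc_cal  = 10.0/2**15  # ADC calibration, V/ct         (placeholder)
wht_gain = 10.0        # PD whitening gain             (placeholder)

sensing_V_per_m = ratios.mean() * adc_cal / wht_gain / act_cal
print(abs(sensing_V_per_m), np.angle(sensing_V_per_m, deg=True), ratios.std())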

The plotting utility is a work in progress - I've basically adapted EricQ's scripts and added a few features like plotting the uncertainties in magnitude and phase of the calculated sensing elements. Possible further stuff to implement:

  • Only plot those elements which have good coherence in the measurement data. At present, the scripts check the coherence and prompt the user if there is poor coherence in a particular channel, but no vetos are done.
  • The uncertainty calculation is done rather naively now - it is just the standard deviation in the Fourier coefficient determined from the various bins. I am told that Bendat and Piersol has the required math. It would be good to also incorporate the uncertainties in the actuator calibration. These are calculated using the python uncertainties package for now.
  • Print a summary of the parameters used in the calculation, as well as sensing elements + uncertainty in cts/m, V/m and W/m, on a separate page.
  • Some aesthetics can be improved - I've had some trouble getting the tick intervals to cooperate so I left it as is for the moment.

Also, the value I've used for the BS actuator calibration is not a measured one - rather, I estimated what it will be by scaling the old value by the same ratio by which the ITM values changed post de-whitening board mods. The ITM actuator coefficients were recently measured here. I will re-do the BS calibration over the weekend.

Noise budgeting to follow - it looks like I didn't set the AS55 demod phase to the previously determined optimal value of -82degrees, I had left it at -42 degrees. To be fixed for the next round of locking.

Attachment 1: DRMI1f_Sep5.pdf
DRMI1f_Sep5.pdf
  13314   Fri Sep 15 17:08:58 2017   gautam   Update   LSC   Coil de-whitening switching investigation

I downloaded a segment of data from the time when the DRMI was locked with the BS and ITM coil driver de-whitening switched on, and looked at the coherence between the MC transmission and the MICH error signal. Attachment #1 doesn't show any broadband high coherence between 60-300 Hz, so intensity noise cannot explain the excess noise across that band.

The DQ channel for the MC transmission is recorded at 1024 Hz, so to calculate the coherence, I had to decimate the 16K MICH data.

Since we have the AOM installed, I suppose we can actually measure the intensity noise coupling to MICH by driving a line in the AOM. 

I also checked for coherence in the 60-300Hz band between MICH/PRCL and MICH/SRCL, and didn't see any appreciable coherence. Need to think about this more.
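
For reference, a sketch of the coherence calculation (assuming the MC transmission channel at 1024 Hz and the MICH error signal at 16 kHz, both as numpy arrays; scipy handles the decimation and the coherence estimate - the arrays below are placeholders):

import numpy as np
from scipy.signal import decimate, coherence

fs_mich, fs_mc = 16384, 1024

mich = np.random.randn(300 * fs_mich)    # placeholder for the 16k MICH error signal
mc   = np.random.randn(300 * fs_mc)      # placeholder for the MC transmission DQ channel

# bring MICH down to the MC transmission rate (16384/1024 = 16)
mich_ds = decimate(mich, 16, ftype='fir', zero_phase=True)

f, Cxy = coherence(mc, mich_ds, fs=fs_mc, nperseg=4 * fs_mc)
mask = (f > 50) & (f < 300)
print(Cxy[mask].max())                   # broadband coherence in the band of interest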

Quote:

 Rana suggested checking coherence with MC transmission to see if this could be laser intensity noise.

Attachment 1: DRMI_IntensityNoise.pdf
DRMI_IntensityNoise.pdf
  13315   Sat Sep 16 10:56:19 2017   rana   Update   LSC   Coil de-whitening switching investigation

The absence of evidence is not evidence of absence.

  13316   Mon Sep 18 15:00:15 2017   rana, gautam   Frogs   Computer Scripts / Programs   gateway PWD change

We implemented the post-SURF-season nodus password change today.

New password can be found at the usual location.

  13317   Mon Sep 18 17:17:49 2017   gautam   Update   CDS   FB wiper script

After trying to debug this issue using the Perl debugger, I concluded that the problem is in the part of the code that splits the output of the "du" command into directory and disk usage. For whatever reason, this isn't working. The version of perl running on the new FB1 machine is 5.20.2, whereas I suspect the version running on the old FB machine was 5.14.2 (which is the version on all the Ubuntu 12 workstations and megatron). Unclear whether downgrading the Perl version is the right way to go.

The FB1 disk is now getting close to full, the usage is up to 85% today.

Quote:

Before I go down a Perl rabbit hole, has anyone seen such an error or is aware of some reason why this might not work on the new FB? Am I even using the correct scripts?

 

  13318   Mon Sep 18 17:30:54 2017   Chris   Update   CDS   FB wiper script

Attached is the version of the wiper script we use on the CryoLab cymac. It works with perl v5.20.2. Is this different from what you have?

Attachment 1: wiper.pl
#!/usr/bin/perl
use File::Basename;

print "\n" .  `date` . "\n";
# Dry run, do not delete anything
$dry_run = 1;

if ($ARGV[0] eq "--delete") { $dry_run = 0; }

print "Dry run, will not remove any files!!!\n" if $dry_run;
... 184 more lines ...
  13319   Mon Sep 18 17:51:26 2017   gautam   Update   CDS   FB wiper script

It is a little different - specifically, the way the splitting of the output of the "du" command into disk usage and directory is different (see Attachment #1). Apart from this, some of the parameters (e.g. what percentage to keep free) are different.

I changed the percentages to match what we had here, and edited a couple of other lines to print out the files that will be deleted. The dry run seemed to work okay, it produced the output below. Not sure why "df -h" reports a different use percentage though...

Since the script seems to be working now, I am going to set it up on FB1's crontab. Thanks, Chris!

controls@fb1:/opt/rtcds/caltech/c1/target/daqd 0$ ./wiper.pl
Mon Sep 18 17:47:06 PDT 2017
Dry run, will not remove any files!!!
You need to rerun this with --delete argument to really delete frame files
Directory disk usage:
/frames/trend/minute_raw 47126124k
/frames/trend/minute 22900668k
/frames/trend/second 760359168k
/frames/full 19337278516k
Combined 20167664476k or 19694984m or 19233Gb
/frames size 25097525144k at 80.36%
/frames is below keep value of 85.00%
Will not delete any files
df reported usage 80.36%
controls@fb1:/opt/rtcds/caltech/c1/target/daqd 0$ df -h
Filesystem                        Size  Used Avail Use% Mounted on
/dev/sda4                         2.0T  1.7T  152G  92% /
udev                               10M     0   10M   0% /dev
tmpfs                              13G  177M   13G   2% /run
tmpfs                              32G     0   32G   0% /dev/shm
tmpfs                             5.0M     0  5.0M   0% /run/lock
tmpfs                              32G     0   32G   0% /sys/fs/cgroup
/dev/sda2                          19G  3.7G   14G  21% /var
/dev/sda1                         461M   65M  373M  15% /boot
/dev/sdb1                          24T   19T  3.5T  85% /frames
192.168.113.104:/home/cds/rtcds   2.0T  1.6T  291G  85% /opt/rtcds
192.168.113.104:/home/cds/rtapps  2.0T  1.6T  291G  85% /opt/rtapps
tmpfs                             6.3G     0  6.3G   0% /run/user/1001
Quote:

Attached is the version of the wiper script we use on the CryoLab cymac. It works with perl v5.20.2. Is this different from what you have?

 

Attachment 1: perlDiff.png
perlDiff.png
  13320   Mon Sep 18 18:40:34 2017   gautam   Update   CDS   FB wiper script

I did a further check on the wiper script by changing the "percent_keep" from 85.0 to 75.0, and running the script in "dry_run" mode again. The script then output to console the names of all the files it would delete in order to free up the required amount of space (but didn't actually delete any files as it was a dry run). Seemed to be sensible.

To set up the cron job, I did the following on FB1:

  • crontab -e opened up the crontab
  • Copied over a script called "wiper.cron" from /opt/rtcds/caltech/c1/target/fb to /opt/rtcds/caltech/c1/target/daqd. This essentially contains a bunch of instructions to run the wiper script with the --delete flag, and write the console output to a log file.
  • Added the following line: 33 3 * * * /opt/rtcds/caltech/c1/target/daqd/wiper.cron. So the cron job should be executed at 3:33AM every day.
  • The cron daemon seems to be running - sudo systemctl status cron.service yields the following output:
    controls@fb1:~ 0$ sudo systemctl status cron.service
    ● cron.service - Regular background program processing daemon
       Loaded: loaded (/lib/systemd/system/cron.service; enabled)
       Active: active (running) since Mon 2017-09-18 18:16:58 PDT; 27min ago
         Docs: man:cron(8)
     Main PID: 30183 (cron)
       CGroup: /system.slice/cron.service
               └─30183 /usr/sbin/cron -f
    Sep 18 18:16:58 fb1 cron[30183]: (CRON) INFO (Skipping @reboot jobs -- not system startup)
    Sep 18 18:17:01 fb1 CRON[30205]: pam_unix(cron:session): session opened for user root by (uid=0)
    Sep 18 18:17:01 fb1 CRON[30206]: (root) CMD (   cd / && run-parts --report /etc/cron.hourly)
    Sep 18 18:17:01 fb1 CRON[30205]: pam_unix(cron:session): session closed for user root
    Sep 18 18:25:01 fb1 CRON[30820]: pam_unix(cron:session): session opened for user root by (uid=0)
    Sep 18 18:25:01 fb1 CRON[30821]: (root) CMD (command -v debian-sa1 > /dev/null && debian-sa1 1 1)
    Sep 18 18:25:01 fb1 CRON[30820]: pam_unix(cron:session): session closed for user root
    Sep 18 18:35:01 fb1 CRON[31515]: pam_unix(cron:session): session opened for user root by (uid=0)
    Sep 18 18:35:01 fb1 CRON[31516]: (root) CMD (command -v debian-sa1 > /dev/null && debian-sa1 1 1)
    Sep 18 18:35:01 fb1 CRON[31515]: pam_unix(cron:session): session closed for user root

     

  • crontab -l on FB1 now shows the following:
    controls@fb1:~ 0$ crontab -l
    # Edit this file to introduce tasks to be run by cron.
    #
    # Each task to run has to be defined through a single line
    # indicating with different fields when the task will be run
    # and what command to run for the task
    #
    # To define the time you can provide concrete values for
    # minute (m), hour (h), day of month (dom), month (mon),
    # and day of week (dow) or use '*' in these fields (for 'any').#
    # Notice that tasks will be started based on the cron's system
    # daemon's notion of time and timezones.
    #
    # Output of the crontab jobs (including errors) is sent through
    # email to the user the crontab file belongs to (unless redirected).
    #
    # For example, you can run a backup of all your user accounts
    # at 5 a.m every week with:
    # 0 5 * * 1 tar -zcf /var/backups/home.tgz /home/
    #
    # For more information see the manual pages of crontab(5) and cron(8)
    #
    # m h  dom mon dow   command
    33 3 * * * /opt/rtcds/caltech/c1/target/daqd/wiper.cron

Let's see if this works.

Quote:

Since the script seems to be working now, I am going to set it up on FB1's crontab. Thanks, Chris!

 

  13321   Tue Sep 19 11:05:26 2017   Steve   Update   VAC   RGA scan at day 334

 

 

Attachment 1: RGAscan334d.png
RGAscan334d.png
  13322   Tue Sep 19 14:00:41 2017   Steve   Update   PEM   earthquakes

No SUS tripped. Seismometers did not see the M5.3?

Quote:

Southern Mexico is still shaking... and so are we.

 

Attachment 1: 7.1MX5.3J.png
7.1MX5.3J.png
  13323   Wed Sep 20 15:49:26 2017   rana   Omnistructure   Computers   new internet

Larry Wallace hooked up a new switch (Brocade FWS 648G) today, which is our 40m lab interface to the outside-world internet. It's faster.

He then, just now, moved the cables that were going to the old switch over to the new one, including NODUS and the NAT Router. CDS machines can still connect to the outside world.

In the next week or two, he'll install a new NAT for us so that we can have high speed comm from CDS to the world.

  13324   Wed Sep 20 16:14:17 2017   gautam   Update   Equipment loan   Impedance test kit borrowed from Downs

I borrowed the HP impedance test kit from Rich Abbott today. The purpose is to profile the impedance of the NPRO PZTs, as part of the AUX PDH servo investigations. It is presently at the X-end. I will do the test in the coming days.
 

  13325   Thu Sep 21 01:32:00 2017   gautam   Update   ALS   AUX X Innolight AM measurement running

[rana,gautam]

We set up a measurement of the AUX X laser AM today. Some notes:

  • PDA 55 that was installed as a power monitor for the AUX X laser has been moved into the main green beam path - it is just upstream of the green shutter for this measurement.
  • AUX X laser power into the doubling crystal was adjusted by rotating HWP upstream of IR Faraday (original angle was 100, now it is 120), until the DC level of the PDA 55 output was ~2.5V on a scope (high impedance).
  • BNC-T was installed at the PZT input of the Innolight - one arm of the T is terminated to ground via 50 ohms. The purpose of this is to always have the output of the power splitter from the network analyzer RF source drive a 50 ohm load.
  • The output of the Green PDH servo to the Innolight PZT was disconnected downstream of the summing Pomona box - it is now connected to one output of a power splitter (borrowed from SR function generator used to drive the PZT) connected to the RF source output of the AG4395.
  • Other output of power splitter connected to input R of AG4395.
  • PDA55 output has been disconnected from CH5 of the AA board. It is connected to input A of the AG4395 via DC block.

Attachment #1 shows a preliminary scan from tonight - we looked at the region 10kHz-10MHz, with an IF bandwidth of 100Hz, 16 averages, and 801 log-spaced frequencies. The aim was to get an idea of where some promising notches in the AM lie, and to do more fine-bandwidth scans around those points. Data + code used to generate this plot in Attachment #2.

Rana points out that some of the AM could also be coming from beam jitter - so to put this hypothesis to test, we will put a lens to focus the spot more tightly onto the PD, repeat the measurement, and see if we get different results.

There were a whole bunch of little illegal things Rana spotted on the EX table which he will make a separate post about.

I am running 40 more scans with the same params for some statistics - should be done by the morning.

Quote:

I borrowed the HP impedance test kit from Rich Abbott today. The purpose is to profile the impedance of the NPRO PZTs, as part of the AUX PDH servo investigations. It is presently at the X-end. I will do the test in the coming days.
 


Update 12:00 21 Sep: Attachment #3 shows schematically the arrangement we use for the AM measurement. A similar sketch for the proposed PM measurement strategy to follow. After lunch, Steve and I will lay out a longish BNC cable from the LSC rack to the IOO rack, from where there is already a long cable running to the X end. This is to facilitate the PM measurement.

Update 18:30 21 Sep: Attachment #4 was generated using Craig's nice plotting utility. The TF magnitude plot was converted to RIN/V by dividing by the DC voltage of the PDA 55 of ~2.3V (the assumption is that there isn't a significant difference between the DC gain and the RF transimpedance gain of the PDA 55 in the measurement band). The right-hand columns are generated by calculating the deviation of individual measurements from the mean value. We're working on improving this utility and its aesthetics - specifically using these statistics to compute coherence; this is a work in progress. Git repo details to follow.

There are only 23 measurements (I was aiming for 40) because of some network connectivity issue due to which the script stalled - this is also something to look into. But this sample already suggests that these measurement parameters give consistent results on repeated measurements above 100kHz.

TO CHECK: PDA 55 is in 0dB gain setting, at which it has a BW of 10MHz (claimed in datasheet).


Some math about relation between coherence \gamma_{xy}(f) and standard deviation of transfer function measurements:

\mathrm{SNR}(f) = \sqrt{\frac{\gamma_{xy}^{2}(f)}{1-\gamma_{xy}^{2}(f)}}

\sigma_{xy}^{2} = \frac{1-\gamma_{xy}^{2}(f)}{2N\gamma_{xy}^{2}(f)}|H(f)|^2  --- relation to variance in TF magnitude. We estimate the variance using the usual variance estimator, and can then back out the coherence using this relation.

\sigma_{\theta_{xy}} = \mathrm{tan}^{-1}\left [ \sqrt{\frac{1-\gamma_{xy}^{2}(f)}{2N\gamma_{xy}^{2}(f)}} \right ] --- relation to variance in TF phase. Should give a coherence profile that is consistent with that obtained using the preceeding equation.

It remains to code all of this up into Craig's plotting utility.
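
Inverting the magnitude relation above gives \gamma_{xy}^{2}(f) = |H(f)|^{2} / \left( |H(f)|^{2} + 2N\sigma_{xy}^{2} \right). A one-liner sketch of what would go into the plotting utility (array contents are placeholders):

import numpy as np

def coherence_from_tf_spread(H_mag, sigma_mag, N):
    # back out gamma_xy^2 from the measured |H|, the estimated standard deviation
    # of |H| across N sweeps, and N, using the relation above
    return H_mag**2 / (H_mag**2 + 2 * N * sigma_mag**2)

# example: 23 sweeps, 10% scatter on the magnitude -> coherence estimate per frequency bin
print(coherence_from_tf_spread(np.array([1.0]), np.array([0.1]), 23))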

Attachment 1: Innolight_AM.pdf
Innolight_AM.pdf
Attachment 2: Innolight_AM.tar.gz
Attachment 3: IMG_7599.JPG
IMG_7599.JPG
Attachment 4: 20170921_203741_TFAG4395A_21-09-2017_115547_FourSquare.pdf
20170921_203741_TFAG4395A_21-09-2017_115547_FourSquare.pdf
  13326   Thu Sep 21 01:55:16 2017   rana   Update   ALS   X End table of Shame

Image #1: No - we do not use magnetic mounts for beam dumps. Use a real clamp. It has to be rigid. "its not going anywhere" is a nonsense statement; this is about vibration amplitude of nanometers.

Image #2: No - we do not use sticky tape to put black glass beam dumps in place ever, anywhere. Rigid dumps only.

Image #3: Please do not ruin our nice black glass with double sticky tape. We want to keep the surfaces clean. This one and a few of the other Mickey Mouse black glass dumps on this table were dirty with fingerprints and so very useless.

Image #4: This one was worst of all: a piece of black glass was sticky taped to the wall. Shameful.

Please do not do any work on this table without elogging. Please never again do any of these types of beam dumping - they are all illegal. Better to not dump beams than to do this kind of thing.

All dumps have to be rigidly mounted. There is no finger contacting black glass or razor dumps - if you do, you might as well throw it in the garbage.

Attachment 1: 20170921_003143.jpg
20170921_003143.jpg
Attachment 2: 20170921_002430.jpg
20170921_002430.jpg
Attachment 3: 20170921_002243.jpg
20170921_002243.jpg
Attachment 4: 20170921_001906.jpg
20170921_001906.jpg
  13327   Thu Sep 21 15:23:04 2017   gautam   Omnistructure   ALS   Long cable from LSC->IOO

[steve,gautam]

We laid out a 45m long BNC cable from the LSC rack to the IOO rack via overhead cable trays. There is ~5m excess length on either side, which have been coiled up and cable-tied for now. The ends are labelled "TO LSC RACK" and "TO IOO RACK" on the appropriate ends. This is to facilitate hooking up the output of the DFD for making a PM measurement of the AUX X laser. There is already a long cable that runs from the IOO rack to the X end.

  13328   Fri Sep 22 18:12:27 2017   gautam   Update   LSC   DAC noise measurement (again)

I've been working on setting up some scripts for measuring the DAC noise.

In all the DRMI noise budgets I've posted, the coil-driver noise contribution has been based on this measurement, which could be improved in a couple of ways:

  • The measurement was made at the output of the AI board - we can make the measurement at the output of the coil driver board, which will be a closer reflection of the actual current noise at the OSEM coils.
  • The measurement was made by driving the DAC with shaped random noise - but we can record the signal to the coils during a lock and make the noise measurement by using awg to drive the coil with this signal, with elliptic bandstops at appropriate frequencies to reveal the electronics noise.
  1. The IN1 signals to the coils aren't DQ-ed, but ideally this is the signal we want to inject into the coil_EXC channel for this measurement - so I re-locked the DRMI a couple of nights ago and downloaded the coil IN1 channel data for ~5mins for the ITMs and BS.
  2. AWGGUI supposedly has a feature that allows you to drive an EXC channel with an arbitrary signal - but I couldn't figure out how to get this working. I did find some examples of this kind of application using the Python awg packages, so I cobbled together some scripts that allow me to drive some channels and place elliptic bandstop filters as required.
  3. I wasted quite a bit of time trying to implement these signals in Python using the available scipy functions, on account of me being a DSP n00b. When designing discrete-time filters, numerical precision errors of course become important. Initially I was trying to do everything in the "Transfer function (numerator-denominator)" basis, but as Rana pointed out, the way to go is using SOSs. Fortunately, this is a simple additional argument to the relevant python functions, after which elliptic bandstop filter design was trivial (see the sketch after this list).
  4. The actual test was done as follows:
    • Save EPICS PIT/YAW offsets, set them to 0, disable Oplev servos, and then shut down optic watchdog once the optic is somewhat damped. This is to avoid the optics getting a large kick when disconnecting the DB15 connector from the coil driver board output.
    • Disconnect above-mentioned DB15 connector from the appropriate coil driver board output.
    • Turn off inputs to coils in filter module EPICs screens. Since the full signal (local damping servo output + Oplev servo output + LSC servo output) to the coil during a DRMI lock will be injected as an excitation, we don't need any other input. 
    • Use scripts (which I will upload to a git repo soon) to set up the appropriate excitation.
    • To measure the spectrum, I used a DB15 breakout board with test-points soldered on and some mini-grabber-to-BNC adaptors, in order to interface the SR785 to the coil driver output. We can use the two input channels of the SR785 to simultaneously measure two coil driver board output channels to save some time.
    • Take a measurement of the SR785 noise (at the appropriate "Input Range" setting) with inputs terminated to estimate the analyzer noise floor.
    • Just for kicks, I made the measurement with the de-whitening both OFF/ON.
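
A sketch of the SOS-based elliptic bandstop design in scipy - the notch band, filter order, and ripple/attenuation values here are placeholders; the real ones depend on where the excitation lines and suspension resonances sit:

import numpy as np
from scipy import signal

fs = 16384                       # model rate

# 4th-order elliptic bandstop, 1 dB passband ripple, 80 dB stopband attenuation, notching 55-65 Hz
sos = signal.ellip(4, 1, 80, [55 / (fs / 2), 65 / (fs / 2)], btype='bandstop', output='sos')

coil_signal = np.random.randn(60 * fs)          # placeholder for the recorded coil IN1 signal
excitation  = signal.sosfilt(sos, coil_signal)  # bandstopped signal to be injected via awg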

I only managed to get in measurements for the BS and ITMX today. ITMY to be measured later, and data/analysis to follow.

The ITMX and BS alignments have been restored after this work in case anyone else wants to work with the IFO.


Some slow machine reboots were required today - c1susaux was down, and later, the MC autolocker got stuck because of c1iool0 being unresponsive. I thought we had removed all dependency of the autolocker on c1iool0 when we moved the "IFO-STATE" EPICS variable to the c1ioo model, but clearly there is still some dependency. To be investigated.

  13329   Sun Sep 24 20:47:15 2017   rana   Update   Computer Scripts / Programs   RF TF Uncertainties

I have made several changes to Craig's script for better pythonism. It's more robust with different libraries and syntaxes, and it makes a tarball by default (w/o a command line flag). These kinds of general util scripts will be going into a general use folder in the git.ligo.org/40m/ team area so that they can be used throughout the LSC.

I don't think we need/want a coherence calculation, so I have not included it. Usually, we use coherence to estimate the uncertainty, but here we estimate the uncertainty directly from the distribution of the repeated sweeps, so coherence seems superfluous.
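
As an illustration of estimating the uncertainty directly from the sweep distribution, here is a minimal numpy sketch, assuming the repeated sweeps have been loaded into a complex array of shape (n_sweeps, n_freq). The file names are hypothetical placeholders, not the actual output of the script:

import numpy as np

tfs = np.load('sweeps.npy')      # hypothetical file: complex TFs, shape (n_sweeps, n_freq)
freqs = np.load('freqs.npy')     # hypothetical file: common frequency vector

mag_db = 20.0 * np.log10(np.abs(tfs))
phase_deg = np.degrees(np.unwrap(np.angle(tfs), axis=1))

# median response and a ~1-sigma band from the 15.9/84.1 percentiles
mag_med = np.median(mag_db, axis=0)
mag_lo, mag_hi = np.percentile(mag_db, [15.9, 84.1], axis=0)
ph_med = np.median(phase_deg, axis=0)
ph_lo, ph_hi = np.percentile(phase_deg, [15.9, 84.1], axis=0)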

Attachment 1: TFAG4395A_21-09-2017_115547_FourSquare.pdf
TFAG4395A_21-09-2017_115547_FourSquare.pdf
Attachment 2: TFAG4395A_21-09-2017_115547.tgz
  13330   Mon Sep 25 17:56:33 2017 johannesUpdateComputer Scripts / Programstransmitted power during lossmap

I had to do a reboot + burt restore of c1psl today. It was unresponsive and I couldn't get the PMC to lock. I also had to slightly realign the PMC, and the IMC was too misaligned for the autolocker to catch lock. Adjusting it manually, it was predominantly MC1 PIT that was off. The YARM locked on a higher-order (10) mode and had to be aligned manually as well.

I left a script running on Donnatella that tilts ETMX and thus moves the beam on ITMX. I'm monitoring the transmitted power to evaluate sane thresholds for the demodulation offsets in a lossmap measurement. The script will return the IFO to normal after it is done, and should take <2 hours to complete (a rough estimate, but it shouldn't take longer than that for ~50 data points).

  13331   Tue Sep 26 13:40:45 2017 gautamUpdateCDSNDS2 server restarted on megatron

Gabriele reported problems with the nds2 server again. I restarted it again.

update: had to do it again at 1730 today - unclear why nds2 is so flaky. Log files don't suggest anything obvious to me...

Quote:

I was unable to download data using nds2. Gabriele had reported similar problems a week ago but I hadn't followed up on this.

I repeated steps 5-7 from elog 13161, and now it seems that I can get data from the nds2 servers again. Unclear why the nds2 server had to be restarted. I wonder if this is somehow related to the mysterious acromag EPICS server tmux session dropout.

 

  13332   Tue Sep 26 15:55:20 2017 gautamUpdateCDS40m files backup situation

Backups of the root filesystems of chiara and nodus are underway right now. I am backing them up to the 1 TB LaCie external hard drives we recently acquired.

I first initialized the drives by hooking them up to my computer and running the setup.app file. After this, plugging the drive into the respective machine and running lsblk, I was able to see the mount point of the external drive. To actually initialize the backup, I ran the following command from a tmux session called ddBackupLaCie:

sudo dd if=/dev/sda of=/dev/sdb bs=64K conv=noerror,sync

Here, /dev/sda is the disk with the root filesystem, and /dev/sdb is the external hard-drive. The installed version of dd is 8.13, and from version 8.21 onwards, there is a progress flag available, but I didn't want to go through the exercise of upgrading coreutils on multiple machines, so we just have to wait till the backup finishes.

We also wanted to do a backup of the root of FB1 - but I'm not sure if dd will work with the external hard drive, because I think it requires the backup disk size (for us, 1TB) to be >= the origin disk size (which on FB1, according to df -h, is 2TB). I'm not sure why the root filesystem of FB1 is so big; I'm checking with Jamie what we expect it to be. Anyways, we have also acquired 2TB HGST SATA drives, which I will use if the LaCie disks aren't an option.

 

  13333   Tue Sep 26 19:10:13 2017 gautamUpdateALSFiber ALS setup neatened

[steve, gautam]

The Fiber ALS box has been installed on the existing shelf on the PSL table. We had to re-arrange some existing cabling to make this possible, but the end result seems okay (to me). The box lid was also re-installed.

Some stuff that still needs to be fixed:

  1. Power supply to ZHL amplifiers - it is coming from a table-top DC supply currently, we should hook these up to the Sorensens.
  2. We should probably extend the corrugated fiber protection tubing for the three fibers all the way up to the shelf. 

Beat spectrum post changes to follow.

Quote:

Is it better to mount the box in the PSL under the existing shelf, or in a nearby PSL rack?

Quote:

 

Further characterization needs to be done, but the results of this test are encouraging. If we are able to get this kind of out of loop ALS noise with the IR beat, perhaps we can avoid having to frequently fine-tune the green beat alignment on the PSL table. It would also be ideal to mount this whole 1U setup in an electronics rack instead of leaving it on the PSL table

 

 

Attachment 1: IMG_7605.JPG
IMG_7605.JPG
  13334   Tue Sep 26 22:11:08 2017 johannesUpdateCameraspost-vent camera capture comparison

I configured the remaining GigE-Camera to work on the 40m network. We currently have 3 operational Basler cameras:

The 120gm's have been assigned the IPs 192.168.113.152 (already configured) and 192.168.113.153 (freshly configured) and have been labeled accordingly. Note that it was not necessary to connect the out-of-the-box camera directly to a dedicated ethernet adapter with its IP set manually to 169.254.0.XXX, as described in earlier posts - a few seconds after connecting the camera to the control room switch (with a PoE adapter to power it), the camera showed up in the configuration software tool, which is launched via

/opt/rtcds/caltech/c1/scripts/GigE/pylon5/bin/./IpConfigurator

and can be assigned the correct static IP.

We have a plethora of 2" tubes for the lens assembly, but not a great variety of focal lengths for 2" lenses. Present with the camera gear were two f=250 mm and one f=150 mm 2" lenses with a NIR broadband AR coating.

To determine the lens positions relative to the sensor, I assumed that the camera we're setting up looks at its test mass from a distance of 1 m. Using the two available focal lengths, we can look for solutions which have reasonable lens separations (<~10 cm) and suitable magnification. We primarily want to image the central mirror area onto a 1/4"-sized sensor, which can be achieved with a magnification of ~1/8.

I chose a lens separation of 6 cm, which gives a theoretical magnification of -0.12 and a sensor-to-second-lens distance of 7.95 cm. I placed the lenses accordingly in the tubes and checked the focusing with Gautam's help.
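
For reference, here is a minimal sketch of the two-lens calculation behind these numbers. The lens ordering (the f=150 mm lens facing the test mass, the f=250 mm lens 6 cm behind it) is my assumption; it is the combination that reproduces the magnification and image distance quoted above:

def thin_lens(f, s_o):
    """Thin-lens image distance and magnification (mm); s_o < 0 for a virtual object."""
    s_i = 1.0 / (1.0 / f - 1.0 / s_o)
    return s_i, -s_i / s_o

f1, f2 = 150.0, 250.0        # assumed lens ordering [mm]
d_obj, d_sep = 1000.0, 60.0  # test mass to lens 1, lens 1 to lens 2 [mm]

s1, m1 = thin_lens(f1, d_obj)
s2, m2 = thin_lens(f2, d_sep - s1)   # image of lens 1 is a virtual object for lens 2

print(s2, m1 * m2)   # ~79.5 mm sensor distance, total magnification ~ -0.12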


It's pretty close to what we would expect. We will do the calibration using the auxiliary laser on the PSL table. For this I temporarily routed a fiber from the PSL enclosure to the SP table. Since the main cable hole is sort of cramped it's going in through a gap near the ceiling instead.  

 

Attachment 1: lens_distance.pdf
lens_distance.pdf
  13335   Wed Sep 27 00:20:19 2017 gautamUpdateALSMore AM sweeps

Attachment #1: Result of AM sweeps with EX laser crystal at nominal operating temperature ~ 31.75 C.

Attachment #2: Tarball of data for Attachment #1.

Attachment #3: Result of AM sweeps with EX laser crystal at higher operating temperature ~ 40.95 C.

Attachment #4: Tarball of data for Attachment #3.


Remarks:

  • Confirmed that PDA 55 is in the "0dB" setting - the actual dial is unmarked, and has 5 states. I guessed that the left-most one is 0dB, and checked that if I twiddled the dial by one state to the right, the DC level on the scope increased by 10dB as advertised. Didn't check all the states.
  • The DC level is ~2.3V on a high-impedance scope, so it will be ~1.15V into a 50ohm load, which is what the DC block presents. The inverse of this value is used to calibrate the vertical axis of the TF measurement to RIN/V (see the short calculation after this list).
  • Input R (split RF source signal) attenuation: 20dB. Input A (PDA55 output) attenuation: 0dB.
  • The main problem is still network hangups when trying to do many sweeps.
  • This seems to persist even when I connect the GPIB box to one of the network switches - so I don't think we can blame the WiFi.
  • Need to explore the possibility of a speedup - it takes >2 hours to run ~50 scans!
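
The vertical-axis calibration mentioned in the second remark amounts to a one-line calculation; a minimal sketch, using the numbers quoted above:

# PDA55 DC level measured on a high-impedance scope, halved into the
# 50-ohm DC block / analyzer input
V_dc_hiZ = 2.3                 # [V]
V_dc_50ohm = V_dc_hiZ / 2.0    # ~1.15 V

# multiply the measured transfer function [V/V] by this factor to get RIN/V
cal = 1.0 / V_dc_50ohm         # ~0.87 1/V
print(cal)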

To-do:

  • Overlay median and uncertainty plots for the two temperature settings. There is a visible difference in both the locations and the depths/heights of various notches/peaks in the AM profile.
  • Repeat the test with a fast focusing lens to focus the beam more tightly on the PD active area, to confirm that the measured AM is indeed due to the PZT drive and not from beam jitter (presently, the spot diameter is ~0.5x the active area diameter, by eye).
  • Get the PM data.
  • Depending on what the PM data looks like, do a more fine-grained scan around some promising AM notches / PM peaks.
Attachment 1: TFAG4395A_26-09-2017_202344_FourSquare.pdf
TFAG4395A_26-09-2017_202344_FourSquare.pdf
Attachment 2: lowTemp.tgz
Attachment 3: TFAG4395A_26-09-2017_231630_FourSquare.pdf
TFAG4395A_26-09-2017_231630_FourSquare.pdf
Attachment 4: highTemp.tgz
  13336   Wed Sep 27 22:25:21 2017 gautamUpdateLSCDAC noise measurement (again)

Attachment #1: Summary of results of measurements made on Friday. There is a lot in this plot, here is a breakdown:

  • Using pyawg, I drove the excitation points of the coil output filter banks with raw time-series data downloaded during a DRMI lock. Today during the meeting, Rana pointed out that we could instead acquire median spectra during a lock (as opposed to mean, since the median is more immune to glitches during the averaging process), and then do the ifft in python to generate time-series data for pyawg (a sketch of that approach follows this list). Another advantage of doing it this way is that we don't need to store a large (~200MB in my case) file of 16k data for numerous channels. But since I already had this file, I decided against changing the methodology for this round of tests. Time-series plots of the signals do not show any large glitches.
  • The SR785 was used in dual-channel mode to acquire spectra from 2 coil driver outputs simultaneously, in the interest of saving time. The input range was set to -32 dBVpk, AC coupled, which was the smallest value that worked for the given signal profile. Spectra were taken from DC-200Hz, with 801 points and 25 averages. The DB15 output of the coil driver board was connected to a DB15 breakout board, and then a BNC-to-Pomona mini-grabber adapter was used to connect to the SR785 input. The newly acquired linear power supplies for the GPIB box mean that spurious 60Hz harmonics were not present. 
  • Initially, I had planned to enable various bandstops from 20Hz-200Hz, to get a more complete profile of the noise. But in the end, I only used two elliptic bandstops (6th order, 60dB stopband attenuation): 60-90Hz, for which data is plotted in red, and 90-200Hz, for which data is plotted in green.
  • I've used the same noise model as I used here, plotted in dashed grey (summed with SR785 noise at the above input range, with input terminated via 50ohm terminator) - but had to tweak the parameters to get the curve to line up with the measurement. It looks like there is considerable variation between DAC channels, and certainly between the ITMX channels and the BS channels as groups.
  • I took the measurement in two conditions - with the coil de-whitening off (left column) and coil de-whitening on (right column). Note that the input to the excitation was acquired at the IN1 of the relevant filter bank, and since the de-whitening happens downstream of this, we don't have to do anything special.
  • In the right column, I have also plotted the LISO modelled noise, which was shown to match well with the measured curve, admittedly only for one channel (for the coil driver alone, so I am not taking into account the noise of the de-whitening board - I will fix this once I dig up that data).
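
For reference, here is a minimal sketch of the spectrum-to-time-series approach Rana suggested (not what was used for this measurement), assuming the median ASD has been exported to frequency/ASD arrays; the sampling rate, duration, and ASD below are placeholder values:

import numpy as np

fs = 16384            # model / excitation sampling rate [Hz] (assumed)
T = 60                # excitation length [s] (assumed)
N = int(fs * T)
f_fft = np.fft.rfftfreq(N, d=1.0 / fs)

# placeholder measured median spectrum; in practice these come from DTT/nds
freqs = np.logspace(0, 3, 200)                    # [Hz]
asd = 1e-3 * (100.0 / np.maximum(freqs, 1.0))     # [counts/rtHz]
asd_interp = np.interp(f_fft, freqs, asd, left=0.0, right=0.0)

# one-sided ASD -> DFT amplitude per bin, with random phases
amp = asd_interp * np.sqrt(fs * N / 2.0)
phase = np.random.uniform(0.0, 2.0 * np.pi, len(f_fft))
spec = amp * np.exp(1j * phase)
spec[0] = 0.0                        # no DC component in the drive

x = np.fft.irfft(spec, n=N)          # time series to load as the awg excitation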

Some remarks:

  1. According to this measurement, the de-whitening filters are the same on the ITMX channels and BS channels. So I don't understand the difference in the right column for BS and ITMX channels.
  2. While there is considerable variation between channels and also between ITMX and BS, there is certainly >6dB of reduction in the DAC noise when the de-whitening is engaged. However, no improvement was seen in the MICH error signal spectrum between 60-300Hz. So we have to continue to investigate other noises that can explain the noise in that band.
  3. Also, the realized improvement in DAC noise by turning on the coil de-whitening seems marginal - the low pass has gain of ~-80dB at 100Hz, but we seem to be hitting some sort of electronics noise in all channels at the level of ~100nV/rtHz (assuming the actual DAC noise doesn't degrade significantly when the digital simulated de-whitening filter is engaged).
  4. It remains to do the test for the ITMY channels.
  5. It would be useful to visualize the incoherent (quadrature) sum of all these channels - this is what should go into the MICH displacement noise budget. To be added (a one-line sketch of the sum follows this list).
  6. I'm currently loading pyawg from my user directory. Need to figure out a place to put this and add it to $PATH.
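
The incoherent sum mentioned in item 5 is just the quadrature sum of the individual channel ASDs; a minimal sketch, assuming the measured spectra are stacked in an array of shape (n_channels, n_freq):

import numpy as np

asds = np.ones((4, 801)) * 1e-7                 # placeholder per-channel ASDs [V/rtHz]
asd_total = np.sqrt(np.sum(asds**2, axis=0))    # incoherent (quadrature) sum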

Data + code for this plot will be attached later.

Attachment 1: coilNoises.pdf
coilNoises.pdf
  13337   Wed Sep 27 23:44:45 2017 gautamUpdateALSProposed PM measurement setup

Attachment #1 is a sketch of the proposed setup to measure the PM response of the EX NPRO. Previously, this measurement was done via a PLL. In this approach, we will need to calibrate the DFD output into units of phase, in order to calibrate the transfer function measurement into rad/V. The idea is to repeat the same measurement technique used for the AM - take ~50 single-average measurements with the AG4395A, and look at the statistics. 
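
A minimal sketch of that calibration step, assuming the demodulated delay-line output follows V = V_pk*cos(2*pi*f_beat*tau) and that we operate at quadrature; the delay tau and peak output V_pk are placeholder values that would have to be measured, and the conversion is only valid for modulation frequencies well below 1/tau:

import numpy as np

tau = 250e-9      # delay-line delay [s] - placeholder (~50 m of cable)
V_pk = 1.0        # peak demodulated output [V] - placeholder

# discriminant slope at quadrature: volts of DFD output per Hz of beat deviation
D = 2.0 * np.pi * tau * V_pk     # [V/Hz]

# converting a measured PZT->DFD transfer function to a PM response:
# a frequency deviation df at modulation frequency f_m is a phase deviation
# of df/f_m radians on the beat
f_m = np.logspace(2, 6, 801)         # measurement frequencies [Hz]
tf_meas = np.ones_like(f_m)          # placeholder measured TF [V/V]
pm_response = tf_meas / D / f_m      # [rad/V]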

Some more notes:

  • Delay line box is passive, just contains a length of cable.
  • IQ Demodulation is done using an aLIGO 1U chassis unit, with the actual demod board electronics being D0902745
  • The RF beatnote amplitude out of the IR beat PD is ~ -8dBm.
  • The ZHL-3A amplifiers have gain of 24dB, so the amplified beat should be ~16dBm
  • At the LSC rack, the amplified beat is split into two - one path goes to the LO input of D0902745 (so at most 13dBm), the other goes through the delay line.
  • On the demod board, the LO signal is amplified with a AP1053, rated at 10dB gain, max output of 26dBm, so the signal levels should be fine for us, even though the schematic says the nominal LO level is 10dBm - moreover, I've ignored cable losses, insertion losses etc so we should be well within spec.
  • The mixer is PE4140. The datasheet quotes LO levels of 17dBm for all the "nominal" tests, we should be within a couple of dBm of this number.
  • There is no maximum value specified for the RF input signal level to the mixer on the datasheet, but I expect it to be <10dBm.
  • We should park the beatnote around 30MHz as this should be well within the operational ranges for the various components in the signal chain.
Attachment 1: IMG_7609.JPG
IMG_7609.JPG
  13338   Thu Sep 28 06:35:44 2017 ChrisUpdateLSCDAC noise measurement (again)

Since you're monitoring two channels simultaneously, you could try subtracting them, as an alternative to carving out bandstops.

  1. Drive both channels with the same time series
  2. Tweak the filter module gains if needed to equalize the analog outputs
  3. Apply SR785 user math to subtract the two channels (or A-B mode of a single input, if you're not using that already?)
  4. Measure the residual, which should be the incoherent part containing DAC + coil driver noise

Subtraction can conceal certain annoying effects (like numerical noise or level crossing glitches) that remain coherent for two identical outputs. It might be worth experimenting with a differential offset or sinusoid, to try to break up that kind of coherence if it exists.

  13339   Thu Sep 28 10:33:46 2017 gautamUpdateCDS40m files backup situation

After consulting with Jamie, we reached the conclusion that the reason why the root of FB1 is so huge is because of the way the RAID for /frames is setup. Based on my googling, I couldn't find a way to exclude the nfs stuff while doing a backup using dd, which isn't all that surprising because dd is supposed to make an exact replica of the disk being cloned, including any empty space. So we don't have that flexibility with dd. The advantage of using dd is that if it works, we have a plug-and-play clone of the boot disk and root filesystem which we can use in the event of a hard-disk failure.

  1. One option would be to stop all the daqd processes, unmount /frames, and then do a dd backup of the true boot disk and root filesystem.
  2. Another option would be to use rsync to do the backup - this way we can selectively copy the files we want and ignore the nfs stuff. I suspect this is what we will have to do for the second layer of backup we have planned, which will be run as a daily cron job. But I don't think this approach will give us a plug-and-play replacement disk in the event of a disk failure.
  3. Third option is to use one of the 2TB HGST drives, and just do a dd backup - some of this will be /frames, but that's okay I guess.

I am trying option 3 now. dd, however, does require that the destination drive size be >= the source drive size - I'm not sure if this is true for the HGST drives. lsblk suggests that the drive size is 1.8TB, while the boot disk, /dev/sda, is 2TB. Let's see if it works.

Backup of chiara is done. I checked that I could mount the external drive at /mnt and access the files. We should still check that we can boot from the LaCie backup disk; we need another computer for that.

nodus backup is still not complete according to the console - there is no progress indicator so we just have to wait I guess.

Quote:

Backups of the root filesystems of chiara and nodus are underway right now. I am backing them up to the 1 TB LaCie external hard drives we recently acquired.

We also wanted to do a backup of the root of FB1 - but I'm not sure if dd will work with the external hard drive, because I think it requires the backup disk size (for us, 1TB) to be >= origin disk size (which on FB1, according to df -h, is 2TB). Unsure why the root filesystem of FB is so big, I'm checking with Jamie what we expect it to be. Anyways we have also acquired 2TB HGST SATA drives, which I will use if the LaCie disks aren't an option.

 

 

  13340   Thu Sep 28 11:13:32 2017 jamieUpdateCDS40m files backup situation

 

Quote:

After consulting with Jamie, we reached the conclusion that the reason why the root of FB1 is so huge is because of the way the RAID for /frames is setup. Based on my googling, I couldn't find a way to exclude the nfs stuff while doing a backup using dd, which isn't all that surprising because dd is supposed to make an exact replica of the disk being cloned, including any empty space. So we don't have that flexibility with dd. The advantage of using dd is that if it works, we have a plug-and-play clone of the boot disk and root filesystem which we can use in the event of a hard-disk failure.

  1. One option would be to stop all the daqd processes, unmount /frames, and then do a dd backup of the true boot disk and root filesystem.
  2. Another option would be to use rsync to do the backup - this way we can selectively copy the files we want and ignore the nfs stuff. I suspect this is what we will have to do for the second layer of backup we have planned, which will be run as a daily cron job. But I don't think this approach will give us a plug-and-play replacement disk in the event of a disk failure.
  3. Third option is to use one of the 2TB HGST drives, and just do a dd backup - some of this will be /frames, but that's okay I guess.

This is not quite right.  First of all, /frames is not NFS.  It's a mount of a local filesystem that happens to be on a RAID.  Second, the frames RAID is mounted at /frames.  If you do a dd of the underlying block device (in this case /dev/sda*), you're not going to copy anything that's mounted on top of it.

What I was saying about /frames is that I believe there is data in the underlying directory /frames that the frames RAID is mounted on top of.  In order to not get that in the copy of /dev/sda4, you would need to unmount the frames RAID from /frames and delete everything from the /frames directory.  This would not harm the frames RAID at all.

But it doesn't really matter because the backup disk has space to cover the whole thing so just don't worry about it.  Just dd /dev/sda to the backup disk and you'll just be copying the root filesystem, which is what we want.
