ID Date Author Type Category Subject
  13317   Mon Sep 18 17:17:49 2017   gautam   Update   CDS   FB wiper script

After trying to debug this issue using the Perl debugger, I concluded that the problem is in the part of the code that splits the output of the "du" command into directory and disk usage. For whatever reason, this isn't working. The version of Perl running on the new FB1 machine is 5.20.2, whereas I suspect the version running on the old FB machine was 5.14.2 (which is the version on all the Ubuntu 12 workstations and megatron). Unclear whether downgrading the Perl version is the right way to go.
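
For reference, here's a minimal Python sketch (not the actual wiper.pl logic; the directory names are just examples) of the kind of du-output splitting that seems to be failing. Running something like this by hand on FB1 is a quick way to check whether the problem is in the split or in the du call itself:

import subprocess

# 'du -sk <dir>' prints "<kilobytes>\t<dir>"; splitting on whitespace should
# yield (usage, directory). If this comes back empty, the "Combined 0k" and
# the division by zero further down follow naturally.
dirs = ["/frames/full", "/frames/trend/second", "/frames/trend/minute"]
usage = {}
for d in dirs:
    out = subprocess.run(["du", "-sk", d], capture_output=True, text=True).stdout
    fields = out.split()
    if len(fields) >= 2:
        usage[fields[1]] = int(fields[0])      # {directory: kilobytes}
print(usage, "combined:", sum(usage.values()), "k")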

The FB1 disk is now getting close to full; the usage is up to 85% today.

Quote:

Before I go down a Perl rabbit hole, has anyone seen such an error or is aware of some reason why this might not work on the new FB? Am I even using the correct scripts?

 

  13316   Mon Sep 18 15:00:15 2017   rana, gautam   Frogs   Computer Scripts / Programs   gateway PWD change

We implemented the post-SURF-season nodus password change today.

New password can be found at the usual location.

  13315   Sat Sep 16 10:56:19 2017   rana   Update   LSC   Coil de-whitening switching investigation

The absence of evidence is not evidence of absence.

  13314   Fri Sep 15 17:08:58 2017   gautam   Update   LSC   Coil de-whitening switching investigation

I downloaded a segment of data from the time when the DRMI was locked with the BS and ITM coil driver de-whitening switched on, and looked at the coherence between the MC transmission and the MICH error signal. Attachment #1 doesn't show any broadband high coherence between 60-300 Hz, so intensity noise cannot explain the noise across that full range.

The DQ channel for the MC transmission is recorded at 1024 Hz, so to calculate the coherence I had to decimate the 16 kHz MICH data.
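
For concreteness, the coherence calculation is along these lines (a minimal sketch; the placeholder arrays stand in for data downloaded with nds2, and the decimation factor follows from the two sample rates):

import numpy as np
from scipy import signal

fs_mich, fs_trans = 16384, 1024                  # MICH at 16k, MC trans DQ at 1024 Hz
mich = np.random.randn(600 * fs_mich)            # placeholder for the downloaded MICH data
mc_trans = np.random.randn(600 * fs_trans)       # placeholder for the MC transmission data

mich_ds = signal.decimate(mich, fs_mich // fs_trans, ftype='fir')   # 16384 Hz -> 1024 Hz
f, coh = signal.coherence(mich_ds, mc_trans, fs=fs_trans, nperseg=4 * fs_trans)
# coh is the magnitude-squared coherence vs. frequency f; broadband values near 1
# between 60 and 300 Hz would implicate intensity noise.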

Since we have the AOM installed, I suppose we can actually measure the intensity noise coupling to MICH by driving a line in the AOM. 

I also checked for coherence in the 60-300Hz band between MICH/PRCL and MICH/SRCL, and didn't see any appreciable coherence. Need to think about this more.

Quote:

 Rana suggested checking coherence with MC transmission to see if this could be laser intensity noise.

Attachment 1: DRMI_IntensityNoise.pdf
  13313   Fri Sep 15 16:00:33 2017   gautam   Update   LSC   Sensing measurement

I've been working on analyzing the data from the DRMI locks last week.

Here are the results of the sensing measurement.

Details:

  1. The sensing measurement is done by using the existing sensing matrix infrastructure to drive the actuators for the various DoFs at specific frequencies (notches at these frequencies are turned on in the control loops during the measurement).
  2. All the analysis is done offline - I just note down the times at which the sensing lines are turned on and then download the data later. The amplitudes of the oscillators are chosen by looking at the LSC PD error signal spectra "live" in DTT, and by increasing the amplitude until the peak height is ~10x above the nominal level around that frequency. This analysis was done on ~600 seconds of data.
  3. The actual sensing elements in the various PDs are calculated as follows:
    • Calculate the Fourier coefficients at the excitation frequency using the definition of the complex DFT in both the LSC PD signal and the actuator signal (both are in counts). Windowing is "Tukey", and the FFT length used is 1 second (a minimal sketch of this single-bin demodulation is shown after this list).
    • Take their ratio
    • Convert to suitable units (in this case V/m) knowing (i) The actuator discriminant in cts/m and (ii) the cts/V ADC calibration factor. Any whitening gain on the PD is taken into account as well.
    • If required, we can convert this to W/m as well, knowing (i) the PD responsivity and (ii) the demodulation chain gain.
    • Most of this stuff has been scripted by EricQ and is maintained in the pynoisesub git repo.
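
As a rough illustration of the single-bin demodulation in step 3 above (this is not the pynoisesub code; the line frequency, sample rate and calibration numbers below are made-up placeholders):

import numpy as np
from scipy.signal import windows

def line_coeff(x, f_line, fs, t_fft=1.0):
    # Complex Fourier coefficient of x at f_line, one value per 1 s Tukey-windowed segment
    n = int(t_fft * fs)
    w = windows.tukey(n)
    lo = np.exp(-2j * np.pi * f_line * np.arange(n) / fs)
    segs = x[: len(x) // n * n].reshape(-1, n)
    return (segs * w * lo).sum(axis=1) / w.sum()

fs, f_line = 16384, 311.1                        # placeholder sample rate and line frequency
pd = np.random.randn(600 * fs)                   # placeholder for the LSC PD signal (counts)
drive = np.random.randn(600 * fs)                # placeholder for the actuator signal (counts)

ratio = line_coeff(pd, f_line, fs).mean() / line_coeff(drive, f_line, fs).mean()
sensing_V_per_m = abs(ratio) * 6.1e-4 * 2.4e9    # (V per PD count) x (drive counts per metre), made-up numbers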

The plotting utility is a work in progress - I've basically adapted EricQ's scripts and added a few features like plotting the uncertainties in magnitude and phase of the calculated sensing elements. Possible further stuff to implement:

  • Only plot those elements which have good coherence in the measurement data. At present, the scripts check the coherence and prompt the user if there is poor coherence in a particular channel, but no vetoes are applied.
  • The uncertainty calculation is done rather naively now - it is just the standard deviation of the Fourier coefficient determined from various bins. I am told that Bendat and Piersol has the required math. It would be good to also incorporate the uncertainties in the actuator calibration. These are calculated using the python uncertainties package for now (a tiny example follows this list).
  • Print a summary of the parameters used in the calculation, as well as sensing elements + uncertainty in cts/m, V/m and W/m, on a separate page.
  • Some aesthetics can be improved - I've had some trouble getting the tick intervals to cooperate so I left it as is for the moment.
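
As a tiny example of the kind of error propagation the uncertainties package handles (the numbers are made up):

from uncertainties import ufloat

ratio = ufloat(3.2e4, 1.5e3)        # PD/actuator Fourier-coefficient ratio (cts/ct), made up
act_cal = ufloat(9.5e-9, 0.5e-9)    # actuator calibration (m/ct), made up
sensing = ratio / act_cal           # cts/m, with the uncertainty propagated automatically
print(sensing)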

Also, the value I've used for the BS actuator calibration is not a measured one - rather, I estimated it by scaling the old value by the same ratio by which the ITM coefficients changed after the de-whitening board mods. The ITM actuator coefficients were recently measured here. I will re-do the BS calibrations over the weekend.

Noise budgeting to follow - it looks like I didn't set the AS55 demod phase to the previously determined optimal value of -82 degrees; I had left it at -42 degrees. To be fixed for the next round of locking.

Attachment 1: DRMI1f_Sep5.pdf
  13312   Fri Sep 15 15:54:28 2017   gautam   Update   CDS   FB wiper script

A wiper script is not yet set up for our new Frame Builder. The disk usage is ~80% now, so I think we should start running a wiper script that manages the overall disk usage and deletes old frame files as needed.

From what I could find on the elog, the way this was done was by running a cron job on FB. There is a Perl script, /opt/rtcds/caltech/c1/target/fb/wiper.pl, which, from what I could understand, runs a bunch of du commands on different directories to determine whether there is a need to delete any files.

I copied this script over to /opt/rtcds/caltech/c1/target/daqd/wiper.pl. This is the directory in which all the new FB stuff resides. Conveniently, the script has a "dry-run" option, which I tried running on FB1. However, I get the following error message:

Fri Sep 15 15:44:45 PDT 2017
Dry run, will not remove any files!!!
You need to rerun this with --delete argument to really delete frame files
Directory disk usage:
 /frames/trend/minute_rawk
Combined 0k or 0m or 0Gb
Illegal division by zero at ./wiper.pl line 98.


So it would seem that for some reason, the du commands aren't working. From what I could tell, there aren't any directory paths specific to the old FB machine that need to be changed. I believe the script was working prior to the FB disk crash - unfortunately it doesn't look like it was under version control, but I don't think any changes have been made to it.

Before I go down a Perl rabbit hole, has anyone seen such an error or is aware of some reason why this might not work on the new FB? Am I even using the correct scripts?

  13311   Tue Sep 12 11:44:16 2017   Kira   Update   PEM   temp sensor update

Got it to work. A cable was broken and the AD586 also broke at the same time, so it took a while to find the problem. I had to create a makeshift cable out of three parts, so once I replace it with an actual cable, it will be good to go for a test.

Quote:

Today, I stuck on the sensors to a metal block using a flag, rubber bands, and some thermal paste (1st attachment). I then wrapped the whole thing in about 4 layers of insulation and a lot of tape (2nd attachment). The only things leading out of the box were the three connections to the sensors and a thermometer. I then connected the wires to their respective places on the board of the sensor. To get the readings out we would need to use an ADC. Gautam and I checked to make sure the ADC we have inside the lab goes from -10V to 10V so that it would be able to measure the 3V value the sensor typically measures. We then tried to connect all three sensors to a DC source simultaneously, but unfortunately one of them seems to have disconnected somewhere during the process, as it only showed 1.2V instead of 3V. I plan to fix this tomorrow morning so that we can hopefully set this up soon.

Quote:

I took off the AD590 and attached it to two long wires leading out from the board. This will allow us to attach the sensor to a metal block and not have to stick the whole board to it. I have also completed three identical copies of this and it's pretty much ready to be tested. According to Craig and Andrew's elog here, the sensor is very noisy and they added in a low pass filter to fix that, so that's something to consider for the final version of the circuit. I'll test what I have so far and see how that goes. We still need to figure out how to get readings from the sensors.

To attach the sensor to the metal block, I'll use some thermal paste and fasteners. I'll also put a thermometer on the block to record the actual temperature. I'll then wrap it in some insulation we have in the lab and have only some wires leading out of it to make measurements. I'll leave this setup overnight and record the outputs for about a full day. The fluctuations between the sensors will then indicate the noise of each individual sensor.

 

 

  13310   Mon Sep 11 23:31:50 2017   johannes   Update   Cameras   post-vent camera capture comparison

The latest captures of the test mass face cameras from before the unintended vent were taken on June 2nd, 2017. Only exposures for ITMYF, ETMYF, and ETMXF exist in /users/sensoray/SensorayCaptures/. I took new captures for those three after locking the arms and having the dither alignment on for 5+ minutes (exposures were taken after turning the dithering off). The capture script is choking on ITMXF, saying the channel can't lock on. Maybe that's why there's also no reference image for it. Capturing QUAD3, which shows ITMXF in the lower right corner, works, but we don't have a capture for reference. I also recorded dark fields after closing the PSL shutter. Naturally, these don't subtract out as well for the three-month-old pictures, but it's actually not terrible, and qualitatively one can still compare the subtracted images.

Visually, ITMYF and ETMYF do not show a dramatic difference between then and now. ETMXF however, does. To get a numerical estimate for the difference in counts, I worked with the subtracted images and placed an aperture about 1.5x the size of the visible beam blob. I summed up the pixel values inside and subtracted the sum of the pixel values of an equally sized area from the upper left corner of the respective image, which looks free of subtraction artifacts and looks qualitatively similar to the background in the central region.
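
A minimal sketch of that bookkeeping (the array below is a placeholder for one of the subtracted captures, and the coordinates/aperture size are made up):

import numpy as np

def aperture_sum(img, cx, cy, r):
    # Sum of pixel values inside a circular aperture of radius r centred at (cx, cy)
    y, x = np.indices(img.shape)
    mask = (x - cx) ** 2 + (y - cy) ** 2 <= r ** 2
    return img[mask].sum(), mask.sum()

img = np.random.rand(480, 640)                   # placeholder for a subtracted capture
beam, n_beam = aperture_sum(img, 320, 240, 60)   # aperture ~1.5x the beam blob
bkg, n_bkg = aperture_sum(img, 60, 60, 60)       # artifact-free patch in the upper left
scatter_counts = beam - bkg * n_beam / n_bkg     # equal-area background subtraction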

The pixel sum has gone up by about 50% between the exposures. I still have to do the same for the YARM optics but don't expect such a large discrepancy. Unfortunately we're missing those ITMXF exposures...

All pictures are organized in this format:

Pre-vent exposure   | Post-vent exposure
Pre-vent subtracted | Post-vent subtracted

 

ITMYF

ETMYF

ETMXF

Attachment 11: ETMXF_pre_sub.bmp
  13309   Mon Sep 11 16:46:09 2017   Steve   Update   PEM   earthquakes
Quote:

I was trying to get a lossmap measurement over the weekend but had some trouble first with the IMC and then with the PMC.

For the IMC: It was a bit too misaligned to catch and maintain lock, but I had a hard time improving the alignment by hand. Fortunately, turning on the WFS quickly once it was locked restored the transmission to nominal levels and made it maintain the lock for longer, but only for several minutes, not enough for a lossmap scan (can take up to an hour). Using the WFS information I manually realigned the IMC, which made locking easier but wouldn't help with staying locked.

For the PMC: The PZT feedback signal had railed and the PMC had been unlocked for 8+ hours. The PMC medm screen controls were generally responsive (I could see the modes on the CCDs changing) but I just couldn't get it locked. c1psl was responding to ping but refusing telnet so I keyed the crate, followed by a burt restore and finally it worked.

After the PMC came back the IMC has already maintained lock for more than an hour, so I'm now running the first lossmap measurements.

Southern Mexico is still shaking..... and so are we.

Attachment 1: M5.4eq.png
  13308   Mon Sep 11 15:58:02 2017   Steve   Update   General   NPRO for repair

This NPRO has a tripping power output.

 

" Hi Eric,

I checked with the Engineer as Vincent is travelling.

“The lasers have serial number below 2000 which we cannot repair them, we only can repair NPRO laser has serial number 2000 or later.”

Thanks,

Betty-Ann Watt

Customer Service Professional
Global Customer Service/Communication & Commercial Optical Products "

www.lumentum.com

 

 

Attachment 1: NPRO_tripping.jpg
  13307   Mon Sep 11 12:56:40 2017   johannes   Update   Computer Scripts / Programs   lossmap attempts

I was trying to get a lossmap measurement over the weekend but had some trouble first with the IMC and then with the PMC.

For the IMC: It was a bit too misaligned to catch and maintain lock, but I had a hard time improving the alignment by hand. Fortunately, turning on the WFS quickly once it was locked restored the transmission to nominal levels and made it maintain the lock for longer, but only for several minutes, not enough for a lossmap scan (can take up to an hour). Using the WFS information I manually realigned the IMC, which made locking easier but wouldn't help with staying locked.

For the PMC: The PZT feedback signal had railed and the PMC had been unlocked for 8+ hours. The PMC medm screen controls were generally responsive (I could see the modes on the CCDs changing) but I just couldn't get it locked. c1psl was responding to ping but refusing telnet so I keyed the crate, followed by a burt restore and finally it worked.

After the PMC came back the IMC has already maintained lock for more than an hour, so I'm now running the first lossmap measurements.

  13306   Mon Sep 11 12:40:32 2017   johannes   Update   PSL   PSL table auxiliary NPRO

I changed the PSL table auxiliary laser setup to the 200 MHz AOM and put the light back in the fiber. Coupling efficiency is again ~50%, giving us up to about 75 mW of auxiliary laser light on the AS table. The 90% to 10% fall time of the light power out of the fiber when switched off is 16.5 ns with this AOM on the PDA10A, which will be sufficient for the ringdown measurements.
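
For reference, the 90%-to-10% number can be pulled out of a scope trace with something like the sketch below (the on/off levels are estimated from the ends of the trace, which assumes the trace starts high and ends low; this is an illustration, not the script actually used):

import numpy as np

def fall_time(t, v, hi=0.9, lo=0.1):
    # 90%-to-10% fall time of a single falling step in v(t)
    v_on, v_off = v[:50].mean(), v[-50:].mean()  # levels before/after the switch
    lvl_hi = v_off + hi * (v_on - v_off)
    lvl_lo = v_off + lo * (v_on - v_off)
    i_hi = np.argmax(v < lvl_hi)                 # first sample below the 90% level
    i_lo = np.argmax(v < lvl_lo)                 # first sample below the 10% level
    return t[i_lo] - t[i_hi]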

  13305   Mon Sep 11 09:47:53 2017   Steve   Update   General   WIMA caps refilled

In-stock WIMA caps were refilled to a minimum of 50 pieces each.

Attachment 1: WIMA.png
  13304   Fri Sep 8 12:08:32 2017   Gabriele   Summary   LSC   Good reconstruction of PRMI degrees of freedom with deep learning

Introduction

This is an update of my previous reports on applications of deep learning to the reconstruction of PRMI degrees of freedom (MICH/PRCL) from real free swinging data. The results shown here are improved with respect to elog 13274 and 13294. The training is performed in two steps, the first one using simulated data, and the second one fine tuning the parameters on real data.

First step: training with simulation

This step is exactly the same as already described in the previous entries and in my talks at the CSWG and LVC. For details on the DNN architecture please refer to G1701455 or G1701589. Or if you really want all the details you can look at the code. I used the following signals as input to the DNN: POPDC, POP22_Q, ASDC, REFL11_I/Q, REFL55_I/Q, AS55_I/Q. The network is trained using linear trajectories in the PRCL/MICH space, and signals obtained from a model that simulates the PRMI behavior in the plane wave approximation. A total of 150000 trajectories are used. The model includes uncertainties in all the optical parameters of the 40m PRMI configuration, so the optical signals for each trajectory are actually computed using random optical parameters, drawn from Gaussian distributions with the proper mean and width. Also, white random Gaussian sensing noise is added to all signals, with levels comparable to the measured sensing noise.

The typical performance on real data of a network pre-trained in this way was already described in elog 13274, and although it was reasonable, it was not very good.

Second step: training with real data

Real free swinging data is used in this step. I fine tuned the demodulation phases of the real signals. Please note that due to an old mistake, my convention for phases is 90 degrees off, so for example REFL11 is tuned such that PRCL is maximized in Q instead of I. Regardless of this convention confusion, here's how I tuned the phases:

  • REFL11: PRCL is all in Q when crossing the carrier resonance
  • REFL55: PRCL is all in Q when crossing the carrier resonance
  • AS55: MICH is all in Q when crossing the PRCL carrier resonance
  • POP22: signal peaking in Q when crossing carrier or sideband resonances. Carrier resonance crossing gives positive sign

Then I built the following training architecture. The neural network takes the real signals and produces estimates of PRCL and MICH for each time sample. Those estimates are used as the input for the PRMI model, to produce the corresponding simulated optical signals. My cost function is the squared difference of the simulated versus real signals. The training data is generated from the real signals by selecting 100000 random 0.25 s long chunks: the history of the real signals over the whole 0.25 s is used as input, and only the last sample is used for the cost function computation. The weights and biases of the neural network, as well as the model parameters, are allowed to change during the learning process. The model parameters are regularized to suppress large deviations from the nominal values.

One side note here. At first sight it might seem weird that I'm feeding in the last sample as input and at the same time using it as the reference for the loss function. However, you have to remember that there is no "direct" path from input to output: instead everything goes through the estimated MICH/PRCL degrees of freedom and the optical model. So this actually forces the network to tune the reconstruction to the model. This approach is very similar to the auto-encoder architectures used in unsupervised feature learning in image recognition.
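
Schematically, the second training step looks something like the sketch below. This is a toy stand-in only: the framework choice (PyTorch), layer sizes and the sine "model" are mine; the real network is recurrent over the 0.25 s history and the real PRMI model is the plane-wave simulation, neither of which is reproduced here:

import torch
import torch.nn as nn

class Reconstructor(nn.Module):
    # Stand-in for the pre-trained DNN: 9 PD signals -> (MICH, PRCL)
    def __init__(self, n_signals=9):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(n_signals, 64), nn.Tanh(), nn.Linear(64, 2))
    def forward(self, x):
        return self.net(x)

def prmi_model(dof, params):
    # Placeholder for the differentiable plane-wave PRMI model:
    # (MICH, PRCL) + optical parameters -> simulated PD signals
    return torch.sin(dof @ params)               # toy stand-in, not the real physics

net = Reconstructor()
params = torch.randn(2, 9, requires_grad=True)   # optical model parameters, also trained
opt = torch.optim.Adam(list(net.parameters()) + [params], lr=1e-3)

real = torch.randn(1000, 9)                      # placeholder for the real PD samples
for _ in range(100):
    opt.zero_grad()
    dof = net(real)                              # reconstructed MICH/PRCL
    sim = prmi_model(dof, params)                # model-predicted optical signals
    loss = ((sim - real) ** 2).mean() + 1e-2 * (params ** 2).mean()   # fit + regularization
    loss.backward()
    opt.step()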

Results

After training the network with the two previous steps, I can produce time domain plots like the one below, which show MICH and PRCL signals behaving reasonably well:

To get a feeling of how good the reconstruction is, I produced the 2d maps shown below. I divided the MICH/PRCL plane into 51x51 bins, and averaged the real optical signals with the binning determined by the reconstructed MICH and PRCL degrees of freedom. For comparison, the expected simulation results are shown. I would say that the reconstructed and simulated results match quite well. It looks like the MICH reconstruction is still a bit "compressed", but this should not be a big issue, since it should still work for lock acquisition.
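
The maps themselves are just a 2D binned average; a sketch, with placeholder arrays standing in for the reconstructed degrees of freedom and one real PD signal:

import numpy as np
from scipy.stats import binned_statistic_2d

mich, prcl = np.random.uniform(-130, 130, (2, 50000))     # reconstructed DOFs (nm), placeholders
sig = np.sin(prcl / 20.0) + 0.1 * np.random.randn(50000)  # placeholder for one real PD signal

stat, x_edges, y_edges, _ = binned_statistic_2d(mich, prcl, sig, statistic='mean', bins=51)
# stat is the 51x51 map of the averaged signal vs. (MICH, PRCL); the same binning applied
# to the simulated signals gives the reference map to compare against.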

Next steps

There are a few things that can be done to further tune the network. Those are mostly details, and I don't expect significant improvements. However, I think the results are good enough to move on to the next step, which is the on-line implementation of the neural network in the real time system.

  13303   Fri Sep 8 10:22:30 2017   Steve   Update   PEM   temp sensor update

The weight of the SS can with copper liner is 12.2 kg.

Is 1 Amp for the heating jacket going to be enough? We should have some headroom.

Quote:

to get the sensors to read the same values they have to be in direct thermal contact with the metal block - there can't be any adapter board in-between

for the 2nd attempt, I also recommend encasing it in a metal block rather than just one side. You can drill some 7-10 mm diameter holes in an aluminum or copper block. Then put the sensors in there and plug it up with some thermal paste.

 

  13302   Fri Sep 8 07:54:04 2017   Steve   Update   PEM   M8.1 eq

Nothing tripped. No obvious damage.

Attachment 1: M8.1.png
  13301   Thu Sep 7 23:09:00 2017   johannes   Update   PSL   PSL table auxiliary NPRO

I brought the DEI Pulser unit and a suitable Pockels cell over from Bridge today (I also found an identical Pockels cell already at the 40m on the SP table, now that I knew what to look for).

I also brought along the 200MHz AOM (Crystal Technology 3200-1113), which can achieve rise times of 10 ns(!). Before I start setting up the Pockels cell I wanted to try this different AOM and look at its switching behavior. It asks for a much smaller beam (<65 um diam.) than what's currently in the path to the fiber (500 um diam.), although its clear aperture is technically big enough (~1mm diam.). So I still tried, and the result was a somewhat elliptical deflected beam, and the slower decay was again visible after switching the RF input.

I was using the big Fluke function generator for the 200MHz seed signal, a Mini Circuits ZASWA-2-50 switch and a Mini Circuits ZHL-5W-1 amplifier. For the last two I moved two power supplies (+/-5V for the switch and +24V for the amplifier) into the PSL enclosure. I started at low seed power on the Fluke, routing the amplified signal into a 20dB attenuator before measuring it with an RF power meter. The AOM saturates at 2.5W (34 dBm), which I determined is achieved with a power setting on the Fluke of -4 dBm. As expected, this AOM performed faster (~80ns fall time) but I again observed the slower decay.

This struck me as weird and I started swapping components other than the AOM, which I probably should have done before. It turned out that it was the PD I was using (the same PDA10CF Gautam had used for his MC ringdown investigations). When I changed it to a PDA10A (Si diode, 150MHz bandwidth) the slow decay vanished! One last round of crappy screenshots:

   

Rather than proceeding with the Pockels cell, tomorrow I will make the beam in the AOM smaller and hope that that takes care of the ellipticity. If it does: the AOM can theoretically switch on ~10ns timescale, same for the switch (5-15ns typical), and the amplifier is non-resonant and works up to 500MHz, so it shouldn't be a limiting factor either. If this doesn't work out, we can still have ~100ns switching times with the other AOMs.

  13300   Wed Sep 6 23:06:30 2017   gautam   Update   LSC   Coil de-whitening switching investigation

Summary:

Rana suggested checking if the coil de-whitening switching is actually happening in the analog path. I repeated the test detailed here. Attachments #1 and #2 suggest that all the coils for the BS and ITMs are indeed switching.

Details:

  • The motivation behind this test was the following - the analog path switching is done by applying some logic voltage to a switch, but if this voltage is common among many switches, the hypothesis was that perhaps individual switches were not getting the required voltage to engage the switching.
  • This time, FM9 (simulated de-whitening) and FM10 (inverse de-whitening) in the coil output filter modules were turned off, so as to maintain a flat TF in the digital domain, but engage the de-whitened analog path (turning off FM9 is supposed to do this).
  • There is poor coherence in the measurement above 40Hz so the data there should be neglected. It is hard to get a good measurement at higher frequencies because of the pendulum TF + heavy low pass filtering from the analog de-whitening path.
  • But between 10-40Hz, we already see the analog de-whitening TF in the measurement.
  • For comparison, I have plotted the measured pendulum TFs for one of the coils from an earlier test (all the coils were roughly at the same level).

So it would seem that there is some other noise which has a 1/f^2 shape and is at the same level we expected the DAC noise to be at. Rana suggested checking coherence with MC transmission to see if this could be laser intensity noise.

I also want to re-do the actuator calibrations for the vertex optics again before re-posting the revised noise budget.

Attachment 1: BScoils.pdf
Attachment 2: ITMcoils.pdf
  13299   Wed Sep 6 01:09:11 2017   johannes   Update   Computer Scripts / Programs   New set of loss measurements

I stumbled upon a faster way to stream data from the TDS3014 oscilloscopes to disk, which speeds the loss measurements up by a lot:  ftp://sprite.ssl.berkeley.edu/pub/sharris/MAVEN_LPW_Preamp/109_TDS3014B_control/tds3014b.py
This convenient(!) set of scripts contains a function that parses the scope's native binary format into readable data; using it, the acquisition of one screenful of data takes <1 s as opposed to ~20 s. I tested it for a bit and concluded that it does what it claims to do, but there's one weirdness: it gets the channel offset wrong. However, this doesn't matter in our measurement because we're subtracting the dark level, which sees the same (wrong) offset. Other than that it seems okay.

So I started a new set of armloss measurements, and since the data acquisition is now much faster, I was able to squeeze a set of 20 individual measurements for each arm into ~30 minutes. This is the procedure I follow when I take these measurements for the XARM (symmetric under XARM <-> YARM):

  1. Dither-align the interferometer with both arms locked. Freeze outputs when done.
  2. Misalign ETMY + ITMY.
  3. ITMY needs to be misaligned further. Moving the slider by at least +0.2 is plenty to keep the other beam from interfering with the measurement.
  4. Start the script, which does the following:
    1. Resume dithering of the XARM
    2. Check XARM dither error signal rms with CDS. If they're calm enough, proceed.
    3. Freeze dithering
    4. Start a new set of averages on the scope, wait T_WAIT (5 seconds)
    5. Read data (= ASDC power and MC2 trans) from scope and save
    6. Misalign ETMX and wait 5s
    7. Read data from scope and save
    8. Repeat desired amount of times
  5. Close the PSL shutter and measure the PD dark levels

I will write a more comprehensive post describing the data acquisition and processing; for now, let's just look at the results. The "uncertainties" reported by the individual measurements are on the order of 1-2 ppm (~1.9 for the XARM, ~1.3 for the YARM). This accounts for fluctuations of the data read from the scope and uncertainties in mode-matching and modulation depths in the EOM. I made histograms of the 20 datapoints taken for each arm: the standard deviation of the spread is a little over 2 ppm. We end up with something like:

XARM: 49.3 +/- 2.1 ppm
YARM: 20.3 +/- 2.3 ppm
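
If the quoted error bars are taken as the spread of the 20-measurement histograms (consistent with the ~2 ppm standard deviations mentioned above), the bookkeeping is simply the following (placeholder data, not the actual analysis code):

import numpy as np

xarm = np.random.normal(49.3, 2.1, 20)   # placeholder for the 20 individual XARM results (ppm)
print("XARM loss: %.1f +/- %.1f ppm" % (xarm.mean(), xarm.std(ddof=1)))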

 

    

Attachment 1: XARM_20170905.pdf
Attachment 2: YARM_20170905.pdf
  13298   Tue Sep 5 23:13:44 2017   johannes   Update   PSL   PSL table auxiliary NPRO

I used Gautam's mode measurement of the auxiliary NPRO (w=127.3um, z=82mm) for the spacing of the optics on the PSL table for the fiber injection and light modulation. As mentioned in previous posts, for the time being there is no Faraday isolator and no broadband EOM installed, but they're accounted for in the mode propagation and they have space reserved if desired/required/available.

The coupler used for the injection is a Thorlabs F220APC-1064, which allegedly collimates the beam from the fiber type we use to 2.4mm diameter, which I used as the target for the mode calculations. I coupled the first order diffracted beam to a ~60m fiber, which is a tad longer than necessary but the only fiber I could locate that was long enough. The coupling efficiency from free-space to fiber is 47.5%, and we can currently get up to 63 mW out of the fiber.

Tomorrow Steve and I are going to pull the fiber through protective tubing and bring it to the AS port. The next step is then characterizing the beam out of the collimator to match it into the interferometer.

As far as the switching itself is concerned: I confirmed that the exponential decay is still present when looking at the fiber output. I located the DEI Pulser unit in the QIL lab, and also found several more AOMs, including a 200MHz Crystal Technologies one (the same brand as the PSL's), where the ringdown was not observed. According to past elogs, with good polarizers we can expect an extinction ratio of ~200 from the Pockels cell, which should be fine, but it's going to be a tradeoff between switching speed and extinction (if the alternate AOM doesn't show this ringdown behavior).

Attachment 1: PSL_IR.pdf
Attachment 2: psl_aux_laser.pdf
  13297   Tue Sep 5 23:02:37 2017   gautam   Update   CDS   slow machine bootfest

MC autolocker was not working - PCdrive was railed at its upper rail for ~2 hours judging by the wall StripTool trace. I tried restarting the init processes on megatron, but that didn't fix the problem. The reason seems to have been related to c1iool0 failing - after keying the crate, autolocker came back fine and MC caught lock almost immediately.

Additionally, c1susaux, c1auxex, c1auxey and c1iscaux are also down. I'm not planning on using the IFO tonight so I am not going to reboot these now.

 

  13296   Tue Sep 5 17:52:06 2017   Kira   Update   PEM   temp sensor update

to get the sensors to read the same values they have to be in direct thermal contact with the metal block - there can't be any adapter board in-between

for the 2nd attempt, I also recommend encasing it in a metal block rather than just one side. You can drill some 7-10 mm diameter holes in an aluminum or copper block. Then put the sensors in there and plug it up with some thermal paste.

  13295   Tue Sep 5 17:18:17 2017   Kira   Update   PEM   temp sensor update

Today, I stuck on the sensors to a metal block using a flag, rubber bands, and some thermal paste (1st attachment). I then wrapped the whole thing in about 4 layers of insulation and a lot of tape (2nd attachment). The only things leading out of the box were the three connections to the sensors and a thermometer. I then connected the wires to their respective places on the board of the sensor. To get the readings out we would need to use an ADC. Gautam and I checked to make sure the ADC we have inside the lab goes from -10V to 10V so that it would be able to measure the 3V value the sensor typically measures. We then tried to connect all three sensors to a DC source simultaneously, but unfortunately one of them seems to have disconnected somewhere during the process, as it only showed 1.2V instead of 3V. I plan to fix this tomorrow morning so that we can hopefully set this up soon.

Quote:

I took off the AD590 and attached it to two long wires leading out from the board. This will allow us to attach the sensor to a metal block and not have to stick the whole board to it. I have also completed three identical copies of this and it's pretty much ready to be tested. According to Craig and Andrew's elog here, the sensor is very noisy and they added in a low pass filter to fix that, so that's something to consider for the final version of the circuit. I'll test what I have so far and see how that goes. We still need to figure out how to get readings from the sensors.

To attach the sensor to the metal block, I'll use some thermal paste and fasteners. I'll also put a thermometer on the block to record the actual temperature. I'll then wrap it in some insulation we have in the lab and have only some wires leading out of it to make measurements. I'll leave this setup overnight and record the outputs for about a full day. The fluctuations between the sensors will then indicate the noise of each individual sensor.

 

Attachment 1: IMG_20170905_144924.jpg
Attachment 2: IMG_20170905_165042.jpg
  13294   Tue Sep 5 16:37:47 2017   Gabriele   Summary   LSC   Improved PRMI deep learning reconstruction

This is an update on the results already presented earlier (refer to elog 13274 for more introductory details). I improved significantly the results with the following tricks:

  • I retuned the demodulation phase of AS55, this time ensuring that the (alleged) MICH motion is visible mostly in Q when crossing a carrier resonance. Further fine tunings of phases will be possible once we have a measurement of the length optical matrix

  • I fine-tuned the network by training it again using the real data. The idea is the following. I started with the network trained on the simulated data, and froze the parameters of the input recurrent layers. I fed the real signals to the network, computed the reconstructed PRCL/MICH, and fed them to my PRMI model to compute simulated signals. I allowed some of the parameters of the model to vary (especially demodulation phases). I then trained the network again by trying to match the model-predicted signals with the real input signals. I allowed only the parameters of the fully connected layers to vary (mostly for technical reasons; I'm working on re-training also the recurrent layers)

An example of time domain reconstruction is visible below. It already looks better than the old results:

As before, to better evaluate the performance I plotted averaged values of the real signals as a function of the reconstructed MICH and PRCL positions. The results are compared with simulation below. They match quite well (left: real data, right: simulation expectation)

One thing to better understand is that MICH seems to be somewhat compressed: most of the output values are between -100 and +100 nm, instead of the expected -lambda/4, lambda/4. The reason is still unclear to me. It might be a bug that I haven't been able to track down yet.

  13293   Tue Sep 5 14:41:58 2017   gautam   Update   CDS   NDS2 server restarted on megatron

I was unable to download data using nds2. Gabriele had reported similar problems a week ago but I hadn't followed up on this.

I repeated steps 5-7 from elog 13161, and now it seems that I can get data from the nds2 servers again. Unclear why the nds2 server had to be restarted. I wonder if this is somehow related to the mysterious acromag EPICS server tmux session dropout.

  13292   Tue Sep 5 09:47:34 2017   Kira   Summary   PEM   heater circuit calculations

I decided to calculate the fluctuation in power that we will have in the heater circuit. The resistors we ordered have 50 ppm/C and it would be useful to know what kind of fluctuation we would expect. For this, I assumed that the heater itself is an ideal resistor that has no temperature variation. The circuit diagram is found in Kevin's elog here. At saturation, the total resistance (we will have a 1 Ω resistor instead of 6 Ω for our new design) will be R_tot = R + R_h = 1 Ω + 24 Ω = 25 Ω. Therefore, with a 24 V input, the saturation current should be I = V_in/R_tot = 24 V / 25 Ω = 0.96 A. The power in the heater should therefore be (in the ideal case) P = I²·R_h = 22.1184 W.

Now, in the case where the resistor is not ideal, let's assume the temperature of the resistor changes by 10 C (which is about how much we would like to heat the whole thing). The resistor will then have a new value of R_new = R·(1 + 50 ppm/C × 10 C) = 1.0005 Ω. The new current will be I_new = V_in/(R_new + R_h) = 0.95998 A, and the new power will be P_new = I_new²·R_h = 22.1175 W. So the difference in power going through the heater is about 0.00088 W.

We can use this power difference to calculate how much the temperature of the metal can we wish to heat up will change: ΔT = ΔP/(κ·x), where κ is the thermal conductivity and x is the thickness of the material. For our seismometer, I calculated it to be 0.012 K.
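
The same arithmetic in a few lines, using the values above:

R, R_h, V_in = 1.0, 24.0, 24.0                # series resistor (ohm), heater (ohm), supply (V)
I = V_in / (R + R_h)                          # 0.96 A saturation current
P = I**2 * R_h                                # 22.1184 W in the heater
R_new = R + 50e-6 * 10                        # 1.0005 ohm after a 10 C rise (50 ppm/C)
P_new = (V_in / (R_new + R_h))**2 * R_h       # 22.1175 W
print(P - P_new)                              # ~0.00088 W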

  13291   Tue Sep 5 02:07:49 2017   gautam   Update   LSC   Low Noise DRMI attempt

Summary:

Tonight, I was able to lock the DRMI, turn on the whitening filters for the sensing PDs, and also turn on the coil de-whitening filters for ITMX, ITMY and BS. However, I didn't see the expected improvement in the MICH spectrum between ~50-300 Hz. Sad.

Details:

I basically went through the list of tasks I made in the previous elog. Some notes:

  • The UGF servos suggested that I had to lower the SRCL gain. I lowered it from -0.055 to -0.025. OLTF measurement using In1/In2 method suggested UGF ~120Hz. I don't know why this should be. Plot to be uploaded later.
  • Since we aren't actuating on the ITMs, I was able to leave their coils de-whitened all the time.
  • For the BS, it was trickier - I had to play around a little with the "Tolerance" setting in Foton while looking at transients (using DTT, not a scope for now) while switching the filters.
  • This transition isn't so robust yet - but eventually I found a setting that worked, and I was able to successfully turn on the de-whitening thrice tonight (but also failed about the same number of times). [GV Oct 6 2017: Remember that the PD whitening has to be turned on for this transition to be successful - otherwise the RMS from the high frequencies saturate the DAC.]
  • The locks were pretty stable. One was ~10mins, one was ~15mins, and I broke both deliberately because I was out of ideas as to why the part of the MICH error signal spectrum that I thought was due to DAC noise didn't improve.
  • I've made a bunch of shell scripts to help with the various switchings - but now that I think of it, I should make these python scripts.

Attachment #1: Comparison of MICH_ERR with and without the BS de-whitening. Note that the two ITMs have their coils de-whitened in both sets of traces.

Attachment #2: Spectra of MICH output and one of the BS coil outputs in both states. The DAC RMS increases by ~30x when the de-whitening is engaged, but is still well within limits.

So it looks like the switching of paths is happening correctly. The "CDS BIO STATUS" MEDM screen also shows the appropriate bits toggling when I turn the de-whitening on/off. There is no broadband coherence with MCF between 50-300 Hz so it seems unlikely that this could be frequency noise.

Clearly I am missing something. But anyways I have a good amount of data, may be useful to put together the post CDS/electronics modification DRMI noise budget. More analysis to follow.

 

Attachment 1: MICH_err_comp.pdf
Attachment 2: deWhitenedCoil.pdf
  13290   Mon Sep 4 18:18:29 2017   rana   Update   LSC   dewhite switching: FOTON settings

not immediately necessary, since you have already got it sort of working, but one of these days we should optimize this for real. In the past, we used to do this by putting an o'scope on the coil Vmon during the switching to catch the transient w/ triggering. We download the data/picture via ethernet. Run a for loop on tolerance to see what's what.

  1. Went into the Foton filter banks for all the coil output filters, and modified the "Output" settings to be on "Input crossing", with a "Tolerance" of 10 and a "Timeout" of 3 seconds. These settings are to facilitate smooth transition between the two signal paths (without and with coil-dewhitening). The parameters chosen were arbitrary and not optimized in any systematic manner.

 

  13289   Mon Sep 4 16:30:06 2017   gautam   Update   LSC   Oplev loop tweaking

Now that the DRMI locking seems to be repeatable again, I want to see if I can improve the measured MICH noise. Recall that the two dominant sources of noise were

  1. BS Oplev loop A2L - this was the main noise between 30-60Hz.
  2. DAC noise - this dominated between ~60-300Hz, since we were operating with the de-whitening filters off.

In preparation for some locking attempts today evening, I did the following:

  1. Added steeper elliptic roll-off filters for the ITMX and ITMY Oplevs. This is necessary to allow the de-whitening filters to be turned on without railing the DAC.
  2. Modified the BS Oplev loop to also have steeper high-frequency (>30Hz) roll off. The roll-off between 15-30Hz is slightly less steep as a result of this change.
  3. Measured all Oplev loop TFs - UGFs are between 4 Hz and 5 Hz, phase margin is ~30degrees. I did not do any systematic optimization of this for today.
  4. Went into the Foton filter banks for all the coil output filters, and modified the "Output" settings to be on "Input crossing", with a "Tolerance" of 10 and a "Timeout" of 3 seconds. These settings are to facilitate smooth transition between the two signal paths (without and with coil-dewhitening). The parameters chosen were arbitrary and not optimized in any systematic manner.
  5. After making the above changes, I tried engaging the de-whitening filters on ITMX, ITMY and BS with the arms locked. In the past, I was unable to do this because of a number of issues - Oplev loop shapes and Foton settings among them. But today, the switching was smooth, the single arm locks weren't disturbed when I engaged the coil de-whitening.

Hopefully, I can successfully engage a similar transition tonight with the DRMI locked. The main difference compared to this daytime test is going to be that the MICH control signal is also going to be routed to the BS.

Tasks for tonight, if all goes well:

  1. Lock DRMI.
  2. Use UGF servos to set the overall loop gains for DRMI DoFs.
  3. Reduce PRCL->MICH and SRCL->MICH coupling.
  4. Measure loop shapes of all DRMI DoFs.
  5. Make sensing matrix measurement.
  6. Engage coil-dewhitening, download data, make NB.

Unrelated to this work: the PMC was locked near the upper rail of the PZT, so I re-locked it closer to the middle of the range.

Quote:

Surprisingly, there was no evidence of REFL55 behaving weirdly tonight, and I was able to easily lock the DRMI on 1f error signals using the recipe I've been using in the last few months.

  13288   Fri Sep 1 19:15:40 2017   gautam   Update   ALS   Fiber ALS noise measurement

Summary:

I did some work today to see if I could use the IR beat for ALS control. Initial tests were encouraging.

I will now embark on the noise budgeting.

Details:

  • For this test, I used the X arm
  • I hooked up the X-arm + PSL IR beat to the X-arm DFD channel, and used the Y-arm DFD channels to simultaneously monitor the X-arm green beat.
  • I then transitioned to ALS control and used POX as an out-of-loop sensor for the ALS noise.
  • Attachment #1 shows a comparison of the measurements. In red is the IR beat, while the green traces are from the test EricQ and I did a couple of nights ago using the green beat.
  • I also wanted to do some arm cavity scans with the arm under ALS control with the IR beat - but was unsuccessful. The motivation was to fix the ALS model counts->Hz calibration factors.
  • I did, however, manage to do a 10 FSR scan using the green beatnote - towards the end of this scan, the green beat frequency (read off the control room analyzer) was ~140 MHz, which I believe is outside (or at least on the edge of) the bandwidth of the green BBPDs. The fiber-coupled IR beat photodiodes have a much larger (1 GHz) spec'd bandwidth.

I am leaving the green beat electronics on the PSL table in the switched state for further testing...

 

Attachment 1: IR_ALS_noise.pdf
  13287   Fri Sep 1 16:55:27 2017   gautam   Update   Computers   Testpoints now accessible again

Thanks to Jonathan Hanks, it appears we can now access test-points again using dataviewer.

I haven't done an exhaustive check just yet, but I have loaded a few testpoints in dataviewer, and ran a script that uses testpoint channels (specifically the ALS phase tracker UGF setting script); all seems good.

So if I remember correctly, the major CDS fix now required is to solve the model unloading issue.

Thanks to Jamie/Jonathan Hanks/KT for getting us back to this point! Here are the details:

After reading logs and code, it was a simple daqdrc config change.

The daqdrc should read something like this:

...
set master_config=".../master";
configure channels begin end;
tpconfig ".../testpoint.par";
...


What had happened was tpconfig was put before the configure channels
begin end.  So when daqd_rcv went to configure its test points it did
not have the channel list configured and could not match test points to
the right model & machine.  Dave and I suspect that this is so that it
can do an request directly to the correct front end instead of a general
broadcast to all awgtpman instances.

Simply reordering the config fixes it.

I tested by opening a test point in dataviewer and verifiying that
testpoints had opened/closed by using diag -l.  Xmgr/grace didn't seem
to be able to keep up with the test point data over a remote connection.

You can find this in the logs by looking for entries like the following
while the daqd is starting up.  When we looked we saw that there was an
entry for every model.

Unable to find GDS node 35 system c1daf in INI fiels
  13286   Fri Sep 1 16:27:39 2017   gautam   Update   SUS   MC1 glitching

I re-enabled the MC SUS damping and IMC locking for some IFO work just now.

Quote:

MC1, MC2 and MC3 damping turned off to see glitching action at 9:57am

 

  13285   Fri Sep 1 15:46:12 2017   Kira   Update   PEM   temp sensor update

I took off the AD590 and attached it to two long wires leading out from the board. This will allow us to attach the sensor to a metal block and not have to stick the whole board to it. I have also completed three identical copies of this and it's pretty much ready to be tested. According to Craig and Andrew's elog here, the sensor is very noisy and they added in a low pass filter to fix that, so that's something to consider for the final version of the circuit. I'll test what I have so far and see how that goes. We still need to figure out how to get readings from the sensors.

To attach the sensor to the metal block, I'll use some thermal paste and fasteners. I'll also put a thermometer on the block to record the actual temperature. I'll then wrap it in some insulation we have in the lab and have only some wires leading out of it to make measurements. I'll leave this setup overnight and record the outputs for about a full day. The fluctuations between the sensors will then indicate the noise of each individual sensor.

Attachment 1: IMG_20170901_144729.jpg
  13284   Fri Sep 1 08:25:08 2017   Steve   Update   SUS   MC1 glitching

MC1, MC2 and MC3 damping turned off to see glitching action at 9:57am

Quote:

There was a pretty large glitch in MC1 about an hour ago. The misalignment was so large that the autolocker wasn't able to lock the IMC. I manually re-aligned MC1 using the bias sliders, and now IMC locks fine. Attached is a 90 second plot of 2K data from the OSEMs showing the glitch. Judging from the wall StripTool, the IMC was well behaved for ~4 hours before this glitch - there is no evidence of any sort of misalignment building up, judging from the WFS control signals.

 

Attachment 1: MC1glitching.png
Attachment 2: MC1kicks.png
  13283   Thu Aug 31 21:40:24 2017   gautam   Update   General   MC1 kicked again

There was a pretty large glitch in MC1 about an hour ago. The misalignment was so large that the autolocker wasn't able to lock the IMC. I manually re-aligned MC1 using the bias sliders, and now IMC locks fine. Attached is a 90 second plot of 2K data from the OSEMs showing the glitch. Judging from the wall StripTool, the IMC was well behaved for ~4 hours before this glitch - there is no evidence of any sort of misalignment building up, judging from the WFS control signals.

Attachment 1: MC1_glitch.png
  13282   Thu Aug 31 18:36:23 2017   gautam   Update   CDS   revisiting Acromag

Current status:

  • There is a single Acromag ADC unit installed in 1X4
  • It is presently hooked up to the PSL NPRO diagnostic connector channels
  • I had (re)-started the acquisition of these channels on August 16 - but for reasons unknown, the tmux session that was supposed to be running the EPICS server on megatron seems to have died on August 22 (judging by the trend plot of these channels, see Attachment #1)
  • I had not set up an upstart job that restarts the server automatically in such an event. I manually restarted it for now, following the same procedure as linked in my previous elog.
  • While I was at it, I also took the opportunity to edit the Acromag channel names to something more appropriate - all channels previously prefixed with C1:ACRO- have now been prefixed with C1:PSL-

Plan of action:

  1. Hardware - we have, in the lab, in addition to the installed ADC unit
    • 3x 8 channel differential input ADC units
    • 2x 8 channel differential output DAC units
    • 1x 16 channel BIO unit
    • 2U chassis + connectors + breakout boards + other misc hardware that I think Johannes and Lydia procured with the original plan to replace the EX slow controls.
    • Some relevant elogs: Panel designs, breakout design, sketch for proposed layout, preliminary channel list.
      So on the hardware side, it would seem that we have everything we need to go ahead with replacing the EX slow controls with an Acromag system, although Johannes probably knows more about our state of readiness from a hardware PoV.
  2. Software
    • We probably want to get a dedicated machine that will handle the EPICS channel serving for the Acromag system
    • Have to figure out the networking arrangement for such a machine
    • Have to figure out how to set up the EPICS server protocol in such a way that if it drops for whatever reason, it is automatically restarted

 

Attachment 1: Acromag_EPICS.png
  13281   Thu Aug 31 03:31:15 2017   gautam   Update   LSC   DRMI re-locked!

After our Demod/Whitening electronics investigations suggested nothing obviously wrong, I decided to give DRMI locking another go tonight.

Surprisingly, there was no evidence of REFL55 behaving weirdly tonight, and I was able to easily lock the DRMI on 1f error signals using the recipe I've been using in the last few months.

Not sure what to make of all this.

I got in a ~15 minute lock, but I wasn't prepared to do any sort of characterization/ sensing / attempt to turn on coil-dewhitening, and I'm too tired to try again tonight. I was however able to whiten the error signals, as I have been able to do in the past. There is a ~45Hz bump in MICH that I haven't seen in the past.

I'll try and do some characterization tomorrow eve, but it's encouraging to at least get back to the pre-FB-failure state of locking.

Attachment 1: DRMI_1f.png
Attachment 2: DRMI_relocked.pdf
  13280   Thu Aug 31 00:52:52 2017   gautam   Update   LSC   REFL55 whitening board debugging

[rana,gautam]

We did an ingenious checkup of the whitening board tonight.

  • The board is D990694
  • We made use of a tip-tilt DAC channel for this test (specifically TT1 UL, which is channel 1 on the AI board). We disconnected the cable going from the AI board to the TT coil driver board.
    • as opposed to using a function generator to drive the whitening filter, this approach allows us to not have to worry about the changing offsets as we switch the whitening gain.
    • By using the CDS system to generate the signal and also demodulate it, we also don't have to worry about the drive and demod frequencies falling out of sync with each other.
  • The test was done by injecting a low frequency (75.13 Hz, amplitude=0.1) excitation to this DAC channel, and using the LSC sensing matrix infrastructure to demodulate REFL55 I and Q at this frequency. Demod phases in these servos were adjusted such that the Q phase demodulated signal was minimized.
  • An excitation was injected using awggui into TT1 UL exc channel.
  • We then stepped the whitening gains for REFL55_I and REFL55_Q in 3dB steps, waiting 5 seconds for each step. Syntax is z step -s 5 C1:LSC-REFL55_I_WhiteGain +1.0,15 C1:LSC-REFL55_Q_WhiteGain +1.0,15
  • Attachment #1 suggests that the whitening filter board is working as expected (each step is indeed 3dB and all steps are equal to the eye).
  • Data + script used to generate this plot is in Attachment #2.

I've restored all connections that we messed with at the LSC rack to their original positions.

The TT alignment seems to be drifting around more than usual after we disconnected one of the channels - when I came in this afternoon, the spot on the AS camera had drifted by ~1 spot diameter, so I had to manually re-align TT1.

Quote:
 

Based on my tests, everything on the Demod board seems to work as expected. I need to think more about what else could be happening here - specifically do a more direct test on the whitening board.

Attachment 1: REFL55_whtCheck.pdf
Attachment 2: REFL55_whtChk.tar.gz
  13279   Thu Aug 31 00:46:57 2017   rana   Summary   CDS   allegra -> Scientific Linux 7.3

I made a 'LiveCD' on a 16 GB USB stick using this command after the GUIs didn't work and looking at some blog posts:

sudo dd if=SL-7.3-x86_64-2017-01-20-LiveCD.iso of=/dev/sdf

Quote:

Debian doesn't like EPICS. Or our XY plots of beam spots...Sad!

Quote:
Quote:

No, not confused on that point. We just will not be testing OS versions at the 40m or running multiple OS's on our workstations. As I've said before, we will only move to so-called 'reference' systems once they've been in use for a long time.

Ubuntu16 is not to my knowledge used for any CDS system anywhere.  I'm not sure how you expect to have better support for that.  There are no pre-compiled packages of any kind available for Ubuntu16.  Good luck, you big smelly doofuses. Nyah, nyah, nyah.

K Thorne recommends that we use SL7.3 with the 'xfce' window manager instead of the Debian family of products, so we'll try it out on allegra and rossa to see how it works for us. Hopefully the LLO CDS team will be the tip of the spear on solving the usual software problems we have when we "~up" grade.

  13278   Thu Aug 31 00:19:35 2017   rana   Update   PSL   IMC/FSS FAST gain

nominal changed from 22 to 23 dB to minimize PC drive RMS

previous loop gain measurement is sort of bogus (made on SR785); need some 4395 loop measurements and checking of crossover and error point spectrum

  13277   Wed Aug 30 22:15:47 2017   rana   Omnistructure   Computers   USB flash drives moved

I have moved the USB flash drives from the electronics bench back into the middle drawer of the cabinet next to the AC which is west of the fridge. Drawer re-labeled.

  13276   Wed Aug 30 19:49:33 2017   gautam   Update   LSC   REFL55 demod board debugging

Summary:

Today I tried debugging the mysterious increase in REFL55 signal levels in the DRMI configuration. I focused on the demod board because, last week, I had tried routing these signals through different channels on the whitening board and saw the same effect.

Based on my tests, everything on the Demod board seems to work as expected. I need to think more about what else could be happening here - specifically do a more direct test on the whitening board.

Details:

  • The demod board is a modified D990511 (marked up schematic + high-res photo to follow).
  • Initially, I tried probing the LO signal levels at various points with the board in the eurocrate itself, with the help of an extender card.
  • But this wasn't very convenient, so I pulled the board out to the office area for more testing.
  • The 55MHz LO signal going into the board is ~0dBm (measured with Agilent network analyzer)
  • I used the active probe to check the LO levels at various points along the signal chain, which mostly consists of attenuators, ERA-5SM amplifiers, and some splitters/phase rotators.
  • Everything seemed consistent with the expected levels based on "typical" numbers for gains and insertion losses cited in the datasheets for these devices.
  • I couldn't directly measure the level at the LO input to the mixer, but based on the measurement at the input to the ERA-5SM immediately before the mixer, and barring problems with this amplifier, the LO input of the mixer is being driven at >17 dBm, which is what it wants.
  • Next, I decided to check the gain, gain imbalance and orthogonality of the demodulation.
  • For this purpose, I restored the board to the Eurocrate, reconnected the LO input to the board, and used a second Marconi at a slightly offset frequency to drive the PD input at ~0dBm.
  • Attachment #1 - The measured outputs look pretty balanced and orthogonal. The gain is consistent with an earlier measurement I made some months ago, when things were "normal". More bullets added after Rana's questions:
    • 300 MHz bandwidth oscilloscope used to acquire the data
    • I and Q outputs were from the daughter board
    • Data was acquired via ethernet data download utility
    • 20 MHz low-pass filter enabled on the oscilloscope while downloading the data
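
As a rough illustration of what "balanced and orthogonal" means quantitatively, here is a hypothetical analysis sketch for the downloaded scope traces (the file name and column layout are made up; with the PD input offset in frequency from the LO, both demod outputs are sinusoids at the beat frequency, so the amplitude ratio gives the gain imbalance and the residual I/Q correlation gives the deviation from 90 degrees):

    import numpy as np

    # Hypothetical file name / column layout for the scope dump (time, I, Q).
    t, I, Q = np.loadtxt("refl55_demod_scope.txt", unpack=True)

    I0, Q0 = I - I.mean(), Q - Q.mean()

    # Amplitudes of the beat-note sinusoids (A = sqrt(2) * rms).
    amp_I = np.sqrt(2) * np.std(I0)
    amp_Q = np.sqrt(2) * np.std(Q0)

    # Normalized correlation <I*Q>/(sigma_I*sigma_Q) = cos(dphi), where dphi is the
    # I-Q phase difference; its arcsin is the deviation from perfect quadrature.
    corr = np.mean(I0 * Q0) / (np.std(I0) * np.std(Q0))
    quad_error_deg = np.degrees(np.arcsin(corr))

    print(f"gain imbalance |I|/|Q| = {amp_I / amp_Q:.3f}")
    print(f"deviation from 90 deg  = {quad_error_deg:.2f} deg")
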
Quote:

I did a quick check by switching the output of the REFL55 demod board to the inputs normally used by AS55 signals on the whitening board. Setting the whitening gain to +18dB for these channels had the same effect - ADC overflow galore. So looks like the whitening board isn't to blame. I will have to check the demod board out.

 


All connections have been restored until further debugging later in the evening.

Attachment 1: REFL55_demod_check.pdf
REFL55_demod_check.pdf
  13275   Wed Aug 30 15:00:06 2017 gautamUpdateGeneralEdgeswitch fiber swap

A couple of minutes ago, Larry W swapped the fibers to our 40m Edgeswitch (BROCADE FWS 648G) to a faster connection. This is the switch to which our gateway machine, NODUS, is connected. The actual swap itself happened at the core router in Bridge, and took only a few seconds. After the switch, I double checked that I was able to ssh into nodus from my laptop, and Larry informed me that everything is working as expected on his end.

Larry also tells us that the other edgeswitch at the 40m (Foundry Networks), to which most of our GC network machines are connected, is a 100 Mbps switch, and so we should re-route the connections from this switch to the BROCADE switch at our convenience to take advantage of the faster connection.

  13274   Wed Aug 30 11:04:08 2017 GabrieleSummaryLSCFirst look at neural network reconstruction of PRMI motion

Introduction

I trained a deep neural network (DNN) to reconstruct MICH and PRCL degrees of freedom in the PRMI configuration. For details on the DNN architecture please refer to G1701455 or G1701589. Or if you really want all the details you can look at the code. I used the following signals as input to the DNN: POPDC, POP22_Q, ASDC, REFL11_I/Q, REFL55_I/Q, AS55_I/Q.
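
For orientation only, here is a minimal, hypothetical Keras sketch of a dense network with these nine inputs and two outputs; it is not the actual architecture, which is described in G1701455/G1701589 and the code:

    from tensorflow import keras

    N_IN, N_OUT = 9, 2   # 9 optical signals in, (MICH, PRCL) out

    # Hypothetical layer sizes -- see G1701455/G1701589 for the real network.
    model = keras.Sequential([
        keras.layers.Dense(128, activation="relu", input_shape=(N_IN,)),
        keras.layers.Dense(128, activation="relu"),
        keras.layers.Dense(N_OUT),
    ])
    model.compile(optimizer="adam", loss="mse")

    # Training targets would have to come from a simulation, since the real
    # MICH/PRCL are not directly measurable:
    # model.fit(simulated_signals, simulated_mich_prcl, epochs=50, batch_size=256)
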

Gautam took some PRMI data in free swinging and driven configuration:

  • 1187819331 + 10mins: Free swinging PRMI (after first locking PRMI on carrier and dither aligning).
  • 1187820070 + 5mins: PRM driven at low freq.
  • 1187820446 + 5mins: BS driven at low freq.

In contrast to the Fabry-Perot cavity case, we don't have a direct measurement of the real PRCL/MICH degrees of freedom, so it's more difficult to assess if the DNN is working well.

Results

All MICH and PRCL values are wrapped into the unique region [-lambda/4, lambda/4]^2. It's even a bit more complicated than simple wrapping. Indeed, MICH is periodic over [-lambda/2, lambda/2]. However, the Michelson interferometer reflectivity (as seen from the PRC) in the first half of the segment is the same as in the second half, except for a change in sign. This change of sign in the Michelson reflectivity can be compensated by moving PRCL by lambda/4, thus generating a pi phase shift in the PRC round trip propagation that compensates for the MICH sign change. Therefore, the unit cell of unique values for all signals can be taken as [-lambda/4, lambda/4] x [-lambda/4, lambda/4] for MICH x PRCL. But when we hit the border of the MICH region, PRCL is also affected by an addition of lambda/4. Graphically, the square regions A B C below are all equivalent, as well as more that are not highlighted:

This makes it a bit hard to un-wrap the reconstructed signal, especially when you add in the fact that in the reconstruction the wrapping is "soft".
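
As a concrete (sign-convention-dependent) sketch of the folding described above, assuming the two identifications (MICH, PRCL) ~ (MICH + lambda/2, PRCL + lambda/4) and (MICH, PRCL) ~ (MICH, PRCL + lambda/2):

    import numpy as np

    lam = 1064e-9  # [m]

    def wrap_mich_prcl(mich, prcl, lam=lam):
        """Fold (MICH, PRCL) into the unit cell [-lam/4, lam/4]^2.

        Assumed identifications (signs depend on conventions):
          (MICH, PRCL) ~ (MICH + lam/2, PRCL + lam/4)   # Michelson sign flip compensated by PRCL
          (MICH, PRCL) ~ (MICH,         PRCL + lam/2)   # one-way lam/2 in PRCL = 2*pi round trip
        """
        mich, prcl = np.asarray(mich, float), np.asarray(prcl, float)
        # Remove an integer number of lam/2 steps from MICH, shifting PRCL by lam/4 per step.
        n = np.round(mich / (lam / 2))
        mich_w = mich - n * (lam / 2)
        prcl_s = prcl - n * (lam / 4)
        # Then wrap PRCL into [-lam/4, lam/4] in steps of lam/2.
        m = np.round(prcl_s / (lam / 2))
        return mich_w, prcl_s - m * (lam / 2)
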

The plot below shows an example of the time domain reconstruction of MICH/PRCL during the free swinging period.

It's hard to tell if the positions look reasonable, with all the wrapping going on.

Two-dimensional maps of signals

Here's an attempt at validating the DNN reconstruction. Using the reconstructed MICH/PRCL signal, I can create a 2d map of the values of the optical signals. I binned the reconstructed MICH/PRCL in a 51x51 grid, and computed the mean value of all optical signals for each bin. The result is shown in the plot below, directly compared with the expectation from a simulation.
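
A minimal sketch of how such a map can be produced (placeholder data and variable names stand in for the wrapped DNN outputs and one recorded optical signal):

    import numpy as np
    from scipy.stats import binned_statistic_2d

    lam = 1064e-9
    half_cell = lam / 4

    # Placeholders -- replace with the wrapped DNN outputs and the recorded POP_DC.
    mich_rec = np.random.uniform(-half_cell, half_cell, 100000)
    prcl_rec = np.random.uniform(-half_cell, half_cell, 100000)
    pop_dc   = np.random.randn(100000)

    pop_map, mich_edges, prcl_edges, _ = binned_statistic_2d(
        mich_rec, prcl_rec, pop_dc, statistic="mean", bins=51,
        range=[[-half_cell, half_cell], [-half_cell, half_cell]])

    # pop_map is the 51x51 mean-value map to compare against the simulated one.
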

The power signals (POP_DC, AS_DC, POP22_Q) look reasonably good. REFL11_I/Q also looks good (please note that due to an early mistake in my code, I reversed the convention for I/Q, so the PRCL signal is maximized in Q instead of in I). The 55MHz signals look a bit less clear...

Steps forward

  • I'm quite confident in the tuning of demodulation phase and signs for REFL11 and POP22, but less so for REFL55 and not sure at all for AS55. So it would be useful to measure a full sensing matrix of PRCL and MICH against those signals, to compare with my simulation
  • I'm working on an idea to fine tune the DNN using the real interferometer data, more to follow when the idea crystallizes in a clear form.
  13273   Wed Aug 30 10:54:26 2017 gautamUpdateCDSslow machine bootfest

MC autolocker and FSS loops were stuck because c1psl was unresponsive. I rebooted it and did a burtrestore to enable PSL locking. Then the IMC locked fine.

c1susaux and c1iscaux were also unresponsive so I keyed those crates as well, after taking the usual steps to avoid ITMX getting stuck - but it still got stuck when the Sat. Box. connectors were reconnected after the reboot, so I had to shake it loose with bias slider jiggling. This is annoying and also not very robust. I am afraid we are going to knock the ITMX magnets off at some point. Is this problem indicative of the fact that the ITMX magnets were somehow glued on in a skewed way? Or can we make the situation better by just tweaking the OSEM-holding fixtures on the cage?

In any case, I've started listing stuff down here for things we may want to do when we vent next.

 

  13272   Wed Aug 30 06:45:32 2017 KevinSummaryPEMNew Heater Circuit

I changed the heater circuit described in this elog to a current sink. The new and old circuits are shown in the attachment. The heater is R_h and is currently 24Ω; the sense resistor R is currently 6Ω. The op-amp is still an OP27 and the MOSFET is still an IRF630.

The current through the old circuit was saturating because the gate voltage on the MOSFET was saturating at the op-amp supply rails. This is because the source voltage is relatively high: V_S = I(R + R_h).

In the new circuit the source voltage is lower and the op-amp can thus drive a large enough V_{GS} to draw more current (until the power supply saturates at 25V/30Ω = 0.8A in this case). The source and DAC voltages are equal in this case, V_{\mathrm{DAC}} = V_S, and so the current is I = V_{\mathrm{DAC}}/R. Since this is the same current through the heater, the drain voltage is V_D = V_{cc} - IR_h. I observed this behavior in this circuit until the power supply saturated at 0.8A. Note that when this happens V_D = V_S and the gate voltage saturates at the supply rails in an attempt to supply the necessary current.
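
A minimal numerical sketch of this operating point, assuming an ideal op-amp and the component values quoted above (V_cc of about 25 V, as implied by the 0.8 A saturation):

    V_cc, R_h, R = 25.0, 24.0, 6.0   # drain supply [V], heater [ohm], sense resistor [ohm]

    def operating_point(V_dac):
        """Ideal op-amp servoes V_S = V_DAC, so I = V_DAC / R, up to the supply limit."""
        I = min(V_dac / R, V_cc / (R + R_h))   # beyond this the MOSFET is fully on
        V_S = I * R
        V_D = V_cc - I * R_h
        return I, V_S, V_D

    for V_dac in (1.0, 2.0, 3.0, 4.0, 5.0, 6.0):
        I, V_S, V_D = operating_point(V_dac)
        print(f"V_DAC = {V_dac:3.1f} V : I = {I:5.3f} A, V_S = {V_S:5.2f} V, V_D = {V_D:5.2f} V")

Once the supply limit is reached, the printed V_D and V_S converge, which is the regime where the gate rails as described above.
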

Attachment 1: circuit.pdf
circuit.pdf
  13271   Tue Aug 29 21:36:59 2017 johannesUpdatePSLPSL table auxiliary NPRO
Quote:

 there is a large (~1 cm) aperture Pockels cell that Frank Siefert was using for making pulses to damage photo diodes. There is a DEI Pulser unit near the entrance to the QIL in Bridge which can drive it.

I'll look for it tomorrow, but I haven't given up on the AOMs yet. I swapped in the ISOMET modulator today and saw the same behavior, both in 0th and 1st order. The fall time is pretty much identical. Gautam saw no such thing in the PSL AOM using the same photodetector.

[Scope traces: 1st order diffracted (left), 0th order (right)]

In the meantime I prepared the fiber mode-matching but realized in the process that I had mixed up some lenses. As a result the beam did not have a waist at the AOM location and thus didn't have the intended size, although I doubt that this would cause the slower decay. I'll fix it tomorrow, along with setting up the fiber injection, beat note with the PSL, and routing the fiber if possible.

  13270   Tue Aug 29 20:04:09 2017 ranaUpdatePSLPSL table auxiliary NPRO

I don't understand why the 1st order diffracted beam doesn't go to zero when you shut off the drive. My guess is that the standing acoustic wave in the AO crystal needs some time to decay: f = 40 MHz, tau = 1 usec... Q ~ 100. Perhaps the crystal is damped by the PZT, and the output impedance of the mini-circuits switch is different from that of the AO driver.
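
For reference, the quoted Q follows from the decay time if one assumes an exponential amplitude decay with time constant tau:

    Q \approx \pi f \tau = \pi \times (40\,\mathrm{MHz}) \times (1\,\mathrm{\mu s}) \approx 1.3 \times 10^{2}
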

In any case, if you need a faster shut off, or want something that more cleanly goes to zero, there is a large (~1 cm) aperture Pockels cell that Frank Siefert was using for making pulses to damage photo diodes. There is a DEI Pulser unit near the entrance to the QIL in Bridge which can drive it.

  13269   Tue Aug 29 15:41:17 2017 KiraSummaryPEMheater circuit

I worked with Kevin and Gautam to create a heater circuit. The first attachment is Kevin's schematic of the circuit. The op-amp connects to the gate of the power MOSFET, the power supply connects to the drain, and the source goes into the heater. We set the power supply voltage to 22V and varied the voltage of the input to the op-amp. At 6V to the op-amp, we got a current of 0.35A flowing through the heater and resistor. This was the peak current we got due to the op-amp being saturated (an increase in either of the power supplies did not change the current), but when we increased the voltage of the supply rails of the op-amp from 15V to 20V, we got a current of 0.5A. We would want a higher current than this, so we will need to get a different op-amp with a higher max voltage rating, and a resistor that can take more power than the current one (which is rated for 5W, and is the best one we could find).

Kevin and I created a simulation of this circuit using CircuitLab to understand why the current was so low (second attachment). The horizontal axis is the voltage we supply to the OP amp. The blue line shows the voltage at the point between the output of the OP amp and the gate of the MOSFET. The orange line is the voltage at the point between the source of the MOSFET and the heater. The brown line is the voltage at the point between the heater and resistor. Thus, we can see that saturation occurs at about 2.1V. At that point, the gate-source voltage is the difference between the blue curve and the orange curve, which is about 4V, which is what we measured. Likewise, the voltage across the heater is the difference between the orange curve and the brown curve, which comes out to around 8V, which is also what we measured. Lastly, the voltage across the resistor is the brown curve, which is about 2V, which matches our observations. The circuit works as it should, but saturates too soon to get a high enough current out of it.
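
The same saturation point can be estimated in a couple of lines, assuming the op-amp output rails about 0.5 V below its supply and the IRF630 needs roughly 4 V of gate-source voltage at these currents (numbers taken from the simulation above):

    R_h, R = 24.0, 6.0      # heater and series resistor [ohm]
    V_gs   = 4.0            # approximate gate-source voltage at these currents

    def i_max(V_rail, headroom=0.5):
        """Old source-follower circuit: the source can rise only to V_gate,max - V_GS."""
        return (V_rail - headroom - V_gs) / (R_h + R)

    for V_rail in (15.0, 20.0):
        print(f"op-amp rails +/-{V_rail:.0f} V : I_max ~ {i_max(V_rail):.2f} A")

This reproduces the measured 0.35 A with 15 V rails and roughly 0.5 A with 20 V rails.
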

Gautam noted that it is important to measure the current correctly. We can't just use an ammeter and place it across the resistor or heater, because the internal resistance of the ammeter (~0.5 ohm) is comparable to the resistance we want to measure, so the current gets split between the circuit and the ammeter, and we get an equivalent resistance of 1/R = 1/R0 + 1/Ra, where R0 is the resistance of the part we want to measure the current through, and Ra is the ammeter resistance. Thus, the new resistance will be lower and the ammeter will show a higher current value than what is actually flowing. So to accurately measure the current, we must place the ammeter in series with the part we want to measure. We initially got a 1A reading on the heater, which was not correct, and our setup basically did not heat up at all. When we placed the ammeter in series with the heater, we got only 0.35A.
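
A quick numerical illustration of that formula (the ammeter resistance is the typical value quoted above; the real loop also contains the MOSFET, so this only shows the size of the effect):

    R0, Ra = 6.0, 0.5    # part under test and ammeter internal resistance [ohm]

    R_eq = 1.0 / (1.0 / R0 + 1.0 / Ra)     # equivalent resistance with the meter across R0
    frac_meter = R0 / (R0 + Ra)            # current divider: fraction flowing through the meter
    print(f"R0 || Ra = {R_eq:.2f} ohm (instead of {R0:.1f} ohm)")
    print(f"{100 * frac_meter:.0f}% of the loop current is diverted through the ammeter")
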

The last two images show the setup for testing the heater. We wrapped it around an aluminum piece and covered it with a few layers of insulating material. We can stick a thermometer in between the insulation and the heater to see the temperature change. In later tests, we may insulate the whole piece so that less heat gets dissipated. In addition, we secured the MOSFET to a heat sink with thermal paste, as it got very hot.

Our next steps will be to get a resistor and an OP amp that are better suited for our purposes. We will also run simulations with components that we choose to make sure that it can provide the desired current of 1A (the maximum output of the power supply is 24V, and the heater is 24 ohm, so max current is 1A). Kevin is working on that now.

Attachment 1: heater_circuit.pdf
heater_circuit.pdf
Attachment 2: simulation.png
simulation.png
Attachment 3: heater_setup.jpg
heater_setup.jpg
Attachment 4: IMG_20170829_131126.jpg
IMG_20170829_131126.jpg
  13268   Tue Aug 29 15:35:19 2017 SteveUpdateVACvacuum pump specifications & manuals

RP1 and RP3 roughing pump manual of Leybold D30A oily rotory pump

Fore pump of TP2 & TP3 Varian SH-100 Dry Scroll

TP2 and TP3 small turbo drag pump  Varian 969-9361 

TP2 and TP3 turbo controller Varian 969-9505

TP1 magnetically suspended turbo pump: Osaka TG390MCAB, sn 360, and controller TC010M. Note: this pump is running on 208VAC single phase. It is not on the UPS!

                                                             Osaka Maglev Manual        and Osaka Controller Communication Wiring                                                                                                                                

VC1 cryo pump: CTI-Cryogenics Cryo Torr 8, sn 8g23925. SAFETY note: the compressor is single phase 208VAC and the head driver is 3 phase 208VAC. The compressor and driver each have a separate power cord!

Installed at the 40m wiki also.

Quote:

The V1 gate valve specs are installed at the 40m wiki page. VAT model number 10846-UE44-0007. Our main volume is pumped through this 8" id gate valve V1 to the Maglev turbo or to the cryo pump VC1.

The ion pumps have 6" id gate valves: VAT 10844-UE44-AAY1, pneumatic actuator with position indicator and double acting solenoid valve, 115V 60Hz. Purchased 1999 Dec 22.

UHV gate valves 2.5" id: VAT 10836-UE44, pneumatic actuator with position indicator and double acting solenoid valve, 115V 60 Hz. IFO to RGA: VM1 & RGA to Maglev: VM2.

mini UHV gate valve 1.5" id: VAT 01032-UE01, 2016 catalogue page 14, manual actuation - no position indicator. VM4, next to the manually adjustable fine leak valve to the RGA.

UHV angle valve 1.5" id, model VAT 28432-GE41, Viton plate seal, pneumatic actuator with position indicator & solenoid valve 115V & single acting closing spring  MEDM screen: VM3,VC2, V3,V4,V5,V6,VA6,V7 & annuloses Each chamber annulos has 2 valves.

UHV angle valve 1.5" id, model VAT 57132-GE05   go page 208,   Metal tip seal, manual actuating only with position indicator,   MEDM screen: roughing RV1 and venting VV1 hand wheel needed to close to torque spec

UHV angle valve 1.5" id, model VAT 28432-GE01, Viton plate seal, manual operation only; at the IT gauges Hornet & Super Bee and the ion pump roughing ports. These are not labeled.

                                                      

The Cryo pump interlock wiring was added too

Note: all moving valve plate seals are single.

 
