ID   Date   Author   Type   Category   Subject
  13007   Tue May 23 15:22:04 2017   rana   Update   Optical Levers   Beam Profiling Results
  1. Include several sources of error. Micrometer error is one, but you should be able to think of at least 3 more.
  2. There should be error bars for both the x and y axes.
  3. Also, use pdftk to put all the PDFs into a single file, and remove the excess whitespace.
  4. Google 'beautiful plots python' and try to make your elog plots closer to publication quality for PRL or Nature (see the sketch below).
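A minimal matplotlib sketch of the kind of thing items 2-4 are asking for (the data arrays and error values below are hypothetical placeholders, not the measured profile; the pdftk merge is shown as a comment):

import numpy as np
import matplotlib.pyplot as plt

# Hypothetical beam-profile data: z position [mm] and spot size [um]
z = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
w = np.array([310., 342., 395., 460., 531.])
z_err = np.full_like(z, 0.01)   # e.g. micrometer / translation stage least count
w_err = np.full_like(w, 5.0)    # e.g. fit error on the spot size

plt.rcParams.update({
    "font.size": 14,
    "axes.grid": True,
    "figure.figsize": (6, 4),
    "savefig.bbox": "tight",     # trims the surplus whitespace
})

fig, ax = plt.subplots()
ax.errorbar(z, w, xerr=z_err, yerr=w_err, fmt="o", capsize=3)
ax.set_xlabel("z position [mm]")
ax.set_ylabel("spot size [um]")
fig.savefig("spot_size.pdf")

# Merge the per-axis PDFs into a single file afterwards, e.g.:
#   pdftk spot_size_x.pdf spot_size_y.pdf cat output beam_profile.pdf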
  13008   Tue May 23 16:33:00 2017   Steve   Update   Optical Levers   Beam Profiling Results

You may compare your results with this.

RXA: please no, that's not the right way

  13010   Tue May 23 22:58:23 2017   gautam   Update   General   De-Whitening board noises

Summary:

I wanted to match a noise model to the noise measurement for the coil-driver de-whitening boards. The main objectives were:

  1. Make sure the various poles/zeros of the Bi-Quad stages and the output stage were as expected from the schematics
  2. Figure out which components are dominating the noise contribution, so that these can be prioritized while swapping out the existing thick-film resistors on the board for lower noise thin-film ones
  3. Compare the noise performance of the existing configuration, which uses an LT1128 op-amp (max output current ~20mA) to drive the input of the coil-driver board, with the performance when a TLE2027 (max output current ~50mA) is used instead. This change is motivated by an earlier noise simulation which suggested that the Johnson noise of the 1kohm input resistor on the coil driver board is one of the major noise contributors in the de-whitening board + coil driver board signal chain (a quick estimate is sketched below). Since the TLE2027 can drive an output current of up to 300mA, we could reduce the input impedance of the coil-driver board to mitigate this noise source to some extent. 
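For reference, the Johnson-noise estimate behind item 3 is just sqrt(4kTR); a quick sketch (room temperature assumed):

import numpy as np

kB = 1.380649e-23   # Boltzmann constant [J/K]
T = 298.0           # assumed room temperature [K]

def johnson_vnoise(R):
    """Voltage noise spectral density sqrt(4kTR) in V/rtHz."""
    return np.sqrt(4 * kB * T * R)

print(johnson_vnoise(1e3) * 1e9, "nV/rtHz")    # 1 kohm input resistor: ~4.1 nV/rtHz
print(johnson_vnoise(100.0) * 1e9, "nV/rtHz")  # 100 ohm, for comparison: ~1.3 nV/rtHz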

Measurement:

  • The back-plane pin controlling the MAX333A that determines whether de-whitening is engaged or not (P1A) was pulled to ground (by means of one of the new extender boards given to us by Ben Abbott). So two de-whitening stages were engaged for subsequent tests.
  • I first measured the transfer function of the signal path with whitening engaged, and then fit my LISO model to the measurement to tweak the values of the various components. This fitted file is what I used for subsequent noise analysis. 
  • For the noise measurement, I shorted the input of the de-whitening board (10-pin IDC connector) directly to ground.
  • I then measured the voltage noise at the front-panel SMA connector with the SR785.
  • The measurements were only done for 1 channel (CH1, which is the UL coil) for 4 de-whitening boards (2 ITMs, BS, and SRM). The 2 ITM boards are basically identical, and the BS and SRM boards are similar. Here, only results for the board labelled "ITMX" are presented.
  • For this board, I also measured the output voltage noise when the LT1128 was replaced with a TLE2027 (SOIC package, soldered onto a SOIC-to-DIP adaptor). Steve has found (ordered?) some DIP variants of this IC, so we can compare its noise performance when we get it.

Results:

  • Attachment #1 shows the modeled and measured noises, which are in fairly good agreement.
  • The transfer function measurement/fitting (not attached) also suggests that the poles/zeros in the signal path are where we expect as per the schematic. I had already verified the various resistances, but now we can be confident that the capacitance values on the schematic are also correct. 
  • The LT1128 and TLE2027 show pretty much identical noise performance.
  • The SR785 noise floor was low enough to allow this measurement without any pre-amp in between. 
  • I have identified 3 resistors from the LISO model that dominate the noise (all 3 are in the Bi-Quad stages), which should be the first to be replaced. 
  • There are some pretty large 60 Hz harmonics visible. I thought I was careful enough avoiding any ground loops in the measurement, and I have gotten some more tips from Koji about how to better set up the measurement. This was a real problem when trying to characterize the Coil Driver noise.

Next steps:

  • I have data from the other 3 boards I pulled out, to be updated shortly.
  • The last piece (?) in this puzzle is the coil driver noise - this needs to be modeled and measured.
  • Once the coil driver board has been characterized, we need to decide what changes to make to these boards. Some things that come to mind at the moment:
    • Replace critical resistors (from noise-performance point of view) with low noise thin film ones.
    • Remove the "fast analog" path on the coil driver boards - these have potentiometers in series with the coil, which we should remove since we are not using this path anyways.
    • Remove all AD797s from both de-whitening and coil driver boards - these are mostly employed as monitor points that go to the backplane connector, which we don't use, and so can be removed.
    • Increase the series resistor at the output of the coil driver (currently either 100ohm or 400ohm depending on the optic/channel). I need to double check the limits on the various LSC servos to make sure we can live with the reduced range we will have if we up these resistances to 1 kohm, which would reduce the current noise to the coils - which is ultimately what matters (a rough estimate of the noise/range tradeoff is sketched below).
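A rough version of that tradeoff, as a sketch: the current noise assumes the series resistor is Johnson-noise limited, and the available drive current assumes a +/-10V swing with the coil resistance neglected.

import numpy as np

kB, T = 1.380649e-23, 298.0   # Boltzmann constant, assumed room temperature
V_max = 10.0                  # assumed maximum drive voltage at the coil driver output [V]

for R in (100.0, 400.0, 1000.0):
    i_noise = np.sqrt(4 * kB * T / R)   # Johnson current noise into the coil [A/rtHz]
    i_max = V_max / R                   # available DC drive current [A]
    print(f"R = {R:6.0f} ohm : {i_noise*1e12:5.1f} pA/rtHz, max current {i_max*1e3:6.1f} mA")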
Attachment 1: ITMX_deWhite_ch1_noise.pdf
ITMX_deWhite_ch1_noise.pdf
  13011   Wed May 24 18:19:15 2017   Kaustubh   Update   General   ET-3010 PD Test

Summary:

In continuation of the previous test conducted on the ET-3040 PD, I performed a similar test on the ET-3010 model. This model requires a fiber-coupled input for proper testing, but I tested it in free space without a fiber coupler, since the laser power was only 1.00 mW and there was not much danger from scattering of the laser beam. The data sheet can be found here.

Procedure:

The schematic (attached below) and the procedure are the same as last time. The pump current was set to 19.5 mA, giving a laser beam of power 1.00 mW at the fiber coupler output. The measured voltage for the reference detector was 1.8 V. For the DUT, the voltage is amplified using a low noise amplifier (SR560) with a gain of 100. Without any laser incident on the DUT, the multimeter reads 120.6 mV. After aligning the laser with the DUT, the multimeter reads 348.5 mV, i.e. the voltage for the DUT is 227.9/100 ~ 2.28 mV. The DC transimpedance of the reference detector is 10 kOhm and its responsivity at 1064 nm is around 0.75 A/W; using this we calculate the power at the reference detector to be 0.24 mW. The DC transimpedance of the DUT is 50 Ohm and its responsivity is around 0.85 A/W, giving a power at the DUT of 0.054 mW. After this we connect the laser's modulation input to the Network Analyzer (AG4395A) and drive it with an RF signal at -10 dBm, with the modulation frequency swept from 100 kHz to 500 MHz. The RF output from the Analyzer is coupled to the Reference Channel (CHR) of the analyzer via a 20 dB directional coupler. The AC output of the reference detector goes to Channel A (CHA) and the output from the DUT goes to Channel B (CHB). We obtained plots of the ratios between the reference detector, the DUT and the coupled reference for the transfer function and the phase. I stored the data under the directory .../scripts/general/netgpibdata/data. The Bode plot is attached below; from it we observe that the cut-off frequency of the ET-3010 model is at least 500 MHz (stated as >1.5 GHz in the data sheet).
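For reference, the DC optical powers quoted above follow from P = V / (transimpedance x responsivity); a quick sketch with the numbers from this measurement:

# Optical power inferred from a DC voltage: P = V / (transimpedance * responsivity)
def optical_power(v_dc, transimpedance, responsivity):
    return v_dc / (transimpedance * responsivity)

# Reference detector: 1.8 V, 10 kOhm, 0.75 A/W at 1064 nm
P_ref = optical_power(1.8, 10e3, 0.75)        # ~0.24 mW
# DUT (ET-3010): (348.5 - 120.6) mV / gain of 100 ~ 2.28 mV, 50 Ohm, 0.85 A/W
v_dut = (348.5e-3 - 120.6e-3) / 100.0
P_dut = optical_power(v_dut, 50.0, 0.85)      # ~0.054 mW
print(P_ref * 1e3, P_dut * 1e3, "mW")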

Result:

The bandwidth of the ET-3010 PD is at least 500 MHz (the data sheet states >1.5 GHz).

Precaution:

The ET-3010 PD has an internal 6 V (battery) power supply. Don't leave the PD connected to any instrument after the experiment is done, or else the batteries will drain whenever there is photocurrent on the PD.

To Do:

Calibrate the vertical axis in the Bode plots with transimpedance (Ohms) for the two PDs. Automate the procedure by writing a Python script to take multiple sets of readings from the Network Analyzer and also plot the error bands.

Attachment 1: PD_test_setup.png
PD_test_setup.png
Attachment 2: ET-3010_test.pdf
ET-3010_test.pdf
Attachment 3: ET-3010_test.zip
  13012   Thu May 25 12:22:59 2017   gautam   Update   CDS   slow machine bootfest

After ~3months without any problems on the slow machine front, I had to reboot c1psl, c1susaux and c1iscaux today. The control room StripTool traces were not being displayed for all the PSL channels so I ran testSlowMachines.bash to check the status of the slow machines, which indicated that these three slow machines were dead. After rebooting the slow machines, I had to burt-restore the c1psl snapshot as usual to get the PMC to lock. Now, both PMC and IMC are locked. I also had to restart the StripTool traces (using scripts/general/startStrip.sh) to get the unresponsive traces back online.

Steve tells me that we probably have to do a reboot of the vacuum slow machines sometime soon too, as the MEDM screens for the vacuum indicator channels are unresponsive.

Quote:

Had to reboot c1psl, c1susaux, c1auxex, c1auxey and c1iscaux today. PMC has been relocked. ITMX didn't get stuck. According to this thread, there have been two instances in the last 10 days in which c1psl and c1susaux have failed. Since we seem to be doing this often lately, I've made a little script that uses the netcat utility to check which slow machines respond to telnet, it is located at /opt/rtcds/caltech/c1/scripts/cds/testSlowMachines.bash.
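For reference, a rough Python equivalent of the check performed by testSlowMachines.bash (the host list below is a hypothetical subset; the real list lives in the script, and port 23 assumes the usual telnet port):

import socket

SLOW_MACHINES = ["c1psl", "c1susaux", "c1auxex", "c1auxey", "c1iscaux"]

def responds_to_telnet(host, port=23, timeout=2.0):
    """Return True if the host accepts a TCP connection on the telnet port."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

for host in SLOW_MACHINES:
    status = "alive" if responds_to_telnet(host) else "DEAD - needs a reboot/keying"
    print(f"{host:10s} {status}")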

 

 

  13013   Thu May 25 16:42:41 2017   jigyasa   Update   Computer Scripts / Programs   Making pylon installation on shared directory

I have been working on interfacing with the GigEs. I went through Joe B.'s paper and the previous elogs and verified that the code files are installed.

I then downloaded and extracted a copy of the Pylon software onto my home directory on Allegra. Gautam helped me find installation instructions on Johannes’ directory so that I could make the installation on the shared directory.

So far, according to the instructions, these commands need to be executed so that the installation takes place and the rules for camera permissions are set up.

sudo tar -C /opt/rtcds/caltech/c1/scripts/GigE -xzf pylon SDK*.tar.gz

followed by ./setup-usb.sh

The Pylon viewer can then be accessed with /scripts/GigE/pylon5/bin/PylonViewerApp 

Should I go ahead with the installation in the shared directory?

  13014   Thu May 25 18:37:11 2017   jigyasa   Update   Computer Scripts / Programs   Making pylon installation on shared directory

Gautam helped me execute the commands mentioned above, and Pylon has now been installed in the shared directory. We extracted the pylon installation from Johannes's directory to the shared drive, and executing the command tar -C /opt/rtcds/caltech/c1/scripts/GigE -xzf pylon SDK*.tar.gz created an unzipped pylon5 folder in /scripts. Running ./setup-usb.sh set up the udev rules for the GigE.

The installation took place without any errors.

The Pylon viewer app can now be accessed at /opt/rtcds/caltech/c1/scripts/GigE/pylon5/bin followed by ./PylonViewerApp 

Quote:

Should I go ahead with the installation in the shared directory?

 

  13015   Thu May 25 19:27:29 2017   gautam   Update   General   Coil driver board noises

[Koji, Gautam]

Summary: 

  • Attachment #1 shows the measured/modeled noise of the coil driver board (labelled ITMX). 
  • The measurement was made with the "TEST" input (which is what the DAC drives) connected to ground via a 50ohm terminator, and the "BIAS" input grounded.
  • The model tells us to expect a noise of the order of 5nV/rtHz - this is comparable to (or below) the input noise of the SR785, and even the SR560. So this measurement only serves to place an upper bound on the coil driver board noise.
  • There is some excess noise below 40Hz; it would be interesting to see if this disappears when the thick-film resistors are swapped out for thin film ones.
  • The LISO model says that the dominant contribution is from the voltage and input current noise of the two op-amps (LT1125) in the bias LP filter path. 
  • But if we can indeed realize this noise level of ~10-20nV/rtHz, we are already at the ~10^-17m/rtHz displacement noise for MICH at about 200Hz. I suspect there are other noises that will prevent us from realizing this performance in displacement noise.

Details:

This measurement has been troublesome - I was plagued by large 60Hz harmonics (see Attachment #1), the cause of which was unknown. I powered all electronics used in the measurement setup from the same power strip (one of the new surge-protecting ones Steve recently acquired for us), but the harmonics remained present. Yesterday, Koji helped me troubleshoot this issue. We tried various things; I list them here in the order we did them:

  1. Double check that all electronics were indeed being powered from the same power strip - OK, but harmonics remained present.
  2. Tried using a different DC power supply - no effect.
  3. Checked the signal with an oscilloscope - got no additional insight.
  4. I was using a DB25 breakout board + Pomona minigrabbers to measure the output signal and pipe it to the SR785. Koji suggested using twisted ribbon wire + a soldered BNC connector (recycled from some used ones lying around the lab). The idea was to minimize stray radiation pickup. We also disconnected the WiFi extender and GPIB box from the analyzer and disconnected these from power - this finally had the desired effect: the large harmonics vanished. 

Today, I tried to repeat the measurement, with the newly made twisted ribbon cable, but the large 60Hz harmonics were back. Then I realized we had also disconnected the WiFi extender and GPIB box yesterday.

Turns out that connecting the Prologix box to the SR785 (even with no power) is the culprit! Disconnecting the Prologix box makes these harmonics go away. I was using the box labelled "Santuzza.martian" (192.168.113.109), but I double-checked with the box labelled "vanna.martian" (192.168.113.105, also a different DC power supply adapter for the box), the effect is the same. I checked various combinations like 

  • GPIB box connected but not powered
  • GPIB box connected with no network cable

but it looks like connecting the GPIB box to the analyzer is what causes the problem. This was reproducible on both SR785s in the lab. So to make this measurement, I had to do things the painful way - acquire the spectrum by manually pushing buttons with the GPIB box disconnected, then re-connect the box and download the data using SRmeasure --getdata. I don't fully understand what is going on, especially since if the input connector is directly terminated using a 50ohm BNC terminator, there are no harmonics, regardless of whether the GPIB box is connected or not. But it is worth keeping this problem in mind for future low-noise measurements. My elog searches did not reveal past reports of similar problems, has anyone seen something like this before?

It also looks like my previous measurement of the de-whitening board noises was plagued by the same problem (I took all those spectra with the GPIB boxes connected). I will repeat this measurement.

Next steps:

At the meeting this week, it was decided that

  • All AD797s would be removed from de-whitening boards and also coil-driver boards (as they are unused).
  • Thick film resistors with the most dominant noise contributions to be replaced with thin-film ones.
  • Gain of 3 on de-whitening board to be changed to gain of 1.

I also think it would be a good idea to up the 100-ohm resistors in the bias path on the ITM coil driver boards to 1kohm wire-wound. Since the dominant noise on the coil-driver boards is from the voltage noise of the op-amps in the bias path, this would definitely be an improvement. Looking at the current values of the bias MEDM sliders, a 10x increase in the resistance for ITMX will not be possible (the yaw bias is ~-1.5V), but perhaps we can go for a 4x increase?
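A quick version of that range check (the -1.5V yaw bias is the current MEDM slider value quoted above; the ~12V op-amp saturation figure is taken from the earlier de-whitening entry):

# If the bias-path series resistance is increased by `scale`, the bias voltage needed
# to keep the same coil current scales by the same factor.
v_yaw_bias = -1.5   # current ITMX yaw bias [V]
v_sat = 12.0        # approximate op-amp saturation voltage [V]

for scale in (4, 10):
    v_needed = v_yaw_bias * scale
    ok = abs(v_needed) < v_sat
    print(f"x{scale:2d} resistor: need {v_needed:+.1f} V -> {'OK' if ok else 'exceeds op-amp range'}")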

The plan is to then re-install the boards, and see if we can 

  1. Turn on the whitening successfully (I checked with an extender board that the switching of the whitening stages works - turning OFF the "simDW" filter in the coil driver filter banks enables the analog de-whitening).
  2. Realize the promised improvement in MICH displacement noise with the existing whitening configuration.

We can then take a call on how much to up the series resistance in the DAC signal path. 

Now that I have figured out the cause of the harmonics, I will also try and measure the combined electronics noise of de-whitening board + coil driver board and compare it to the model.

Quote:
  • The last piece (?) in this puzzle is the coil driver noise - this needs to be modeled and measured.

 

Attachment 1: coilDriverNoises.pdf
coilDriverNoises.pdf
  13016   Sat May 27 10:26:28 2017   Kaustubh   Update   General   Transimpedance Calibration

Using Alberto's paper LIGO-T10002-09-R titled "40m RF PDs Upgrade", I calibrated the vertical axis in the bode plots I had obtained for the two PDs ET-3010 and ET-3040.

I am not sure whether the values I have obtained are correct (i.e. whether the calibration is correct). Kindly review them.

EDIT: Attached the formula used to calculate the transimpedance for each data point and the values of the other parameters.

EDIT 2: Updated the plots by changing the conversion for getting the ratio of the transfer functions from 10^(y/10) to 10^(y/20).
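For reference, a minimal sketch of the corrected conversion (the analyzer channels measure voltages, so the amplitude ratio uses a factor of 20, not 10):

import numpy as np

def db_to_ratio(y_dB):
    """Network-analyzer magnitude in dB -> linear voltage (amplitude) ratio.

    The earlier plots used 10**(y/10), which is the power ratio; since the
    calibration to transimpedance needs the voltage ratio, 10**(y/20) is the
    right conversion.
    """
    return 10.0 ** (np.asarray(y_dB) / 20.0)

print(db_to_ratio(-6.0))   # ~0.5 in amplitude for a -6 dB point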

Attachment 1: ET-3040_test_transimpedance.pdf
ET-3040_test_transimpedance.pdf
Attachment 2: ET-3010_test_transimpedance.pdf
ET-3010_test_transimpedance.pdf
Attachment 3: Formula_for_Transimpedance.pdf
Formula_for_Transimpedance.pdf
  13017   Mon May 29 16:47:38 2017   gautam   Update   General   Coil driver boards reinstalled

Yesterday, I reinstalled the de-whitening boards + coil driver boards into their respective Eurocrate slots, and reconnected the cabling. I then roughly re-aligned the ITMs using the green beams. 

I've given Steve a list of the thin-film resistors we need to implement the changes discussed in the preceding elogs - but I figured it would be good to see if we can realize the projected improvement in MICH displacement noise just by fixing the BS Oplev loop shape and turning the existing whitening on. Before re-installing the boards, however, I did make a few changes:

  • Removed the gain of x3 on all the signal paths on the De-Whitening boards, and made them gain x1. For the De-Whitened path, this was done by changing the feedback resistor in the final op-amp (OP27) from 7.5kohm to 2.49kOhm, while for the bypass path, the feedback resistor in the LT1125 stages were changed from 3.01kohm to 1kohm. 
  • To recap - this gain of x3 was originally implemented because the DACs were +/- 5V, while the coil driver electronics had supply voltage of +/- 15V. Now, our DACs are +/- 10V, and even though the supply voltage to the coil driver boards is +/- 15V, in reality, the op-amps saturate at around 12V, so we aren't really losing much in terms of range.
  • I also modified the de-whitening path in the BS de-whitening board to mimic the configuration on the ITM de-whitening boards. Mainly, this involved replacing the final stage AD797 with an OP27, and also implementing the passive pole-zero network at the output of the de-whitened path. I couldn't find capacitors similar to those used on the ITM de-whitening boards, so I used WIMA capacitors.
  • The SRM de-whitening path was not touched for now.
  • On all the boards, I replaced any AD797s that were being used with OP27s, and simply removed AD797s that were in DAQ paths.
  • I removed all the potentiometers on all the boards (FAST analog path on the coil driver boards, and some offset trim Pots on the BS and SRM de-whitening boards for the AD797s, which were also removed).
  • For one signal path on the coil driver board (ITMX ch1), I replaced all of the resistors with thin-film ones and re-measured the noise. However, the excess noise in the measurement below ~40Hz (relative to the model) remained.

Photos of all the boards were taken prior to re-installation, and have been uploaded to the 40m Google Photos page - I will update schematics + photos on the DCC page once other planned changes are implemented.

I also measured the transfer functions on the de-whitened signal paths on all the boards before re-installing them. I then fit everything using LISO, and updated the filter banks in Foton to match these measurements - the original filters were copied over from FM9 and FM10 to FM7 and FM8. The new filters are appended with the suffix "_0517", and live in FM9 and FM10 of the coil output filter banks. The measured TFs (for ITMs and BS) are summarized in Attachment #1, while Attachment #2 contains the data and LISO file used to do the fits (path to the .bod files in the .fil file will have to be changed appropriately). I used 2 complex pole pairs at ~10 Hz, two complex zero pairs at ~100Hz, real poles at ~15Hz and ~3kHz, and real zeros at ~100Hz and ~550Hz for the fits. The fits line up well with the measured data, and are close enough to the "expected" values (as calculated from component values) to be explained by tolerances on the installed components - I omit the plots here. 
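For reference, a sketch of how the fitted pole/zero pattern quoted above can be written down for a quick sanity check against the measured TFs (the frequencies are the approximate values given in the text; the Qs and overall gain are placeholders, not the LISO fit results):

import numpy as np
from scipy import signal

def complex_pair(f0, Q):
    """s-plane roots of a complex pole/zero pair at frequency f0 [Hz] with quality factor Q."""
    w0 = 2 * np.pi * f0
    re = -w0 / (2 * Q)
    im = w0 * np.sqrt(max(1 - 1 / (4 * Q**2), 0.0))
    return [re + 1j * im, re - 1j * im]

# Approximate features from the text; Q values and gain are placeholders
zeros = complex_pair(100, 3) + complex_pair(100, 3) + [-2 * np.pi * 100, -2 * np.pi * 550]
poles = complex_pair(10, 3) + complex_pair(10, 3) + [-2 * np.pi * 15, -2 * np.pi * 3e3]

f = np.logspace(0, 4, 500)
w, mag, phase = signal.bode(signal.ZerosPolesGain(zeros, poles, 1.0), 2 * np.pi * f)
# mag [dB] and phase [deg] vs f [Hz] can now be overlaid on the measured transfer functions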

After re-installing the boards in the Eurocrate, restoring rough alignment, and updating the filter banks with the most recent measured values, I wanted to see if I could turn the whitening on for one of the optics (ITMY) smoothly before trying to do so in the full DRMI - switching off the "SimDW_0517" filter (FM9) should switch the signal path on the de-whitening board from bypass to de-whitened, and I had confirmed last week with an extender board that the voltage at the appropriate backplane connector pin does change as expected when the FM9 MEDM button is toggled (for both ITMs, BS and SRM). But today I was not able to engage this transition smoothly, the optic seems to be getting kicked around when I engage the whitening. I will need to investigate this further. 


Unrelated to this work: the ETMY Oplev HeNe is dead (see Attachment #3). I thought we had just replaced this laser a couple of months ago - what is the expected lifetime of these? Perhaps the power supply at the Y-end is wonky and somehow damaging the HeNe heads?

Attachment 1: deWhitening_consolidated.pdf
deWhitening_consolidated.pdf deWhitening_consolidated.pdf deWhitening_consolidated.pdf
Attachment 2: deWhitening_measurements.zip
Attachment 3: ETMY_OL.png
ETMY_OL.png
  13018   Tue May 30 13:36:58 2017   Steve   Update   Optical Levers   ETMY Oplev HeNe is replaced

Finally I realized what was killing the ETMY oplev laser: the wrong power supply. It was driving the HeNe laser at 600V above the recommended voltage. The 101T-2300Vdc power supply was replaced with a 101T-1700Vdc unit (Uniphase model 1201-1, sn 2712420).

The laser head 1103P, sn P947049, lived for 120 days; it was replaced by sn P964431. New laser output is 2.8 mW, quadrant sum 19,750 counts.

 

Attachment 1: oplevETMY120d.png
oplevETMY120d.png
  13019   Tue May 30 16:02:59 2017   gautam   Update   General   Coil driver boards reinstalled

I think the reason I am unable to engage the de-whitening is that the OL loop is injecting a ton of control noise - see Attachment #1. With the OL loop off (i.e. just local damping loops engaged for the ITMs), the RMS control signal at 100Hz is ~6 orders of magnitude (!) lower than with the OL loop on. So turning on the whitening was just railing the DAC I guess (since the whitening has something like 60dB gain at 100Hz).

The Oplev loops for the ITMs use an "Ellip15" low-pass filter to do the roll-off (2nd order Elliptic low pass filter with 15dB stopband atten and 2dB ripple). I confirmed that if I disable the OL loops, I was able to turn on the whitening for ITMY smoothly.
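For reference, a sketch of what the "Ellip15" roll-off corresponds to (the order, ripple and stopband attenuation are from the description above; the corner frequency and sample rate below are placeholders):

from scipy import signal

fs = 2048.0      # assumed servo model rate [Hz]
f_corner = 20.0  # placeholder corner frequency [Hz]

# 2nd-order elliptic low-pass, 2 dB passband ripple, 15 dB stopband attenuation
b, a = signal.ellip(N=2, rp=2, rs=15, Wn=f_corner / (fs / 2), btype="low")

w, h = signal.freqz(b, a, worN=2048, fs=fs)
# |h| rolls off above f_corner but only reaches ~-15 dB in the stopband,
# i.e. a fairly shallow cut-off.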

Now that the ETMY OL HeNe has been replaced, I restored alignment of the IFO. Both arms lock fine (I was also able to engage the ITMY Coil Driver whitening smoothly with the arm locked). However, something funny is going on with ASS - running the dither seems to inject huge offsets into the ITMY pit and yaw such that it almost immediately breaks the lock. This probably has to do with some EPICS values not being reset correctly since the recent slow-machine restarts (for instance, the c1iscaux restart caused all the LSC RFPD whitening gains to be reset to random values, I had to burt-restore the POX11 and POY11 values before I could get the arms to lock), I will have to investigate further.

GV edit 2pm 31 May: After talking to Koji at the meeting, I realized I did not specify what channel the attached spectra are for - it is  C1:SUS-ITMY_ULCOIL_OUT.

Quote:
 

But today I was not able to engage this transition smoothly, the optic seems to be getting kicked around when I engage the whitening. I will need to investigate this further. 


Unrelated to this work: the ETMY Oplev HeNe is dead (see Attachment #3). I thought we had just replaced this laser a couple of months ago - what is the expected lifetime of these? Perhaps the power supply at the Y-end is wonky and somehow damaging the HeNe heads?

 

Attachment 1: OL_noiseInjection.pdf
OL_noiseInjection.pdf
  13021   Tue May 30 18:31:54 2017   Dhruva   Update   Optical Levers   Beam Profiling Results

​Updates in the He-Ne beam profiling experiment. 

  1. I've made intensity profile plots at two more points on the z-axis. The addition of these plots hasn't affected the earlier obtained beam waist significantly. 
  2. I have added other sources of error, such as the statistical fluctuations on the oscilloscope (which are small compared to the least count error of the micrometer) and the least count of the z-axis scale.
  3. I have also calculated the error in the fitted parameters by computing the covariance matrix from the Jacobian returned by the lsqcurvefit function in MATLAB. 
  4. I have also added horizontal error bars to all plots. 
  5. All plots are now in S.I. units 

 

 

Attachment 1: plots.pdf
plots.pdf plots.pdf plots.pdf plots.pdf plots.pdf plots.pdf plots.pdf
Attachment 2: spot_size_y.pdf
spot_size_y.pdf
  13022   Wed May 31 12:58:30 2017   Eric Gustafson   Update   LSC   Running the 40 m PD Frequency Response Fiber System; Hardware and Software

Overall Design

A schematic of the overall subsystem is shown in the attachment.

RF and Optical Connections

Starting at the top left corner is the diode laser module. This laser has an input which allows it to be amplitude modulated. The output of the laser is coupled into an optical fiber, connectorized with an FC/APC connector, which is connected to the input port of a 1-by-16 optical fiber splitter. The splitter divides the input laser power into 16 roughly equal optical fiber outputs. These optical fibers are routed to the photodiode receivers (PDs), which are the devices under test. All of the PDs are illuminated simultaneously with amplitude modulated light. Each optical fiber output has a collimating fiber telescope which is used to focus the light onto the PD. Optical fiber CH1 is routed to a broadband, flat-response reference photodiode which provides a reference to the HP-4395A Network Analyzer. The other channel outputs are connected to an RF switch which can be programmed to select one of its inputs as the output. The selected output can then be sent into channel A of the RF Network Analyzer. 

 

RF Switch

The RF switch consists of two 8-by-1 multiplexers (National Instruments PXI-254x) slotted into a PXI chassis (National Instruments PXI-1033). The multiplexers have 8 RF inputs and one RF output, and can be programmed through the PXI chassis to select one and only one of the 8 inputs to be routed to the RF output. The first 8 channels are connected to the first 8 inputs of the first multiplexer. The first multiplexer's output is then connected to the Channel 1 input of the second multiplexer. The remaining PD outputs are connected to the remaining inputs of the second multiplexer. The output of the second multiplexer is connected to the A channel of the RF Network Analyzer. Thus it is possible to select any one of the PD RF outputs for analysis (a sketch of the resulting channel mapping is below).
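A sketch of the channel-to-multiplexer mapping implied by the description above (the exact channel numbering and the mux_route helper are assumptions for illustration; the real switching is done through the NI PXI drivers):

def mux_route(pd_channel):
    """Map a PD channel (1-15) to the (multiplexer, input) selections for the cascade.

    Channels 1-8 go to inputs 1-8 of the first 8x1 multiplexer, whose output feeds
    input 1 of the second multiplexer; the remaining channels use the second
    multiplexer's inputs 2-8 directly.
    """
    if not 1 <= pd_channel <= 15:
        raise ValueError("only 15 PD channels fit this two-multiplexer cascade")
    if pd_channel <= 8:
        return [("mux1", pd_channel), ("mux2", 1)]
    return [("mux2", pd_channel - 7)]

print(mux_route(3))    # [('mux1', 3), ('mux2', 1)]
print(mux_route(12))   # [('mux2', 5)]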

Software

Something on this tomorrow.

 

Attachment 1: Overall_schematic_D1300603-v2.pdf
Overall_schematic_D1300603-v2.pdf
  13023   Wed May 31 14:23:42 2017   jigyasa   Update   Computer Scripts / Programs   Establishing the EPICs channels for the GigE

To set up the EPICS channels for the GigE, Gautam and I followed the steps in his elog 8957.

We copied the 11 required channels from scripts/GigE/SnapPy/example_camera.db to a c1cam.db file that we created; however, due to conflicts with the existing CAM-AS_PORT channels, the channels could not be accessed.

We later changed the database file to Video.db and on restarting the slow machine, it was verified that the channels indeed could be written to and read from.

11 channels were added

C1:CAM-MC1_X (X centroid position)
C1:CAM-MC1_Y (Y centroid position)
C1:CAM-MC1_WX (Gaussian width in the X direction)
C1:CAM-MC1_WY (Gaussian width in the Y direction)
C1:CAM-MC1_XY (Gaussian width along the XY line)
C1:CAM-MC1_SUM (Pixel sum)
C1:CAM-MC1_EXP (Exposure time in microseconds)
C1:CAM-MC1_SNAP (Control signal for taking snapshots)
C1:CAM-MC1_FILE (File name for the image to be saved to - time stamp automatically appended)
C1:CAM-MC1_RELOAD (Reloads configuration file)
C1:CAM-MC1_AUTO (1 means autoexposure on, 0 means autoexposure off)

The procedure followed –

  • Add the channel names to the file C0EDCU.ini (path = /opt/rtcds/caltech/c1/chans/daq/C0EDCU.ini).
  • Make a database (.db) file so that these channels are actually recorded (path = /cvs/cds/caltech/target/c1aux/Video.db).
  • Restart the slow machine and FB.
  • Verify that the channels indeed exist and can be read and written to using ezcaread and ezcawrite.

GV: Initially, I made a new directory called c1cam in /cvs/cds/caltech/target/ and made a .db file in there. However, the channels were not accessible after re-starting FB (attempting to read these channels threw up the "Channel does not exist" error). On digging a little further, I saw that there were already some "C1:CAM-AS_PORT" channels in C0EDCU.ini. The corresponding database records were defined inside /cvs/cds/caltech/target/c1aux/Video.db. So I just added the new records there. I also had to uncomment the dummy channel in C0EDCU.ini to keep an even number of channels. Restarting FB still did not allow read/write access to the channels. Looking through the files in /cvs/cds/caltech/target/c1aux, I suspected that the EPICS database records are loaded when the machine is first booted up - so on a hunch I re-started c1aux by keying the crate, and this did the trick. The channels can now be read / written to (tested using Python cdsutils).
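For completeness, a quick way to repeat the read/write check on all 11 channels (the check above used ezcaread/ezcawrite and Python cdsutils; pyepics is shown here as an alternative sketch):

from epics import caget, caput   # pyepics; ezcaread/ezcawrite or cdsutils work equally well

channels = [
    "C1:CAM-MC1_X", "C1:CAM-MC1_Y", "C1:CAM-MC1_WX", "C1:CAM-MC1_WY",
    "C1:CAM-MC1_XY", "C1:CAM-MC1_SUM", "C1:CAM-MC1_EXP", "C1:CAM-MC1_SNAP",
    "C1:CAM-MC1_FILE", "C1:CAM-MC1_RELOAD", "C1:CAM-MC1_AUTO",
]

for ch in channels:
    val = caget(ch, timeout=2.0)
    print(ch, "->", val if val is not None else "NO CONNECTION")

# Example write/read-back on the exposure channel
caput("C1:CAM-MC1_EXP", 1000, wait=True)
print(caget("C1:CAM-MC1_EXP"))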

  13026   Thu Jun 1 00:10:15 2017   gautam   Update   General   Coil driver boards reinstalled

[Koji, Gautam]

We tried to debug the mysterious sudden failure of ASS - here is a summary of what we did tonight. These are just notes for now, so I don't forget tomorrow.

What are the problems/symptoms?

  • After re-installing the coil driver electronics, the ASS loops do not appear to converge - one or more loops seem to run away to the point we lose the lock.
  • For the Y-arm dithers, the previously nominal ITM PIT and YAW oscillator amplitudes (of ~1000cts each) now appears far too large (the fuzz on the Y arm transmission increases by x3 as viewed on StripTool).
  • The convergence problem exists for the X arm alignment servos too.

What are the (known) changes since the servos were last working?

  • Gain of x3 on the de-whitening boards for ITMX, ITMY, BS and SRM have been replaced with gain x1. But I had measurements for all transfer functions (De-White board input to De-White Board outputs) before and after this change, so I compensated by adding a filter of gain ~x3 to all the coil filter banks for these optics (the exact value was the ratio of the DC gain of the transfer functions before/after).
  • The ETMY Oplev has been replaced. I walked over to the endtable and there doesn't seem to be any obvious clipping of either the Oplev beam or the IR transmission.

Hypotheses plus checks (indented bullets) to test them:

  1. The actuation on the ITMs are ~x10 times stronger now (for reasons unknown).
    • I locked the Y-arm and drove a line in the channels C1:SUS-ETMY_LSC_EXC and C1:SUS-ITMY_LSC_EXC at ~100Hz and ~30Hz, (one optic at one frequency at a time), and looked at the response in the LSC control signal. The peaks at both frequencies for the ITMs and ETMs were within a factor of ~2. Seems reasonable.
    • We further checked by driving lines in C1:SUS-ETMY_ASCPIT_EXC and C1:SUS-ITMY_ASCPIT_EXC (and also the corresponding YAW channels), and looked at peak heights at the drive frequencies in the OL control signal spectra - the peak heights matched up well in both the ITM and ETM spectra (the drive was in the same number of counts).

      So it doesn't look like there is any strange actuation imbalance between the ITM and ETM as a result of the recent electronics work, which makes sense as the other control loops acting on the suspensions (local damping, Oplevs etc seem to work fine). 
  2. The way the dither servo is set up for the Y-arm, the tip-tilts are used to set the input axis to the cavity axis, while actuation to the ITM and ETM takes care of the spot centering. The problem lies with one of these subsystems.
    • We tried disabling the ASS servo inputs to all the spot-centering loops - but even with just actuation on the TTs, the arm transmission isn't maximized.
    • We tried the other combination - disable the actuation path to the TTs, leave the paths to the ITM and ETM on - same result, but the divergence is much faster (lock lost within a couple of seconds, and large offsets appear in the ETM_PIT_L / ETM_YAW_L error signals).
    • Tried turning on loops one at a time - but still the arm transmission isn't maximized.
  3. Something is funny with the IR transmon QPD / ETMY Oplev.
    • I quickly measured Oplev PIT and YAW OLTFs, they seem normal with upper UGFs around 5Hz and phase margins of ~30 degrees.
    • We had no success using either of the two available Transmon QPDs
    • Looking at the QPD quadrants, the alignment isn't stellar but we get roughly the same number of counts on all quadrants, and the spot isn't drastically misaligned in either PIT or YAW.

For whatever reasons, it appears that dithering the cavity mirrors at frequencies with amplitudes that worked ~3 weeks ago is no longer giving us the correct error signals for dither alignment. We are out of ideas for tonight, TBC tomorrow...

 

  13027   Thu Jun 1 15:33:39 2017   jigyasa   Update   Cameras   GigE installation in the IFO area

I tried to capture some images with the GigE inside the interferometer area in the 40m today. For that, I connected the PoE injector to the Netgear switch in 1x6 and connected it to the GigE. I then tried to access the Pylon Viewer App through Paola, but that seemed to have some errors. When trying to connect to the Basler, quite a few errors were encountered in establishing the connection and trying to capture an image. There were a few errors with single shot capture, but continuous shot could not even be started. To locate the problem, I tried running the Pylon installation through Allegra in the control room, and everything seemed to work fine there.

Few error messages encountered

createPylondevice error :Failed to read memory at 0xc0000000, 0xd800 bytes. Timeout. No message received.
Failed to stop the camera; stopgrab: Exception Occurred: Control Channel not open


Eventually I connected Paola to the Switch with an Ethernet cable and over this wired connection, the errors were resolved and I was able to capture some images in Continuous shot mode at 103.3 fps without any problem.

In the afternoon, Steve and I tried to install the camera near MC2 and get some images of the mirrors. Due to a restricted field of view of the lens on the camera, after many efforts to focus on the optic, we were able to get this image. MC2 was unlocked so this image captures some resonating higher order mode.

With MC2 locked, I will get some images of the mirror at different exposure times and try to get an HDR image.  
As per Rana's suggestion, I am also looking up which compression format would be the best to save the images in.

 

Attachment 1: HOMMC2.pdf
HOMMC2.pdf
  13028   Thu Jun 1 15:37:01 2017   gautam   Update   CDS   slow machine bootfest

Steve alerted me that the IMC wouldn't lock. Reboots for c1susaux, c1iool0 today. I tried using the reset button instead of keying the crates. This worked for c1iool0, but not for c1susaux. So I had to key the latter crate. The machine took a good 5-10 minutes before coming back up, but eventually it did. Now IMC locks fine.

  13029   Thu Jun 1 16:14:55 2017   jigyasa   Update   Cameras   GigE installation in the IFO area

Thanks to Steve and Gautam, the IMC was locked.

I was able to capture images with the Rainbow 50 mm lens at exposure times of 100, 300, 1000, 3000, 10000 and 30 microseconds (the pictures are in the same order). These pictures were taken at a gain of 300 and a black level of 64.

Special credit to Steve, who spent a lot of time helping me set up the hardware and focus the camera on the beam spot. 
I can't thank you enough, Steve! :) 

Quote:

In the afternoon, Steve and I tried to install the camera near MC2 and get some images of the mirrors. Due to a restricted field of view of the lens on the camera, after many efforts to focus on the optic, we were able to get this image. MC2 was unlocked so this image captures some resonating higher order mode.

With MC2 locked, I will get some images of the mirror at different exposure times and try to get an HDR image.  
 

Attachment 1: MC2.pdf
MC2.pdf MC2.pdf MC2.pdf MC2.pdf MC2.pdf MC2.pdf
  13030   Thu Jun 1 16:21:55 2017   Steve   Update   SUS   wire standoffs update

Ruby wire standoffs received from China. I looked at one of them with our small USB camera. They did a good job. The long edges of the prism are chipped.

The v-groove cutter must avoid them. Pictures will follow.

 

  13031   Thu Jun 1 20:16:11 2017   rana   Update   Cameras   GigE installation in the IFO area

Good installation. I think the images are still out of focus, so try to resolve the beam into some small dots at the low exposure setting.

  13032   Fri Jun 2 00:54:08 2017   Koji   Update   ASS   Xarm ASS restoration work

While Gautam is working on the restoration of the Yarm ASS, I worked on the Xarm.

Basically, I have changed the oscillator freqs and amps so as to have signals that are linear in the misalignment of the mirrors.
Also reduced the complexity of the input/output matrices to avoid any confusion.

Now the ITM dither takes care of the ITM alignment, and the ETM dither takes care of the ETM alignment.
The cavity alignment servos (4 dofs) are running fine, although the control bandwidths are still low (<0.1Hz).
The ETM spot positions should be controlled by the BS alignment, but the signal quality for these loops is suspect.

While Gautam was touching the input TTs, we occasionally saw anomalously high transmission of the arm cavities (~1.2).
We decided to use this beam, as the lower transmission seen otherwise could have indicated partial clipping of the beam somewhere in the input optics chain.

Then the arm cavity was aligned to have reasonably high transmission for the green beam, i.e. we used the green power mon PD as a part of the alignment reference.

This resulted in very stable transmission of both the IR and green beams. We liked them. We decided to use this as the reference beam, at least for now.

Attachment1: GTRX image at the end of the work.

Attachment2: ASSX screen shot

Attachment3: ASSX servo screen shot

Attachment4: Green ASX servo screen shot

Attachment 5: Screen shot of the ASS X strip tool

Attachment 6: Screen shot of the ASS X input matrix

Attachment 7: Screen shot of the ASS X output matrix

Attachment 1: GTRX.jpeg
GTRX.jpeg
Attachment 2: 54.png
54.png
Attachment 3: 37.png
37.png
Attachment 4: 16.png
16.png
Attachment 5: 26.png
26.png
Attachment 6: 41.png
41.png
Attachment 7: 01.png
01.png
  13033   Fri Jun 2 01:22:50 2017   gautam   Update   ASS   ASS restoration work

I started by checking if shaking an optic in pitch really moves it in pitch - i.e. how much PIT to YAW coupling is there. The motivation being if we aren't really dithering the optics in orthogonal DoFs, the demodulated error signals carry mixed information which the dither alignment servos get confused by. First, I checked with a low frequency dither (~4Hz) and looked at the green transmission on the video monitors. The spot seemed to respond reasonably orthogonally to both pitch and yaw excitations on either ITMY or ETMY. But looking at the Oplev control signal spectra, there seems to be a significant amount of cross coupling. ITMY YAW, ETMY PIT, and ETMY YAW have the peak in the orthogonal degree of freedom at the excitation frequency roughly 20% of the height of the DoF being driven. But for ITMY PIT, the peaks in the orthogonal DoFs are almost of equal height. This remains true even when I changed the excitation frequencies to the nominal dither alignment servo frequencies.

I then tried to see if I could get parts of the ASS working. I tried to manually align the ITM, ETM and TTs as best as I could. There are many "alignment references" - prior to the coil driver board removal, I had centered all Oplevs and also checked that both X and Y green beams had nominal transmission levels (~0.4 for GTRY, ~0.5 for GTRX). Then there are the Transmon QPDs. After trying various combinations, I was able to get good IR transmission, and reasonable GTRY.

Next, I tried running the ASS loops that use error signals demodulated at the ETM dither frequencies (so actuation is on the ITM and TT1 as per the current output matrix which I did not touch for tonight). This worked reasonably well - Attachment #1 shows that the servos were able to recover good IR transmission when various optics in the Y arm were disturbed. I used the same oscillator frequencies as in the existing burt snapshot. But the amplitudes were tweaked.

Unfortunately I had no luck enabling the servos that demodulate the ITM dithers.

The plan for daytime work tomorrow is to check the linearity of the error signals in response to static misalignment of some optics, and then optimize the elements of the output matrix.

I am uploading a .zip file with Sensoray screen-grabs of all the test-masses in their best aligned state from tonight (except ITMX face, which for some reason I can't grab).

And for good measure, the Oplev spot positions - Attachment #3.

Quote:

While Gautam is working the restoration of Yarm ASS, I worked on Xarm.

 

Attachment 1: ASS_Y_recovery.png
ASS_Y_recovery.png
Attachment 2: ASS_Repairs.zip
Attachment 3: OLs.png
OLs.png
  13034   Fri Jun 2 12:32:16 2017   gautam   Update   General   Power glitch

Looks like there was a power glitch at around 10am today.

All frontends, FB, Megatron, and Optimus were offline. Chiara reports an uptime of 666 days, so it looks like its UPS works fine. The PSL was tripped, probably the end lasers too (yet to check). The slow machines seem alright (they respond to ping, and I can also telnet into them).

Since all the frontends have to be re-started manually, I am taking this opportunity to investigate some CDS issues, like the lack of a dmesg log file on some of the frontends. So the IFO will be offline for some time.

  13035   Fri Jun 2 16:02:34 2017   gautam   Update   General   Power glitch

Today's recovery seems to be a lot more complicated than usual.

  • The vertex area of the lab is pretty warm - I think the ACs are not running. The wall switch-box (see Attachment #1) shows some red lights which I'm pretty sure are usually green. I pressed the push-buttons above the red light, hopefully this fixed the AC and the lab cools down soon.
  • Related to the above - C1IOO has a bunch of warning orange indicator lights ON that suggest it is feeling the heat. Not sure if that is why, but I am unable to bring any of the C1IOO models back online - the rtcds compilation just fails, after which I am unable to ssh back into the machine as well.
  • C1SUS was problematic as well. I found that the expansion chassis was not powered. Fortunately, this was fixed by simply switching to the one free socket on the power strip that powers a bunch of stuff on 1X4 - this brought the expansion chassis back alive, and after a soft reboot of c1sus, I was able to get these models up and running. Fortunately, none of the electronics seem to have been damaged. Perhaps it is time for surge-protecting power strips inside the lab area as well (if they aren't already)? 
  • I was unable to successfully resolve the dmesg problem alluded to earlier. Looking through some forums, I gather that the output of dmesg should be written to a file in /var/log/. But no such file exists on any of our 5 front-ends (but it does on Megatron, for example). So is this way of setting up the front end machines deliberate? Why does this matter? Because it seems that the buffer which we see when we simply run "dmesg" on the console gets periodically cleared. So sometime back, when I was trying to verify that the installed DACs are indeed 16-bit DACs by looking at dmesg, running "dmesg | head" showed a first line that was written well after the last reboot of the machine. Anyway, this probably isn't a big deal, and I also verified during the model recompilation that all our DACs are indeed 16-bit.
  • I was also trying to set up the Upstart processes on megatron such that the MC autolocker and FSS slow control scripts start up automatically when the machine is rebooted. But since C1IOO isn't co-operating, I wasn't able to get very far on this front either...

So current status is that all front-end models except those hosted on C1IOO are back up and running. Further recovery efforts in progress.  

GV Jun 5 6pm: From my discussion with jamie, I gather that the fact that the dmesg output is not written to file is because our front-ends are diskless (this is also why the ring buffer, which is what we are reading from when running "dmesg", gets cleared periodically)

 

Quote:

Looks like there was a power glitch at around 10am today.

All frontends, FB, Megatron, Optimus were offline. Chiara reports an uptime of 666 days so looks like its UPS works fine. PSL was tripped, probably the end lasers too (yet to check). Slow machines seem alright (Responds to ping, and I can also telnet into them).

Since all the frontends have to be re-started manually, I am taking this opportunity to investigate some cds issues like the lack of a dmesg log file on some of the frontends. So the IFO will be offline for sometime.

 

Attachment 1: IMG_7399.JPG
IMG_7399.JPG
  13036   Fri Jun 2 22:01:52 2017   gautam   Update   General   Power glitch - recovery

[Koji, Rana, Gautam]

Attachment #1 - CDS status at the end of todays efforts. There is one red indicator light showing an RFM error which couldn't be fixed by running "global diag reset" or "mxstream restart" scripts, but getting to this point was a journey so we decided to call it for today.


The state this work was started in was as indicated in the previous elog - c1ioo wasn't ssh-able, but was responding to ping. We then did the following:

  1. Killed all models on all four other front ends other than c1ioo. 
  2. Hard reboot for c1ioo - at this point, we could ssh into c1ioo. With all other models killed, we restarted the c1ioo models one by one. They all came online smoothly.
  3. We then set about restarting the models on the other machines.
    • We started with the IOP models, and then restarted the others one by one
    • We then tried running "global diag reset", "mxstream restart" and "telnet fb 8087 -> shutdown" to get rid of all the red indicator fields on the CDS overview screen.
    • All models came back online, but the models on c1sus indicated a DC (data concentrator?) error. 
  4. After a few minutes, I noticed that all the models on c1iscex had stalled
    • dmesg pointed to a synchronization error when trying to initialize the ADC
    • The field that normally pulses at ~1pps on the CDS overview MEDM screen when the models are running normally was stuck
    • Repeated attempts to restart the models kept throwing up the same error in dmesg 
    • We even tried killing all models on all other frontends and restarting just those on c1iscex as detailed earlier in this elog for c1ioo - to no avail.
    • A walk to the end station to do a hard reboot of c1iscex revealed that both green indicator lights on the slave timing card in the expansion chassis were OFF.
    • The corresponding lights on the Master Timing Sequencer (which supplies the synchronization signal to all the front ends via optical fiber) were also off.
    • Sometime ago, Eric and I had noticed a similar problem. Back then, we simply switched the connection on the Master Timing Sequencer to the one unused available port, this fixed the problem. This time, switching the fiber connection on the Master Timing Sequencer had no effect.
    • Power cycling the Master Timing Sequencer had no effect
    • However, switching the optical fiber connections going to the X and Y ends lead to the green LED on the suspect port on the Master Timing Sequencer (originally the X end fiber was plugged in here) turning back ON when the Y end fiber was plugged in.
    • This suggested a problem with the slave timing card, and not the master. 
  5. Koji and I then did the following at the X-end electronics rack:
    • Shutdown c1iscex, toggled the switches in the front and back of the expansion chassis
    • Disconnect AC power from rear of c1iscex as well as the expansion chassis. This meant all LEDs in the expansion chassis went off, except a single one labelled "+5AUX" on the PCB - to make this go off, we had to disconnect a jumper on the PCB (see Attachment #2), and then toggle the power switches on the front and back of the expansion chassis (with the AC power still disconnected). Finally all lights were off.
    • Confident we had completely cut all power to the board, we then started re-connecting AC power. First we re-started the expansion chassis, and then re-booted c1iscex.
    • The lights on the slave timing card came on (including the one that pulses at ~1pps, which indicates normal operation)!
  6. Then we went back to the control room, and essentially repeated bullet points 2 and 3, but starting with c1iscex instead of c1ioo.
  7. The last twist in this tale was that though all the models came back online, the DC errors on c1sus models persisted. No amount of "mxstream restart", "global diag reset", or restarting fb would make these go away.
  8. Eventually, Koji noticed that there was a large discrepancy in the gpstimes indicated in c1x02 (the IOP model on c1sus), compared to all the other IOP models (even though the PDT displayed was correct). There were also a large number of IRIG-B errors indicated on the same c1x02 status screen, and the "TIM" indicator in the status word was red.
  9. Turns out, running ntpdate before restarting all the models somehow doesn't sync the gps time - so this was what was causing the DC errors. 
  10. So we did a hard reboot of c1sus (and for good measure, repeated the bullet points of 5 above on c1sus and its expansion chassis). Then, we tried starting the c1x02 model without running ntpdate first (on startup, there is an 8 hour mismatch between the actual time in Pasadena and the system time - but system time is 8 hours behind, so it isn't even somehow syncing to UTC or any other real timezone?)
    • Model started up smoothly
    • But there was still a 1 second discrepancy between the gpstime on c1x02 and all the other IOPs (and the 8 hour discrepancy between displayed PDT and actual time in Pasadena)
    • So we tried running ntpdate after starting c1x02 - this finally fixed the problem, gpstime and PDT on c1x02 agreed with the other frontends and the actual time in Pasadena.
    • However, the models on c1lsc and c1ioo crashed
    • So we restarted the IOPs on both these machines, and then the rest of the models.
  11. Finally, we ran "mxstream restart", "global diag reset", and restarted fb, to make the CDS overview screen look like it does now.

Why does ntpdate behave this way? And only on one of the frontends? And what is the remaining RFM error? 

Koji then restarted the IMC autolocker and FSS slow processes on megatron. The IMC locked almost immediately. The MC2 transmon indicated a large shift in the spot position, and also the PMC transmission is pretty low (while the lab temperature equilibrates after the AC being off during peak daytime heat). So the MC transmission is ~14500 counts, while we are used to more like 16,500 counts nowadays.

Re-alignment of the IFO remains to be done. I also did not restart the end lasers, or set up the Marconi with nominal params. 

Attachment #3 - Status of the Master Timing Sequencer after various reboots and power cycling of front ends and associated electronics.

Attachment #4 - Warning lights on C1IOO

Quote:

Today's recovery seems to be a lot more complicated than usual.

So current status is that all front-end models except those hosted on C1IOO are back up and running. Further recovery efforts in progress.  

 

Attachment 1: power_glitch_recovery.png
power_glitch_recovery.png
Attachment 2: IMG_7406.JPG
IMG_7406.JPG
Attachment 3: IMG_7407.JPG
IMG_7407.JPG
Attachment 4: IMG_7400.JPG
IMG_7400.JPG
  13038   Sun Jun 4 15:59:50 2017   gautam   Update   General   Power glitch - recovery

I think the CDS status is back to normal.

  • Bit 2 of the C1RFM status word was red, indicating something was wrong with "GE FANUC RFM Card 0".
  • You would think the RFM errors occur in pairs, in C1RFM and in some other model - but in this case, the only red light was on c1rfm.
  • While trying to re-align the IFO, I noticed that the TRY time series flatlined at 0 even though I could see flashes on the TRANSMON camera.
  • Quick trip to the Y-End with an oscilloscope confirmed that there was nothing wrong with the PD.
  • I crawled through some elogs, but didn't really find any instructions on how to fix this problem - the couple of references I did find to similar problems reported red indicator lights occurring in pairs on two or more models, and the problem was then fixed by restarting said models.
  • So on a hunch, I restarted all models on c1iscey (no hard or soft reboot of the FE was required)
  • This fixed the problem
  • I also had to start the monit process manually on some of the FEs like c1sus. 

Now IFO work like fixing ASS can continue...

Attachment 1: powerGlitchRecovery.png
powerGlitchRecovery.png
  13039   Mon Jun 5 10:30:45 2017   Steve   Update   SUS   ruby wire standoff pictures

Atm. 1 & 5 show the ruby standoff, R ~10 mm, as it is seated on the Al SOS test mass.

Atm. 2, 3 & 4 show the chipped long edges, with SOS sus wire (OD 43 micron) as a calibration reference.

Quote:

Ruby wire standoff received from China. I looked one of them with our small USB camera.  They did a good job. The  long edges of the prism are chipped.

The v-groove cutter must avoid them. Pictures will follow.

 

 

Attachment 1: A087_R.png.bmp
A087_R.png.bmp
Attachment 2: A097_chipped_edges.png.bmp
A097_chipped_edges.png.bmp
Attachment 3: A099_cal_wire.png.bmp
A099_cal_wire.png.bmp
Attachment 4: A101_cal_wire_43_micron.png.bmp
A101_cal_wire_43_micron.png.bmp
Attachment 5: Al_SOS_R39mm.jpg
Al_SOS_R39mm.jpg
  13040   Mon Jun 5 12:27:34 2017   jigyasa   Update   Cameras   Attempt to run camera server Python code

While attempting to execute the Python/Pylon code for the camera server, camera_server.py, the interpreter couldn't locate the pylon-5.0.5.so file. So I included the path for the required .so file as

export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/opt/rtcds/caltech/c1/scripts/GigE/pylon5/lib64

So with the file linked, the Python program gets executed but then shows an error at self.text = gst.element_factory_make("textoverlay", "text0"):
gst.ElementNotFoundError: textoverlay
 

The code reads- 

self.text = gst.element_factory_make("textoverlay", "text0")

Not sure what I am missing here. 

  13041   Mon Jun 5 12:50:42 2017   jigyasa   Update   Cameras   Attempt to run camera server Python code

I think there might be a problem with the fact that the various components, such as the .ini file and the Pylon software, are installed in directories different from the ones Joe B. specifies in his paper. 

Instead of modifying the paths in the code itself, I tried creating paths on disk to match what the code expects:

Update in the /ligo directory:

/ligo/cds/caltech/c1/camera/L1-CAM-MC1.ini was created, and then I ran camera_server.py from scripts/GigE/SnapPy as

./camera_server.py -c /ligo/cds/caltech/c1/camera/L1-CAM-MC1.ini 

This printed the following on the terminal:

finished loading settings from /ligo/cds/caltech/c1/camera/L1-CAM-MC1.ini, and listed the settings in the configuration file.


However, the gst.ElementNotFoundError: textoverlay still persists. 

I could probably try putting all the files in exactly the same directories as specified in the document. 

Quote:

So with the library found, the Python program runs, but then fails at self.text = gst.element_factory_make("textoverlay", "text0") with the error:
gst.ElementNotFoundError: textoverlay

The relevant line of code reads:

self.text = gst.element_factory_make("textoverlay", "text0")

Not sure what I am missing here. 

 

  13042   Mon Jun 5 15:04:33 2017 ranaUpdateCamerasAttempt to run camera server Python code

Right - we want to be compatible with new versions of the code, so instead of moving the files to where the code wants them, you should make symlinks. The symlinks go in the place that the code wants and point back to the place where we have the files now.

For the textoverlay, you can just comment it out for now. We can add it back in later once we decide on how to label the video.
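A minimal sketch of setting up such a symlink (the "actual" path below is an assumption about where the file currently lives; only the "expected" path is taken from the entries above):

# Minimal sketch: make the path the code expects point back at the real file.
# The 'actual' location is an assumption; 'expected' is the path used above.
import os

actual = "/opt/rtcds/caltech/c1/scripts/GigE/SnapPy/L1-CAM-MC1.ini"   # assumed current location
expected = "/ligo/cds/caltech/c1/camera/L1-CAM-MC1.ini"

expected_dir = os.path.dirname(expected)
if not os.path.isdir(expected_dir):
    os.makedirs(expected_dir)
if not os.path.lexists(expected):
    os.symlink(actual, expected)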

  13043   Mon Jun 5 18:40:12 2017 jigyasaUpdateCamerasAttempt to run camera server Python code

[Gautam, Jigyasa]

This evening, Gautam helped me resolve the error I had been encountering. I had been trying to run the code on Allegra, and that threw up the gst.element_factory_make("textoverlay", "text0"); gst.ElementNotFoundError: textoverlay error.
In an attempt to resolve the error, I had set up the paths to match those mentioned in the document.
However, as it turns out, that wasn't really needed.

When Gautam ran the code from Pianosa, the following error showed up:
gst.element_factory_make("x264enc", "en"); gst.ElementNotFoundError: x264.

We found that x264 and x264enc are different entities (the x264enc encoder element comes from a separate GStreamer plugin package).
Gautam then installed the ubuntu-restricted-extras package along with the following GStreamer plugin packages:
gstreamer0.10-plugins-bad-multiverse
gstreamer0.10-plugins-ugly-multiverse
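As a quick sanity check that the newly installed plugins provide everything the server needs, one could (again assuming gst-python 0.10) try building a throwaway pipeline that uses both elements:

# Minimal sketch, assuming gst-python 0.10: construct a dummy pipeline using
# textoverlay and x264enc. If either element is still missing, parse_launch
# raises an error naming it.
import pygst
pygst.require("0.10")
import gst

gst.parse_launch("videotestsrc num-buffers=10 ! textoverlay text=TEST ! "
                 "ffmpegcolorspace ! x264enc ! fakesink")
print("textoverlay and x264enc both found; pipeline constructed OK")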

Eventually, on running the code, the message 'starting server' was displayed on the screen. This was then interrupted by another error: GenICAM_3_0_Basler_pylon_v5_0::RuntimeException.

So there is apparently a problem specific to Allegra, since the camera server starts running on Donatella and Pianosa. 

I will now be looking into this newly encountered error and also be setting up the symlinks for the various paths in the code. 

Quote:

Probably I could try putting all files in exactly the same directories as specified in the document. 

Quote:

So with the library found, the Python program runs, but then fails at self.text = gst.element_factory_make("textoverlay", "text0") with the error:
gst.ElementNotFoundError: textoverlay

The relevant line of code reads:

self.text = gst.element_factory_make("textoverlay", "text0")

Not sure what I am missing here. 

 

 

  13044   Mon Jun 5 21:53:55 2017 ranaUpdateComputersrossa: ubuntu 16.04

With the network config, mounting, and symlinks set up, rossa can be used as a workstation for dataviewer and MEDM. For DTT, no luck, since there is so far no lscsoft support past the Ubuntu 14 stage.

  13045   Tue Jun 6 09:14:26 2017 SteveUpdateCamerasGigE installation at MC2

50mm 1.8 lens with Basler camera at MC2 face with micro clamp 350617    Camera manuals plus

Quote:

Thanks to Steve and Gautam, the IMC was locked.

I was able to capture images with the Rainbow 50 mm lens at exposure times of 100, 300, 1000, 3000, 10000 and 30 microseconds (the pictures are in the same order). These pictures were taken at a gain of 300 and black level 64.

Special credit to Steve, who spent a lot of time helping me with setting up the hardware and focusing the camera on the beam spot.
I can't thank you enough Steve! :) 

Quote:

In the afternoon, Steve and I tried to install the camera near MC2 and get some images of the mirrors. Due to a restricted field of view of the lens on the camera, after many efforts to focus on the optic, we were able to get this image. MC2 was unlocked so this image captures some resonating higher order mode.

With MC2 locked, I will get some images of the mirror at different exposure times and try to get an HDR image.  
 

 

Attachment 1: MC2.jpg
  13046   Wed Jun 7 10:07:00 2017 SteveUpdatePEMair conditioning thermostat

The Y arm AC thermostat was calibrated yesterday, after the cooling water relay replacement by Mike.... The set temperature remains 70F.

The east end south wall temperature is reading 22C.

  13047   Wed Jun 7 11:32:56 2017 SteveUpdateVACsmooth vac reboot

Gautam and Steve,

 

The MEDM monitor & vac control screens had been totally blank since ~May 24, 2017. Experienced vacuum knowledge is required for this job.

IDENTIFY valve configuration:

How do you confirm the valve configuration when all vac monitors are blank? Each valve has a manual-mechanical position indicator; look at the pressure readings and the turbo pump controllers. The VAC NORMAL configuration was confirmed based on this information.

Preparation: disconnect valves (disconnect meaning: the valve closes and stays paralyzed) in this sequence: VC2, VC1 power, VA6, V5, V4 & V1 power, at IFO pressure 7.3E-6 Torr-it (it = InstruTech cold cathode gauge).

This gauge is independent of all other rack-mounted instrumentation and is still not logged.

Switching to this valve configuration with the valves disconnected ensures that the vacuum envelope is NOT vented by an accidental voltage glitch or computer malfunction.

RESET v1Vac1 ......... in 2-3 minutes ........ (v1Vac1 - 2) the vac control screen started reading pressures & positions.

Connected cables to the valves (meaning: a valve will open if it was open before it was disconnected, and it will be controllable from the computer) in the following order: V4, V1 power, V5, VA6, VC2 & VC1 power, at IFO 2E-5 Torr-it...

...the vac configuration is reading VAC NORMAL,

IFO 7.4E-6 Torr-it.

We have to hook up the InstruTech (it) cold cathode gauge to be monitored and logged! This should be the substitute for the out-of-order CC1 pressure gauge.

Attachment 1: vac_reboot.png
  13048   Wed Jun 7 14:11:49 2017 gautamUpdateASSY-arm coil driver electronics investigation

Rana suggested taking a look at the Y-arm test mass actuator TFs (measured by driving the coils one at a time, with only local damping loops on, and using the Oplev to measure the response to a given drive). Attached are the results from this measurement (I used the Oplev pitch error signal for all 8 measurements). Although the magnitude response for all coils has the expected 1/f^2 shape, there seems to be some significant (~10 dB) asymmetry among both the ETM and ITM coils. The phase response is also not well understood: if we are just measuring the TF of a pendulum with a 1 Hz resonant frequency, then at and above 10 Hz I would expect the phase to be either 0 or 180 degrees. It looks like there is a notch at 60 Hz somewhere, but it is unclear to me where the ~90 degree phase at ~100 Hz is coming from.
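For reference, a minimal sketch of the response of an ideal damped pendulum with a 1 Hz resonance (the Q value here is an arbitrary assumption, just to show the expected 1/f^2 magnitude and 0/180 degree phase above resonance):

# Minimal sketch: magnitude and phase of an ideal pendulum TF with f0 = 1 Hz.
# Q is an arbitrary assumption; only the shape above resonance matters here.
import numpy as np
import matplotlib.pyplot as plt

f0, Q = 1.0, 5.0
f = np.logspace(-1, 3, 1000)
H = 1.0 / (f0**2 - f**2 + 1j * f * f0 / Q)   # x/F up to an overall gain

fig, (ax1, ax2) = plt.subplots(2, 1, sharex=True)
ax1.loglog(f, np.abs(H))
ax1.set_ylabel("Magnitude [a.u.]")
ax2.semilogx(f, np.degrees(np.angle(H)))
ax2.set_ylabel("Phase [deg]")
ax2.set_xlabel("Frequency [Hz]")
plt.show()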

For the ITM, the UL OSEM was replaced during the 2016 summer vent - the coil that is in there now is of the short OSEM variety, and perhaps it has a different number of turns or something. I don't recall any coil balancing being done after this OSEM swap. For the ETM, it is unclear to me how long things have been like this.

Yesterday night, I tried to measure the ASS output matrix by stepping the ITM, ETM and TTs in PIT and YAW, and looking at the response in the various ASS error signals. During this test, I found the ETM and ITM pitch and yaw error signals to be highly coupled (the input matrix was diagonal). As Rana suggested, I think the whole coil driver signal chain from DAC output to coil driver board output has to be checked before attempting to fix ASS. Results from this investigation to follow.

Note: The OSEM calibration hasn't been done in a while (though the HeNes have been swapped out), but as Attachment #2 shows, if we believe the shadow sensor calibration, then the relative calibrations of the ITM and ETM Oplevs agree. So we can directly compare the TFs for the ITM and ETM.

 

Attachment 1: CoilTFs.pdf
Attachment 2: Y_OL_calib_check.png
  13049   Wed Jun 7 14:27:23 2017 SteveUpdateSummary Pagessummary pages not working

Last good page: May 18, 2017

"Not found" error message: May 19 - June 4, 2017

Blank plots: June 5, 2017

  13050   Wed Jun 7 15:41:51 2017 SteveUpdateComputersWindows laptop scanned

Randy Trudeau scanned our Windows laptop (Dell 13" Vostro) and Steve's memory stick for viruses. Nothing was found. The search continues...

Rana thinks that I'm creating these virus beasts by taking pictures with Dino Capture and/or DataRay on the Windows machine........

 

 

  13051   Wed Jun 7 17:45:11 2017 gautamUpdateASSY-arm coil driver electronics investigation

I repeated the test of driving C1:SUS-<Optic>_<coil>_EXC individually and measuring the transfer function to C1:SUS-<Optic>_OPLEV_PERROR for Optic in (ITMX, ITMY, ETMX, ETMY, BS), coil in (LLCOIL, LRCOIL, ULCOIL, URCOIL). 

There seems to be a few dB of imbalance among the coils on both ETMs, as well as on ITMX. ITMY and the BS seem to have pretty much identical TFs for all the coils. I will cross-check using OPLEV_YERROR, but is there any reason why we shouldn't adjust the gains in the coil output (not output matrix) filter banks to correct for this observed imbalance? The Oplev calibrations for the various optics are unknown, so it may not be fair to compare the TFs between optics (I guess the same applies to comparing TF magnitudes from coil to OPLEV_PERROR and OPLEV_YERROR; perhaps we should fix the OL calibrations before fiddling with coil gains...).
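If we do decide to rebalance, a minimal sketch of turning measured TF magnitudes into relative coil gain corrections (the dB numbers below are placeholders, not the measured values):

# Minimal sketch: convert measured coil->OPLEV_PERROR TF magnitudes (in dB, at a
# fixed frequency above the resonances) into relative coil gain corrections.
# The dB values below are placeholders.
import numpy as np

meas_dB = {"UL": 0.0, "UR": -2.5, "LL": 1.0, "LR": -0.5}

target_dB = np.mean(list(meas_dB.values()))   # balance the coils to the mean response
for coil, dB in sorted(meas_dB.items()):
    correction = 10 ** ((target_dB - dB) / 20.0)
    print("%s: multiply existing coil gain by %.3f" % (coil, correction))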

The anomalous behaviour of ITMY_UL (10 dB greater than the others) was traced down to a rogue x3 gain in the filter module. This has been removed, and now the Y arm ASS works fine (with the original dither servo settings). The X arm dither still doesn't converge - I double-checked the digital filters and all seems in order; I will investigate the analog part of the drive electronics now.

 

Attachment 1: CoilTFs.pdf
  13052   Thu Jun 8 02:11:28 2017 gautamUpdateASSY-arm coil driver electronics investigation

Summary:

I investigated the analog electronics in the coil driver chain by using awggui to drive a given channel with Uniform noise between DC and 8kHz, with an overall gain of 1000 cts. This test was done for both ITMs and the BS. The Whitening/De-Whitening was off during the test. I measured the spectra in

  1. The digital domain (with DTT)
  2. At the output monitor of the AI board (with SR785)
  3. At the output of the coil driver board (with SR785)

Attachment #1 - There is good agreement between all 3 measurements. To convert the DTT spectrum to Vrms/rtHz, I multiplied the Y-axis by 10V / ( 2*sqrt(2) * 2^15 cts). Between DC and ~1kHz, the measured spectrum everywhere is flat, as expected given the test conditions. The AI filter response is also seen.
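For the record, a minimal sketch of that counts-to-volts conversion, just reproducing the factor quoted above:

# Minimal sketch: the cts/rtHz -> Vrms/rtHz conversion factor quoted above.
import numpy as np

factor = 10.0 / (2 * np.sqrt(2) * 2**15)   # V per count
print("conversion factor: %.3e V/ct" % factor)
# e.g. asd_volts = asd_counts * factor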

Attachment #2 - Zoomed in view of Attachment #1 (without the AI filter part).

*The DTT plots have been coarse-grained to keep the PDF file size manageable. X (Y) axes are shared for all the plots in columns (rows).

 

Similar verification remains to be done for the ETMs, after which the test has to be repeated with the Whitening/De-Whitening engaged. But it's encouraging that things make sense so far (except perhaps that the coil balancing could be better, as suggested by the previous elog). 

 

I've left both arms locked. The Y-arm dither alignment is working well again, but for the X arm, the loops that actuate on the BS are still weird. Nothing obvious in the tests so far though.

GV 6pm 8 Jun 2017: I realized the X arm transmission was being monitored by the high-gain PD and not the QPD (which is how we usually run the ASS). The ASC mini screen suggested the transmitted beam was reasonably well centered on the X end QPD, and so I switched to this after which the X end dither alignment too converged. Possibly the beam was falling off the other PD, which is why the BS loops, which control the beam spot position on the ETM, were acting weirdly.

Quote:

will investigate the analog part of the drive electronics now.

 

Not related to this work:

I noticed the X-arm LSC servo was often hitting its limit - so I reduced the gain from 0.03 to 0.02. This reduced the control signal RMS, and re-acquiring lock at this lower gain wasn't a problem either. See attachment #3 (will be rotated later) for control signal spectra at this revised setting.

Attachment 1: AnalogCheck.pdf
Attachment 2: AnalogCheck_zoom.pdf
Attachment 3: ArmCtrl.pdf
  13053   Thu Jun 8 12:43:42 2017 DhruvaUpdateOptical LeversBeam Profiling Results

 

Quote:

​Updates in the He-Ne beam profiling experiment. ​

New and improved plots for the He-Ne profiling experiment 

Font size has been increased to 30. 

The plots are maximum size (following Rana's advice, I saved the plots as maximized EPS files and converted them to PDF later).

There is a shaded region around the trendline that represents the parameter error. 

The function that I fit my data to (I should have mentioned this in my earlier elog entries):

P = \dfrac{P_0}{2}\Bigg[1+\mathrm{erf}\Big(\dfrac{\sqrt{2}(X-X_0)}{w}\Big)\Bigg]

Description of my error analysis -

1. I have assumed a 20% deviation from the markings as the micrometer error. 

2. Using the error in the micrometer, I have calculated the propagated error in the beam power:

\delta P = \sqrt{\dfrac{2}{\pi}}{P_0}\dfrac{\delta x}{w}\exp\Bigg({\frac{-2(X-X_0)^2}{w^2}}\Bigg)

I added this error to the statistical error due to the fluctuation of the oscilloscope reading to obtain the total error in power. 

3. I found the Fisher Matrix by numerically differentiating the function at different data points P_b with respect to the parameters p_i =  P_0, X_0 and w.

F_{ij} = \sum_{b} {\frac{\partial P_b}{\partial p_i}\frac{\partial P_b}{\partial p_j}}\frac{1}{\sigma^2_b}

I then found the covariance matrix by inverting the Fisher Matrix and found the error in spot size estimation. 

EDIT : Residuals added to plots and all axes made equal 
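A minimal sketch of this analysis chain (erf fit plus numerical Fisher matrix), using made-up placeholder data rather than the measured knife-edge values:

# Minimal sketch of the analysis above: fit P(X) = (P0/2)[1 + erf(sqrt(2)(X-X0)/w)],
# then build the Fisher matrix from numerical derivatives of the model w.r.t.
# (P0, X0, w). The data and uncertainties below are placeholders.
import numpy as np
from scipy.special import erf
from scipy.optimize import curve_fit

def profile(X, P0, X0, w):
    return 0.5 * P0 * (1 + erf(np.sqrt(2) * (X - X0) / w))

X = np.linspace(0, 5, 25)                  # micrometer positions [mm], placeholder
sigma = 0.02 * np.ones_like(X)             # total power uncertainty, placeholder
P = profile(X, 1.0, 2.5, 0.8) + np.random.normal(0, sigma)

popt, pcov = curve_fit(profile, X, P, p0=[1.0, 2.0, 1.0], sigma=sigma, absolute_sigma=True)

# Fisher matrix F_ij = sum_b (dP_b/dp_i)(dP_b/dp_j)/sigma_b^2, via central differences
eps = 1e-6
J = np.zeros((len(X), 3))
for i in range(3):
    dp = np.zeros(3)
    dp[i] = eps * max(abs(popt[i]), 1.0)
    J[:, i] = (profile(X, *(popt + dp)) - profile(X, *(popt - dp))) / (2 * dp[i])
F = (J / sigma[:, None]).T.dot(J / sigma[:, None])
cov = np.linalg.inv(F)
print("w = %.4f +/- %.4f (Fisher), +/- %.4f (curve_fit)"
      % (popt[2], np.sqrt(cov[2, 2]), np.sqrt(pcov[2, 2])))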

Attachment 1: profile.pdf
  13054   Fri Jun 9 09:13:26 2017 SteveUpdateCamerasGigE camera lens with AR

We should move ahead with getting this lens from Edmund Optics, #67-717, with R<3% at 1064 nm.

The Computar M5018-SWIR is another choice.

AR coatings with R<1% over 500-1100 nm are expensive.

 

Quote:

50mm 1.8 lens with Basler camera at MC2 face with micro clamp 350617    Camera manuals plus

 

Attachment 1: coating_curve.pdf
  13055   Fri Jun 9 15:31:45 2017 gautamUpdateIMCIMC wonkiness

I've been noticing some weird behaviour with the IMC over the last couple of days. In some lock stretches the WFS control signals ramp up to uncharacteristically huge values - at some point, the IMC loses lock, and doesn't re-acquire it (see Attachment #1). The fact that the IMC doesn't re-acquire lock indicates that there has been some kind of large alignment drift (this is also evident from looking at the (weak) flashes on the MCREFL camera while the IMC attempts to re-lock - I am asking Steve to restore the MC trans camera as well). These drifts don't seem to be correlated with anyone working near MC2.

The WFS servos haven't had their offsets/ DC alignments set in a while, so in order to check if these were to blame, I turned off the inputs to all the WFS servo filter modules (so no angular control of the IMC). I then tweaked the alignment manually. But the alignment seems to have drifted yet again, within a few minutes. Looking at the OSEM sensor signals, it looks like MC2 was the optic that drifted. Steve tells me no one was working near MC2 during this time. But the drift is gradual so this doesn't look like the infamous glitchy Satellite Box problem seen with MC1 in the recent past. The feedback signal to the NPRO / PCdrive look normal during this time, supporting the hypothesis that the problem is indeed related to angular alignment.

Once Steve restores the MC2 Trans cameras, I will hand-align the IMC again and see if the alignment holds for a few hours. If it does, I will reset all offsets for the WFS loops and see if they hold. In particular, the MC2 transmitted spot centering servo has a long time constant so could be something funny there.

*Another issue with the IMC autolocker I've noticed in the recent past: sometimes, the mcup script doesn't get run even though the MC catches a TEM00 mode. So the IMC servo remains in acquisition state (e.g. boosts and WFS servos don't get turned on). Looking at the autolocker log doesn't shed much light - the "saw a flash" log message gets printed, but while normally the mcup script gets run at this point, in these cases, the MC just remains in this weird state. 

Attachment 1: IMG_7409.JPG
  13056   Fri Jun 9 16:37:29 2017 jigyasaUpdateComputer Scripts / ProgramsOpenCV installation

OpenCV 3.1.0 has been installed locally on Donatella by running the following commands:

git clone https://github.com/Itseez/opencv.git
cd opencv
git checkout 3.1.0
git clone https://github.com/Itseez/opencv_contrib.git
cd opencv_contrib
git checkout 3.0.0
cd ~/opencv
mkdir release
cd release
cmake -D CMAKE_BUILD_TYPE=RELEASE -D CMAKE_INSTALL_PREFIX=/usr/local -D OPENCV_EXTRA_MODULES_PATH=/~/opencv_contrib/modules/ ~/opencv/

In ~/opencv/release, make and sudo make install were executed.

This completed the installation. The installed version was verified with pkg-config --modversion opencv, which showed 3.1.0. I also verified that the cv2 module imports in Python, and it seems to work fine. 
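With cv2 working, a minimal sketch of the kind of HDR merge mentioned earlier for the MC2 images (filenames and exposure times below are placeholders, not the actual captures):

# Minimal sketch, assuming OpenCV 3.x: merge MC2 images taken at several
# exposure times into one HDR image. Filenames/exposure times are placeholders.
import cv2
import numpy as np

files = ["mc2_100us.png", "mc2_1000us.png", "mc2_10000us.png"]
times = np.array([100e-6, 1000e-6, 10000e-6], dtype=np.float32)   # seconds

imgs = [cv2.imread(f) for f in files]

hdr = cv2.createMergeDebevec().process(imgs, times)

# tonemap down to 8 bits for display
ldr = cv2.createTonemap(gamma=2.2).process(hdr)
cv2.imwrite("mc2_hdr.png", np.clip(ldr * 255, 0, 255).astype("uint8"))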

 

  13057   Fri Jun 9 17:45:21 2017 Gautam, KaustubhUpdateIMCIMC wonkiness

 

Quote:

Once Steve restores the MC2 Trans cameras, I will hand-align the IMC again and see if the alignment holds for a few hours. If it does, I will reset all offsets for the WFS loops and see if they hold. In particular, the MC2 transmitted spot centering servo has a long time constant so could be something funny there.

 

Summary:

In order to switch on the angular alignment for the IMC mirrors, we needed to center the beam onto the quad photodiodes at the IMC table and at the AS table (WFS1 and WFS2).

Gautam and I went to the IMC table and did the DC centering for the quad photodiode by adjusting the beamsplitter angles. After this, we turned the WFS loops off and centered the beam on the quad PDs at the AS table, WFS1 and WFS2.

Once we had the beam approximately centered on all 3 of the above PDs, we turned the IMC locking back on, and it seems to work just fine. We are waiting another hour before switching on the angular alignment for the mirrors, to make sure the alignment holds with the WFS turned off.

  13058   Fri Jun 9 19:18:10 2017 gautamUpdateIMCIMC wonkiness

It happened again. MC2 UL seems to have gotten the biggest glitch. It's a rather small jump in the signal level compared to what I have seen in the recent past in connection with suspect Satellite boxes, and LL and UR sensors barely see it.

I will squish Sat box cables and check the cabling at the coil driver board end as well, given that these are two areas where there has been some work recently. WFS loops will remain off till I figure this out. At least the (newly centered) DC spot positions on the WFS and MC2 TRANS QPD should serve as some kind of reference for good MC alignment.

GV edit 9pm: I tightened up all the cables, but it doesn't seem to have helped. There was another, larger glitch just now. UR and LL basically don't see it at all (see Attachment #2). It also seems to be a much slower process than the glitches seen on MC1, with the misalignment happening over a few seconds. I have to see if this is consistent with a glitch in the bias voltage to one of the coils, which gets low-passed by a 4-pole filter at 1 Hz.
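To get a feel for whether that is plausible, a minimal sketch of the step response of a 4-pole low-pass at 1 Hz (just the filter shape, no attempt to model the actual bias path):

# Minimal sketch: step response of 4 real poles at 1 Hz, to compare the settling
# time against the few-second misalignment seen on MC2.
import numpy as np
from scipy import signal

w0 = 2 * np.pi * 1.0                                  # 1 Hz corner
sys = signal.ZerosPolesGain([], [-w0] * 4, w0**4)     # unity DC gain
t, y = signal.step(sys, T=np.linspace(0, 5, 1000))
print("time to reach 90%% of the step: %.2f s" % t[np.argmax(y >= 0.9)])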

Quote:

Once we had the beam approximately centered for all of the above 3 PDs, we turned on the locking for IMC, and it seems to work just fine. We are waiting for another hour for switching on the angular allignment for the mirrors to make sure the alignment holds with WFS turned off.

 

Attachment 1: MC2_UL_glitchy.png
Attachment 2: MC2_glitch_fast.png
  13059   Mon Jun 12 10:34:10 2017 gautamUpdateCDSslow machine bootfest

Reboots for c1susaux, c1iscaux and c1auxex today. I took this opportunity to squish the Sat. Box cabling for MC2 (both on the Sat. Box end and at the vacuum feedthrough), as some work has recently been ongoing there - maybe something got accidentally jiggled during the process and was causing the MC2 alignment to jump around.

Relocked PMC to offload some of the DC offset, and re-aligned IMC after c1susaux reboot. PMC and IMC transmission back to nominal levels now. Let's see if MC2 is better behaved after this sat. box. voodoo.

Interestingly, since Feb 6, there were no slow machine reboots for almost 3 months, while there have been three reboots in the last three weeks. Not sure what (if anything) to make of that.

  13060   Mon Jun 12 17:42:39 2017 gautamUpdateASSETMY Oplev Pentek board pulled out

As part of my Oplev servo investigations, I have pulled out the Pentek Generic Whitening board (D020432) from the Y-end electronics rack. The ETMY watchdog was shut down for this; I will restore it once the Oplev is re-installed.

  13061   Mon Jun 12 22:23:20 2017 ranaUpdateIMCIMC wonkiness

I wonder if it's possible that the slow glitches in the MC are just glitches in the MC2 trans QPD? Steve sometimes dances on top of the MC2 chamber when he adjusts the MC2 camera.

I've re-enabled the WFS at 22:25 (I think Gautam had them off as part of the MC2 glitch investigation). WFS1 spot position seems way off in pitch & yaw.

From the turn on transient, it seems that the cross-coupled loops have a time constant of ~3 minutes for the MC2 spot, so maybe that's not consistent with the ~30 second long steps seen earlier.
