ID | Date | Author | Type | Category | Subject
  12988 | Fri May 12 12:34:55 2017 | gautam | Update | General | ITM and BS coil driver + dewhite board pulled out

I first set the bias sliders to 0 on the MEDM screen (after checking that the nominal values were stored), then shut down the watchdogs, and then pulled out the boards for inspection + photo-taking.

  12989 | Fri May 12 18:45:04 2017 | rebecca | Update | Cameras | MC2 Pics with Olympus

Raw and JPG formats of the pictures are saved on the Mac in the control room and at this link:

https://drive.google.com/open?id=0B9WDJpPRYby1c2xXRHhfOExXNFU 

The camera was mounted using the JOBE arm wrapped around a small heavy piece of metal. The lights were kept on, the camera was zoomed in as closely as possible (so the light would take up most of the frame), F number of 8 was used, and shutter speeds from 1/2 to 1/100 seconds were used. 

The pictures still look a bit blurry. Looking back at the image details, the focal length was recorded as 86.34 m; as short a focal length as possible would be ideal, and the Olympus is capable of going down to 1 m.

Next steps include looking at the saturation in the pictures and setting up a more stable mount. 

  12990 | Fri May 12 18:50:08 2017 | gautam | Update | General | ITM and BS coil driver + dewhite board pulled out

I've uploaded high-res photos + marked-up schematics to the same DCC page linked in the previous elog. I've noted the S/Ns of the ITM, BS and SRM boards on the page. I think it makes sense to collect everything on one page, and I guess eventually we will unify everything to one or two versions.

To take the photos, I tried to reproduce the "LED light painting" technique reported here. I mounted the Canon EOS Rebel T3i on a tripod, and used some A3 sheets of paper to make a white background against which the board to be photographed was placed. I also used the new Macro lens we recently got. I then played around with the aperture and exposure time till I got what I judged to be good photos. The room lights were turned off, and I used the LED on my phone to do the "painting", from ~a metre away. I think the photos have turned out pretty well, the component values are readable.

Quote:

I first set the bias sliders to 0 on the MEDM screen (after checking that the nominal values were stored), then shut down the watchdogs, and then pulled out the boards for inspection + photo-taking.

 

  12991 | Mon May 15 08:26:43 2017 | rana | Update | CDS | SVN up in userapps/cds

I did an 'svn update' in userapps/cds/ which pulled in some changes from the sites as well as various CDS utilities in common/ and utilities/

This was to get Keith Thorne's get_data.m and get_data2.m scripts, which I tested; they seem to be able to get data. No success getting minute trends yet, but that may be user error.

Update Monday 15-May: Our version of the NDS client is 0.10 and we need 0.14 for this new method to work. The Ubuntu12 lscsoft repo doesn't have a newer NDS client, so we'll have to upgrade some OS.
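
For reference, a minimal sketch of pulling raw data with the nds2-client Python bindings (this is not Keith's get_data.m workflow; the server host/port, GPS times and channel below are placeholders, and it assumes a recent enough nds2-client):

import nds2

conn = nds2.connection('nds40.ligo.caltech.edu', 31200)    # placeholder host/port
start, stop = 1178800000, 1178800060                       # 60 s of data, example GPS times
buffers = conn.fetch(start, stop, ['C1:IOO-MC_TRANS_SUM'])
data = buffers[0].data                                      # numpy array of samples
print("mean = %g, std = %g" % (data.mean(), data.std()))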

  12992 | Mon May 15 19:21:04 2017 | Koji | Update | Computer Scripts / Programs | FSSslow / MCautolocker restarted

It seems that FSS slow servo stopped working.

I found that megatron was restarted (by Rana, to finish an apt-get upgrade) at ~18:47 PDT today.

controls@megatron|~> last -5
controls pts/0        192.168.113.216  Mon May 15 19:15   still logged in   
controls pts/0        192.168.113.216  Mon May 15 19:14 - 19:15  (00:01)    
reboot   system boot  3.2.0-126-generi Mon May 15 18:50 - 19:19  (00:29)    
controls pts/0        192.168.113.200  Mon May 15 18:43 - down   (00:04)    
controls pts/0        192.168.113.200  Mon May 15 15:25 - 17:38  (02:12)


FSSslow / MCautolocker were restarted on megatron.

  12993 | Mon May 15 20:43:25 2017 | rana | Configuration | Computers | catastrophic multiple monitor failures

This is not the right one; this Ethernet-controlled strip is what we want in the racks for remote control.

Buy some of these for the MONITORS.

Quote:

A surge-protective power strip was installed on Friday, May 5 in the Control Room.

Computers not connected to the UPS are plugged into Isobar12ultra.

Quote:

That's a new failure mode. Probably we can't trust the power to be safe anymore.

Need Steve to order a couple of surge suppressing power strips for the monitors. The computers are already on the UPS, so they don't need it.

 

  12994 | Tue May 16 16:16:16 2017 | Steve | Update | safety | safety training

Early SURF students from India, Jigyasa and Kaustubh, received basic 40m-specific safety training.

Attachment 1: surfs2017.jpg
  12995 | Wed May 17 08:19:59 2017 | Steve | Update | SUS | 4.1M earthquake

Sus dampings recovered. ETMY oplev needs to be recentered.

GV May 17 11am: I shut down the BS, SRM, ITMX and ITMY watchdogs, as the coil-driver boards for these optics are presently not installed.
 

Attachment 1: eq_4.1_SantaBarbara.png
Attachment 2: 4.1m_Isla_Vista_CA.png
  12996 | Wed May 17 11:10:31 2017 | Steve | Update | Cameras | MC2 CCD video camera back in place

The Olympus camera has been removed and our old CCD camera is back in place to monitor the face of MC2.

Quote:

Olympus SP570 UZ - without IR blocker, set up as Atm.3. Camera distance to MC face ~85 cm, IOO-MC_TRANS_SUM 16,300 counts, Lexan cover on uncoated viewport.

Image mode: RAW + JPG, M-custom, manual focus. Lens: Olympus 4.6 - 92 mm, f2.8 - 4.5. Aperture: F2.8 - 8. Image pick-up device: 1/2.33" CCD (primary color filter)

Atm.1: 212k .jpg of 15 MB raw, exp 0.025 s, aperture 2.97, f 4.6, ISO 64

Atm.2: copied through my Canon S100 (3.3 MB .jpg of raw from UFRaw photo shop). I will look up the original raw file for details.

 

 

  12997 | Wed May 17 18:08:45 2017 | Dhruva | Update | Optical Levers | Beam Profiling Setup

Andrew and I set up the razor blade beam profiling experiment for He-Ne lasers on the "SP" table.  Once I receive the laser safety training, I will make power measurements and fit it to an erfc curve from which I will calculate the gaussian profile of the beam. I'm attaching some pictures of the setup. 

Least count of the micrometer - 2 microns 

Laser  : Lumentum 22037130:1103P

Photodetector : Thor Labs PDA100A
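
For what it's worth, here is a rough sketch of the planned erfc fit in Python (the data below are synthetic and the variable names are placeholders; the actual fitting may well be done in MATLAB instead):

import numpy as np
from scipy.special import erfc
from scipy.optimize import curve_fit

def knife_edge(x, P0, x0, w, offset):
    # Power transmitted past a razor blade cutting a Gaussian beam; w is the 1/e^2 radius
    return 0.5 * P0 * erfc(np.sqrt(2.0) * (x - x0) / w) + offset

x = np.linspace(0.0, 2e-3, 40)                                       # micrometer positions [m], synthetic
P = knife_edge(x, 1.0, 1e-3, 3.3e-4, 0.0) + 0.005 * np.random.randn(x.size)

popt, pcov = curve_fit(knife_edge, x, P, p0=[1.0, 1e-3, 3e-4, 0.0])
print("spot size w = %.1f um" % (popt[2] * 1e6))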

Attachment 1: 1.jpg
Attachment 2: 2.jpg
Attachment 3: 3.jpg
Attachment 4: 4.jpg
Attachment 5: 5.jpg
  12998 | Thu May 18 15:20:29 2017 | jigyasa | Summary | telescope design | Telescope Design for the Gig-E cameras

With the objective of designing a telescope system for the Gig-E, a system of two lenses is implemented. A rough schematic of the telescope system is attached. Variables in the system include the focal lengths of the two spherical lenses (f1, f2), the distance between the lenses (t), the distance between the test mass and the lens combination (u), and the distance between the second lens and the sensor (v). The desired object size ranges from 3'', which is the size of the test mass, down to 1'', which corresponds approximately to focusing on the beam spot; this implies that the required magnification ranges from 0.06089 to 0.1826 (since the sensor image circle size is ¼").
The lenses are selected to be 2” in diameter so as to ensure sufficient collected power.

Going through the focal lengths available, namely 50, 100, 150, 200, 250 mm, and noting that the object distance would be within the range 1500 to 2500 mm, plots of the accessible u and v for different values of t were obtained. This optimization was done to ensure the proper selection of the lenses. Additionally, a sensitivity analysis was performed, and plots depicting the dependence of magnification on the precision-limited measurements of u (1 mm) and t (5 mm) were obtained. (These were scatter plots quantifying the deviation from the desired magnification range.) The plots show the error induced in the magnification if the distance between the lenses is mismeasured by 5 mm, or the object-to-lens distance by 1 mm.

The telescope design might be limited by spherical aberrations and coma, which might be resolved by either using aspherical lenses or by increasing the f-number (typically with an f number around 5 or 6). The use of aspherical lenses particularly parabolic lenses was considered, however this was found to be quite an expensive route. 

Analyzing the plots and taking into consideration the restrictions of the slotted lens tubes, the precision in measurement of the distances, a 150 mm- 250mm focal length solution is proposed. With a diameter of 2”, the f number is computed to be 2.95 and 4.92. With this combination and the object distances lying between 1500 to 2500 mm, the image distance to the sensor varies between 51 to 100mm. So a slotted lens tube controlling the distance between the lenses would be required.

I also considered a combination of 250 mm and 250 mm focal lengths, as then both lenses would have an f-number of at least 4.92. The results for this combination are also attached. The image distance from the lens combination is about 100 to 140 mm. However, this would require much longer slotted lens tubes, thereby adding to the cost of the system. The number of accessible u-v points is the same as that for the 150-250 combination.

I am still trying to search for a much more concrete way of quantifying aberrations.
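
As a cross-check of the kind of u-v scan described above, here is a thin-lens sketch (the focal lengths and the magnification window are from the text; the scan ranges and step sizes are arbitrary, and aberrations are ignored):

import numpy as np

f1, f2 = 150.0, 250.0                   # objective / eyepiece focal lengths [mm]
m_lo, m_hi = 0.06089, 0.1826            # required magnification range

for u in np.arange(1500.0, 2501.0, 100.0):         # object distance [mm]
    v1 = 1.0 / (1.0 / f1 - 1.0 / u)                # image distance from the first lens
    for t in np.arange(50.0, 301.0, 25.0):         # lens separation [mm]
        u2 = t - v1                                # object distance for the second lens
        if u2 == 0.0:
            continue
        denom = 1.0 / f2 - 1.0 / u2
        if denom == 0.0:
            continue
        v = 1.0 / denom                            # second lens to sensor distance
        m = abs((v1 / u) * (v / u2))               # overall magnification
        if v > 0 and m_lo <= m <= m_hi:
            print("u = %4.0f mm, t = %3.0f mm -> v = %5.1f mm, m = %.3f" % (u, t, v, m))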

Attachment 1: ray.png
Attachment 2: Schematic.png
Attachment 3: 150-250uv.png
Attachment 4: 150-250error.png
Attachment 5: 250-250.png
Attachment 6: 250-250error.png
  12999 | Fri May 19 19:18:53 2017 | Kaustubh | Summary | General | Testing of the new Photo Detectors ET-3010 and ET-3040

Motivation:

I got some hands-on experience on using RF photodetectors and the Network Analyzer from Koji. There were newly purchased RF photodetectors from Electro-Optics Technology, Inc. These are InGaAs photodetectors with model nos. 120-10050-0001 (ET-3010) and 120-10056-0001 (ET-3040). The User Guide for the two detectors can be found here. This is the first time we have bought the ET-3010 model PD for the 40m lab. It has an operation bandwidth >1.5 GHz (not tested yet), much higher than other PDs of its kind. This can be used for detecting the output as we 'sweep' the laser frequency to get data on the optical cavities and the resonating modes inside the cavity. We tested the ET-3040 model today and will test the ET-3010 next week.

Tools and Machines Used:

We worked on the optical bench right in front of the main entrance to the lab. We put the cables, power cords, etc. back in their respective places. We used screws, posts, T's, I's, a multimeter, the Network/Spectrum Analyzer (along with its moving table), a lab computer, an oscilloscope, a power supply and the aforementioned PDs for our testing. We took these items from the stack of tools at the Y-arm and the various labelled boxes placed near the X-arm. We moved the Network Analyzer (along with the bench) from near the Y-arm to our workplace.

Procedure:

I will include a rough schematic of the setup later.

We aligned the reference PD (High Speed Photoreceiver model 1611) and the test PD (ET-3040 in this case) to get optimal power output. We had set the pump current for the laser at 19.5 mA, which produced a power of 1.00 mW at the output of the fiber coupler. At the reference detector the measured voltage was about 1.8 V and at the DUT it was about 15 mV. The DC transimpedance of the reference detector is 10 kOhm and its responsivity at 1064 nm is around 0.75 A/W. Using this we calculate the power at the reference detector to be 0.24 mW. The DC transimpedance of the DUT is 50 Ohm and its responsivity about 0.9 A/W. This amounts to a power of about 0.33 mW. After measuring the DC voltages, we connected the laser input to the Network Analyzer and applied an RF signal at -10 dBm with the modulation frequency swept from 100 kHz to 500 MHz. The RF output from the Analyzer is coupled to the Reference Channel (CHR) of the analyzer via a 20 dB directional coupler. The AC output of the reference detector goes to Channel A (CHA) and the output from the DUT goes to Channel B (CHB). We got plots of the ratios between the reference detector, the DUT and the coupled reference for the transfer function and the phase. We found that the cut-off frequency for the ET-3040 model was at around 55 MHz (stated as >50 MHz in the data sheet). We have stored the data using the lab PC in the directory .../scripts/general/netgpibdata/data.
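
A quick numerical check of the DC power numbers quoted above (P = V_DC / (Z_DC * responsivity)):

def incident_power(v_dc, z_dc, responsivity):
    # optical power [W] inferred from the DC voltage, DC transimpedance and responsivity
    return v_dc / (z_dc * responsivity)

print(incident_power(1.8, 10e3, 0.75))     # reference 1611: ~2.4e-4 W = 0.24 mW
print(incident_power(15e-3, 50.0, 0.9))    # ET-3040 DUT:    ~3.3e-4 W = 0.33 mW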

Result:

The bandwidth of the ET-3040 PD is as stated in the data sheet, >50 MHz.

Precaution:

These PDs have an internal power supply of 3V for ET-3040 and 6V for ET-3010. Do not leave these connected to any instruments after the experiments have been performed or else the batteries will get drained if there is any photocurrent on the PDs.

To Do:

A similar procedure has to be followed in order to test the ET-3010 PD. I will be doing this tentatively on Monday.

Attachment 1: IMG_20170519_173247922.jpg
Attachment 2: IMG_20170519_173253252.jpg
Attachment 3: IMG_20170519_173300174.jpg
Attachment 4: PD_test_setup.png
  13000 | Mon May 22 10:15:14 2017 | jigyasa | Summary | telescope design | Lens tubes and object distances

Since the f-numbers of the lenses in the proposed design with biconvex lenses are a little less than 5, and the conjugate ratio (that is, the ratio of object to image distance) is greater than 5, I explored the use of plano-convex lenses; but with the same focal lengths, the accessible u-v range is more restricted with the plano-convex than with the biconvex lenses.
On Friday, I had a discussion with Gautam and Steve about the hardware that is the cylindrical enclosures for the camera and the telescope and we examined two such aluminum cylindrical enclosures. One of them was the one being currently employed for the cameras. The dimensions were measured and the length was found to be 8’’ and an outer diameter of 26 cm within an error of 0.5 cm.
The other enclosure was longer with a length of 52 cm(±0.5 cm), outer diameter of 10”(±0.1”) and an inner diameter of 23.7cm(±0.1cm). Pictures of these enclosures are attached.
Both of these enclosures have internal optical rail to mount the camera and the telescope system. Depending on the weight of the telescope system(that includes the weight of the slotted lens tubes, the lenses), it might be more efficient to clamp the telescope system itself on the rails with the low weight camera mounted on the lens tube.
I also went around to get an idea of distance of the GigE from the test masses. This was just a step to verify if the object distances were really in the ranges being taken into consideration, that is between 1500 and 2500 mm. I also tried to cross check the measurements with the CAD drawing of the 40m. However, as I have been informed, the distances in the CAD version are not updated.

The distances from the optic to the CCD detector would range from around 75.1 cm for MC2, 94.01 cm for ITMX, 97.21 cm for ETMX, 117.19 cm for ITMY and 88.463 cm for ETMY. The illuminator for the ETMY was disconnected, so Gautam helped me access the manual lamp control to enable me to take measurements.
The values for ETMX, MC2 and ITMY are subject to an error of ±1’’. Due to a lot of obstructions, the values for ETMY and ITMX may be subject to a lot more error. Even so, these distances are clearly less than 2 meters, prompting me to run the simulations again and verify that the chosen combination is still useful.

As for the slotted lens tubes to mount the 2” lenses, the following options are available on the Thorlabs catalog. CVI and Edmunds do not seem to offer much of the stackable lens tubes.

SM2L30C is a lens tube onto which the optic can be mounted without the need for a spanner wrench. It also has a length of 3". It has a rotatable slip shield which can be rotated open whenever access to the optic is required; however, there might be a slight compromise in rigidity here.

SM2L30 is a lens tube with internal thread depth of 3”, the optic can be mounted using spanner wrench and a retainer ring. The optic cannot be accessed from both ends of the tube here.
SM2M30 is a lens tube with no external threads, therefore lens tube couplers would be required to stack the tubes. The optic is accessible from both ends here though.

Considering the merits and demerits of all these available options, the use of SM2L30 might be considered as it provides a quick and efficient way of stacking multiple lens tubes. As for accessing the optic from both sides, using multiple tubes helps overcome the problem and still ensures that we are able to access a number of separation distances as per requirement.
Thorlabs also offers an internal C to external SM2 adapter so that the lens tube could be fixed onto the C mount of the camera. 

I would be examining the use of 1" diameter lenses for the eyepiece as suggested by Rana, as that might give us more flexibility. 

Attachment 1: Pictures1.pdf
  13002 | Mon May 22 10:53:02 2017 | Dhruva | Update | Optical Levers | Beam Profiling Results

 

Quote:

Andrew and I set up the razor blade beam profiling experiment for He-Ne lasers on the "SP" table.  Once I receive the laser safety training, I will make power measurements and fit it to an erfc curve from which I will calculate the gaussian profile of the beam. I'm attaching some pictures of the setup. 

Least count of the micrometer - 2 microns 

Laser  : Lumentum 22037130:1103P

Photodetector : Thor Labs PDA100A

I measured the y-profile of the beam on Friday at 5 axial locations and fit the data to an erfc function using the lsqcurvefit function of MATLAB.

The results were as follows - 

z (cm)    w (in)
  4       0.0131
 10       0.0132
 15       0.0137
 20       0.0139
 25       0.0147

I left w in inches in the intensity plots as MATLAB gave more accurate fits for those values.

I converted these to S.I while making the spot-size vs z plot and the corresponding values in microns were 

332.74, 335.28, 347.98, 353.06, 373.38.

On fitting these values to the formula for the spot size of a Gaussian beam, the beam waist came out to be 330.54 microns and the location of the beam waist was at z=-2cm, where z=0 marks the head of the laser. 
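
For completeness, an equivalent of that spot-size fit in Python (the original fit was done in MATLAB; the spot sizes are the values listed above, and the HeNe wavelength is assumed to be 632.8 nm):

import numpy as np
from scipy.optimize import curve_fit

lam = 632.8e-9                                      # HeNe wavelength [m], assumed

def spot_size(z, w0, z0):
    zR = np.pi * w0**2 / lam                        # Rayleigh range
    return w0 * np.sqrt(1.0 + ((z - z0) / zR)**2)

z = np.array([4.0, 10.0, 15.0, 20.0, 25.0]) * 1e-2                  # [m]
w = np.array([332.74, 335.28, 347.98, 353.06, 373.38]) * 1e-6       # [m]

popt, pcov = curve_fit(spot_size, z, w, p0=[300e-6, 0.0])
print("w0 = %.1f um at z0 = %.1f cm" % (popt[0] * 1e6, popt[1] * 1e2))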

 

TO-DO : Measure the spot size of the beam at more axial points to obtain a better fit. 

              Measure the x-profile of the beam. 

              Analyse the error in the spot sizes and corresponding error in the beam waist. 

 

 

Attachment 1: spot_size_.pdf
Attachment 2: z_25.pdf
Attachment 3: z_20.pdf
Attachment 4: z_15.pdf
Attachment 5: z_10.pdf
Attachment 6: z_4.pdf
  13003 | Mon May 22 13:37:01 2017 | gautam | Update | General | DAC noise estimate

Summary:

I've spent the last week investigating various parts of the DAC -> OSEM coil signal chain in order to add these noises to the MICH NB. Here is what I have thus far.

Current situation:

  • Coils are operated with no DAC whitening
  • So we expect the DAC noise will dominate any contribution from the electronics noise of the analog De-Whitening and Coil Driver boards
  • There is a factor of 3 gain in the analog De-Whitening board

DAC noise measurement:

  • I essentially followed the prescription in G1401335 and G1401399
  • So far, I only measured one DAC channel (ITMX UL)
  • The noise shaping filter in the above documents was adapted for this measurement. The noise used was uniform between DC and 1kHz for this test.
  • For the >50Hz bandstops, I used 1 complex pole pair at 5Hz and 1 complex zero pair at 50Hz to level off the noise.
  • For the <50Hz bandstops, I used 1 complex pole pair at 1Hz and 1 complex zero pair at 5Hz to push the RMS to lower frequencies.
  • I set the amplitude ("gain" = 10,000 in awggui) to roughly match the Vpp when the ITM local damping loops are on - this is ~300mVpp (measured with a scope). 
  • The elliptic bandstops were 6th order, with 50dB stopband attenuation.
  • The SR785 input auto-ranging was disabled to allow a fair comparison of the various bandstops - this was fixed to -20 dBVpk for all measurements, and the SR785 noise floor shown is also for this value of the input range. Input was also AC coupled, and since I was using the front-panel LEMO for this test, the signal was effectively single-ended (but the ground of the SR785 was set to "floating" in order to get the differential signal from the DAC) 
  • Attachment #1 shows the results of this measurement - I've subtracted the SR785 noise from the other curves. The noise model was motivated by G1401399, but I use an f^-1/2 model rather than an f^-1 model. It seems to fit the measurement alright (though the "fit" is just done by eye and not by systematic optimization of the parameters of the model function).

Noise budget:

  • I then tried to translate this result into the noise budget
  • The noises for the 4 face coils are added in quadrature, and then the contribution from 3 optics (2 ITMs and BS) are added in quadrature
  • To calibrate into metres, I converted the DAC noise spectral density into cts/rtHz, and used the numbers from this elog. I thought I had missed out on the factor of 3 gain in the de-white board, but the cts-to-meters number from the referenced elog already takes into account this factor.
  • Just to be clear, the black line for DAC noise in Attachment #2 is computed from the single-channel measurement of Attachment #1 according to the following relation: n_{\mathrm{DAC}}\ (\mathrm{m}/\sqrt{\mathrm{Hz}}) = n_{\mathrm{1ch}}\ (\mathrm{V}/\sqrt{\mathrm{Hz}}) \times (2^{15}/20)\ (\mathrm{cts/V}) \times G_{\mathrm{act}} \times 2 \times \sqrt{6}, where G_act is the coil transfer function from the referenced elog, taken as 5nm/f^2 on average for the 2 ITMs and BS, the factor of 2 comes from adding the noise from 4 coils in quadrature, and the factor of sqrt(6) comes from adding the noise from 3 optics in quadrature (and since the BS has 4 times the noise of the ITMs). A short numerical sketch of this bookkeeping is given right after this list.
  • Using the 0.016N/A number for each coil gave me an answer that was off by more than an order of magnitude - I am not sure what to make of this. But since the other curves in the NB are made using numbers from the referenced elog, I think the answer I get isn't too crazy...
  • Attachment #2 shows the noise budget in its current form, with DAC noise added. Except for the 30-70Hz region, it looks like the measured noise is accounted for.
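
The counts/metres bookkeeping above, sketched numerically (the f^-1/2 single-channel shape and the 5 nm/f^2 average actuator gain are the numbers quoted above; the overall level of the single-channel model is a placeholder):

import numpy as np

f = np.logspace(0, 3, 200)                   # [Hz]
n_1ch = 1e-6 / np.sqrt(f)                    # single-channel DAC noise [V/rtHz], placeholder level
cts_per_volt = 2**15 / 20.0                  # 16-bit DAC spanning +/- 10 V
G_act = 5e-9 / f**2                          # average coil TF [m/ct] for the 2 ITMs and BS
# factor 2: 4 coils per optic in quadrature; sqrt(6): 3 optics, with the BS carrying 4x the ITM noise
n_dac = n_1ch * cts_per_volt * G_act * 2.0 * np.sqrt(6.0)
print("n_DAC(100 Hz) ~ %.2e m/rtHz" % np.interp(100.0, f, n_dac))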

Comments:

  • I have made a number of assumptions:
    • All DAC channels have similar noise levels
    • Tried to account for asymmetry between BS and ITMs (BS has 100 ohm resistance in series with the coil driver while the ITMs have 400 ohms) but the individual noises haven't been measured yet
    • This noise estimate holds for the BS, which is the MICH actuator (I didn't attempt to simulate the in-lock MICH control signal and then measure the DAC noise)
  • But this seems sensible as a first estimate
  • The dmesg logs for C1SUS don't tell me what DACs we are using, but I believe they are 16-bit DACs (I'll have to restart the machine to make sure)
  • In the NB, the flattening out of some curves beyond 1kHz is just an artefact of the fact that I don't have data to interpolate in that region, and isn't physical.
  • I had a brief chat with ChrisW who told me that the modified EEPROM/Auto-Cal procedure was only required for 18-bit DACs. So if it is true that our DACs are 16-bit, then he advised that apart from the DAC noise measurement above, the next most important thing to be characterized is the quantization noise (by subtracting the calculated digital control signal from the actual analog signal sent to the coils in lock)
  • More details of my coil driver electronics investigations to follow...
Attachment 1: DAC_noise_model.pdf
Attachment 2: C1NB_disp_40m_MICH_NB_22_May_2017.pdf
  13004 | Mon May 22 15:01:41 2017 | jigyasa | Update | telescope design | Updated Telescope design with 1'' eye piece

I examined the use of a single-lens system for the available range of focal lengths and the required magnification, and found that a focal length of at most 100 mm would be required to sufficiently cover the object distance range. This would greatly compromise the f-number and hence lead to much more spherical aberration.

Therefore, a two-lens system would be more useful to implement. Using a 1" eyepiece puts an additional constraint on the system: the separation between the lenses must now be at least equal to or greater than half the image distance from the first lens, to ensure that no light from the light cone is lost. This is clarified in the schematic. The image from the first lens, in the absence of the second lens, would form at point A, subtending an angle θ. In order to ensure that no part of this light cone emerging from the first lens is lost, the second lens must be placed at a distance of at least v/2 from the first lens.

A combination of 125mm focal length 2” diameter objective with a 250 mm 1” eyepiece covers the required range of object distances (650mm to 1500 mm). Increasing the focal length of the eye piece increases the minimum object distance accessible to 700 mm. 

A glance at the accessible u, v points shows that not all magnifications are possible at a given object distance. To image the entire surface of the test mass, a distance of at least 1.25 m is required from the objective, while a beam spot of 1'' diameter can be imaged easily at up to 1200 mm from the objective. This holds true even for the 150-250 mm biconvex 2" lens combination proposed earlier.

If this sounds reasonable, we could proceed with ordering the lenses.

Attachment 1: 1incep.pdf
  13005 | Mon May 22 18:20:27 2017 | Kaustubh | Summary | General | Testing of the new Photo Detectors ET-3010 and ET-3040

I am adding the text files with the data readings and parameter settings, along with the Bode plot of the data. I plotted these graphs using the matplotlib module with Python 2.7.
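
The plotting step is essentially the following (a sketch only; the file name and the three-column frequency/magnitude/phase layout are assumptions about the saved netgpibdata files):

import numpy as np
import matplotlib.pyplot as plt

freq, mag_db, phase_deg = np.loadtxt('ET-3040_test.txt', unpack=True)   # assumed layout

fig, (ax_mag, ax_ph) = plt.subplots(2, 1, sharex=True)
ax_mag.semilogx(freq, mag_db)
ax_mag.set_ylabel('Magnitude [dB]')
ax_ph.semilogx(freq, phase_deg)
ax_ph.set_ylabel('Phase [deg]')
ax_ph.set_xlabel('Frequency [Hz]')
fig.savefig('ET-3040_test.pdf')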

Quote:

Motivation:

I got some hands-on experience on using RF photodetectors and the Network Analyzer from Koji. There were newly purchased RF photodetectors from Electro-Optics Technology, Inc. These are InGaAs photodetectors with model nos. 120-10050-0001 (ET-3010) and 120-10056-0001 (ET-3040). The User Guide for the two detectors can be found here. This is the first time we have bought the ET-3010 model PD for the 40m lab. It has an operation bandwidth >1.5 GHz (not tested yet), much higher than other PDs of its kind. This can be used for detecting the output as we 'sweep' the laser frequency to get data on the optical cavities and the resonating modes inside the cavity. We tested the ET-3040 model today and will test the ET-3010 next week...

 

Attachment 1: ET-3040_test.zip
Attachment 2: ET-3040_test.pdf
  13006 | Tue May 23 10:27:24 2017 | Dhruva | Update | Optical Levers | Beam Profiling Results

I have attempted to calculate the instrument error (micrometer least count) using the values of the spot size obtained by the least-squares fitting method. This error is large towards the centre of the beam, as the power varies significantly between adjacent markings of the micrometer. Using the new error values, I used chi-square minimisation to further optimise the waist size.

The modified values are - 

z (cm)    w (in)
  4       0.0134
 10       0.0135
 15       0.0140
 20       0.0142
 25       0.0150

 

And the revised values for the beam waist and location are 338.63 microns and -2.65 cm respectively. 

I will now try to use the chi-square statistic to estimate the error in the spot size.

Attachment 1: z_25_chisq.pdf
Attachment 2: z_20_chisq.pdf
Attachment 3: z_15_chisq.pdf
Attachment 4: z_10_chisq.pdf
Attachment 5: z_4_chisq.pdf
Attachment 6: spotsize.pdf
  13007 | Tue May 23 15:22:04 2017 | rana | Update | Optical Levers | Beam Profiling Results
  1. Include several sources of error. Micrometer error is one, but you should be able to think of at least 3 more.
  2. There should be an error bar for the x and y axis.
  3. Also, use pdftk to put the PDFs all into a single file. Remove so much whitespace.
  4. Google 'beautiful plots python' and try to make your plots for the elog be more like publication quality for PRL or Nature.
  13008 | Tue May 23 16:33:00 2017 | Steve | Update | Optical Levers | Beam Profiling Results

You may compare your results with this.

RXA: please no, that's not the right way

  13009 | Tue May 23 18:09:18 2017 | Kaustubh | Configuration | General | Testing ET-3010 PD

In continuation of the previous (ET-3040 PD) test.

The ET-3010 PD needs to be fiber coupled for optimal use. I will try to test this model without the fiber coupling tomorrow and see whether it works or not.

  13010 | Tue May 23 22:58:23 2017 | gautam | Update | General | De-Whitening board noises

Summary:

I wanted to match a noise model to noise measurement for the coil-driver de-whitening boards. The main objectives were:

  1. Make sure the various poles/zeros of the Bi-Quad stages and the output stage were as expected from the schematics
  2. Figure out which components are dominating the noise contribution, so that these can be prioritized while swapping out the existing thick-film resistors on the board for lower noise thin-film ones
  3. Compare the noise performance of the existing configuration, which uses an LT1128 op-amp (max output current ~20mA) to drive the input of the coil-driver board, with that when we use a TLE2027 (max output current ~50mA) instead. This last change is motivated by the fact that an earlier noise-simulation suggested that the Johnson noise of the 1kohm input resistor on the coil driver board was one of the major noise contributors in the de-whitening board + coil driver board signal chain. Since the TLE2027 can drive an output current of up to 300mA, we could reduce the input impedance of the coil-driver board to mitigate this noise source to some extent. (A rough Johnson-noise estimate for this resistor is sketched right after this list.)
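
A rough Johnson-noise estimate for that input resistor, v_n = sqrt(4 k_B T R):

import numpy as np

k_B, T = 1.38e-23, 300.0                     # Boltzmann constant [J/K], temperature [K]
for R in (1e3, 100.0):                       # current 1 kOhm input resistor vs. a reduced value
    print("R = %6.0f Ohm -> %.2f nV/rtHz" % (R, np.sqrt(4.0 * k_B * T * R) * 1e9))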

Measurement:

  • The back-plane pin controlling the MAX333A that determines whether de-whitening is engaged or not (P1A) was pulled to ground (by means of one of the new extender boards given to us by Ben Abbott). So two de-whitening stages were engaged for subsequent tests.
  • I first measured the transfer function of the signal path with whitening engaged, and then fit my LISO model to the measurement to tweak the values of the various components. This fitted file is what I used for subsequent noise analysis. 
  • For the noise measurement, I shorted the input of the de-whitening board (10-pin IDE connector) directly to ground.
  • I then measured the voltage noise at the front-panel SMA connector with the SR785
  • The measurements were only done for 1 channel (CH1, which is the UL coil) for 4 de-whitening boards (2 ITMs, BS, and SRM). The 2 ITM boards are basically identical, and the BS and SRM boards are similar. Here, only results for the board labelled "ITMX" are presented.
  • For this board, I also measured the output voltage noise when the LT1128 was replaced with a TLE2027 (SOIC package, soldered onto a SOIC-to-DIP adaptor). Steve has found (ordered?) some DIP variants of this IC, so we can compare its noise performance when we get it.

Results:

  • Attachment #1 shows the modeled and measured noises, which are in fairly good agreement.
  • The transfer function measurement/fitting (not attached) also suggests that the poles/zeros in the signal path are where we expect as per the schematic. I had already verified the various resistances, but now we can be confident that the capacitance values on the schematic are also correct. 
  • The LT1128 and TLE2027 show pretty much identical noise performance.
  • The SR785 noise floor was low enough to allow this measurement without any pre-amp in between. 
  • I have identified 3 resistors from the LISO model that dominate the noise (all 3 are in the Bi-Quad stages), which should be the first to be replaced. 
  • There are some pretty large 60 Hz harmonics visible. I thought I was careful enough avoiding any ground loops in the measurement, and I have gotten some more tips from Koji about how to better set up the measurement. This was a real problem when trying to characterize the Coil Driver noise.

Next steps:

  • I have data from the other 3 boards I pulled out, to be updated shortly.
  • The last piece (?) in this puzzle is the coil driver noise - this needs to be modeled and measured.
  • Once the coil driver board has been characterized, we need to decide what changes to make to these boards. Some things that come to mind at the moment:
    • Replace critical resistors (from noise-performance point of view) with low noise thin film ones.
    • Remove the "fast analog" path on the coil driver boards - these have potentiometers in series with the coil, which we should remove since we are not using this path anyways.
    • Remove all AD797s from both de-whitening and coil driver boards - these are mostly employed as monitor points that go to the backplane connector, which we don't use, and so can be removed.
    • Increase the series resistor at the output of the coil driver (currently, these are either 100ohm or 400ohm depending on the optic/channel). I need to double check the limits on the various LSC servos to make sure we can live with the reduced range we will have if we up these resistances to 1 kohm (which serves to reduce the current noise to the coils, which is ultimately what matters).
Attachment 1: ITMX_deWhite_ch1_noise.pdf
  13011 | Wed May 24 18:19:15 2017 | Kaustubh | Update | General | ET-3010 PD Test

Summary:

In continuation of the previous test conducted on the ET-3040 PD, I performed a similar test on the ET-3010 model. This model requires a fiber-coupled input for proper testing, but I tested it in free space without fiber coupling, as the laser power was only 1.00 mW and there was not much danger of scattering of the laser beam. The data sheet can be found here.

Procedure:

The schematic (attached below) and the procedure are the same as last time. The pump current was set to 19.5 mA, giving us a laser beam of power 1.00 mW at the fiber coupler output. The measured voltage for the reference detector was 1.8 V. For the DUT, the voltage is amplified using a low-noise amplifier (model SR560) with a gain of 100. Without any laser incident on the DUT, the multimeter reads 120.6 mV. After aligning the laser with the DUT, the multimeter reads 348.5 mV, i.e. the voltage for the DUT is 227.9/100 ~ 2.28 mV. The DC transimpedance of the reference detector is 10 kOhm and its responsivity at 1064 nm is around 0.75 A/W. Using this we calculate the power at the reference detector to be 0.24 mW. The DC transimpedance of the DUT is 50 Ohm and its responsivity is around 0.85 A/W. Using this we calculate the power at the DUT to be 0.054 mW. After this we connect the laser input to the Network Analyzer (AG4395A) and apply an RF signal at -10 dBm with the modulation frequency swept from 100 kHz to 500 MHz. The RF output from the Analyzer is coupled to the Reference Channel (CHR) of the analyzer via a 20 dB directional coupler. The AC output of the reference detector is given to Channel A (CHA) and the output from the DUT is given to Channel B (CHB). We got plots of the ratios between the reference detector, the DUT and the coupled reference for the transfer function and the phase. I stored the data under the directory .../scripts/general/netgpibdata/data. The Bode plot is attached below; from it we observe that the cut-off frequency for the ET-3010 model is at least 500 MHz (stated as >1.5 GHz in the data sheet).

Result:

The bandwidth of the ET-3010 PD is at least 500 MHz; it is stated in the data sheet as >1.5 GHz.

Precaution:

The ET-3010 PD has an internal power supply of 6V. Don't leave the PD connected to any instrument after the experimentation is done or else the batteries will get drained if there is any photocurrent on the PDs.

To Do:

Calibrate the vertical axis in the Bode plot with transimpedance (Ohms) for the two PDs. Automate the procedure by making a Python script that takes multiple sets of readings from the Network Analyzer and also plots the error bands.
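
One possible shape for that script (a sketch only; the file naming pattern and the two-column frequency/magnitude-dB layout are assumptions):

import glob
import numpy as np
import matplotlib.pyplot as plt

sweeps = [np.loadtxt(f) for f in sorted(glob.glob('ET-3010_sweep_*.txt'))]   # assumed file names
freq = sweeps[0][:, 0]
mags = np.array([s[:, 1] for s in sweeps])           # one row per sweep

mean, std = mags.mean(axis=0), mags.std(axis=0)
plt.semilogx(freq, mean)
plt.fill_between(freq, mean - std, mean + std, alpha=0.3)    # +/- 1 sigma error band
plt.xlabel('Frequency [Hz]')
plt.ylabel('Magnitude [dB]')
plt.savefig('ET-3010_averaged.pdf')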

Attachment 1: PD_test_setup.png
Attachment 2: ET-3010_test.pdf
Attachment 3: ET-3010_test.zip
  13012 | Thu May 25 12:22:59 2017 | gautam | Update | CDS | slow machine bootfest

After ~3months without any problems on the slow machine front, I had to reboot c1psl, c1susaux and c1iscaux today. The control room StripTool traces were not being displayed for all the PSL channels so I ran testSlowMachines.bash to check the status of the slow machines, which indicated that these three slow machines were dead. After rebooting the slow machines, I had to burt-restore the c1psl snapshot as usual to get the PMC to lock. Now, both PMC and IMC are locked. I also had to restart the StripTool traces (using scripts/general/startStrip.sh) to get the unresponsive traces back online.

Steve tells me that we probably have to reboot the vacuum slow machines sometime soon too, as the MEDM screens for the vacuum indicator channels are unresponsive.

Quote:

Had to reboot c1psl, c1susaux, c1auxex, c1auxey and c1iscaux today. PMC has been relocked. ITMX didn't get stuck. According to this thread, there have been two instances in the last 10 days in which c1psl and c1susaux have failed. Since we seem to be doing this often lately, I've made a little script that uses the netcat utility to check which slow machines respond to telnet, it is located at /opt/rtcds/caltech/c1/scripts/cds/testSlowMachines.bash.

 

 

  13013 | Thu May 25 16:42:41 2017 | jigyasa | Update | Computer Scripts / Programs | Making pylon installation on shared directory

I have been working on interfacing with the GigEs. I went through Joe B's paper and the previous elogs and verified that the code files are installed.

I then downloaded and extracted a copy of the Pylon software onto my home directory on Allegra. Gautam helped me find installation instructions on Johannes’ directory so that I could make the installation on the shared directory.

So far , according to instructions, these commands need to be executed so that the installation takes place and the rules for camera permissions are set up.

sudo tar -C /opt/rtcds/caltech/c1/scripts/GigE -xzf pylonSDK*.tar.gz

followed by ./setup-usb.sh

The Pylon viewer can then be accessed with /scripts/GigE/pylon5/bin/PylonViewerApp 

Should I go ahead with the installation in the shared directory?

  13014 | Thu May 25 18:37:11 2017 | jigyasa | Update | Computer Scripts / Programs | Making pylon installation on shared directory

Gautam helped me execute the commands mentioned above and Pylon has now been installed in the shared directory. We extracted the pylon installation from Johannes's directory to the shared drive, and executing the command tar -C /opt/rtcds/caltech/c1/scripts/GigE -xzf pylonSDK*.tar.gz created an unzipped pylon5 folder in /scripts. The ./setup-usb.sh script set up the udev rules for the GigE.

The installation took place without any errors.

The Pylon viewer app can now be accessed at /opt/rtcds/caltech/c1/scripts/GigE/pylon5/bin followed by ./PylonViewerApp 

Quote:

Should I go ahead with the installation in the shared directory?

 

  13015 | Thu May 25 19:27:29 2017 | gautam | Update | General | Coil driver board noises

[Koji, Gautam]

Summary: 

  • Attachment #1 shows the measured/modeled noise of the coil driver board (labelled ITMX). 
  • Measurement was made with "TEST" input (which is what the DAC drives) is connected to ground via 50ohm terminator, and "BIAS" input grounded.
  • The model tells us to expect a noise of the order of 5nV/rtHz - this is comparable to (or below) the input noise of the SR785, and even the SR560. So this measurement only serves to place an upper bound on the coil driver board noise.
  • There is some excess noise below 40Hz, would be interesting to see if this disappears with swapping out thick-film resistors for thin film ones.
  • The LISO model says that the dominant contribution is from the voltage and input current noise of the two op-amps (LT1125) in the bias LP filter path. 
  • But if we can indeed realize this noise level of ~10-20nV/rtHz, we are already at the ~10^-17m/rtHz displacement noise for MICH at about 200Hz. I suspect there are other noises that will prevent us from realizing this performance in displacement noise.

Details:

This measurement has been troublesome - I was plagued by large 60Hz harmonics (see Attachment #1), the cause of which was unknown. I powered all electronics used in the measurement setup from the same power strip (one of the new surge-protecting ones Steve recently acquired for us), but the harmonics remained. Yesterday, Koji helped me troubleshoot this issue. We did various things; I'll list them here in the order we did them:

  1. Double check that all electronics were indeed being powered from the same power strip - OK, but harmonics remained present.
  2. Tried using a different DC power supply - no effect.
  3. Checked the signal with an oscilloscope - got no additional insight.
  4. I was using a DB25 breakout board + Pomona minigrabbers to measure the output signal and pipe it to the SR785. Koji suggested using twisted ribbon wire + a soldered BNC connector (recycled from some used ones lying around the lab). The idea was to minimize stray radiation pickup. We also disconnected the WiFi extender and GPIB box from the analyzer and also disconnected these from the power - this finally had the desired effect: the large harmonics vanished. 

Today, I tried to repeat the measurement, with the newly made twisted ribbon cable, but the large 60Hz harmonics were back. Then I realized we had also disconnected the WiFi extender and GPIB box yesterday.

Turns out that connecting the Prologix box to the SR785 (even with no power) is the culprit! Disconnecting the Prologix box makes these harmonics go away. I was using the box labelled "Santuzza.martian" (192.168.113.109), but I double-checked with the box labelled "vanna.martian" (192.168.113.105, also a different DC power supply adapter for the box), the effect is the same. I checked various combinations like 

  • GPIB box connected but not powered
  • GPIB box connected with no network cable

but it looks like connecting the GPIB box to the analyzer is what causes the problem. This was reproducible on both SR785s in the lab. So to make this measurement, I had to do things the painful way - acquire the spectrum by manually pushing buttons with the GPIB box disconnected, then re-connect the box and download the data using SRmeasure --getdata. I don't fully understand what is going on, especially since if the input connector is directly terminated using a 50ohm BNC terminator, there are no harmonics, regardless of whether the GPIB box is connected or not. But it is worth keeping this problem in mind for future low-noise measurements. My elog searches did not reveal past reports of similar problems, has anyone seen something like this before?

It also looks like my previous measurement of the de-whitening board noises was plagued by the same problem (I took all those spectra with the GPIB boxes connected). I will repeat this measurement.

Next steps:

At the meeting this week, it was decided that

  • All AD797s would be removed from de-whitening boards and also coil-driver boards (as they are unused).
  • Thick film resistors with the most dominant noise contributions to be replaced with thin-film ones.
  • Gain of 3 on de-whitening board to be changed to gain of 1.

I also think it would be a good idea to up the 100-ohm resistors in the bias path on the ITM coil driver boards to 1kohm wire-wound. Since the dominant noise on the coil-driver boards is from the voltage noise of the Op-Amps in the bias path, this would definitely be an improvement. Looking at the current values of the bias MEDM sliders, a 10x increase in the resistance for ITMX will not be possible (the yaw bias is ~-1.5V), but perhaps we can go for a 4x increase?

The plan is to then re-install the boards, and see if we can 

  1. Turn on the whitening successfully (I checked with an extender board that the switching of the whitening stages works - turning OFF the "simDW" filter in the coil driver filter banks enables the analog de-whitening).
  2. Realize the promised improvement in MICH displacement noise with the existing whitening configuration.

We can then take a call on how much to up the series resistance in the DAC signal path. 

Now that I have figured out the cause of the harmonics, I will also try and measure the combined electronics noise of de-whitening board + coil driver board and compare it to the model.

Quote:
  • The last piece (?) in this puzzle is the coil driver noise - this needs to be modeled and measured.

 

Attachment 1: coilDriverNoises.pdf
  13016 | Sat May 27 10:26:28 2017 | Kaustubh | Update | General | Transimpedance Calibration

Using Alberto's paper LIGO-T10002-09-R titled "40m RF PDs Upgrade", I calibrated the vertical axis in the bode plots I had obtained for the two PDs ET-3010 and ET-3040.

I am not sure whether the values I have obtained are correct or not(i.e. whether the calibration is correct or not). Kindly review them.

EDIT: Attached the formula used to calculate the transimpedance for each data point and the values of the other parameters.

EDIT 2: Updated the plots by changing the conversion for getting the ratio of the transfer functions from 10^(y/10) to 10^(y/20).
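
The point of EDIT 2 in two lines: the analyzer ratio is a voltage ratio expressed in dB, so the linear factor is 10^(y/20); the remaining scaling into Ohms (reference transimpedance, responsivities, incident powers) follows the formula in the attached note and is only indicated schematically here:

import numpy as np

def db_to_linear(y_db):
    return 10.0**(np.asarray(y_db) / 20.0)     # voltage ratio, not 10**(y/10) (a power ratio)

# Z_dut ~ db_to_linear(mag_db) * Z_ref * (responsivity / power scaling per the attached formula)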

Attachment 1: ET-3040_test_transimpedance.pdf
Attachment 2: ET-3010_test_transimpedance.pdf
Attachment 3: Formula_for_Transimpedance.pdf
  13017 | Mon May 29 16:47:38 2017 | gautam | Update | General | Coil driver boards reinstalled

Yesterday, I reinstalled the de-whitening boards + coil driver boards into their respective Eurocrate slots, and reconnected the cabling. I then roughly re-aligned the ITMs using the green beams. 

I've given Steve a list of the thin-film resistors we need to implement the changes discussed in the preceding elogs - but I figured it would be good to see if we can realize the projected improvement in MICH displacement noise just by fixing the BS Oplev loop shape and turning the existing whitening on. Before re-installing them, however, I did make a few changes:

  • Removed the gain of x3 on all the signal paths on the De-Whitening boards, and made them gain x1. For the De-Whitened path, this was done by changing the feedback resistor in the final op-amp (OP27) from 7.5kohm to 2.49kOhm, while for the bypass path, the feedback resistor in the LT1125 stages were changed from 3.01kohm to 1kohm. 
  • To recap - this gain of x3 was originally implemented because the DACs were +/- 5V, while the coil driver electronics had supply voltage of +/- 15V. Now, our DACs are +/- 10V, and even though the supply voltage to the coil driver boards is +/- 15V, in reality, the op-amps saturate at around 12V, so we aren't really losing much in terms of range.
  • I also modified the de-whitening path in the BS de-whitening board to mimic the configuration on the ITM de-whitening boards. Mainly, this involved replacing the final stage AD797 with an OP27, and also implementing the passive pole-zero network at the output of the de-whitened path. I couldn't find capacitors similar to those used on the ITM de-whitening boards, so I used WIMA capacitors.
  • The SRM de-whitening path was not touched for now.
  • On all the boards, I replaced any AD797s that were being used with OP27s, and simply removed AD797s that were in DAQ paths.
  • I removed all the potentiometers on all the boards (FAST analog path on the coil driver boards, and some offset trim Pots on the BS and SRM de-whitening boards for the AD797s, which were also removed).
  • For one signal path on the coil driver board (ITMX ch1), I replaced all of the resistors with thin-film ones and re-measured the noise. However, the excess noise in the measurement below ~40Hz (relative to the model) remained.

Photos of all the boards were taken prior to re-installation, and have been uploaded to the 40m Google Photos page - I will update schematics + photos on the DCC page once other planned changes are implemented.

I also measured the transfer functions on the de-whitened signal paths on all the boards before re-installing them. I then fit everything using LISO, and updated the filter banks in Foton to match these measurements - the original filters were copied over from FM9 and FM10 to FM7 and FM8. The new filters are appended with the suffix "_0517", and live in FM9 and FM10 of the coil output filter banks. The measured TFs (for ITMs and BS) are summarized in Attachment #1, while Attachment #2 contains the data and LISO file used to do the fits (path to the .bod files in the .fil file will have to be changed appropriately). I used 2 complex pole pairs at ~10 Hz, two complex zero pairs at ~100Hz, real poles at ~15Hz and ~3kHz, and real zeros at ~100Hz and ~550Hz for the fits. The fits line up well with the measured data, and are close enough to the "expected" values (as calculated from component values) to be explained by tolerances on the installed components - I omit the plots here. 
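
For reference, the fitted de-whitened-path shape can be reproduced with something like the following (the pole/zero frequencies are the ones quoted above; the Qs of the complex pairs and the overall gain are placeholders, not the actual LISO fit values):

import numpy as np
import scipy.signal as sig

def cpair(f0, Q):
    # complex root pair at f0 [Hz] with quality factor Q, returned in rad/s
    w0 = 2.0 * np.pi * f0
    re = -w0 / (2.0 * Q)
    im = w0 * np.sqrt(1.0 - 1.0 / (4.0 * Q**2))
    return [re + 1j * im, re - 1j * im]

zeros = cpair(100.0, 2.0) + cpair(100.0, 2.0) + [-2*np.pi*100.0, -2*np.pi*550.0]
poles = cpair(10.0, 2.0) + cpair(10.0, 2.0) + [-2*np.pi*15.0, -2*np.pi*3000.0]

w = 2.0 * np.pi * np.logspace(0, 4, 500)
w, h = sig.freqs_zpk(zeros, poles, 1.0, worN=w)
mag_db = 20.0 * np.log10(np.abs(h))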

After re-installing the boards in the Eurocrate, restoring rough alignment, and updating the filter banks with the most recent measured values, I wanted to see if I could turn the whitening on for one of the optics (ITMY) smoothly before trying to do so in the full DRMI - switching off the "SimDW_0517" filter (FM9) should switch the signal path on the de-whitening board from bypass to de-whitened, and I had confirmed last week with an extender board that the voltage at the appropriate backplane connector pin does change as expected when the FM9 MEDM button is toggled (for both ITMs, BS and SRM). But today I was not able to engage this transition smoothly, the optic seems to be getting kicked around when I engage the whitening. I will need to investigate this further. 


Unrelated to this work: the ETMY Oplev HeNe is dead (see Attachment #3). I thought we had just replaced this laser a couple of months ago - what is the expected lifetime of these? Perhaps the power supply at the Y-end is wonky and somehow damaging the HeNe heads?

Attachment 1: deWhitening_consolidated.pdf
Attachment 2: deWhitening_measurements.zip
Attachment 3: ETMY_OL.png
  13018 | Tue May 30 13:36:58 2017 | Steve | Update | Optical Levers | ETMY Oplev HeNe is replaced

Finally I realized what was killing the ETMY oplev laser: the wrong power supply, which was driving the HeNe laser at 600 V above the recommended voltage. Power supply 101T-2300Vdc was replaced by 101T-1700Vdc (Uniphase model 1201-1, sn 2712420).

The laser head 1103P, sn P947049, lived for 120 days and was replaced by sn P964431. New laser output 2.8 mW, quadrant sum 19,750 counts.

 

Attachment 1: oplevETMY120d.png
  13019 | Tue May 30 16:02:59 2017 | gautam | Update | General | Coil driver boards reinstalled

I think the reason I am unable to engage the de-whitening is that the OL loop is injecting a ton of control noise - see Attachment #1. With the OL loop off (i.e. just local damping loops engaged for the ITMs), the RMS control signal at 100Hz is ~6 orders of magnitude (!) lower than with the OL loop on. So turning on the whitening was just railing the DAC I guess (since the whitening has something like 60dB gain at 100Hz).

The Oplev loops for the ITMs use an "Ellip15" low-pass filter to do the roll-off (2nd order Elliptic low pass filter with 15dB stopband atten and 2dB ripple). I confirmed that if I disable the OL loops, I was able to turn on the whitening for ITMY smoothly.

Now that the ETMY OL HeNe has been replaced, I restored alignment of the IFO. Both arms lock fine (I was also able to engage the ITMY Coil Driver whitening smoothly with the arm locked). However, something funny is going on with ASS - running the dither seems to inject huge offsets into the ITMY pit and yaw such that it almost immediately breaks the lock. This probably has to do with some EPICS values not being reset correctly since the recent slow-machine restarts (for instance, the c1iscaux restart caused all the LSC RFPD whitening gains to be reset to random values, I had to burt-restore the POX11 and POY11 values before I could get the arms to lock), I will have to investigate further.

GV edit 2pm 31 May: After talking to Koji at the meeting, I realized I did not specify what channel the attached spectra are for - it is  C1:SUS-ITMY_ULCOIL_OUT.

Quote:
 

But today I was not able to engage this transition smoothly, the optic seems to be getting kicked around when I engage the whitening. I will need to investigate this further. 


Unrelated to this work: the ETMY Oplev HeNe is dead (see Attachment #3). I thought we had just replaced this laser a couple of months ago - what is the expected lifetime of these? Perhaps the power supply at the Y-end is wonky and somehow damaging the HeNe heads?

 

Attachment 1: OL_noiseInjection.pdf
  13020 | Tue May 30 17:45:35 2017 | jigyasa | Summary | Cameras | GigE configuration

To verify the Pylon Installation on the shared drive, I tried connecting the Basler acA640-100gm to the PoE connector and running it through Allegra.

Each time the camera was opened, I got a message on Terminal saying ‘Failed to get the node ‘AcquisitionFrameRate’ from the device’s nodemap’.

Yet, I was able to capture images in single-shot and continuous-shot mode. I tried to emulate the analog controls (gain at 360, black level 121) as in Johannes' elog 12617 and varied the exposure time from 1 to 5 milliseconds. The camera had the Rainbow 50 mm lens, with which I was able to focus on some markings on the whiteboard; however, the image was extremely magnified and this lens was extremely sensitive, which meant that the image went quickly out of focus.
I checked the CCD cabinet in the 40m and found 12 mm lenses, which couldn't focus properly, so I couldn't quite get an image like the one Johannes had been able to obtain. I also got an image of a cable in focus, but it is very dark due to the exposure time.
With the components for the telescope design arriving (hopefully) by tomorrow, I should be able to assemble the telescope and capture some more images.

From Joe B's paper and discussion with Gautam and Johannes, I came up with three models for configuring the GigEs, each of which connects the camera to a computer network. The first model just involves connecting the camera directly to a PC with a Pylon installation using a Power over Ethernet adapter; it would only be efficient for the basic IP configuration of the camera, without involving a complex network. The second model describes the integration of the camera into the Martian network. The third model combines the creation of a separate camera subnetwork with the integration of this network into the main lab network through a switch. This model would be more efficient to employ as the number of cameras increases. The same purpose could be achieved by using a PC with two network ports, one of which connects to the camera subnet while the other links it to the Martian network, where the computers running the client script could stream the desired frames.

 

Attachment 1: GigEconfiguration.pdf
  13021 | Tue May 30 18:31:54 2017 | Dhruva | Update | Optical Levers | Beam Profiling Results

Updates on the He-Ne beam profiling experiment.

  1. I've made intensity profile plots at two more points on the z-axis. The addition of these plots hasn't affected the earlier obtained beam waist significantly.
  2. I have added other sources of error, such as the statistical fluctuations on the oscilloscope (which are small compared to the least-count error of the micrometer) and the least count of the z-axis scale.
  3. I have also calculated the error in the fitted parameters by computing the covariance matrix from the Jacobian returned by the lsqcurvefit function in MATLAB (see the sketch after this list).
  4. I have also added horizontal error bars to all plots.
  5. All plots are now in S.I. units.
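
The error propagation in point 3, written out (a sketch of the standard recipe, not the exact MATLAB code): the parameter covariance is estimated as s^2 (J^T J)^-1, with s^2 the residual variance.

import numpy as np

def param_covariance(J, residuals):
    # J: Jacobian of the model w.r.t. the fit parameters, evaluated at the best fit
    dof = residuals.size - J.shape[1]
    s2 = np.sum(residuals**2) / dof              # residual variance
    return s2 * np.linalg.inv(np.dot(J.T, J))

# 1-sigma parameter errors: np.sqrt(np.diag(param_covariance(J, r)))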

 

 

Attachment 1: plots.pdf
plots.pdf
Attachment 2: spot_size_y.pdf
spot_size_y.pdf
  13022   Wed May 31 12:58:30 2017 Eric GustafsonUpdateLSCRunning the 40 m PD Frequency Response Fiber System; Hardware and Software

Overall Design

A schematic of the overall subsystem is shown in Attachment 1.

RF and Optical Connections

Starting at the top left corner is the diode laser module. This laser has an input which allows it to be amplitude modulated. The output of the laser is coupled into an optical fiber, connectorized with an FC/APC connector, which is connected to the input port of a 1 by 16 optical fiber splitter. The splitter produces 16 optical fiber outputs, dividing the input laser power into 16 roughly equal parts. These fibers are routed to the photodiode receivers (PDs), which are the devices under test, so all of the PDs are illuminated simultaneously with amplitude-modulated light. Each fiber output has a collimating fiber telescope which is used to focus the light onto its PD. Optical fiber CH1 is routed to a broadband, flat-response reference photodiode which is used to provide a reference to the HP-4395A Network Analyzer. The RF outputs of the other channels are connected to an RF switch which can be programmed to select one of 16 inputs as its output. The selected output can then be sent into channel A of the RF Network Analyzer. 

 

RF Switch

The RF switch consists of two 8-by-1 multiplexers (National Instruments PXI-254x) slotted into a PXI chassis (National Instruments PXI-1033). The multiplexers have 8 RF inputs and one RF output and can be programmed through the PXI chassis to select one and only one of the 8 inputs to be routed to the RF output. The first 8 channels are connected to the first 8 inputs of the first multiplexer. The first multiplexer's output is then connected to the channel 1 input of the second multiplexer. The remaining PD outputs are connected to the remaining inputs of the second multiplexer. The output of the second multiplexer is connected to the A channel of the RF Network Analyzer. Thus it is possible to select any one of the PD RF outputs for analysis.

Software

Something on this tomorrow.
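
In the meantime, here is a hypothetical sketch of the channel bookkeeping implied by the multiplexer cascade above (the function name and the PD numbering convention are mine, not from the actual control software): PDs routed through the first multiplexer appear on input 1 of the second, and the remaining PDs sit directly on its other inputs.

def route_pd(pd_index):
    """Map a PD RF output (1-15, numbering hypothetical) onto settings for the two
    cascaded 8x1 multiplexers. Returns (mux1_input or None, mux2_input)."""
    if not 1 <= pd_index <= 15:
        raise ValueError("only 15 PD outputs fit on two cascaded 8x1 multiplexers")
    if pd_index <= 8:
        return pd_index, 1        # select the PD on mux 1, pass it through input 1 of mux 2
    return None, pd_index - 7     # mux 1 not used; the PD sits directly on mux 2

# Example: PD 3 -> (3, 1), PD 11 -> (None, 4)
print(route_pd(3), route_pd(11))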

 

Attachment 1: Overall_schematic_D1300603-v2.pdf
Overall_schematic_D1300603-v2.pdf
  13023   Wed May 31 14:23:42 2017 jigyasaUpdateComputer Scripts / ProgramsEstablishing the EPICs channels for the GigE

To set up the EPICS channels for the GigE, Gautam and I followed the steps in his elog 8957.

We copied the 11 required channels from scripts/GigE/SnapPy/example_camera.db to a c1cam.db file that we created; however, due to conflicts with the existing CAM-AS_PORT channels, the channels could not be accessed.

We later moved the records to Video.db, and on restarting the slow machine it was verified that the channels could indeed be written to and read from.

The following 11 channels were added:

C1:CAM-MC1_X (X centroid position)
C1:CAM-MC1_Y (Y centroid position)
C1:CAM-MC1_WX (Gaussian width in the X direction)
C1:CAM-MC1_WY (Gaussian width in the Y direction)
C1:CAM-MC1_XY (Gaussian width along the XY line)
C1:CAM-MC1_SUM (Pixel sum)
C1:CAM-MC1_EXP (Exposure time in microseconds)
C1:CAM-MC1_SNAP (Control signal for taking snapshots)
C1:CAM-MC1_FILE (File name for the image to be saved to - time stamp automatically appended)
C1:CAM-MC1_RELOAD (Reloads the configuration file)
C1:CAM-MC1_AUTO (1 means autoexposure on, 0 means autoexposure off)

The procedure followed –

  • Add the channel names to the file C0EDCU.ini (path = /opt/rtcds/caltech/c1/chans/daq/C0EDCU.ini).
  • Make a database (.db) file so that these channels are actually recorded (path = /cvs/cds/caltech/target/c1aux/Video.db).
  • Restart FB and the slow machine (restarting FB alone was not sufficient - see the note below).
  • Verify that the channels indeed exist and can be read and written to using ezcaread and ezcawrite.

GV: Initially, I made a new directory called c1cam in /cvs/cds/caltech/target/ and made a .db file in there. However, the channels were not accessible after re-starting FB (attempting to read these channels threw up the "Channel does not exist" error). On digging a little further, I saw that there were already some "C1:CAM-AS_PORT" channels in C0EDCU.ini. The corresponding database records were defined inside /cvs/cds/caltech/target/c1aux/Video.db. So I just added the new records there. I also had to uncomment the dummy channel in C0EDCU.ini to keep an even number of channels. Restarting FB still did not allow read/write access to the channels. Looking through the files in /cvs/cds/caltech/target/c1aux, I suspected that the EPICS database records are loaded when the machine is first booted up - so on a hunch I re-started c1aux by keying the crate, and this did the trick. The channels can now be read / written to (tested using Python cdsutils).
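
For completeness, a minimal sketch of the read/write check described above, done from Python with pyepics (ezcaread / ezcawrite are the command-line equivalents actually used; the availability of pyepics on a given workstation is an assumption):

from epics import caget, caput

channels = ["C1:CAM-MC1_X", "C1:CAM-MC1_Y", "C1:CAM-MC1_WX", "C1:CAM-MC1_WY",
            "C1:CAM-MC1_XY", "C1:CAM-MC1_SUM", "C1:CAM-MC1_EXP", "C1:CAM-MC1_SNAP",
            "C1:CAM-MC1_FILE", "C1:CAM-MC1_RELOAD", "C1:CAM-MC1_AUTO"]

# Read every channel; a None return means the channel is not being served
for ch in channels:
    print(ch, caget(ch, timeout=2.0))

# Round-trip write test on a harmless channel (exposure time in microseconds)
caput("C1:CAM-MC1_EXP", 2000, wait=True)
print("EXP readback:", caget("C1:CAM-MC1_EXP"))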

  13024   Wed May 31 18:17:28 2017 jigyasaSummaryCamerasGigE configuration

This evening I was able to obtain some images with the same lens on the GigE. 
The problem earlier, as Johannes pointed out, was that we were using too many adapters on the camera, so it could only focus at very short distances and with a very shallow depth of focus. 
So after removing the adapters we were able to focus on objects at much larger distances.

The mug for example was at a distance greater than 1.5 meters from the camera.

Here are some images that were captured on Allegra by plugging in the GigE to the PoE connector connected to the Martian. 

Quote:

I tried to emulate the analog controls (gain at 360, black level at 121) as in Johannes’ elog 12617 and varied the exposure time from 1 to 5 milliseconds. The camera had the Rainbow 50mm lens, with which I was able to focus on some markings on the white board; however, the image was extremely magnified and the focus was extremely sensitive, so the image quickly went out of focus.

 

Attachment 1: PictureswithPylon.pdf
PictureswithPylon.pdf
  13025   Wed May 31 19:18:53 2017 jigyasaSummaryCamerasGigE configuration

And here's another picture of Kaustubh, my fellow SURF, captured in all his glory by Rana! :)

 

Quote:

This evening I was able to obtain some images with the same lens on the GigE. 
The problem earlier, as Johannes pointed out, was that we were using too many adapters on the camera, so it could only focus at very short distances and with a very shallow depth of focus. 
So after removing the adapters we were able to focus on objects at much larger distances.

The mug for example was at a distance greater than 1.5 meters from the camera.

Here are some images that were captured on Allegra by plugging in the GigE to the PoE connector connected to the Martian. 

Quote:

I tried to emulate the analog controls (gain at 360, black level at 121) as in Johannes’ elog 12617 and varied the exposure time from 1 to 5 milliseconds. The camera had the Rainbow 50mm lens, with which I was able to focus on some markings on the white board; however, the image was extremely magnified and the focus was extremely sensitive, so the image quickly went out of focus.

 

 

Attachment 1: Image__2017-05-31__18-49-37.bmp
  13026   Thu Jun 1 00:10:15 2017 gautamUpdateGeneralCoil driver boards reinstalled

[Koji, Gautam]

We tried to debug the mysterious sudden failure of ASS - here is a summary of what we did tonight. These are just notes for now, so I don't forget tomorrow.

What are the problems/symptoms?

  • After re-installing the coil driver electronics, the ASS loops do not appear to converge - one or more loops seem to run away to the point we lose the lock.
  • For the Y-arm dithers, the previously nominal ITM PIT and YAW oscillator amplitudes (of ~1000cts each) now appear far too large (the fuzz on the Y arm transmission increases by x3 as viewed on StripTool).
  • The convergence problem exists for the X arm alignment servos too.

What are the (known) changes since the servos were last working?

  • The gain of x3 on the de-whitening boards for ITMX, ITMY, BS and SRM has been replaced with a gain of x1. But I had measurements for all transfer functions (De-White board input to De-White Board outputs) before and after this change, so I compensated by adding a filter of gain ~x3 to all the coil filter banks for these optics (the exact value was the ratio of the DC gains of the transfer functions before/after).
  • The ETMY Oplev has been replaced. I walked over to the endtable and there doesn't seem to be any obvious clipping of either the Oplev beam or the IR transmission.

Hypotheses plus checks (indented bullets) to test them:

  1. The actuation on the ITMs is ~x10 stronger now (for reasons unknown).
    • I locked the Y-arm and drove a line in the channels C1:SUS-ETMY_LSC_EXC and C1:SUS-ITMY_LSC_EXC at ~100Hz and ~30Hz (one optic at one frequency at a time), and looked at the response in the LSC control signal. The peaks at both frequencies for the ITMs and ETMs were within a factor of ~2. Seems reasonable.
    • We further checked by driving lines in C1:SUS-ETMY_ASCPIT_EXC and C1:SUS-ITMY_ASCPIT_EXC (and also the corresponding YAW channels), and looked at peak heights at the drive frequencies in the OL control signal spectra - the peak heights matched up well in both the ITM and ETM spectra (the drive was in the same number of counts).

      So it doesn't look like there is any strange actuation imbalance between the ITM and ETM as a result of the recent electronics work, which makes sense as the other control loops acting on the suspensions (local damping, Oplevs etc.) seem to work fine. 
  2. The way the dither servo is set up for the Y-arm, the tip-tilts are used to set the input axis to the cavity axis, while actuation to the ITM and ETM takes care of the spot centering. The problem lies with one of these subsystems.
    • We tried disabling the ASS servo inputs to all the spot-centering loops - but even with just actuation on the TTs, the arm transmission isn't maximized.
    • We tried the other combination - disable the actuation path to the TTs, leave the paths to the ITM and ETM on - same result, but the divergence is much faster (lock is lost within a couple of seconds, and large offsets appear in the ETM_PIT_L / ETM_YAW_L error signals).
    • Tried turning on loops one at a time - but still the arm transmission isn't maximized.
  3. Something is funny with the IR transmon QPD / ETMY Oplev.
    • I quickly measured Oplev PIT and YAW OLTFs, they seem normal with upper UGFs around 5Hz and phase margins of ~30 degrees.
    • We had no success using either of the two available Transmon QPDs
    • Looking at the QPD quadrants, the alignment isn't stellar but we get roughly the same number of counts on all quadrants, and the spot isn't drastically misaligned in either PIT or YAW.

For whatever reason, it appears that dithering the cavity mirrors at the frequencies and amplitudes that worked ~3 weeks ago no longer gives us the correct error signals for dither alignment. We are out of ideas for tonight, TBC tomorrow...

 

  13027   Thu Jun 1 15:33:39 2017 jigyasaUpdateCamerasGigE installation in the IFO area

I tried to capture some images with the GigE inside the interferometer area in the 40m today. For that, I connected the PoE injector to the Netgear switch in 1x6 and connected it to the GigE. I then tried to access the Pylon Viewer App through Paola, but that ran into some errors. When trying to connect to the Basler, quite a few errors were encountered in establishing the connection and capturing images: single-shot capture produced occasional errors, and continuous-shot capture could not even be started. To localize the problem, I ran the Pylon installation on Allegra in the control room, and everything worked fine there.

Few error messages encountered

createPylondevice error :Failed to read memory at 0xc0000000, 0xd800 bytes. Timeout. No message received.
Failed to stop the camera; stopgrab: Exception Occurred: Control Channel not open


Eventually I connected Paola to the switch with an Ethernet cable; over this wired connection the errors were resolved, and I was able to capture some images in continuous-shot mode at 103.3 fps without any problem.

In the afternoon, Steve and I tried to install the camera near MC2 and get some images of the mirrors. Because of the restricted field of view of the lens on the camera, it took many attempts at focusing on the optic before we were able to get this image. MC2 was unlocked, so this image captures some resonating higher-order mode.

With MC2 locked, I will get some images of the mirror at different exposure times and try to get an HDR image.  
As per Rana's suggestion, I am also looking up which compression format would be the best to save the images in.
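
As a placeholder for the HDR processing, here is a naive sketch of one way the exposure-bracketed frames could be combined (scale each frame by its exposure time, mask saturated pixels, average the rates) - this is just one possible approach, not necessarily the method that will actually be used:

import numpy as np

def combine_hdr(frames, exposures_us, saturation=255):
    """Naive HDR combination: convert each frame to counts per microsecond,
    ignore saturated pixels, and average the rates over the exposure stack."""
    num = np.zeros_like(frames[0], dtype=float)
    den = np.zeros_like(frames[0], dtype=float)
    for frame, t in zip(frames, exposures_us):
        frame = frame.astype(float)
        valid = frame < saturation           # drop blown-out pixels
        num += np.where(valid, frame / t, 0.0)
        den += valid
    return num / np.maximum(den, 1)          # HDR estimate in counts per microsecond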

 

Attachment 1: HOMMC2.pdf
HOMMC2.pdf
  13028   Thu Jun 1 15:37:01 2017 gautamUpdateCDSslow machine bootfest

Steve alerted me that the IMC wouldn't lock. Reboots for c1susaux, c1iool0 today. I tried using the reset button instead of keying the crates. This worked for c1iool0, but not for c1susaux. So I had to key the latter crate. The machine took a good 5-10 minutes before coming back up, but eventually it did. Now IMC locks fine.

  13029   Thu Jun 1 16:14:55 2017 jigyasaUpdateCamerasGigE installation in the IFO area

Thanks to Steve and Gautam, the IMC was locked.

I was able to capture images with the Rainbow 50 mm lens at exposure times of 100, 300, 1000, 3000, 10000 and 30 microseconds (the pictures are in the same order). These pictures were taken at a gain of 300 and a black level of 64.

Special credit to Steve, who spent a lot of time helping me with setting up the hardware and focusing on the beam spot with the camera. 
I can't thank you enough Steve! :) 

Quote:

In the afternoon, Steve and I tried to install the camera near MC2 and get some images of the mirrors. Because of the restricted field of view of the lens on the camera, it took many attempts at focusing on the optic before we were able to get this image. MC2 was unlocked, so this image captures some resonating higher-order mode.

With MC2 locked, I will get some images of the mirror at different exposure times and try to get an HDR image.  
 

Attachment 1: MC2.pdf
MC2.pdf
  13030   Thu Jun 1 16:21:55 2017 SteveUpdateSUS wire standoffs update

Ruby wire standoffs were received from China. I looked at one of them with our small USB camera. They did a good job, although the long edges of the prism are chipped.

The v-groove cutter must avoid them. Pictures will follow.

 

  13031   Thu Jun 1 20:16:11 2017 ranaUpdateCamerasGigE installation in the IFO area

Good installation. I think the images are still out of focus, so try to resolve into some small dots at the low exposure setting.

  13032   Fri Jun 2 00:54:08 2017 KojiUpdateASSXarm ASS restoration work

While Gautam was working on the restoration of the Yarm ASS, I worked on the Xarm.

Basically, I have changed the oscillator freqs and amps so as to have signals that are linear in the misalignment of the mirrors.
Also reduced the complexity of the input/output matrices to avoid any confusion.

Now the ITM dither takes care of the ITM alignment, and the ETM dither takes care of the ETM alignment.
The cavity alignment servos (4 dofs) are running fine, although the control bandwidths are still low (<0.1Hz).
The ETM spot positions should be controlled by the BS alignment, but there is some suspicion about the quality of the signals for these loops.

While Gautam was touching the input TTs, we occasionally saw anomalously high transmission of the arm cavities (~1.2).
We decided to use this beam, as the lower transmission we had before could have indicated partial clipping of the beam somewhere in the input optics chain.

Then the arm cavity was aligned to have reasonably high transmission for the green beam, i.e. the green power monitoring PD was used as part of the alignment reference.

This resulted in very stable transmission of both the IR and green beams. We liked them. We decided to use this as the alignment reference, at least for now.

Attachment 1: GTRX image at the end of the work.

Attachment 2: ASSX screen shot

Attachment 3: ASSX servo screen shot

Attachment 4: Green ASX servo screen shot

Attachment 5: Screen shot of the ASS X strip tool

Attachment 6: Screen shot of the ASS X input matrix

Attachment 7: Screen shot of the ASS X output matrix

Attachment 1: GTRX.jpeg
GTRX.jpeg
Attachment 2: 54.png
54.png
Attachment 3: 37.png
37.png
Attachment 4: 16.png
16.png
Attachment 5: 26.png
26.png
Attachment 6: 41.png
41.png
Attachment 7: 01.png
01.png
  13033   Fri Jun 2 01:22:50 2017 gautamUpdateASSASS restoration work

I started by checking if shaking an optic in pitch really moves it in pitch - i.e., how much PIT to YAW coupling there is. The motivation is that if we aren't really dithering the optics in orthogonal DoFs, the demodulated error signals carry mixed information which confuses the dither alignment servos. First, I checked with a low-frequency dither (~4Hz) and looked at the green transmission on the video monitors. The spot seemed to respond reasonably orthogonally to both pitch and yaw excitations on either ITMY or ETMY. But looking at the Oplev control signal spectra, there seems to be a significant amount of cross coupling. For ITMY YAW, ETMY PIT, and ETMY YAW, the peak in the orthogonal degree of freedom at the excitation frequency is roughly 20% of the height of the peak in the DoF being driven. But for ITMY PIT, the peaks in the orthogonal DoFs are almost of equal height. This remains true even when I changed the excitation frequencies to the nominal dither alignment servo frequencies.
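
For the record, a sketch of how the relative peak heights quoted above can be extracted from the Oplev spectra (the Oplev error-signal channel names and the use of cdsutils.getdata are assumptions on my part - adjust to whatever channels are actually recorded):

import numpy as np
from scipy.signal import periodogram
import cdsutils

f_drive = 4.0   # Hz, dither frequency used for the check
dur = 60        # seconds of data while the line is being driven

def peak_height(channel):
    d = cdsutils.getdata(channel, dur)
    f, Pxx = periodogram(d.data, fs=d.sample_rate)
    return np.sqrt(Pxx[np.argmin(np.abs(f - f_drive))])   # spectral height at the drive frequency

# While driving ITMY PIT, compare the peak in the orthogonal DoF to the driven one
pit = peak_height("C1:SUS-ITMY_OPLEV_PERROR")
yaw = peak_height("C1:SUS-ITMY_OPLEV_YERROR")
print("YAW/PIT peak ratio while driving PIT: %.2f" % (yaw / pit))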

I then tried to see if I could get parts of the ASS working. I tried to manually align the ITM, ETM and TTs as best as I could. There are many "alignment references" - prior to the coil driver board removal, I had centered all Oplevs and also checked that both X and Y green beams had nominal transmission levels (~0.4 for GTRY, ~0.5 for GTRX). Then there are the Transmon QPDs. After trying various combinations, I was able to get good IR transmission, and reasonable GTRY.

Next, I tried running the ASS loops that use error signals demodulated at the ETM dither frequencies (so actuation is on the ITM and TT1 as per the current output matrix which I did not touch for tonight). This worked reasonably well - Attachment #1 shows that the servos were able to recover good IR transmission when various optics in the Y arm were disturbed. I used the same oscillator frequencies as in the existing burt snapshot. But the amplitudes were tweaked.

Unfortunately I had no luck enabling the servos that demodulate the ITM dithers.

The plan for daytime work tomorrow is to check the linearity of the error signals in response to static misalignment of some optics, and then optimize the elements of the output matrix.

I am uploading a .zip file with Sensoray screen-grabs of all the test-masses in their best aligned state from tonight (except ITMX face, which for some reason I can't grab).

And for good measure, the Oplev spot positions - Attachment #3.

Quote:

While Gautam was working on the restoration of the Yarm ASS, I worked on the Xarm.

 

Attachment 1: ASS_Y_recovery.png
ASS_Y_recovery.png
Attachment 2: ASS_Repairs.zip
Attachment 3: OLs.png
OLs.png
  13034   Fri Jun 2 12:32:16 2017 gautamUpdateGeneralPower glitch

Looks like there was a power glitch at around 10am today.

All frontends, FB, Megatron, and Optimus were offline. Chiara reports an uptime of 666 days, so it looks like its UPS works fine. The PSL was tripped, and probably the end lasers too (yet to check). The slow machines seem alright (they respond to ping, and I can also telnet into them).

Since all the frontends have to be re-started manually, I am taking this opportunity to investigate some CDS issues like the lack of a dmesg log file on some of the frontends. So the IFO will be offline for some time.

  13035   Fri Jun 2 16:02:34 2017 gautamUpdateGeneralPower glitch

Today's recovery seems to be a lot more complicated than usual.

  • The vertex area of the lab is pretty warm - I think the ACs are not running. The wall switch-box (see Attachment #1) shows some red lights which I'm pretty sure are usually green. I pressed the push-buttons above the red lights; hopefully this fixed the AC and the lab will cool down soon.
  • Related to the above - C1IOO has a bunch of warning orange indicator lights ON that suggest it is feeling the heat. Not sure if that is why, but I am unable to bring any of the C1IOO models back online - the rtcds compilation just fails, after which I am unable to ssh back into the machine as well.
  • C1SUS was problematic as well. I found that the expansion chassis was not powered. Fortunately, this was fixed by simply switching to the one free socket on the power strip that powers a bunch of stuff on 1X4 - this brought the expansion chassis back alive, and after a soft reboot of c1sus, I was able to get these models up and running. Fortunately, none of the electronics seem to have been damaged. Perhaps it is time for surge-protecting power strips inside the lab area as well (if they aren't already)? 
  • I was unable to successfully resolve the dmesg problem alluded to earlier. Looking through some forums, I gather that the output of dmesg should be written to a file in /var/log/. But no such file exists on any of our 5 front-ends (but it does on Megatron, for example). So is this way of setting up the front end machines deliberate? Why does this matter? Because it seems that the buffer which we see when we simply run "dmesg" on the console gets periodically cleared. So sometime back, when I was trying to verify that the installed DACs are indeed 16-bit DACs by looking at dmesg, running "dmesg | head" showed a first line that was written well after the last reboot of the machine. Anyway, this probably isn't a big deal, and I also verified during the model recompilation that all our DACs are indeed 16-bit.
  • I was also trying to set up the Upstart processes on megatron such that the MC autolocker and FSS slow control scripts start up automatically when the machine is rebooted. But since C1IOO isn't co-operating, I wasn't able to get very far on this front either...

So current status is that all front-end models except those hosted on C1IOO are back up and running. Further recovery efforts in progress.  

GV Jun 5 6pm: From my discussion with jamie, I gather that the fact that the dmesg output is not written to file is because our front-ends are diskless (this is also why the ring buffer, which is what we are reading from when running "dmesg", gets cleared periodically)

 

Quote:

Looks like there was a power glitch at around 10am today.

All frontends, FB, Megatron, and Optimus were offline. Chiara reports an uptime of 666 days, so it looks like its UPS works fine. The PSL was tripped, and probably the end lasers too (yet to check). The slow machines seem alright (they respond to ping, and I can also telnet into them).

Since all the frontends have to be re-started manually, I am taking this opportunity to investigate some CDS issues like the lack of a dmesg log file on some of the frontends. So the IFO will be offline for some time.

 

Attachment 1: IMG_7399.JPG
IMG_7399.JPG
  13036   Fri Jun 2 22:01:52 2017 gautamUpdateGeneralPower glitch - recovery

[Koji, Rana, Gautam]

Attachment #1 - CDS status at the end of today's efforts. There is one red indicator light showing an RFM error which couldn't be fixed by running the "global diag reset" or "mxstream restart" scripts, but getting to this point was a journey so we decided to call it for today.


The state this work was started in was as indicated in the previous elog - c1ioo wasn't ssh-able, but was responding to ping. We then did the following:

  1. Killed all the models on the four front ends other than c1ioo. 
  2. Hard reboot for c1ioo - at this point, we could ssh into c1ioo. With all other models killed, we restarted the c1ioo models one by one. They all came online smoothly.
  3. We then set about restarting the models on the other machines.
    • We started with the IOP models, and then restarted the others one by one
    • We then tried running "global diag reset", "mxstream restart" and "telnet fb 8087 -> shutdown" to get rid of all the red indicator fields on the CDS overview screen.
    • All models came back online, but the models on c1sus indicated a DC (data concentrator?) error. 
  4. After a few minutes, I noticed that all the models on c1iscex had stalled
    • dmesg pointed to a synchronization error when trying to initialize the ADC
    • The field that normally pulses at ~1pps on the CDS overview MEDM screen when the models are running normally was stuck
    • Repeated attempts to restart the models kept throwing up the same error in dmesg 
    • We even tried killing all models on all other frontends and restarting just those on c1iscex as detailed earlier in this elog for c1ioo - to no avail.
    • A walk to the end station to do a hard reboot of c1iscex revealed that both green indicator lights on the slave timing card in the expansion chassis were OFF.
    • The corresponding lights on the Master Timing Sequencer (which supplies the synchronization signal to all the front ends via optical fiber) were also off.
    • Some time ago, Eric and I had noticed a similar problem. Back then, we simply switched the connection on the Master Timing Sequencer to the one unused available port, and this fixed the problem. This time, switching the fiber connection on the Master Timing Sequencer had no effect.
    • Power cycling the Master Timing Sequencer had no effect
    • However, switching the optical fiber connections going to the X and Y ends led to the green LED on the suspect port on the Master Timing Sequencer (originally the X end fiber was plugged in here) turning back ON when the Y end fiber was plugged in.
    • This suggested a problem with the slave timing card, and not the master. 
  5. Koji and I then did the following at the X-end electronics rack:
    • Shutdown c1iscex, toggled the switches in the front and back of the expansion chassis
    • Disconnect AC power from rear of c1iscex as well as the expansion chassis. This meant all LEDs in the expansion chassis went off, except a single one labelled "+5AUX" on the PCB - to make this go off, we had to disconnect a jumper on the PCB (see Attachment #2), and then toggle the power switches on the front and back of the expansion chassis (with the AC power still disconnected). Finally all lights were off.
    • Confident we had completely cut all power to the board, we then started re-connecting AC power. First we re-started the expansion chassis, and then re-booted c1iscex.
    • The lights on the slave timing card came on (including the one that pulses at ~1pps, which indicates normal operation)!
  6. Then we went back to the control room, and essentially repeated bullet points 2 and 3, but starting with c1iscex instead of c1ioo.
  7. The last twist in this tale was that though all the models came back online, the DC errors on c1sus models persisted. No amount of "mxstream restart", "global diag reset", or restarting fb would make these go away.
  8. Eventually, Koji noticed that there was a large discrepancy in the gpstimes indicated in c1x02 (the IOP model on c1sus), compared to all the other IOP models (even though the PDT displayed was correct). There were also a large number of IRIG-B errors indicated on the same c1x02 status screen, and the "TIM" indicator in the status word was red.
  9. Turns out, running ntpdate before restarting all the models somehow doesn't sync the gps time - so this was what was causing the DC errors. 
  10. So we did a hard reboot of c1sus (and for good measure, repeated the bullet points of 5 above on c1sus and its expansion chassis). Then, we tried starting the c1x02 model without running ntpdate first (on startup, there is an 8 hour mismatch between the actual time in Pasadena and the system time - but system time is 8 hours behind, so it isn't even somehow syncing to UTC or any other real timezone?)
    • Model started up smoothly
    • But there was still a 1 second discrepancy between the gpstime on c1x02 and all the other IOPs (and the 8 hour discrepancy between displayed PDT and actual time in Pasadena)
    • So we tried running ntpdate after starting c1x02 - this finally fixed the problem, gpstime and PDT on c1x02 agreed with the other frontends and the actual time in Pasadena.
    • However, the models on c1lsc and c1ioo crashed
    • So we restarted the IOPs on both these machines, and then the rest of the models.
  11. Finally, we ran "mxstream restart", "global diag reset", and restarted fb, to make the CDS overview screen look like it does now.

Why does ntpdate behave this way? And only on one of the frontends? And what is the remaining RFM error? 

Koji then restarted the IMC autolocker and FSS slow processes on megatron. The IMC locked almost immediately. The MC2 transmon indicated a large shift in the spot position, and the PMC transmission is also pretty low (while the lab temperature equilibrates after the AC was off during peak daytime heat). So the MC transmission is ~14500 counts, while we are used to more like 16,500 counts nowadays.

Re-alignment of the IFO remains to be done. I also did not restart the end lasers, or set up the Marconi with nominal params. 

Attachment #3 - Status of the Master Timing Sequencer after various reboots and power cycling of front ends and associated electronics.

Attachment #4 - Warning lights on C1IOO

Quote:

Today's recovery seems to be a lot more complicated than usual.

So current status is that all front-end models except those hosted on C1IOO are back up and running. Further recovery efforts in progress.  

 

Attachment 1: power_glitch_recovery.png
power_glitch_recovery.png
Attachment 2: IMG_7406.JPG
IMG_7406.JPG
Attachment 3: IMG_7407.JPG
IMG_7407.JPG
Attachment 4: IMG_7400.JPG
IMG_7400.JPG
  13037   Sun Jun 4 14:19:33 2017 ranaFrogsComputersNetwork slowdown: Martians are behind a waterwall

A few weeks ago we did some internet speed tests and found a dramatic difference between our general network and our internal Martian network in terms of access speed to the outside world.

As you can see, the speed from nodus is consistent with a Gigabit connection. But the speeds from any machine on the inside are ~100x slower. We need to take a look at our router / NAT setup to see if it's an old hardware problem or just something in the software firewall. By comparison, my home internet download speed test gives ~48 Mbit/s, ~6x faster than our CDS computers.


controls@megatron|~> speedtest
/usr/local/bin/speedtest:5: UserWarning: Module dap was already imported from None, but /usr/lib/python2.7/dist-packages is being added to sys.path
  from pkg_resources import load_entry_point
Retrieving speedtest.net configuration...
Testing from Caltech (131.215.115.189)...
Retrieving speedtest.net server list...
Selecting best server based on ping...
Hosted by Race Communications (Los Angeles, CA) [29.63 km]: 6.52 ms
Testing download speed................................................................................
Download: 6.35 Mbit/s
Testing upload speed................................................................................................
Upload: 5.10 Mbit/s
controls@megatron|~> exit
logout
Connection to megatron closed.
controls@nodus|~ > speedtest
Retrieving speedtest.net configuration...
Testing from Caltech (131.215.115.52)...
Retrieving speedtest.net server list...
Selecting best server based on ping...
Hosted by Phyber Communications (Los Angeles, CA) [29.63 km]: 2.196 ms
Testing download speed................................................................................
Download: 721.92 Mbit/s
Testing upload speed................................................................................................
Upload: 251.38 Mbit/s

Attachment 1: Screen_Shot_2017-06-04_at_1.47.47_PM.png
Screen_Shot_2017-06-04_at_1.47.47_PM.png
Attachment 2: Screen_Shot_2017-06-04_at_1.44.42_PM.png
Screen_Shot_2017-06-04_at_1.44.42_PM.png
  13038   Sun Jun 4 15:59:50 2017 gautamUpdateGeneralPower glitch - recovery

I think the CDS status is back to normal.

  • Bit 2 of the C1RFM status word was red, indicating something was wrong with "GE FANUC RFM Card 0".
  • You would think the RFM errors occur in pairs, in C1RFM and in some other model - but in this case, the only red light was on c1rfm.
  • While trying to re-align the IFO, I noticed that the TRY time series flatlined at 0 even though I could see flashes on the TRANSMON camera.
  • Quick trip to the Y-End with an oscilloscope confirmed that there was nothing wrong with the PD.
  • I crawled through some elogs, but didn't really find any instructions on how to fix this problem - the couple of references I did find to similar problems reported red indicator lights occurring in pairs on two or more models, and the problem was then fixed by restarting said models.
  • So on a hunch, I restarted all models on c1iscey (no hard or soft reboot of the FE was required)
  • This fixed the problem
  • I also had to start the monit process manually on some of the FEs like c1sus. 

Now IFO work like fixing ASS can continue...

Attachment 1: powerGlitchRecovery.png
powerGlitchRecovery.png