40m Log, Page 281 of 357
ID | Date | Author | Type | Category | Subject
13035 | Fri Jun 2 16:02:34 2017 | gautam | Update | General | Power glitch

Today's recovery seems to be a lot more complicated than usual.

  • The vertex area of the lab is pretty warm - I think the ACs are not running. The wall switch-box (see Attachment #1) shows some red lights which I'm pretty sure are usually green. I pressed the push-buttons above the red lights; hopefully this fixed the AC and the lab will cool down soon.
  • Related to the above - C1IOO has a bunch of warning orange indicator lights ON that suggest it is feeling the heat. Not sure if that is why, but I am unable to bring any of the C1IOO models back online - the rtcds compilation just fails, after which I am unable to ssh back into the machine as well.
  • C1SUS was problematic as well. I found that the expansion chassis was not powered. Fortunately, this was fixed by simply switching to the one free socket on the power strip that powers a bunch of stuff on 1X4 - this brought the expansion chassis back alive, and after a soft reboot of c1sus, I was able to get these models up and running. Fortunately, none of the electronics seem to have been damaged. Perhaps it is time for surge-protecting power strips inside the lab area as well (if they aren't already)? 
  • I was unable to resolve the dmesg problem alluded to earlier. Looking through some forums, I gather that the output of dmesg should be written to a file in /var/log/. But no such file exists on any of our 5 front-ends (it does on Megatron, for example). Is this way of setting up the front end machines deliberate? Why does this matter? Because the buffer we see when we simply run "dmesg" on the console gets periodically cleared (see the sketch after this list). So some time back, when I was trying to verify via dmesg that the installed DACs are indeed 16-bit, running "dmesg | head" showed a first line that was written well after the last reboot of the machine. Anyway, this probably isn't a big deal, and I also verified during the model recompilation that all our DACs are indeed 16-bit.
  • I was also trying to set up the Upstart processes on megatron such that the MC autolocker and FSS slow control scripts start up automatically when the machine is rebooted. But since C1IOO isn't co-operating, I wasn't able to get very far on this front either...
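A minimal sketch of a possible workaround for the disappearing ring buffer (the destination path is a placeholder - any NFS-mounted directory visible to the diskless front ends would do):

#!/usr/bin/env python
# Sketch: append the current kernel ring buffer to a timestamped file so it
# survives the periodic clearing. The destination path below is a placeholder.
import subprocess
import time

DEST = "/opt/rtcds/caltech/c1/scripts/logs/dmesg_snapshot.log"  # hypothetical path

snapshot = subprocess.check_output(["dmesg"]).decode("utf-8", errors="replace")
with open(DEST, "a") as f:
    f.write("=== dmesg snapshot %s ===\n" % time.strftime("%Y-%m-%d %H:%M:%S"))
    f.write(snapshot)

Run from cron (or by hand before a reboot), this would at least preserve the ADC/DAC initialization messages.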

So current status is that all front-end models except those hosted on C1IOO are back up and running. Further recovery efforts in progress.  

GV Jun 5 6pm: From my discussion with Jamie, I gather that the dmesg output is not written to a file because our front-ends are diskless (this is also why the ring buffer, which is what we are reading from when running "dmesg", gets cleared periodically).

 

Quote:

Looks like there was a power glitch at around 10am today.

All frontends, FB, Megatron, Optimus were offline. Chiara reports an uptime of 666 days, so it looks like its UPS works fine. PSL was tripped, probably the end lasers too (yet to check). Slow machines seem alright (they respond to ping, and I can also telnet into them).

Since all the frontends have to be re-started manually, I am taking this opportunity to investigate some cds issues like the lack of a dmesg log file on some of the frontends. So the IFO will be offline for sometime.

 

Attachment 1: IMG_7399.JPG
13036 | Fri Jun 2 22:01:52 2017 | gautam | Update | General | Power glitch - recovery

[Koji, Rana, Gautam]

Attachment #1 - CDS status at the end of today's efforts. There is one red indicator light showing an RFM error which couldn't be fixed by running the "global diag reset" or "mxstream restart" scripts, but getting to this point was a journey so we decided to call it for today.


The state this work was started in was as indicated in the previous elog - c1ioo wasn't ssh-able, but was responding to ping. We then did the following:

  1. Killed all models on the four front ends other than c1ioo. 
  2. Hard reboot for c1ioo - at this point, we could ssh into c1ioo. With all other models killed, we restarted the c1ioo models one by one. They all came online smoothly.
  3. We then set about restarting the models on the other machines.
    • We started with the IOP models, and then restarted the others one by one
    • We then tried running "global diag reset", "mxstream restart" and "telnet fb 8087 -> shutdown" to get rid of all the red indicator fields on the CDS overview screen.
    • All models came back online, but the models on c1sus indicated a DC (data concentrator?) error. 
  4. After a few minutes, I noticed that all the models on c1iscex had stalled
    • dmesg pointed to a synchronization error when trying to initialize the ADC
    • The field that normally pulses at ~1pps on the CDS overview MEDM screen when the models are running normally was stuck
    • Repeated attempts to restart the models kept throwing up the same error in dmesg 
    • We even tried killing all models on all other frontends and restarting just those on c1iscex as detailed earlier in this elog for c1ioo - to no avail.
    • A walk to the end station to do a hard reboot of c1iscex revealed that both green indicator lights on the slave timing card in the expansion chassis were OFF.
    • The corresponding lights on the Master Timing Sequencer (which supplies the synchronization signal to all the front ends via optical fiber) were also off.
    • Some time ago, Eric and I had noticed a similar problem. Back then, we simply switched the connection on the Master Timing Sequencer to the one unused available port, which fixed the problem. This time, switching the fiber connection on the Master Timing Sequencer had no effect.
    • Power cycling the Master Timing Sequencer had no effect
    • However, switching the optical fiber connections going to the X and Y ends led to the green LED on the suspect port on the Master Timing Sequencer (originally the X end fiber was plugged in here) turning back ON when the Y end fiber was plugged in.
    • This suggested a problem with the slave timing card, and not the master. 
  5. Koji and I then did the following at the X-end electronics rack:
    • Shut down c1iscex, toggled the switches on the front and back of the expansion chassis
    • Disconnected AC power from the rear of c1iscex as well as the expansion chassis. This meant all LEDs in the expansion chassis went off, except a single one labelled "+5AUX" on the PCB - to make this go off, we had to disconnect a jumper on the PCB (see Attachment #2), and then toggle the power switches on the front and back of the expansion chassis (with the AC power still disconnected). Finally all lights were off.
    • Confident we had completely cut all power to the board, we then started re-connecting AC power. First we re-started the expansion chassis, and then re-booted c1iscex.
    • The lights on the slave timing card came on (including the one that pulses at ~1pps, which indicates normal operation)!
  6. Then we went back to the control room, and essentially repeated bullet points 2 and 3, but starting with c1iscex instead of c1ioo.
  7. The last twist in this tale was that though all the models came back online, the DC errors on c1sus models persisted. No amount of "mxstream restart", "global diag reset", or restarting fb would make these go away.
  8. Eventually, Koji noticed that there was a large discrepancy in the gpstimes indicated in c1x02 (the IOP model on c1sus), compared to all the other IOP models (even though the PDT displayed was correct). There were also a large number of IRIG-B errors indicated on the same c1x02 status screen, and the "TIM" indicator in the status word was red.
  9. Turns out, running ntpdate before restarting all the models somehow doesn't sync the gps time - so this was what was causing the DC errors. 
  10. So we did a hard reboot of c1sus (and for good measure, repeated the bullet points of 5 above on c1sus and its expansion chassis). Then, we tried starting the c1x02 model without running ntpdate first (on startup, there is an 8 hour mismatch between the actual time in Pasadena and the system time - but system time is 8 hours behind, so it isn't even somehow syncing to UTC or any other real timezone?)
    • Model started up smoothly
    • But there was still a 1 second discrepancy between the gpstime on c1x02 and all the other IOPs (and the 8 hour discrepancy between displayed PDT and actual time in Pasadena)
    • So we tried running ntpdate after starting c1x02 - this finally fixed the problem, gpstime and PDT on c1x02 agreed with the other frontends and the actual time in Pasadena.
    • However, the models on c1lsc and c1ioo crashed
    • So we restarted the IOPs on both these machines, and then the rest of the models.
  11. Finally, we ran "mxstream restart", "global diag reset", and restarted fb, to make the CDS overview screen look like it does now.

Why does ntpdate behave this way? And only on one of the frontends? And what is the remaining RFM error? 
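One way to narrow down the ntpdate question might be to query (without setting) the clock offset on each front end and compare c1sus against the rest - a sketch only, with the NTP server hostname left as a placeholder:

# Sketch: "ntpdate -q" reports the clock offset without stepping the clock.
# "ntpserver" below is a placeholder hostname.
import subprocess

for host in ["c1sus", "c1ioo", "c1lsc", "c1iscex", "c1iscey"]:
    out = subprocess.check_output(["ssh", host, "ntpdate", "-q", "ntpserver"])
    print(host, out.decode().strip().splitlines()[-1])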

Koji then restarted the IMC autolocker and FSS slow processes on megatron. The IMC locked almost immediately. The MC2 transmon indicated a large shift in the spot position, and the PMC transmission is also pretty low (while the lab temperature equilibrates after the AC being off during peak daytime heat). So the MC transmission is ~14500 counts, while we are used to more like 16,500 counts nowadays.

Re-alignment of the IFO remains to be done. I also did not restart the end lasers, or set up the Marconi with nominal params. 

Attachment #3 - Status of the Master Timing Sequencer after various reboots and power cycling of front ends and associated electronics.

Attachment #4 - Warning lights on C1IOO

Quote:

Today's recovery seems to be a lot more complicated than usual.

So current status is that all front-end models except those hosted on C1IOO are back up and running. Further recovery efforts in progress.  

 

Attachment 1: power_glitch_recovery.png
Attachment 2: IMG_7406.JPG
Attachment 3: IMG_7407.JPG
Attachment 4: IMG_7400.JPG
13038 | Sun Jun 4 15:59:50 2017 | gautam | Update | General | Power glitch - recovery

I think the CDS status is back to normal.

  • Bit 2 of the C1RFM status word was red, indicating something was wrong with "GE FANUC RFM Card 0".
  • You would think the RFM errors occur in pairs, in C1RFM and in some other model - but in this case, the only red light was on c1rfm.
  • While trying to re-align the IFO, I noticed that the TRY time series flatlined at 0 even though I could see flashes on the TRANSMON camera.
  • Quick trip to the Y-End with an oscilloscope confirmed that there was nothing wrong with the PD.
  • I crawled through some elogs, but didn't really find any instructions on how to fix this problem - the couple of references I did find to similar problems reported red indicator lights occurring in pairs on two or more models, and the problem was then fixed by restarting said models.
  • So on a hunch, I restarted all models on c1iscey (no hard or soft reboot of the FE was required)
  • This fixed the problem
  • I also had to start the monit process manually on some of the FEs like c1sus. 

Now IFO work like fixing ASS can continue...

Attachment 1: powerGlitchRecovery.png
13039 | Mon Jun 5 10:30:45 2017 | Steve | Update | SUS | ruby wire standoff pictures

Atm. 1 & 5: the ruby standoff (R ~10 mm) as it is seated on the Al SOS test mass

Atm. 2, 3 & 4: the chipped long edges, with 43 micron OD SOS suspension wire for scale calibration

Quote:

Ruby wire standoffs received from China. I looked at one of them with our small USB camera. They did a good job. The long edges of the prism are chipped.

The v-groove cutter must avoid them. Pictures will follow.

 

 

Attachment 1: A087_R.png.bmp
Attachment 2: A097_chipped_edges.png.bmp
Attachment 3: A099_cal_wire.png.bmp
Attachment 4: A101_cal_wire_43_micron.png.bmp
Attachment 5: Al_SOS_R39mm.jpg
13040 | Mon Jun 5 12:27:34 2017 | jigyasa | Update | Cameras | Attempt to run camera server Python code

While attempting to execute the Python/Pylon code for the camera server, camera_server.py, the loader couldn't locate the pylon-5.0.5.so file. So I added the path for the required .so file as

export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/opt/rtcds/Caltech/c1/scripts/GigE/pylon5/lib64

So with the library linked, the Python program gets executed but then shows an error at self.text = gst.element_factory_make("textoverlay", "text0"):
gst.ElementNotFoundError: textoverlay
 

The code reads- 

self.text = gst.element_factory_make("textoverlay", "text0")

Not sure what I am missing here. 
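One quick check (a sketch, using the same pygst 0.10 bindings the camera server code imports) is to ask the GStreamer registry directly whether the element exists before building the pipeline:

# Sketch: probe the GStreamer 0.10 registry for the element the camera server
# needs; a None return means the corresponding plugin package isn't installed.
import pygst
pygst.require("0.10")
import gst

factory = gst.element_factory_find("textoverlay")
print("textoverlay:", "found" if factory is not None else "MISSING")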

13041 | Mon Jun 5 12:50:42 2017 | jigyasa | Update | Cameras | Attempt to run camera server Python code

I think the problem might be that the various components, such as the .ini file and the Pylon software, are installed in directories different from the ones Joe B. specifies in his paper. 

Instead of modifying the paths in the code itself, I tried creating the paths to match the code-

Update in /ligo directory 

/cds/caltech/c1/camera/L1-CAM-MC1.ini  created and then I ran the camera_server.py from scripts/GigE/SnapPy as

./camera_server.py -c /ligo/cds/caltech/c1/camera/L1-CAM-MC1.ini 

This printed the following on the terminal: 

finished loading settings from /ligo/cds/caltech/c1/camera/L1-CAM-MC1.ini and lists the settings in the configuration file.


However the  gst.ElementNotFoundError: textoverlay still persists. 

Probably I could try putting all files in exactly the same directories as specified in the document. 

Quote:

So with the library linked, the Python program gets executed but then shows an error at self.text = gst.element_factory_make("textoverlay", "text0"):
gst.ElementNotFoundError: textoverlay
 

The code reads- 

self.text = gst.element_factory_make("textoverlay", "text0")

Not sure what I am missing here. 

 

13042 | Mon Jun 5 15:04:33 2017 | rana | Update | Cameras | Attempt to run camera server Python code

Right - we want to be compatible with the new version of the code, so instead of moving the files to where the code wants them, you should make symlinks. The symlinks go in the place that the code wants and point back to the place where we have the files now.

For the textoverlay, you can just comment it out for now. We can add it back in later once we decide on how to label the video.
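A minimal sketch of the symlinking described above (both paths here are placeholders - the real ones should point from where the code looks to where the files actually live now):

# Sketch: put a symlink where the code expects the file, pointing back to the
# file's actual location. Both paths below are placeholders.
import os

actual = "/opt/rtcds/caltech/c1/scripts/GigE/SnapPy/L1-CAM-MC1.ini"  # hypothetical current location
expected = "/ligo/cds/caltech/c1/camera/L1-CAM-MC1.ini"              # path the code wants

if not os.path.isdir(os.path.dirname(expected)):
    os.makedirs(os.path.dirname(expected))   # may need appropriate permissions
if not os.path.exists(expected):
    os.symlink(actual, expected)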

13043 | Mon Jun 5 18:40:12 2017 | jigyasa | Update | Cameras | Attempt to run camera server Python code

[Gautam, Jigyasa]

This evening, Gautam helped me resolve the error I had been encountering. I had been trying to run the code on Allegra, and that threw up the gst.element_factory_make("textoverlay", "text0"); gst.ElementNotFoundError: textoverlay error.
As an attempt to resolve the error, I had set up the paths to match those mentioned in the document.
However as it turns out, it wasn't really needed.

 When Gautam ran the code from Pianosa, the following error showed up
gst.element_factory_make("x264enc", "en"); gst.ElementNotFoundError: x264.

We found that the x264 and x264enc are different entities.
Gautam then installed the ubuntu-restricted-extras package along with the following:
gstreamer0.10-plugins-bad-multiverse
gstreamer0.10-plugins-ugly-multiverse

And eventually, on running the script, the message 'starting server' was displayed on the screen. This was interrupted by another error: GenICAM_3_0_Basler_pylon_v5_0::RuntimeException

 So there is apparently a problem executing the commands on Allegra, because the camera server starts running on Donatella and Pianosa. 

I will now be looking into this newly encountered error and also be setting up the symlinks for the various paths in the code. 

Quote:

Probably I could try putting all files in exactly the same directories as specified in the document. 

Quote:

So with the library linked, the Python program gets executed but then shows an error at self.text = gst.element_factory_make("textoverlay", "text0"):
gst.ElementNotFoundError: textoverlay
 

The code reads- 

self.text = gst.element_factory_make("textoverlay", "text0")

Not sure what I am missing here. 

 

 

13044 | Mon Jun 5 21:53:55 2017 | rana | Update | Computers | rossa: ubuntu 16.04

With the network config, mounting, and symlinks set up, rossa can be used as a workstation for dataviewer and MEDM. For DTT, no luck, since there is so far no lscsoft support past the Ubuntu 14 stage.

13045 | Tue Jun 6 09:14:26 2017 | Steve | Update | Cameras | GigE installation at MC2

50mm 1.8 lens with Basler camera at MC2 face with micro clamp 350617    Camera manuals plus

Quote:

Thanks to Steve and Gautam, the IMC was locked.

I was able to capture images with the Rainbow 50 mm lens at exposure times of 100, 300, 1000, 3000, 10000 and 30 microseconds.(The pictures are in the same order). These pictures were taken at a gain of 300 and black level 64.

Special credit to Steve, who spent a lot of time helping me with setting up the hardware and focusing the camera on the beam spot. 
I can't thank you enough Steve! :) 

Quote:

In the afternoon, Steve and I tried to install the camera near MC2 and get some images of the mirrors. Due to a restricted field of view of the lens on the camera, after many efforts to focus on the optic, we were able to get this image. MC2 was unlocked so this image captures some resonating higher order mode.

With MC2 locked, I will get some images of the mirror at different exposure times and try to get an HDR image.  
 

 

Attachment 1: MC2.jpg
13046 | Wed Jun 7 10:07:00 2017 | Steve | Update | PEM | air conditioning thermostat

The Y arm AC thermostat was calibrated after the cooling water relay replacement by Mike.... yesterday. The set temperature remains 70F.

The east end south wall temp is reading 22C

13047 | Wed Jun 7 11:32:56 2017 | Steve | Update | VAC | smooth vac reboot

Gautam and Steve,

 

The MEDM monitor & vac control screens had been totally blank since ~May 24, 2017. Experienced vacuum knowledge is required for this job.

IDENTIFY valve configuration:

How to confirm the valve configuration when all vac mons are blank? Each valve has a manual-mechanical position indicator. Look at pressure readings and turbo pump controllers. The VAC NORMAL configuration was confirmed based on this information.

Preparation: disconnect valves (disconnect meaning: the valve closes and stays paralyzed) in this sequence: VC2, VC1 power, VA6, V5, V4 & V1 power, at IFO pressure 7.3E-6 Torr-it (it = InstruTech cold cathode gauge).

This gauge is independent of all other rack-mounted instrumentation and it is still not logged.

Switching to this valve configuration with the valves disconnected ensures that the vacuum envelope is NOT vented by an accidental voltage glitch or computer malfunction.

RESET v1Vac1... in 2-3 minutes (v1Vac1 - 2) the vac control screen started reading pressures & positions.

Connected cables to valves (meaning: a valve will open if it was open before it was disconnected, and it will be controllable from the computer) in the following order: V4, V1 power, V5, VA6, VC2 & VC1 power, at IFO pressure 2E-5 Torr-it...

...vac configuration is reading VAC NORMAL,

ifo 7.4E-6 Torr-it

We have to hook up the InstruTech cold cathode gauge so that it is monitored and logged! This should be the substitute for the out-of-order CC1 pressure gauge.

Attachment 1: vac_reboot.png
13048 | Wed Jun 7 14:11:49 2017 | gautam | Update | ASS | Y-arm coil driver electronics investigation

Rana suggested taking a look at the Y-arm test mass actuator TFs (measured by driving the coils one at a time, with only local damping loops on, using the Oplev to measure the response to a given drive). Attached are the results from this measurement (I used the Oplev pitch error signal for all 8 measurements). Although the magnitude responses for all coils have the expected 1/f^2 shape, there seems to be some significant (~10dB) asymmetry in both the ETM and ITM coils. The phase response is also not well understood. If we are just measuring the TF of a pendulum with a 1 Hz resonant frequency, then at and above 10Hz, I would expect the phase to be either 0 or 180 deg. Looks like there is a notch at 60 Hz somewhere, but it is unclear to me where the ~90 degree phase at ~100Hz is coming from.
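For comparison, here is a sketch of the expected response of a simple pendulum with a 1 Hz resonance (the Q and the overall scale are arbitrary assumptions; the point is the 1/f^2 magnitude and the ~180 deg phase well above resonance):

# Sketch: magnitude and phase of x/F ~ 1/(w0^2 - w^2 + i*w0*w/Q) for f0 = 1 Hz.
import numpy as np

f0, Q = 1.0, 5.0                      # assumed resonance and quality factor
f = np.logspace(0, 2, 500)
w, w0 = 2 * np.pi * f, 2 * np.pi * f0
tf = 1.0 / (w0**2 - w**2 + 1j * w0 * w / Q)

for f_test in (10.0, 100.0):
    k = np.argmin(abs(f - f_test))
    print("f = %5.1f Hz: |TF| = %.2e, phase = %.1f deg"
          % (f_test, abs(tf[k]), np.degrees(np.angle(tf[k]))))

Any extra phase in the measurement (like the ~90 degrees near 100Hz) would then have to come from something else in the drive/readout chain rather than from the pendulum itself.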

For the ITM, the UL OSEM was replaced during the 2016 summer vent - the coil that is in there now is of the short-OSEM variety; perhaps it has a different number of turns or something. I don't recall any coil balancing being done after this OSEM swap. For the ETM, it is unclear to me how long things have been like this.

Yesterday night, I tried to measure the ASS output matrix by stepping the ITM, ETM and TTs in PIT and YAW, and looking at the response in the various ASS error signals. During this test, I found the ETM and ITM pitch and yaw error signals to be highly coupled (the input matrix was diagonal). As Rana suggested, I think the whole coil driver signal chain from DAC output to coil driver board output has to be checked before attempting to fix ASS. Results from this investigation to follow.

Note: The OSEM calibration hasn't been done in a while (though the HeNes have been swapped out), but as Attachment #2 shows, if we believe the shadow sensor calibration, then the relative calibrations of the ITM and ETM Oplevs agree. So we can directly compare the TFs for the ITM and ETM.

 

Attachment 1: CoilTFs.pdf
Attachment 2: Y_OL_calib_check.png
13049 | Wed Jun 7 14:27:23 2017 | Steve | Update | Summary Pages | summary pages not working

Last good page: May 18, 2017

"Not found" error message: May 19 - June 4, 2017

Blank plots: June 5, 2017

13050 | Wed Jun 7 15:41:51 2017 | Steve | Update | Computers | Windows laptop scanned

Randy Trudeau scanned our Windows laptop (Dell 13" Vostro) and Steve's memory stick for viruses. Nothing was found. The search continues...

Rana thinks that I'm creating these virus beasts by taking pictures with Dino Capture and/or Data Ray on the Windows machine...

 

 

13051 | Wed Jun 7 17:45:11 2017 | gautam | Update | ASS | Y-arm coil driver electronics investigation

I repeated the test of driving C1:SUS-<Optic>_<coil>_EXC individually and measuring the transfer function to C1:SUS-<Optic>_OPLEV_PERROR for Optic in (ITMX, ITMY, ETMX, ETMY, BS), coil in (LLCOIL, LRCOIL, ULCOIL, URCOIL). 

There seems to be a few dB imbalance in the coils in both ETMs, as well as ITMX. ITMY and the BS seem to have pretty much identical TFs for all the coils - I will cross-check using OPLEV_YERROR, but is there any reason why we shouldn't adjust the gains in the coil output (not output matrix) filter banks to correct for this observed imbalance? The Oplev calibrations for the various optics are unknown, so it may not be fair to compare the TFs between optics (I guess the same applies to comparing TF magnitudes from coil to OPLEV_PERROR and OPLEV_YERROR, perhaps we should fix the OL calibrations before fiddling with coil gains...)

The anomalous behaviour of ITMY_UL (10dB greater than the others) was traced down to a rogue x3 gain in the filter module. This has been removed, and now the Y arm ASS works fine (with the original dither servo settings). X arm dither still doesn't converge - I double checked the digital filters and all seems in order, so I will investigate the analog part of the drive electronics now.

 

Attachment 1: CoilTFs.pdf
13052 | Thu Jun 8 02:11:28 2017 | gautam | Update | ASS | Y-arm coil driver electronics investigation

Summary:

I investigated the analog electronics in the coil driver chain by using awggui to drive a given channel with Uniform noise between DC and 8kHz, with an overall gain of 1000 cts. This test was done for both ITMs and the BS. The Whitening/De-Whitening was off during the test. I measured the spectra in

  1. The digital domain (with DTT)
  2. At the output monitor of the AI board (with SR785)
  3. At the output of the coil driver board (with SR785)

Attachment #1 - There is good agreement between all 3 measurements. To convert the DTT spectrum to Vrms/rtHz, I multiplied the Y-axis by 10V / ( 2*sqrt(2) * 2^15 cts). Between DC and ~1kHz, the measured spectrum everywhere is flat, as expected given the test conditions. The AI filter response is also seen.
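Written out, the counts-to-volts scaling used for the DTT traces (a sketch of the arithmetic only; the 16-bit, ±10V DAC range and the 2*sqrt(2) factor are the ones quoted above):

# Sketch: the scale factor 10 V / (2*sqrt(2) * 2^15 cts) applied to the DTT spectra.
import numpy as np

v_range = 10.0                 # one-sided DAC range in volts
half_scale_cts = 2**15         # half of the 16-bit range in counts
cts_to_vrms = v_range / (2 * np.sqrt(2) * half_scale_cts)
print("1 count corresponds to %.3e Vrms" % cts_to_vrms)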

Attachment #2 - Zoomed in view of Attachment #1 (without the AI filter part).

*The DTT plots have been coarse-grained to keep the PDF file size manageable. X (Y) axes are shared for all the plots in columns (rows).

 

Similar verification remains to be done for the ETMs, after which the test has to be repeated with the Whitening/DeWhitening engaged. But it's encouraging that things make sense so far (except perhaps the coil balancing can be better as suggested by the previous elog). 

 

I've left both arms locked. The Y-arm dither alignment is working well again, but for the X arm, the loops that actuate on the BS are still weird. Nothing obvious in the tests so far though.

GV 6pm 8 Jun 2017: I realized the X arm transmission was being monitored by the high-gain PD and not the QPD (which is how we usually run the ASS). The ASC mini screen suggested the transmitted beam was reasonably well centered on the X end QPD, and so I switched to this after which the X end dither alignment too converged. Possibly the beam was falling off the other PD, which is why the BS loops, which control the beam spot position on the ETM, were acting weirdly.

Quote:

will investigate the analog part of the drive electronics now.

 

Not related to this work:

I noticed the X-arm LSC servo was often hitting its limit - so I reduced the gain from 0.03 to 0.02. This reduced the control signal RMS, and re-acquiring lock at this lower gain wasn't a problem either. See attachment #3 (will be rotated later) for control signal spectra at this revised setting.

Attachment 1: AnalogCheck.pdf
Attachment 2: AnalogCheck_zoom.pdf
Attachment 3: ArmCtrl.pdf
13053 | Thu Jun 8 12:43:42 2017 | Dhruva | Update | Optical Levers | Beam Profiling Results

 

Quote:

​Updates in the He-Ne beam profiling experiment. ​

New and improved plots for the He-Ne profiling experiment 

Font size has been increased to 30. 

The plots are maximum size (Following Rana's advice, I saved the plots as eps files(maximized) and converted them to pdf later).

There is a shaded region around the trendline that represents the parameter error. 

Function that I fit my data to (should have mentioned this in my earlier elog entries) 

P = \dfrac{P_0}{2}\Bigg[1+erf\Big(\dfrac{\sqrt2(X-X_0)}{w}\Big) \Bigg]

Description of my error analysis -

1. I have assumed a 20% deviation from markings in the micrometer error. 

2. Using the error in the micrometer, I have calculated the propagated error in the beam power:

\delta P = \sqrt{\dfrac{2}{\pi}}{P_0}\dfrac{\delta x}{w}\exp\Bigg({\frac{-2(X-X_0)^2}{w^2}}\Bigg)

I added this error to the statistical error due to the fluctuation of the oscilloscope reading to obtain the total error in power. 

3. I found the Fisher Matrix by numerically differentiating the function at different data points P_b with respect to the parameters p_i =  P_0, X_0 and w.

F_{ij} = \sum_{b} {\frac{\partial P_b}{\partial p_i}\frac{\partial P_b}{\partial p_j}}\frac{1}{\sigma^2_b}

I then found the covariance matrix by inverting the Fisher Matrix and found the error in spot size estimation. 

EDIT : Residuals added to plots and all axes made equal 
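A compact numerical sketch of steps 2-3 above (the measured positions X, the power errors sigma, and the best-fit parameters are placeholders):

# Sketch: numerical Fisher matrix for the erf beam-profile model, and the
# parameter covariance obtained by inverting it. X and sigma are the measured
# micrometer positions and total power errors; p = (P0, X0, w) is the best fit.
import numpy as np
from scipy.special import erf

def model(X, P0, X0, w):
    return 0.5 * P0 * (1.0 + erf(np.sqrt(2.0) * (X - X0) / w))

def fisher(X, sigma, p, rel_step=1e-6):
    J = np.empty((len(X), len(p)))
    for i in range(len(p)):
        step = rel_step * (abs(p[i]) + 1e-12)
        p_hi, p_lo = np.array(p, float), np.array(p, float)
        p_hi[i] += step
        p_lo[i] -= step
        J[:, i] = (model(X, *p_hi) - model(X, *p_lo)) / (2.0 * step)
    return np.dot(J.T / sigma**2, J)

# cov = np.linalg.inv(fisher(X, sigma, (P0_fit, X0_fit, w_fit)))
# np.sqrt(np.diag(cov)) then gives the 1-sigma errors on P0, X0 and the spot size w.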

Attachment 1: profile.pdf
13054 | Fri Jun 9 09:13:26 2017 | Steve | Update | Cameras | GigE camera lens with AR

We should move ahead with getting this lens from Edmund Optics, #67-717, with R<3% at 1064 nm.

The Computar M5018-SWIR is another choice.

AR coatings 500 - 1100nm R<1% are expensive.

 

Quote:

50mm 1.8 lens with Basler camera at MC2 face with micro clamp 350617    Camera manuals plus

 

Attachment 1: coating_curve.pdf
13055 | Fri Jun 9 15:31:45 2017 | gautam | Update | IMC | IMC wonkiness

I've been noticing some weird behaviour with the IMC over the last couple of days. In some lock stretches the WFS control signals ramp up to uncharacteristically huge values - at some point, the IMC loses lock, and doesn't re-acquire it (see Attachment #1). The fact that the IMC doesn't re-acquire lock indicates that there has been some kind of large alignment drift (this is also evident from looking at the (weak) flashes on the MCREFL camera while the IMC attempts to re-lock - I am asking Steve to restore the MC trans camera as well). These drifts don't seem to be correlated with anyone working near MC2.

The WFS servos haven't had their offsets/ DC alignments set in a while, so in order to check if these were to blame, I turned off the inputs to all the WFS servo filter modules (so no angular control of the IMC). I then tweaked the alignment manually. But the alignment seems to have drifted yet again, within a few minutes. Looking at the OSEM sensor signals, it looks like MC2 was the optic that drifted. Steve tells me no one was working near MC2 during this time. But the drift is gradual so this doesn't look like the infamous glitchy Satellite Box problem seen with MC1 in the recent past. The feedback signal to the NPRO / PCdrive look normal during this time, supporting the hypothesis that the problem is indeed related to angular alignment.

Once Steve restores the MC2 Trans cameras, I will hand-align the IMC again and see if the alignment holds for a few hours. If it does, I will reset all offsets for the WFS loops and see if they hold. In particular, the MC2 transmitted spot centering servo has a long time constant so could be something funny there.

*Another issue with the IMC autolocker I've noticed in the recent past: sometimes, the mcup script doesn't get run even though the MC catches a TEM00 mode. So the IMC servo remains in acquisition state (e.g. boosts and WFS servos don't get turned on). Looking at the autolocker log doesn't shed much light - the "saw a flash" log message gets printed, but while normally the mcup script gets run at this point, in these cases, the MC just remains in this weird state. 

Attachment 1: IMG_7409.JPG
13056 | Fri Jun 9 16:37:29 2017 | jigyasa | Update | Computer Scripts / Programs | OpenCV installation

OpenCV 3.1.0 has been installed locally on Donatella by following these commands:

git clone https://github.com/Itseez/opencv.git
cd opencv
git checkout 3.1.0
git clone https://github.com/Itseez/opencv_contrib.git
cd opencv_contrib
git checkout 3.0.0
cd ~/opencv
mkdir release
cd release
cmake -D CMAKE_BUILD_TYPE=RELEASE -D CMAKE_INSTALL_PREFIX=/usr/local -D OPENCV_EXTRA_MODULES_PATH=/~/opencv_contrib/modules/ ~/opencv/

In ~/opencv/release, make and sudo make install were executed.

This completed the installation. The version of the installation was verified with pkg-config --modversion opencv, which showed 3.1.0. I also verified the import of the cv2 module in Python, and it seems to work fine. 
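For reference, the same version check from inside Python (a trivial sketch):

# Quick sanity check that Python picks up the newly built module and version.
import cv2
print(cv2.__version__)   # expect '3.1.0'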

 

13057 | Fri Jun 9 17:45:21 2017 | Gautam, Kaustubh | Update | IMC | IMC wonkiness

 

Quote:

Once Steve restores the MC2 Trans cameras, I will hand-align the IMC again and see if the alignment holds for a few hours. If it does, I will reset all offsets for the WFS loops and see if they hold. In particular, the MC2 transmitted spot centering servo has a long time constant so could be something funny there.

 

Summary:

In order to switch on the angular alignment for the IMC mirrors, we needed to center the laser onto the quad photodiodes at the IMC and the AS table (WFS1 and WFS2).

Gautam and I went to the IMC table and did the DC centering for the quad photodiode by adjusting the beamsplitter angles. After this, we turned the WFS loops off and performed beam centering for the quad PDs at the AS table, WFS1 and WFS2.

Once we had the beam approximately centered on all of the above 3 PDs, we turned on the locking for the IMC, and it seems to work just fine. We are waiting another hour before switching on the angular alignment for the mirrors, to make sure the alignment holds with the WFS turned off.

13058 | Fri Jun 9 19:18:10 2017 | gautam | Update | IMC | IMC wonkiness

It happened again. MC2 UL seems to have gotten the biggest glitch. It's a rather small jump in the signal level compared to what I have seen in the recent past in connection with suspect Satellite boxes, and LL and UR sensors barely see it.

I will squish Sat box cables and check the cabling at the coil driver board end as well, given that these are two areas where there has been some work recently. WFS loops will remain off till I figure this out. At least the (newly centered) DC spot positions on the WFS and MC2 TRANS QPD should serve as some kind of reference for good MC alignment.

GV edit 9pm: I tightened up all the cables, but it doesn't seem to have helped. There was another, larger glitch just now. UR and LL basically don't see it at all (see Attachment #2). It also seems to be a much slower process than the glitches seen on MC1, with the misalignment happening over a few seconds. I have to see if this is consistent with a glitch in the bias voltage to one of the coils which gets low passed by a 4xpole@1Hz filter.

Quote:

Once we had the beam approximately centered on all of the above 3 PDs, we turned on the locking for the IMC, and it seems to work just fine. We are waiting another hour before switching on the angular alignment for the mirrors, to make sure the alignment holds with the WFS turned off.

 

Attachment 1: MC2_UL_glitchy.png
Attachment 2: MC2_glitch_fast.png
13059 | Mon Jun 12 10:34:10 2017 | gautam | Update | CDS | slow machine bootfest

Reboots for c1susaux, c1iscaux, c1auxex today. I took this opportunity to squish the Sat. Box cabling for MC2 (both on the Sat box end and also at the vacuum feedthrough), as some work has been ongoing there recently - maybe something got accidentally jiggled during the process and was causing the MC2 alignment to jump around.

Relocked PMC to offload some of the DC offset, and re-aligned IMC after c1susaux reboot. PMC and IMC transmission back to nominal levels now. Let's see if MC2 is better behaved after this sat. box. voodoo.

Interestingly, since Feb 6, there were no slow machine reboots for almost 3 months, while there have been three reboots in the last three weeks. Not sure what (if anything) to make of that.

13060 | Mon Jun 12 17:42:39 2017 | gautam | Update | ASS | ETMY Oplev Pentek board pulled out

As part of my Oplev servo investigations, I have pulled out the Pentek Generic Whitening board (D020432) from the Y-end electronics rack. ETMY watchdog was shutdown for this, I will restore it once the Oplev is re-installed.

13061 | Mon Jun 12 22:23:20 2017 | rana | Update | IMC | IMC wonkiness

Wonder if it's possible that the slow glitches in MC are just glitches in the MC2 trans QPD? Steve sometimes dances on top of the MC2 chamber when he adjusts the MC2 camera.

I've re-enabled the WFS at 22:25 (I think Gautam had them off as part of the MC2 glitch investigation). WFS1 spot position seems way off in pitch & yaw.

From the turn on transient, it seems that the cross-coupled loops have a time constant of ~3 minutes for the MC2 spot, so maybe that's not consistent with the ~30 second long steps seen earlier.

13062 | Tue Jun 13 08:40:32 2017 | Steve | Update | IMC | IMC wonkiness

Happy MC after the last glitch at 10:28, so the credit goes to Rana.

GV edit 11:30am: I think the stuff at 10:28 is not a glitch but just the WFS servos coming on - the IMC was only hand aligned before this.

Quote:

It happened again. MC2 UL seems to have gotten the biggest glitch. It's a rather small jump in the signal level compared to what I have seen in the recent past in connection with suspect Satellite boxes, and LL and UR sensors barely see it.

I will squish Sat box cables and check the cabling at the coil driver board end as well, given that these are two areas where there has been some work recently. WFS loops will remain off till I figure this out. At least the (newly centered) DC spot positions on the WFS and MC2 TRANS QPD should serve as some kind of reference for good MC alignment.

GV edit 9pm: I tightened up all the cables, but it doesn't seem to have helped. There was another, larger glitch just now. UR and LL basically don't see it at all (see Attachment #2). It also seems to be a much slower process than the glitches seen on MC1, with the misalignment happening over a few seconds. I have to see if this is consistent with a glitch in the bias voltage to one of the coils which gets low passed by a 4xpole@1Hz filter.

Quote:

Once we had the beam approximately centered on all of the above 3 PDs, we turned on the locking for the IMC, and it seems to work just fine. We are waiting another hour before switching on the angular alignment for the mirrors, to make sure the alignment holds with the WFS turned off.

 

 

Attachment 1: happy_MC.png
Attachment 2: last_glitch.png
13063 | Wed Jun 14 18:15:06 2017 | gautam | Update | ASS | ETMY Oplev restored

I replaced the Pentek Generic Whitening Board and the Optical Lever PD Interface Board (D010033) which I had pulled out. The ETMY optical lever servo is operational again. I will post a more detailed elog with deviations from schematics + photos + noise and TF measurements shortly.

Quote:

As part of my Oplev servo investigations, I have pulled out the Pentek Generic Whitening board (D020432) from the Y-end electronics rack. ETMY watchdog was shutdown for this, I will restore it once the Oplev is re-installed.

 

13064 | Thu Jun 15 01:56:50 2017 | gautam | Update | ASS | ETMY Oplev restored

Summary:

I tried playing around with the Oplev loop shape on ITMY, in order to see if I could successfully engage the Coil Driver whitening. Unfortunately, I had no success tonight.

Details:

I was trying to guess a loop shape that would work - I guess this will need some more careful thought about loop shape optimization. I was basically trying to keep all the existing filters, and modify the low-passing to minimize control noise injection. Adding a 4th order elliptic low pass with a corner at 50Hz and stopband attenuation of 60dB yielded a stable loop with an upper UGF of ~6Hz and ~25deg of phase margin (which is on the low side). I was able to successfully engage this loop, and as seen in Attachment #1, the noise performance above 50Hz is vastly improved. But it also seems that there is some injection of noise around 6Hz. In any case, as soon as I tried to engage the dewhitening, the DAC output quickly saturated. The whitening filter for the ITMs has ~40dB of gain at ~40Hz already, so it looks like the high frequency roll-off has to be more severe.

I am not even sure if the Elliptic filter is the right choice here - it does have the steepest roll off for a given filter order, but I need to look up how to achieve good roll off without compromising on the phase margin of the overall loop. I am going to try and do the optimization in a more systematic way, and perhaps play around with some of the other filters' poles and zeros as well to get a stable controller that minimizes control noise injection everywhere.
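For reference, a sketch of the low pass in question and what it costs in phase at the ~6 Hz UGF (scipy analog design; the 2dB passband ripple is an assumption, since only the order, corner and stopband attenuation are quoted above):

# Sketch: 4th-order elliptic low pass, 50 Hz corner, 60 dB stopband attenuation
# (2 dB passband ripple assumed), evaluated at a few frequencies of interest.
import numpy as np
from scipy import signal

b, a = signal.ellip(4, 2.0, 60.0, 2 * np.pi * 50.0, btype='low', analog=True)
f = np.logspace(0, 3, 2000)
w, h = signal.freqs(b, a, worN=2 * np.pi * f)

for f_test in (6.0, 50.0, 100.0):
    k = np.argmin(abs(f - f_test))
    print("f = %6.1f Hz: gain = %6.1f dB, phase = %6.1f deg"
          % (f_test, 20 * np.log10(abs(h[k])), np.degrees(np.angle(h[k]))))

Swapping in different orders and ripples in the same snippet gives a quick feel for the roll-off vs. phase-margin trade-off mentioned above.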

Attachment 1: ITMY_OLspec.pdf
13065 | Thu Jun 15 14:24:48 2017 | Kaustubh, Jigyasa | Update | Computers | Ottavia Switched On

Today, Jigyasa and I connected Ottavia to one of the unused Donatella monitor screens. The Ottavia CPU had a label saying 'SMOKED'. One of the past elogs, 11091, from March 2015 by Jenne, had an update regarding Ottavia smelling 'burny'. It has been working fine for about 2 hours now. Once it is connected to the Martian Network we can test it further. The Donatella screen we used seems to have a graphics problem - some damage to the display screen. It's a minor issue and does not affect the display that much, but perhaps it'll be better to use another screen if we plan to use Ottavia in the future. We will power it down if there is an issue with it.

13066 | Thu Jun 15 18:56:31 2017 | jigyasa | Update | Computer Scripts / Programs | MC2 Pitch-Yaw offset

A python script to randomly vary the MC2 pitch and yaw offset and correspondingly record the value of MC transmission has been started on Donatella in the control room and should run for a couple of hours overnight.

The script is named MC_TRANS_1.py and is located in my user directory at /users/jigyasa

Apologies for any inconvenience.
Data analysis will follow.
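For context, a sketch of the kind of scan the script performs (the EPICS channel names, the pyepics interface and the offset ranges here are all assumptions, not necessarily what MC_TRANS_1.py actually uses):

# Sketch: randomly step the MC2 pitch/yaw offsets and record the averaged MC
# transmission at each point. Channel names and ranges below are placeholders.
import random
import time
import epics

PIT = "C1:SUS-MC2_PIT_COMM"      # hypothetical pitch offset channel
YAW = "C1:SUS-MC2_YAW_COMM"      # hypothetical yaw offset channel
TRANS = "C1:IOO-MC_TRANS_SUM"    # hypothetical transmission channel

results = []
for _ in range(50):
    p = random.uniform(-0.1, 0.1)
    y = random.uniform(-0.1, 0.1)
    epics.caput(PIT, p)
    epics.caput(YAW, y)
    time.sleep(10)                                   # let the cavity settle
    samples = [epics.caget(TRANS) for _ in range(20)]
    results.append((p, y, sum(samples) / float(len(samples))))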

13067 | Thu Jun 15 19:49:03 2017 | Kaustubh, Jigyasa | Update | Computers | Ottavia Switched On

It has been working fine the whole day(we didn't do much testing on it though). We are leaving it on for the night.

Quote:

Today, Jigyasa and I connected Ottavia to one of the unused Donatella monitor screens. The Ottavia CPU had a label saying 'SMOKED'. One of the past elogs, 11091, from March 2015 by Jenne, had an update regarding Ottavia smelling 'burny'. It has been working fine for about 2 hours now. Once it is connected to the Martian Network we can test it further. The Donatella screen we used seems to have a graphics problem - some damage to the display screen. It's a minor issue and does not affect the display that much, but perhaps it'll be better to use another screen if we plan to use Ottavia in the future. We will power it down if there is an issue with it.

 

13068 | Fri Jun 16 12:37:47 2017 | Kaustubh, Jigyasa | Update | Computers | Ottavia Switched On

Ottavia had been left running overnight and it seems to work fine. There has been no smell or any noticeable problems in its operation. This morning Gautam, Kaustubh and I connected Ottavia to the Martian Network through the Netgear switch in the 40m lab area. We were able to SSH into Ottavia through Pianosa and access directories. On Ottavia itself we were able to run ipython and access the internet. Since it seems to work out fine, Kaustubh and I are going to enable the ethernet connection to Ottavia and secure the wiring now.  

Quote:

It has been working fine the whole day(we didn't do much testing on it though). We are leaving it on for the night.

Quote:

Today, Jigyasa and I connected Ottavia to one of the unused Donatella monitor screens. The Ottavia CPU had a label saying 'SMOKED'. One of the past elogs, 11091, from March 2015 by Jenne, had an update regarding Ottavia smelling 'burny'. It has been working fine for about 2 hours now. Once it is connected to the Martian Network we can test it further. The Donatella screen we used seems to have a graphics problem - some damage to the display screen. It's a minor issue and does not affect the display that much, but perhaps it'll be better to use another screen if we plan to use Ottavia in the future. We will power it down if there is an issue with it.

 

 

13069 | Fri Jun 16 13:53:11 2017 | gautam | Update | CDS | slow machine bootfest

Reboots for c1psl, c1iool0, c1iscaux today. MC autolocker log was complaining that the C1:IOO-MC_AUTOLOCK_BEAT EPICS channel did not exist, and running the usual slow machine check script revealed that these three machines required reboots. PMC was relocked, IMC Autolocker was restarted on Megatron and everything seems fine now.

 

13071 | Fri Jun 16 23:27:19 2017 | Kaustubh, Jigyasa | Update | Computers | Ottavia Connected to the Netgear Box

I just connected Ottavia to the Netgear box and it's working just fine. It'll remain switched on over the weekend.

Quote:

Kaustubh and I are going to enable the ethernet connection to Ottavia and secure the wiring now.  

 

13072 | Mon Jun 19 18:32:18 2017 | jigyasa | Update | Computer Scripts / Programs | Software Installation for image analysis

The IRAF software from the National Optical Astronomy Observatory has been installed locally on Donatella (for testing) following the instructions listed at http://www.astronomy.ohio-state.edu/~khan/iraf/iraf_step_by_step_installation_64bit
This is a step towards "aperture photometry" and would help identify point scatterers in the images of the test masses.

I will be testing this software, in particular, the use of DAOPHOT and if it seems to work out, we may install it on the shared directory.
Hope this isn't an inconvenience.

 

 

13073 | Mon Jun 19 18:41:12 2017 | jigyasa | Update | Computer Scripts / Programs | MC2 Pitch-Yaw offset

The previous run of the script had produced some dubious results!

The script has been modified and now scans the transmission sum for a longer duration to provide a better estimate on the average transmission. The pitch and yaw offsets have been set to the values that were randomly generated in the previous run as this would enable comparison with the current data.

I am starting it on Donatella and it should run for a couple of hours.

Apologies for the inconvenience.

Quote:

A python script to randomly vary the MC2 pitch and yaw offset and correspondingly record the value of MC transmission has been started on Donatella in the control room and should run for a couple of hours overnight.

The script is named MC_TRANS_1.py and is located in my user directory at /users/jigyasa

13074 | Tue Jun 20 14:58:08 2017 | Steve | Update | Cameras | GigE camera at ETMX

The GigE camera can be connected to Ethernet. The AR-coated (1064 nm) f50 lens can arrive any day now.

Quote:

One of the additional GigE cameras has been IP configured for use and installation. 

Static IP assigned to the camera- 192.168.113.152
Subnet mask- 255.255.255.0
Gateway- 192.168.113.2
 

 

Attachment 1: ETMXgige.jpg
13075 | Tue Jun 20 16:28:23 2017 | Steve | Update | VAC | RGA scan
Attachment 1: RGAscan243d.png
Attachment 2: RGAscan.png
13076 | Tue Jun 20 17:44:12 2017 | jigyasa | Update | Computer Scripts / Programs | MC2 Pitch-Yaw offset

The script didn't run properly last night, due to an oversight of variable names! It's been started again and has been running for half an hour now.

Quote:

I am starting it on Donatella and it should run for a couple of hours.

Apologies for the inconvenience.

Quote:

A python script to randomly vary the MC2 pitch and yaw offset and correspondingly record the value of MC transmission has been started on Donatella in the control room and should run for a couple of hours overnight.

The script is named MC_TRANS_1.py and is located in my user directory at /users/jigyasa

 

13078 | Fri Jun 23 02:55:18 2017 | Kaustubh | Update | Computer Scripts / Programs | Script Running

I am leaving a script running on Pianosa for the night. For this purpose, even the AG4395A is kept on. I'll see the result of the script in the morning (it should be complete by then). Just check before fiddling with the Analyzer.

Thank you.

13079 | Sun Jun 25 22:30:57 2017 | gautam | Update | General | c1iscex timing troubles

I saw that the CDS overview screen indicated problems with c1iscex (also ETMX was erratic). I took a closer look and thought it might be a timing issue - a walk to the X-end confirmed this: the 1pps status light on the timing slave card was no longer blinking. 

I tried all versions of power cycling and debugging this problem known to me, including those suggested in this thread and from a more recent time. I am leaving things as is for the night, and will look into this more tomorrow. I've also shut down the ETMX watchdog for the time being. Looks like this has been down since 24 Jun 8am UTC.

Attachment 1: c1iscex_status.png
13081 | Mon Jun 26 22:01:08 2017 | Koji | Update | General | c1iscex timing troubles

I tried a couple of things, but there was no fundamental improvement - the LED on the timing board is still not lighting up.

- The power supply cable to the timing board at c1iscex indicated +12.3V

- I swapped the timing fiber to the new one (orange) in the digital cabinet. It didn't help.

- I swapped the opto-electronic I/F for the timing fiber with the Y-end one. The X-end one worked at Y-end, and Y-end one didn't work at X-end.

- I suspected the timing board itself -> I brought a "spare" timing board from the digital cabinet and tried to swap the board. This didn't help.

 

Some ideas:

- Bring the X-end fiber to C1SUS or C1IOO to see if the fiber is OK or not.

- We checked the opto-electronic I/F is OK

- Try to swap the IO chassis with the Y-end one.

- If this helps, swap the timing board only, to see whether this is the problem or not.

13082 | Tue Jun 27 16:11:28 2017 | gautam | Update | Electronics | Coil whitening

I got back to trying to engage the coil driver whitening today, the idea being to try and lock the DRMI in a lower noise configuration - from the last time we had the DRMI locked, it was determined that A2L coupling from the OL loops and coil driver noise were dominant from ~10-200Hz. All of this work was done on the Y-arm, while the X-arm CDS situation is being resolved.

To re-cap, every time I tried to do this in the last month or so, the optic would get kicked around. I suspected that the main cause was the insufficient low-pass filtering on the Oplev loops, which was causing the DAC rms to rail when the whitening was turned on. 

I had tried some hand-tweaking of the OL loops without much success last week - today I had a little more success. The existing OL loops consist of the following:

  • Differentiator at low frequencies (zero at DC, 2 poles at 300Hz)
  • Resonant gain peaked around 0.6 Hz with a Q of ______ (to be filled in)
  • BR notches 
  • A 2nd order elliptic low pass with 2dB passband ripple and 20dB stopband attenuation

The elliptic low pass was too shallow. For a first pass at loop shaping today, I checked if the resonant gain filter had any effect on the transmitted power RMS profile - turns out it had negligible effect. So I disabled this filter, and replaced the elliptic low pass with a 5th order ELP with 2dB passband ripple and 80dB stopband attenuation. I also adjusted the overall loop gain to have an upper UGF for the OL loops around 2Hz. Looking at the spectrum of one coil output in this configuration (ITMY UL), I determined that the DAC rms was no longer in danger of railing.

However, I was still unable to smoothly engage the de-whitening. The optic again kept getting kicked around each time I tried. So I tried engaging the de-whitening on the ITM with just the local damping loop on, but with the arm locked. This transition was successful, but not smooth. Looking at the transmon spot on the camera, every time I engage the whitening, the spot gets a sizeable kick (I will post a video shortly).  In my ~10 trials this afternoon, the arm is able to stay locked when turning the whitening on, but always loses lock when turning the whitening off. 

The issue here is certainly not the DAC rms railing. I had a brief discussion with Gabriele just now about this, and he suggested checking for some electronic voltage offset between the two paths (de-whitening engaged and bypassed). I also wonder if this has something to do with some latency between the actual analog switching of paths (done by a slow machine) and the fast computation by the real time model? To be investigated.

GV 170628 11pm: I guess this isn't a viable explanation, as the de-whitening switching is handled by one of the BIO cards, which is itself handled by the fast FEs, so there isn't any question of latency.

With the Oplev loops disengaged, the initial kick given to the optic when engaging the whitening settles down in about a second. Once the ITM was stable again, I was able to turn on both Oplev loops without any problems. I did not investigate the new Oplev loop shape in detail, but compared to the original loop shape, there wasn't a significant difference in the TRY spectrum in this configuration (plot to follow). This remains to be done in a systematic manner. 

Plots to support all of this to follow later in the evening.

Attachment #1: Video of ETMY transmission CCD while engaging whitening. I confirmed that this "glitch" happens while engaging the whitening on the UL channel. This is reminiscent of the Satellite Box glitches seen recently. In that case, the problem was resolved by replacing the high-current buffer in the offending channel. Perhaps something similar is the problem here?

Attachment #2: Summary of the ITMY UL coil output spectra under various conditions.

 

Attachment 1: ETMYT_1182669422.mp4
Attachment 2: ITMY_whitening_studies.pdf
13083 | Tue Jun 27 16:18:59 2017 | jigyasa | Update | Cameras | GigE camera at ETMX

The 50mm lens has arrived. (Delivered yesterday).

Also, the GigE has been wired and connected to the Martian network. Image acquisition is possible with Pylon.

Quote:

The GigE camera can be connected to Ethernet. The AR-coated (1064 nm) f50 lens can arrive any day now.

 

13084 | Tue Jun 27 18:47:49 2017 | jigyasa | Update | Computer Scripts / Programs | MC2 Pitch-Yaw offset

The values generated from the script were analyzed, and a 3D scatter plot as well as a 2D map were plotted. 

Yesterday, Rana pointed me to another method of collecting and analyzing the data. So I worked on the code today and have left a script (MC2rerun.py) running on Ottavia which should run overnight.

Quote:

The script didn't run properly last night, due to an oversight of variable names! It's been started again and has been running for half an hour now.

 

 

13085 | Wed Jun 28 20:15:46 2017 | gautam | Update | General | c1iscex timing troubles

[Koji, gautam]

Here is a summary of what we did today to fix the timing issue on c1iscex. The power supply to the timing card in the X end expansion chassis was to blame.

  1. We prepared the Y-end expansion chassis for transport to the X end. To do so, we disconnected the following from the expansion chassis
    • Cables going to the ADC/DAC adaptor boards
    • Dolphin connector
    • BIO connector
    • RFM fiber
    • Timing fiber
  2. We then carried the expansion chassis to the X end electronics rack. There we repeated the above steps for the X-end expansion chassis
  3. We swapped the X and Y end expansion chassis in the X end electronics rack. Powering the unit, we immediately saw the green lights on the front of the timing card turn on, suggesting that the Y-end expansion chassis works fine at the X end as well (as it should). To further confirm that all was well, we were able to successfully start all the RT models on c1iscex without running into any timing issues.
  4. Next, we decided to verify if the spare timing card is functional. So we swapped out the timing card in the expansion chassis brought over to the X end from the Y end with the spare. In this test too, all worked as expected. So at this stage, we concluded that
    • There was nothing wrong with the fiber bringing the timing signal to the X end
    • The Y-end expansion chassis works fine
    • The spare timing card works fine.
  5. Then we decided to try the original X-end expansion chassis timing card in the Y-end expansion chassis. This test too was successful - so there was nothing wrong with any of the timing cards!
  6. Next, we decided to power the X-end timing chassis with its original timing card, which was just verified to work fine. Surprisingly, the indicator lights on the timing card did not turn on.
  7. The timing card has 3 external connections
    • A 40 pin IDE connector
    • Power
    • Fiber carrying the timing signal
  8. We went back to the Y-end expansion chassis, and checked that the indicator lights on the timing card turned on even when the 40 pin IDE connector was left unconnected (so the timing card just gets power and the timing signal).
  9. We concluded that the power supply in the X end expansion chassis was to blame. Indeed, when Koji jiggled the connector around a little, the indicator lights came on!
  10. The connection was diagnosed to be somewhat flaky - it employs the screw-in variety of terminal blocks, and one of the connections was quite loose - Koji was able to pull the cable out of the slot applying a little pressure.
  11. I replaced the cabling (swapped the wires for thicker gauge, more flexible variety), and re-tightened the terminal block screws. The connection was reasonably secure even when I applied some force. A quick test verified that the timing card was functional when the unit was powered.
  12. We then replaced the X and Y-end expansion chassis (complete with their original timing cards, so the spare is back in the CDS cabinet), in the racks. The models started up again without complaint, and the CDS overview screen is now in a good state [Attachment #1]. The arms are locked and aligned for maximum transmission now.
  13. There was some additional difficulty in getting the 40-pin IDE connector in on the Y-end expansion chassis. Looked like we had bent some of the pins on the timing board while pulling this cable out. But Koji was able to fix this with a screw driver. Care should be taken when disconnecting this cable in the future!

There were a few more flaky things in the Expansion chassis - the IDE connectors don't have "keys" that fix the orientation they should go in, and the whole timing card assembly is kind of difficult and not exactly secure. But for now, things are back to normal it seems.

Wouldn't it be nice if this fix also eliminates the mystery ETMX glitching problem? After all, seems like this flaky power supply has been a problem for a number of years. Let's keep an eye out.

Attachment 1: CDS_status_28Jun2017.png
13086 | Thu Jun 29 00:13:08 2017 | Kaustubh | Update | Computer Scripts / Programs | Transfer Function Testing

Continuing from my previous posts, I have been working on evaluating the transfer function data. Recently, I calculated the correlation values between the real and imaginary parts of the transfer function. I have also written code for plotting the transfer function data stream at each frequency in the Argand plane, just for reference. In addition, I did a few calculations and found the errors in magnitude and phase using those in the real and imaginary parts of the transfer function. More details on the process are in this git repository.

The following attachments have been added:

  1. The correlation plot at different frequencies. This data is for 100 data files.
  2. The test files used to produce the above plot, along with the code for plotting it as well as the text file containing the correlation values. (Most of the code is commented out, as that part wasn't needed for the recent changes.)

 

Conclusion:

Seeing the correlation values, it sounds reasonable that the approximation of independent Gaussians in the real and imaginary parts actually holds, because the correlation values are mostly quite small. This can also be seen by studying the distribution of the transfer function on the Argand plane. The entire distribution is somewhat, if not entirely, circular. Even when the ellipticity of the distribution seems to be high, it still appears to be elliptical along the real and imaginary axes, i.e., the correlation between them is still low.

 

To Do:

  1. Use a better way to estimate the errors in magnitude and phase, as the method used right now is only valid in the linear approximation and gives insane values, totally out of bounds, when the magnitude is extremely small and the phase is varying wildly.
  2. Use the errors in the transfer function to estimate the coherence in the data for each frequency point. That is, basically plot a coherence vs. frequency plot showing how the coherence of the measurements varies as the frequency is varied.

 

In order to test the above again, with an even larger data set, I am leaving a script running on Ottavia. It should take more than just the night (I estimate around 10-11 hours) if there are no problems.
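A sketch of the per-frequency statistics described above (tf is a placeholder complex array of shape (n_measurements, n_frequencies)):

# Sketch: correlation between Re and Im of the transfer function at each
# frequency, plus linearized errors on magnitude and phase. The linear
# propagation misbehaves when |TF| is small, as noted above.
import numpy as np

def per_frequency_stats(tf):
    re, im = tf.real, tf.imag
    corr = np.array([np.corrcoef(re[:, k], im[:, k])[0, 1]
                     for k in range(tf.shape[1])])
    re_m, im_m = re.mean(axis=0), im.mean(axis=0)
    re_s, im_s = re.std(axis=0, ddof=1), im.std(axis=0, ddof=1)
    mag = np.hypot(re_m, im_m)
    phase = np.arctan2(im_m, re_m)
    mag_err = np.sqrt((re_m * re_s)**2 + (im_m * im_s)**2) / mag
    phase_err = np.sqrt((im_m * re_s)**2 + (re_m * im_s)**2) / mag**2
    return corr, mag, mag_err, phase, phase_err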

Attachment 1: Correlation_Plot.pdf
Attachment 2: 2x100_Test_Files_and_Code_and_Correlation_Files.zip
13087 | Thu Jun 29 10:04:18 2017 | jigyasa | Update | Computer Scripts / Programs | MC2 Pitch-Yaw offset

The script is being executed again, now.

Quote:

I worked on the code today and have left a script (MC2rerun.py) running on Ottavia which should run overnight.

 

 

13088 | Fri Jun 30 02:13:23 2017 | gautam | Update | General | DRMI locking attempt

Summary:

I attempted to re-lock the DRMI and try and realize some of the noise improvements we have identified. Summary elog, details to follow.

  1. Locked arms, ran ASS, centered OLs on ITMs and BS on their respective QPDs.
  2. Looked into changing the BS Oplev loop shape to match that of the ITMs - it looks like the analog electronics that take in the QPD signals for the BS Oplev are a little different: the 800Hz poles are absent. But I thought I had managed to do this successfully, in that the error signal suppression improved and it didn't look like the performance of the modified loop was worse anywhere except possibly at the stack resonance of ~3Hz --- see Attachment #1 (will be rotated later). The TRX spectra before and after this modification also didn't raise any red flags.
  3. Re-aligned PRM - went to the AS table and centered beam on all REFL PDs
  4. Locked PRMI on carrier, ran MICH and AS dither alignment. PRC angular feedforward also seemed to work well.
  5. Re-aligned SRM, looked for DRMI locks - there was a brief lock of a couple of seconds, but after this, the BS behaviour changed dramatically.

Basically after this point, I was unable to repeat stuff I did earlier in the evening just a couple of hours ago. The single arm locks catch quickly, and seem stable over the hour timescale, but when I run the X arm dither, the BS PITCH loop starts to oscillate at ~0.1 Hz. Moreover, I am unable to acquire the PRMI carrier lock. I must have changed a setting somewhere that I am not catching right now (although I've scripted most of these things for repeatability, so I am at a loss as to what I'm missing). The only change I can think of is that I changed the BS Oplev loop shape. But I went back into the filter file archives and restored these to their original configuration. Hopefully I'll have better luck figuring this out tomorrow.

Attachment 1: BS_OLmods.pdf