40m Log, Page 95 of 354

ID | Date | Author | Type | Category | Subject
  13079 | Sun Jun 25 22:30:57 2017 | gautam | Update | General | c1iscex timing troubles

I saw that the CDS overview screen indicated problems with c1iscex (ETMX was also erratic). I took a closer look and thought it might be a timing issue - a walk to the X-end confirmed this: the 1pps status light on the timing slave card was no longer blinking.

I tried all versions of power cycling and debugging this problem known to me, including those suggested in this thread and from a more recent time. I am leaving things as is for the night and will look into this more tomorrow. I've also shut down the ETMX watchdog for the time being. Looks like this has been down since 24 Jun 8am UTC.

  13078 | Fri Jun 23 02:55:18 2017 | Kaustubh | Update | Computer Scripts / Programs | Script Running

I am leaving a script running on Pianosa for the night. For this purpose, the AG4395A is also being kept on. I'll see the result of the script in the morning (it should be complete by then). Please check before fiddling with the Analyzer.

Thank you.

  13077 | Fri Jun 23 02:43:43 2017 | Kaustubh | HowTo | Computer Scripts / Programs | Taking Measurements From AG4395A

Summary:

I have written a script (a basic one which needs a lot of improvement, but still does the job) for taking multiple measurements from the AG4395A. I have also written a separate script for plotting the data taken by the first one, along with error bars up to 1 standard deviation.

 

Details on How To Operate AG4395A:

  1. Under the 'Measurement' tab, press the 'Meas' button and select the Analyzer Type (Network Analyzer or Spectrum Analyzer).
  2. Under the same options, select which ratio needs to be measured (A/R, B/R or A/B).
  3. Press the 'Format' button to select what needs to be displayed (e.g. Log|Mag|, Phase, etc.).
  4. In order to measure and see two channels at the same time (e.g. Log|Mag| and Phase), press the 'Display' button and select 'Dual Channel'.
  5. Using the 'Scale' button we can set the scale/div (or use autoscale) and also set the attenuator values of the different channels.
  6. The 'Bw/Avg' option provides averaging, which averages a few sets of data to produce the result. In doing this we lose quite a lot of data, and the resulting plot cannot give us information on the statistical errors.
  7. This option also allows us to set the Intermediate Frequency (IF) bandwidth. This essentially dictates the sampling rate of the Analyzer: the lower the IF BW, the lower the noise (due to less uncertainty in frequency), at the cost of a longer sweep.
  8. The 'Cal' button helps us calibrate the Analyzer for the current connections and signals. This is done because there is usually a difference in cable lengths for the two channels, which introduces an extra phase term depending on the RF frequency. The calibration can be done simply by removing the Device Under Test (DUT) and directly connecting the coaxial cables to the channels. After this, the 'Calibrate Menu' allows us to calibrate the response using the short, open and thru methods.
  9. Under the 'Sweep' tab, the 'Sweep' button allows us to select various sweep options such as 'Sweep Time' (auto, or a set time), 'Number of Points' (between 201 and 801) and 'Sweep Type' (Linear, Log, List Freq., etc.).
  10. Using the 'Source' button we can set the source power in dBm (usually kept between -20 and -10 dBm).
  11. The scan range can be set in a few ways, such as using the start and end points, or the center frequency and span.
  12. After setting up all of the above, we can take the measurement either from the analyzer itself or using one of the control PCs. The command to download the data from the AG4395A is netgpibdata -i 192.168.113.105 -d AG4395A -a 10 -f [filename].
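
For scripting repeated downloads, the same command can be wrapped from Python (a minimal sketch; the netgpibdata options are exactly those quoted in point 12, and the output filename is a placeholder):

import subprocess

# Download the current trace from the AG4395A (IP and options as in point 12 above).
# 'tf_measurement' is a placeholder for the output filename.
cmd = ["netgpibdata", "-i", "192.168.113.105", "-d", "AG4395A",
       "-a", "10", "-f", "tf_measurement"]
subprocess.check_call(cmd)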

 

Brief Details on How the 'AGmeasure' command works:

AGmeasure is a python script developed by some of the people who work at the 40m. It is set up as a global command and can be used from within any directory. The source code is in the scripts folder on the network, and it can also be found in Eric Quintero's git repository. This command accepts, at the very least, a parameter file, which should be a .yml file. A template (TFAG4395Atemplate.yml) can be found in the scripts folder or in Eric's repo. There are some other options that can be passed to this command; see the help for more details.

 

The Multi_Measurement Script:

This script calls the 'AGmeasure' command repeatedly and stores the resulting data files in a folder. Right now, the template file needs to be fed to the script manually at the prompt.
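
A stripped-down sketch of the idea (the exact AGmeasure call signature, the file layout, and the number of sweeps are assumptions here; the real script is in the attachments):

import os
import subprocess
import time

# The template file is fed in manually at the prompt, as described above.
template = os.path.abspath(raw_input("Path to .yml template: "))

# Collect all data files from this run in one time-stamped folder.
outdir = "multi_meas_" + time.strftime("%Y%m%d_%H%M%S")
os.mkdir(outdir)

n_meas = 20  # number of repeated sweeps (placeholder)
for i in range(n_meas):
    # Hypothetical invocation: AGmeasure is assumed to take the .yml parameter
    # file as its argument and to write its data file to the working directory.
    subprocess.check_call(["AGmeasure", template], cwd=outdir)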

 

The Test_Plotting Script:

This script plots a set of data files obtained from the above-mentioned script and produces a plot along with error bands up to 1 standard deviation of the data. The format (names) and total number of text files need to be known explicitly, for now at least.
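
A minimal sketch of the averaging and plotting step (assuming each data file is a plain text file with frequency, magnitude and phase columns; the real file naming and format may differ):

import glob
import numpy as np
import matplotlib.pyplot as plt

files = sorted(glob.glob("multi_meas_*/TF_*.txt"))   # placeholder naming pattern
data = np.array([np.loadtxt(f) for f in files])      # shape: (N files, N points, N columns)

freq = data[0, :, 0]
mag_mean = data[:, :, 1].mean(axis=0)
mag_std = data[:, :, 1].std(axis=0)                  # 1-sigma spread across the repeated sweeps

plt.semilogx(freq, mag_mean)
plt.fill_between(freq, mag_mean - mag_std, mag_mean + mag_std, alpha=0.3)
plt.xlabel("Frequency [Hz]")
plt.ylabel("Magnitude [dB]")
plt.savefig("tf_average.pdf")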

 

Attachments:

  1. The output test files and the two scripts.
  2. This is the 'Bode Plot' for a data set made using the above two scripts.

 

To Do:

  • Improve the two scripts so that they are as general-purpose as the AGmeasure command itself.
  • Try to incorporate the whole script into AGmeasure itself, along with improving the templates.
  • The above details, with some edits perhaps, can go into the 40m wiki too(?).

 

Update: Increased the font size in the plot. Added a few comments to the two scripts.

To Do: Treat the transfer function as a single physical quantity (both magnitude and phase), then take the averages, calculate the standard deviation, and plot those results.
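
One way to do this (a sketch, not the final implementation): convert each sweep to a complex transfer function before averaging, instead of averaging magnitude and phase separately:

import numpy as np

def complex_average(mag_db, phase_deg):
    # mag_db and phase_deg are arrays of shape (N repeats, N frequency points)
    tf = 10.0**(mag_db / 20.0) * np.exp(1j * np.deg2rad(phase_deg))
    tf_mean = tf.mean(axis=0)
    # rms spread of the complex samples about the mean, one number per frequency point
    tf_std = np.sqrt(np.mean(np.abs(tf - tf_mean)**2, axis=0))
    return tf_mean, tf_std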

 

EDIT:

The attachment with the test files and the code now also contains a pdf with all the relations/equations I have used to calculate the averages and errors.

  13076 | Tue Jun 20 17:44:12 2017 | jigyasa | Update | Computer Scripts / Programs | MC2 Pitch-Yaw offset

The script didn't run properly last night, due to an oversight of variable names! It's been started again and has been running for half an hour now.

Quote:

I am starting it on Donatella and it should run for a couple of hours.

Apologies for the inconvenience.

Quote:

A python script to randomly vary the MC2 pitch and yaw offset and correspondingly record the value of MC transmission has been started on Donatella in the control room and should run for a couple of hours overnight.

The script is named MC_TRANS_1.py and is located in my user directory at /users/jigyasa

 

  13075 | Tue Jun 20 16:28:23 2017 | Steve | Update | VAC | RGA scan
  13074 | Tue Jun 20 14:58:08 2017 | Steve | Update | Cameras | GigE camera at ETMX

The GigE camera can be connected to Ethernet. The AR-coated 1064 nm f=50 mm lens should arrive any day now.

Quote:

One of the additional GigE cameras has been IP configured for use and installation. 

Static IP assigned to the camera- 192.168.113.152
Subnet mask- 255.255.255.0
Gateway- 192.168.113.2
 

 

  13073 | Mon Jun 19 18:41:12 2017 | jigyasa | Update | Computer Scripts / Programs | MC2 Pitch-Yaw offset

The previous run of the script had produced some dubious results!

The script has been modified and now scans the transmission sum for a longer duration to provide a better estimate on the average transmission. The pitch and yaw offsets have been set to the values that were randomly generated in the previous run as this would enable comparison with the current data.

I am starting it on Donatella and it should run for a couple of hours.

Apologies for the inconvenience.

Quote:

A python script to randomly vary the MC2 pitch and yaw offset and correspondingly record the value of MC transmission has been started on Donatella in the control room and should run for a couple of hours overnight.

The script is named MC_TRANS_1.py and is located in my user directory at /users/jigyasa

  13072 | Mon Jun 19 18:32:18 2017 | jigyasa | Update | Computer Scripts / Programs | Software Installation for image analysis

The IRAF software from the National Optical Astronomy Observatory has been installed locally on Donatella (for testing), following the instructions listed at http://www.astronomy.ohio-state.edu/~khan/iraf/iraf_step_by_step_installation_64bit
This is a step towards "aperture photometry" and would help identify point scatterers in the images of the test masses.

I will be testing this software, in particular the use of DAOPHOT, and if it seems to work out, we may install it in the shared directory.
Hope this isn't an inconvenience.

 

 

  13071 | Fri Jun 16 23:27:19 2017 | Kaustubh, Jigyasa | Update | Computers | Ottavia Connected to the Netgear Box

I just connected Ottavia to the Netgear box and it's working just fine. It'll remain switched on over the weekend.

Quote:

Kaustubh and I are going to enable the ethernet connection to Ottavia and secure the wiring now.  

 

  13070 | Fri Jun 16 18:21:40 2017 | jigyasa | Configuration | Cameras | GigE camera IP

One of the additional GigE cameras has been IP configured for use and installation. 

Static IP assigned to the camera- 192.168.113.152
Subnet mask- 255.255.255.0
Gateway- 192.168.113.2
 

  13069 | Fri Jun 16 13:53:11 2017 | gautam | Update | CDS | slow machine bootfest

Reboots for c1psl, c1iool0, c1iscaux today. MC autolocker log was complaining that the C1:IOO-MC_AUTOLOCK_BEAT EPICS channel did not exist, and running the usual slow machine check script revealed that these three machines required reboots. PMC was relocked, IMC Autolocker was restarted on Megatron and everything seems fine now.

 

  13068 | Fri Jun 16 12:37:47 2017 | Kaustubh, Jigyasa | Update | Computers | Ottavia Switched On

Ottavia had been left running overnight and it seems to work fine. There has been no smell or any noticeable problems in its operation. This morning Gautam, Kaustubh and I connected Ottavia to the Martian Network through the Netgear switch in the 40m lab area. We were able to SSH into Ottavia through Pianosa and access directories. On Ottavia itself we were able to run ipython and access the internet. Since it seems to work fine, Kaustubh and I are going to enable the ethernet connection to Ottavia and secure the wiring now.

Quote:

It has been working fine the whole day (we didn't do much testing on it though). We are leaving it on for the night.

Quote:

Today, Jigyasa and I connected Ottavia to one of the unused monitor screens, Donatella. The Ottavia CPU had a label saying 'SMOKED'. One of the past elogs, 11091, by Jenne, dating back to March 2015, had an update regarding Ottavia smelling 'burny'. It seems to be working fine for about 2 hours now. Once it is connected to the Martian Network we can test it further. The Donatella screen we used seems to have a graphics problem, some damage to the display. It's a minor issue and does not affect the display that much, but perhaps it'll be better to use another screen if we plan to use Ottavia in the future. We will power it down if there is an issue with it.

 

 

  13067 | Thu Jun 15 19:49:03 2017 | Kaustubh, Jigyasa | Update | Computers | Ottavia Switched On

It has been working fine the whole day (we didn't do much testing on it though). We are leaving it on for the night.

Quote:

Today, Jigyasa and I connected Ottavia to one of the unused monitor screens, Donatella. The Ottavia CPU had a label saying 'SMOKED'. One of the past elogs, 11091, by Jenne, dating back to March 2015, had an update regarding Ottavia smelling 'burny'. It seems to be working fine for about 2 hours now. Once it is connected to the Martian Network we can test it further. The Donatella screen we used seems to have a graphics problem, some damage to the display. It's a minor issue and does not affect the display that much, but perhaps it'll be better to use another screen if we plan to use Ottavia in the future. We will power it down if there is an issue with it.

 

  13066 | Thu Jun 15 18:56:31 2017 | jigyasa | Update | Computer Scripts / Programs | MC2 Pitch-Yaw offset

A python script to randomly vary the MC2 pitch and yaw offset and correspondingly record the value of MC transmission has been started on Donatella in the control room and should run for a couple of hours overnight.

The script is named MC_TRANS_1.py and is located in my user directory at /users/jigyasa

Apologies for any inconvenience.
Data analysis will follow.

  13065 | Thu Jun 15 14:24:48 2017 | Kaustubh, Jigyasa | Update | Computers | Ottavia Switched On

Today, Jigyasa and I connected Ottavia to one of the unused monitor screens, Donatella. The Ottavia CPU had a label saying 'SMOKED'. One of the past elogs, 11091, by Jenne, dating back to March 2015, had an update regarding Ottavia smelling 'burny'. It seems to be working fine for about 2 hours now. Once it is connected to the Martian Network we can test it further. The Donatella screen we used seems to have a graphics problem, some damage to the display. It's a minor issue and does not affect the display that much, but perhaps it'll be better to use another screen if we plan to use Ottavia in the future. We will power it down if there is an issue with it.

  13064 | Thu Jun 15 01:56:50 2017 | gautam | Update | ASS | ETMY Oplev restored

Summary:

I tried playing around with the Oplev loop shape on ITMY, in order to see if I could successfully engage the Coil Driver whitening. Unfortunately, I had no success tonight.

Details:

I was trying to guess a loop shape that would work - I guess this will need some more careful thought about loop shape optimization. I was basically trying to keep all the existing filters and modify only the low-passing that minimizes control noise injection. Adding a 4th-order elliptic low-pass with a corner at 50 Hz and stopband attenuation of 60 dB yielded a stable loop with an upper UGF of ~6 Hz and ~25 deg of phase margin (which is on the low side). But I was able to successfully engage this loop, and as seen in Attachment #1, the noise performance above 50 Hz is vastly improved. However, it also seems that there is some injection of noise around 6 Hz. In any case, as soon as I tried to engage the dewhitening, the DAC output quickly saturated. The whitening filter for the ITMs already has ~40 dB of gain at ~40 Hz, so it looks like the high-frequency roll-off has to be more severe.

I am not even sure if the Elliptic filter is the right choice here - it does have the steepest roll off for a given filter order, but I need to look up how to achieve good roll off without compromising on the phase margin of the overall loop. I am going to try and do the optimization in a more systematic way, and perhaps play around with some of the other filters' poles and zeros as well to get a stable controller that minimizes control noise injection everywhere.
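
For reference, a low-pass of the kind described above can be generated and inspected with scipy before trying it in the loop (a sketch using the stated parameters, 4th-order elliptic, 50 Hz corner, 60 dB stopband attenuation; the passband ripple and the 2048 Hz model rate are assumptions):

import numpy as np
import scipy.signal as sig

fs = 2048.0                 # assumed Oplev model rate
rp, rs = 1.0, 60.0          # assumed 1 dB passband ripple, 60 dB stopband attenuation
b, a = sig.ellip(4, rp, rs, 50.0 / (fs / 2.0), btype='low')

# Check the phase this filter adds near the ~6 Hz upper UGF.
f = np.logspace(0.0, np.log10(fs / 2.0), 1000)
w, h = sig.freqz(b, a, worN=2.0 * np.pi * f / fs)
idx = np.argmin(np.abs(f - 6.0))
print("filter phase at 6 Hz: %.1f deg" % np.degrees(np.angle(h[idx])))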

  13063 | Wed Jun 14 18:15:06 2017 | gautam | Update | ASS | ETMY Oplev restored

I replaced the Pentek Generic Whitening Board and the Optical Lever PD Interface Board (D010033) which I had pulled out. The ETMY optical lever servo is operational again. I will post a more detailed elog with deviations from schematics + photos + noise and TF measurements shortly.

Quote:

As part of my Oplev servo investigations, I have pulled out the Pentek Generic Whitening board (D020432) from the Y-end electronics rack. ETMY watchdog was shutdown for this, I will restore it once the Oplev is re-installed.

 

  13062 | Tue Jun 13 08:40:32 2017 | Steve | Update | IMC | IMC wonkiness

The MC has been happy since the last glitch at 10:28, so the credit goes to Rana.

GV edit 11:30am: I think the stuff at 10:28 is not a glitch but just the WFS servos coming on - the IMC was only hand aligned before this.

Quote:

It happened again. MC2 UL seems to have gotten the biggest glitch. It's a rather small jump in the signal level compared to what I have seen in the recent past in connection with suspect Satellite boxes, and LL and UR sensors barely see it.

I will squish Sat box cables and check the cabling at the coil driver board end as well, given that these are two areas where there has been some work recently. WFS loops will remain off till I figure this out. At least the (newly centered) DC spot positions on the WFS and MC2 TRANS QPD should serve as some kind of reference for good MC alignment.

GV edit 9pm: I tightened up all the cables, but it doesn't seem to have helped. There was another, larger glitch just now. UR and LL basically don't see it at all (see Attachment #2). It also seems to be a much slower process than the glitches seen on MC1, with the misalignment happening over a few seconds. I have to see if this is consistent with a glitch in the bias voltage to one of the coils, which gets low-passed by a 4xpole@1Hz filter.

Quote:

Once we had the beam approximately centered on all of the above 3 PDs, we turned on the locking for the IMC, and it seems to work just fine. We are waiting another hour before switching on the angular alignment for the mirrors, to make sure the alignment holds with the WFS turned off.

 

 

  13061 | Mon Jun 12 22:23:20 2017 | rana | Update | IMC | IMC wonkiness

I wonder if it's possible that the slow glitches in the MC are just glitches in the MC2 trans QPD? Steve sometimes dances on top of the MC2 chamber when he adjusts the MC2 camera.

I've re-enabled the WFS at 22:25 (I think Gautam had them off as part of the MC2 glitch investigation). WFS1 spot position seems way off in pitch & yaw.

From the turn on transient, it seems that the cross-coupled loops have a time constant of ~3 minutes for the MC2 spot, so maybe that's not consistent with the ~30 second long steps seen earlier.

  13060 | Mon Jun 12 17:42:39 2017 | gautam | Update | ASS | ETMY Oplev Pentek board pulled out

As part of my Oplev servo investigations, I have pulled out the Pentek Generic Whitening board (D020432) from the Y-end electronics rack. ETMY watchdog was shutdown for this, I will restore it once the Oplev is re-installed.

  13059 | Mon Jun 12 10:34:10 2017 | gautam | Update | CDS | slow machine bootfest

Reboots for c1susaux, c1iscaux, c1auxex today. I took this opportunity to squish the Sat. Box. cabling for MC2 (both on the Sat box end and also the vacuum feedthrough), as some work has recently been ongoing there - maybe something got accidentally jiggled during the process and was causing the MC2 alignment to jump around.

Relocked PMC to offload some of the DC offset, and re-aligned IMC after c1susaux reboot. PMC and IMC transmission back to nominal levels now. Let's see if MC2 is better behaved after this sat. box. voodoo.

Interestingly, since Feb 6, there were no slow machine reboots for almost 3 months, while there have been three reboots in the last three weeks. Not sure what (if anything) to make of that.

  13058 | Fri Jun 9 19:18:10 2017 | gautam | Update | IMC | IMC wonkiness

It happened again. MC2 UL seems to have gotten the biggest glitch. It's a rather small jump in the signal level compared to what I have seen in the recent past in connection with suspect Satellite boxes, and LL and UR sensors barely see it.

I will squish Sat box cables and check the cabling at the coil driver board end as well, given that these are two areas where there has been some work recently. WFS loops will remain off till I figure this out. At least the (newly centered) DC spot positions on the WFS and MC2 TRANS QPD should serve as some kind of reference for good MC alignment.

GV edit 9pm: I tightened up all the cables, but it doesn't seem to have helped. There was another, larger glitch just now. UR and LL basically don't see it at all (see Attachment #2). It also seems to be a much slower process than the glitches seen on MC1, with the misalignment happening over a few seconds. I have to see if this is consistent with a glitch in the bias voltage to one of the coils, which gets low-passed by a 4xpole@1Hz filter.

Quote:

Once we had the beam approximately centered on all of the above 3 PDs, we turned on the locking for the IMC, and it seems to work just fine. We are waiting another hour before switching on the angular alignment for the mirrors, to make sure the alignment holds with the WFS turned off.

 

  13057 | Fri Jun 9 17:45:21 2017 | Gautam, Kaustubh | Update | IMC | IMC wonkiness

 

Quote:

Once Steve restores the MC2 Trans cameras, I will hand-align the IMC again and see if the alignment holds for a few hours. If it does, I will reset all offsets for the WFS loops and see if they hold. In particular, the MC2 transmitted spot centering servo has a long time constant so could be something funny there.

 

Summary:

In order to switch on the angular alignment for the IMC mirrors, we needed to center the laser onto the quad photodiodes at the IMC and the AS table (WFS1 and WFS2).

Gautam and I went to the IMC table and did the DC centering for the quad photodiode by varying the beamsplitter angles. After this, we turned the WFS loops off and centered the beam on the quad PDs at the AS table, WFS1 and WFS2.

Once we had the beam approximately centered on all of the above 3 PDs, we turned on the locking for the IMC, and it seems to work just fine. We are waiting another hour before switching on the angular alignment for the mirrors, to make sure the alignment holds with the WFS turned off.

  13056 | Fri Jun 9 16:37:29 2017 | jigyasa | Update | Computer Scripts / Programs | OpenCV installation

OpenCV 3.1.0 has been installed locally on Donatella by running the following commands:

git clone https://github.com/Itseez/opencv.git
cd opencv
git checkout 3.1.0
git clone https://github.com/Itseez/opencv_contrib.git
cd opencv_contrib
git checkout 3.0.0
cd ~/opencv
mkdir release
cd release
cmake -D CMAKE_BUILD_TYPE=RELEASE -D CMAKE_INSTALL_PREFIX=/usr/local -D OPENCV_EXTRA_MODULES_PATH=/~/opencv_contrib/modules/ ~/opencv/

In ~/opencv/release, make and sudo make install were executed.

This completed the installation. The installed version was verified with pkg-config --modversion opencv, which showed 3.1.0. I also verified that the cv2 module imports in python, and it seems to work fine.
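
The python-side check is just (nothing beyond the standard cv2 API):

import cv2
print(cv2.__version__)   # prints 3.1.0 on Donatella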

 

  13055 | Fri Jun 9 15:31:45 2017 | gautam | Update | IMC | IMC wonkiness

I've been noticing some weird behaviour with the IMC over the last couple of days. In some lock stretches the WFS control signals ramp up to uncharacteristically huge values - at some point, the IMC loses lock, and doesn't re-acquire it (see Attachment #1). The fact that the IMC doesn't re-acquire lock indicates that there has been some kind of large alignment drift (this is also evident from looking at the (weak) flashes on the MCREFL camera while the IMC attempts to re-lock - I am asking Steve to restore the MC trans camera as well). These drifts don't seem to be correlated with anyone working near MC2.

The WFS servos haven't had their offsets/ DC alignments set in a while, so in order to check if these were to blame, I turned off the inputs to all the WFS servo filter modules (so no angular control of the IMC). I then tweaked the alignment manually. But the alignment seems to have drifted yet again, within a few minutes. Looking at the OSEM sensor signals, it looks like MC2 was the optic that drifted. Steve tells me no one was working near MC2 during this time. But the drift is gradual so this doesn't look like the infamous glitchy Satellite Box problem seen with MC1 in the recent past. The feedback signal to the NPRO / PCdrive look normal during this time, supporting the hypothesis that the problem is indeed related to angular alignment.

Once Steve restores the MC2 Trans cameras, I will hand-align the IMC again and see if the alignment holds for a few hours. If it does, I will reset all offsets for the WFS loops and see if they hold. In particular, the MC2 transmitted spot centering servo has a long time constant so could be something funny there.

*Another issue with the IMC autolocker I've noticed in the recent past: sometimes, the mcup script doesn't get run even though the MC catches a TEM00 mode. So the IMC servo remains in acquisition state (e.g. boosts and WFS servos don't get turned on). Looking at the autolocker log doesn't shed much light - the "saw a flash" log message gets printed, but while normally the mcup script gets run at this point, in these cases, the MC just remains in this weird state. 

  13054 | Fri Jun 9 09:13:26 2017 | Steve | Update | Cameras | GigE camera lens with AR

We should move ahead with getting this lens from Edmund, #67-717, with R < 3% at 1064 nm.

The Computar M5018-SWIR is another choice.

AR coatings with R < 1% over 500-1100 nm are expensive.

 

Quote:

50mm 1.8 lens with Basler camera at MC2 face with micro clamp 350617    Camera manuals plus

 

  13053 | Thu Jun 8 12:43:42 2017 | Dhruva | Update | Optical Levers | Beam Profiling Results

 

Quote:

​Updates in the He-Ne beam profiling experiment. ​

New and improved plots for the He-Ne profiling experiment 

Font size has been increased to 30. 

The plots are maximum size (following Rana's advice, I saved the plots as eps files (maximized) and converted them to pdf later).

There is a shaded region around the trendline that represents the parameter error. 

Function that I fit my data to (should have mentioned this in my earlier elog entries) 

P = \dfrac{P_0}{2}\Bigg[1+erf\Big(\dfrac{\sqrt2(X-X_0)}{w}\Big) \Bigg]
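
In code, the fit of this model to the knife-edge data might look like the following (a sketch; the data file name and initial guesses are placeholders):

import numpy as np
from scipy.optimize import curve_fit
from scipy.special import erf

def knife_edge(X, P0, X0, w):
    # P = (P0/2) * [1 + erf(sqrt(2) * (X - X0) / w)]
    return 0.5 * P0 * (1.0 + erf(np.sqrt(2.0) * (X - X0) / w))

# columns: micrometer position, power, total power error (placeholder file)
X, P, dP = np.loadtxt("profile_scan.txt", unpack=True)
popt, pcov = curve_fit(knife_edge, X, P, p0=[P.max(), X.mean(), 1.0], sigma=dP)
P0_fit, X0_fit, w_fit = popt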

Description of my error analysis -

1. I have assumed a 20% deviation from the markings as the micrometer error.

2. Using the error in the micrometer, I have calculated the propagated error in the beam power:

\delta P = \sqrt{\dfrac{2}{\pi}}{P_0}\dfrac{\delta x}{w}\exp\Bigg({\frac{-2(X-X_0)^2}{w^2}}\Bigg)

I added this error to the statistical error due to the fluctuation of the oscilloscope reading to obtain the total error in power.

3. I found the Fisher Matrix by numerically differentiating the function at different data points P_b with respect to the parameters p_i =  P_0, X_0 and w.

F_{ij} = \sum_{b} {\frac{\partial P_b}{\partial p_i}\frac{\partial P_b}{\partial p_j}}\frac{1}{\sigma^2_b}

I then found the covariance matrix by inverting the Fisher Matrix and found the error in spot size estimation. 

EDIT : Residuals added to plots and all axes made equal 

  13052 | Thu Jun 8 02:11:28 2017 | gautam | Update | ASS | Y-arm coil driver electronics investigation

Summary:

I investigated the analog electronics in the coil driver chain by using awggui to drive a given channel with Uniform noise between DC and 8kHz, with an overall gain of 1000 cts. This test was done for both ITMs and the BS. The Whitening/De-Whitening was off during the test. I measured the spectra in

  1. The digital domain (with DTT)
  2. At the output monitor of the AI board (with SR785)
  3. At the output of the coil driver board (with SR785)

Attachment #1 - There is good agreement between all 3 measurements. To convert the DTT spectrum to Vrms/rtHz, I multiplied the Y-axis by 10V / ( 2*sqrt(2) * 2^15 cts). Between DC and ~1kHz, the measured spectrum everywhere is flat, as expected given the test conditions. The AI filter response is also seen.
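
As a worked number, the calibration factor quoted above is:

import numpy as np
cts_to_V = 10.0 / (2.0 * np.sqrt(2.0) * 2**15)   # the factor quoted above
print("%.3e V per count" % cts_to_V)             # ~1.08e-04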

Attachment #2 - Zoomed in view of Attachment #1 (without the AI filter part).

*The DTT plots have been coarse-grained to keep the PDF file size manageable. X (Y) axes are shared for all the plots in columns (rows).

 

Similar verification remains to be done for the ETMs, after which the test has to be repeated with the Whitening/DeWhitening engaged. But it's encouraging that things make sense so far (except perhaps the coil balancing can be better as suggested by the previous elog). 

 

I've left both arms locked. The Y-arm dither alignment is working well again, but for the X arm, the loops that actuate on the BS are still weird. Nothing obvious in the tests so far though.

GV 6pm 8 Jun 2017: I realized the X arm transmission was being monitored by the high-gain PD and not the QPD (which is how we usually run the ASS). The ASC mini screen suggested the transmitted beam was reasonably well centered on the X end QPD, and so I switched to this after which the X end dither alignment too converged. Possibly the beam was falling off the other PD, which is why the BS loops, which control the beam spot position on the ETM, were acting weirdly.

Quote:

will investigate the analog part of the drive electronics now.

 

Not related to this work:

I noticed the X-arm LSC servo was often hitting its limit - so I reduced the gain from 0.03 to 0.02. This reduced the control signal RMS, and re-acquiring lock at this lower gain wasn't a problem either. See attachment #3 (will be rotated later) for control signal spectra at this revised setting.

  13051 | Wed Jun 7 17:45:11 2017 | gautam | Update | ASS | Y-arm coil driver electronics investigation

I repeated the test of driving C1:SUS-<Optic>_<coil>_EXC individually and measuring the transfer function to C1:SUS-<Optic>_OPLEV_PERROR for Optic in (ITMX, ITMY, ETMX, ETMY, BS), coil in (LLCOIL, LRCOIL, ULCOIL, URCOIL). 

There seems to be a few dB imbalance in the coils in both ETMs, as well as ITMX. ITMY and the BS seem to have pretty much identical TFs for all the coils - I will cross-check using OPLEV_YERROR, but is there any reason why we shouldn't adjust the gains in the coil output (not output matrix) filter banks to correct for this observed imbalance? The Oplev calibrations for the various optics are unknown, so it may not be fair to compare the TFs between optics (I guess the same applies to comparing TF magnitudes from coil to OPLEV_PERROR and OPLEV_YERROR, perhaps we should fix the OL calibrations before fiddling with coil gains...)

The anomalous behaviour of ITMY_UL (10dB greater than the others) was traced down to a rogue x3 gain in the filter module indecision. This has been removed, and now Y arm ASS works fine (with the original dither servo settings). X arm dither still doesn't converge - I double checked the digital filters and all seems in order, will investigate the analog part of the drive electronics now.

 

  13050 | Wed Jun 7 15:41:51 2017 | Steve | Update | Computers | Windows laptop scanned

Randy Trudeau scanned our Windows laptop (Dell 13" Vostro) and Steve's memory stick for viruses. Nothing was found. The search continues...

Rana thinks that I'm creating these virus beasts by taking pictures with Dino Capture and/or Data Ray on the Windows machine...

 

 

  13049 | Wed Jun 7 14:27:23 2017 | Steve | Update | Summary Pages | summary pages not working

Last good page: May 18, 2017

Not found / error message: May 19 - June 4, 2017

Blank plots: June 5, 2017

  13048 | Wed Jun 7 14:11:49 2017 | gautam | Update | ASS | Y-arm coil driver electronics investigation

Rana suggested taking a look at the Y-arm test mass actuator TFs (measured by driving the coils one at a time, with only local damping loops on, using the Oplev to measure the response to a given drive). Attached are the results from this measurement (I used the Oplev pitch error signal for all 8 measurements). Although the magnitude response for all coils have the expected 1/f^2 shape, there seems to be some significant (~10dB) asymmetry in both the ETM and ITM coils. The phase-response is also not well understood. If we are just measuring the TF of a pendulum with 1 Hz resonant frequency, then at and above 10Hz, I would expect the phase to be either 0 or 180 deg. Looks like there is a notch at 60 Hz somewhere, but it is unclear to me where the ~90 degree phase at ~100Hz is coming from.

For the ITM, the UL OSEM was replaced during the 2016 summer vent - the coil that is in there is now of the short OSEM variety, perhaps it has a different number of turns or something. I don't recall any coil balancing being done after this OSEM swap. For the ETM, it is unclear to me how long this situation has been like this.

Yesterday night, I tried to measure the ASS output matrix by stepping the ITM, ETM and TTs in PIT and YAW, and looking at the response in the various ASS error signals. During this test, I found the ETM and ITM pitch and yaw error signals to be highly coupled (the input matrix was diagonal). As Rana suggested, I think the whole coil driver signal chain from DAC output to coil driver board output has to be checked before attempting to fix ASS. Results from this investigation to follow.

Note: The OSEM calibration hasn't been done in a while (though the HeNes have been swapped out), but as Attachment #2 shows, if we believe the shadow sensor calibration, then the relative calibrations of the ITM and ETM Oplevs agree. So we can directly compare the TFs for the ITM and ETM.

 

  13047 | Wed Jun 7 11:32:56 2017 | Steve | Update | VAC | smooth vac reboot

Gautam and Steve,

 

The MEDM monitor & vacuum control screens had been totally blank since ~May 24, 2017. Experienced vacuum knowledge is required for this job.

IDENTIFY valve configuration:

How do you confirm the valve configuration when all vac monitors are blank? Each valve has a manual-mechanical position indicator. Look at the pressure readings and the turbo pump controllers. The VAC NORMAL configuration was confirmed based on this information.

Preparation: disconnect valves (disconnect meaning: the valve closes and stays paralyzed) in this sequence: VC2, VC1 power, VA6, V5, V4 & V1 power, at IFO pressure 7.3E-6 Torr-it (it = InstruTech cold cathode gauge).

This gauge is independent from all other rack-mounted instrumentation, and it is still not logged.

Switching to this valve configuration with disconnected valves ensures that the vacuum envelope will NOT be vented by an accidental voltage glitch or computer malfunction.

RESET v1Vac1 ......... in 2-3 minutes (v1Vac1 - 2) the vac control screen started reading pressures & positions.

Connected cables to the valves (meaning: a valve will open if it was open before it was disconnected, and it will be controllable from the computer) in the following order: V4, V1 power, V5, VA6, VC2 & VC1 power, at IFO pressure 2E-5 Torr-it.

The vac configuration now reads VAC NORMAL, IFO pressure 7.4E-6 Torr-it.

We have to hook up the InstruTech cold cathode gauge so that it is monitored and logged! This should be the substitute for the out-of-order CC1 pressure gauge.

  13046 | Wed Jun 7 10:07:00 2017 | Steve | Update | PEM | air conditioning thermostat

The Y arm AC thermostat was calibrated yesterday, after the cooling water relay was replaced by Mike. The set temperature remains 70F.

The east end south wall temperature is reading 22C.

  13045 | Tue Jun 6 09:14:26 2017 | Steve | Update | Cameras | GigE installation at MC2

50mm 1.8 lens with Basler camera at MC2 face with micro clamp 350617    Camera manuals plus

Quote:

Thanks to Steve and Gautam, the IMC was locked.

I was able to capture images with the Rainbow 50 mm lens at exposure times of 100, 300, 1000, 3000, 10000 and 30 microseconds (the pictures are in the same order). These pictures were taken at a gain of 300 and black level 64.

Special credit to Steve, who spent a lot of time helping me with setting up the hardware and focusing the camera on the beam spot.
I can't thank you enough Steve! :) 

Quote:

In the afternoon, Steve and I tried to install the camera near MC2 and get some images of the mirrors. Due to a restricted field of view of the lens on the camera, after many efforts to focus on the optic, we were able to get this image. MC2 was unlocked so this image captures some resonating higher order mode.

With MC2 locked, I will get some images of the mirror at different exposure times and try to get an HDR image.  
 

 

  13044 | Mon Jun 5 21:53:55 2017 | rana | Update | Computers | rossa: ubuntu 16.04

With the network config, mounting, and symlinks set up, rossa can be used as a workstation for dataviewer and MEDM. For DTT, no luck, since there is so far no lscsoft support past Ubuntu 14.

  13043 | Mon Jun 5 18:40:12 2017 | jigyasa | Update | Cameras | Attempt to run camera server Python code

[Gautam, Jigyasa]

This evening, Gautam helped me resolve the error I had been encountering. I had been trying to run the code on Allegra, and that threw up the gst.elementfactory_make("textoverlay", "text0"); gst.ElementNotFoundError: textoverlay error.
As an attempt to resolve the error, I had set up the paths to match those mentioned in the document.
However, as it turns out, it wasn't really needed.

When Gautam ran the code from Pianosa, the following error showed up:
gst.elementfactory_make("x264enc", "en"); gst.ElementNotFoundError: x264.

We found that x264 and x264enc are different entities.
Gautam then installed the ubuntu-restricted-extras package along with the following:
gstreamer0.10-plugins-bad-multiverse
gstreamer0.10-plugins-ugly-multiverse

Eventually, on running the script, the message 'starting server' was displayed on the screen. This was interrupted by another error: GenICAM_3_0_Basler_pylon_v5_0::RuntimeException.

 So there is apparently a problem executing the commands on Allegra, because the camera server starts running on Donatella and Pianosa. 

I will now be looking into this newly encountered error and also be setting up the symlinks for the various paths in the code. 
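
A quick way to check which of these gstreamer elements a given machine actually provides (a sketch using the pygst 0.10 bindings, assuming the same python environment the camera code runs in):

import pygst
pygst.require("0.10")
import gst

for name in ["textoverlay", "x264enc"]:
    factory = gst.element_factory_find(name)
    print("%-12s %s" % (name, "found" if factory is not None else "MISSING"))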

Quote:

Probably I could try putting all files in exactly the same directories as specified in the document. 

Quote:

So with the file linked, the python program gets executed but then shows an error: self.text = gst.elementfactory_make("textoverlay", "text0")
gst.ElementNotFoundError: textoverlay
 

The code reads- 

self.text = gst.elementfactory_make("textoverlay", "text0")

Not sure what I am missing here. 

 

 

  13042 | Mon Jun 5 15:04:33 2017 | rana | Update | Cameras | Attempt to run camera server Python code

Right - we want to be compatible with the new version of the code, so instead of moving the files to where the code wants them, you should make symlinks. The symlinks go in the place that the code wants and point back to the place where we have the files now.

For the textoverlay, you can just comment it out for now. We can add it back in later once we decide on how to label the video.

  13041 | Mon Jun 5 12:50:42 2017 | jigyasa | Update | Cameras | Attempt to run camera server Python code

I think the problem might be that the various components, such as the .ini file and the Pylon software, are installed in directories different from the ones Joe B. specifies in his paper.

Instead of modifying the paths in the code itself, I tried creating the paths to match the code-

Update in the /ligo directory:

/cds/caltech/c1/camera/L1-CAM-MC1.ini was created, and then I ran camera_server.py from scripts/GigE/SnapPy as

./camera_server.py -c /ligo/cds/caltech/c1/camera/L1-CAM-MC1.ini 

This prompted up the following on terminal- 

finished loading settings from /ligo/cds/caltech/c1/camera/L1-CAM-MC1.ini and lists the settings in the configuration file.


However, the gst.ElementNotFoundError: textoverlay still persists.

Probably I could try putting all files in exactly the same directories as specified in the document. 

Quote:

So with the file linked, the python program gets executed but then shows an error: self.text = gst.elementfactory_make("textoverlay", "text0")
gst.ElementNotFoundError: textoverlay
 

The code reads- 

self.text = gst.elementfactory_make("textoverlay", "text0")

Not sure what I am missing here. 

 

  13040 | Mon Jun 5 12:27:34 2017 | jigyasa | Update | Cameras | Attempt to run camera server Python code

While attempting to execute the Python/Pylon code for the camera server, camera_server.py, the interpreter couldn't locate the pylon-5.0.5.so library. So I included the path for the required .so file as

export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/opt/rtcds/Caltech/c1/scripts/GigE/pylon5/lib64

So with the file linked, the python program gets executed but then shows an error: self.text = gst.elementfactory_make("textoverlay", "text0")
gst.ElementNotFoundError: textoverlay
 

The code reads- 

self.text = gst.elementfactory_make("textoverlay", "text0")

Not sure what I am missing here. 

  13039 | Mon Jun 5 10:30:45 2017 | Steve | Update | SUS | ruby wire standoff pictures

Attachments 1 & 5: the ruby (R ~10 mm) as it is seated on the Al SOS test mass.

Attachments 2, 3 & 4: the chipped long edges, with SOS suspension wire (OD 43 micron) as a scale calibration.

Quote:

Ruby wire standoffs received from China. I looked at one of them with our small USB camera. They did a good job. The long edges of the prism are chipped.

The v-groove cutter must avoid them. Pictures will follow.

 

 

  13038 | Sun Jun 4 15:59:50 2017 | gautam | Update | General | Power glitch - recovery

I think the CDS status is back to normal.

  • Bit 2 of the C1RFM status word was red, indicating something was wrong with "GE FANUC RFM Card 0".
  • You would think the RFM errors occur in pairs, in C1RFM and in some other model - but in this case, the only red light was on c1rfm.
  • While trying to re-align the IFO, I noticed that the TRY time series flatlined at 0 even though I could see flashes on the TRANSMON camera.
  • Quick trip to the Y-End with an oscilloscope confirmed that there was nothing wrong with the PD.
  • I crawled through some elogs, but didn't really find any instructions on how to fix this problem - the couple of references I did find to similar problems reported red indicator lights occurring in pairs on two or more models, and the problem was then fixed by restarting said models.
  • So on a hunch, I restarted all models on c1iscey (no hard or soft reboot of the FE was required)
  • This fixed the problem
  • I also had to start the monit process manually on some of the FEs like c1sus. 

Now IFO work like fixing ASS can continue...

  13037 | Sun Jun 4 14:19:33 2017 | rana | Frogs | Computers | Network slowdown: Martians are behind a waterwall

A few weeks ago we did some internet speed tests and found a dramatic difference between our general network and our internal Martian network in terms of access speed to the outside world.

As you can see, the speed from nodus is consistent with a Gigabit connection. But the speeds from any machine on the inside are ~100x slower. We need to take a look at our router / NAT setup to see if it's an old hardware problem or just something in the software firewall. By comparison, my home internet download speed test is ~48 Mbit/s; ~6x faster than our CDS computers.


controls@megatron|~> speedtest
/usr/local/bin/speedtest:5: UserWarning: Module dap was already imported from None, but /usr/lib/python2.7/dist-packages is being added to sys.path
  from pkg_resources import load_entry_point
Retrieving speedtest.net configuration...
Testing from Caltech (131.215.115.189)...
Retrieving speedtest.net server list...
Selecting best server based on ping...
Hosted by Race Communications (Los Angeles, CA) [29.63 km]: 6.52 ms
Testing download speed................................................................................
Download: 6.35 Mbit/s
Testing upload speed................................................................................................
Upload: 5.10 Mbit/s
controls@megatron|~> exit
logout
Connection to megatron closed.
controls@nodus|~ > speedtest
Retrieving speedtest.net configuration...
Testing from Caltech (131.215.115.52)...
Retrieving speedtest.net server list...
Selecting best server based on ping...
Hosted by Phyber Communications (Los Angeles, CA) [29.63 km]: 2.196 ms
Testing download speed................................................................................
Download: 721.92 Mbit/s
Testing upload speed................................................................................................
Upload: 251.38 Mbit/s

  13036 | Fri Jun 2 22:01:52 2017 | gautam | Update | General | Power glitch - recovery

[Koji, Rana, Gautam]

Attachment #1 - CDS status at the end of todays efforts. There is one red indicator light showing an RFM error which couldn't be fixed by running "global diag reset" or "mxstream restart" scripts, but getting to this point was a journey so we decided to call it for today.


The state this work was started in was as indicated in the previous elog - c1ioo wasn't ssh-able, but was responding to ping. We then did the following:

  1. Killed all models on all four other front ends other than c1ioo. 
  2. Hard reboot for c1ioo - at this point, we could ssh into c1ioo. With all other models killed, we restarted the c1ioo models one by one. They all came online smoothly.
  3. We then set about restarting the models on the other machines.
    • We started with the IOP models, and then restarted the others one by one
    • We then tried running "global diag reset", "mxstream restart" and "telnet fb 8087 -> shutdown" to get rid of all the red indicator fields on the CDS overview screen.
    • All models came back online, but the models on c1sus indicated a DC (data concentrator?) error. 
  4. After a few minutes, I noticed that all the models on c1iscex had stalled
    • dmesg pointed to a synchronization error when trying to initialize the ADC
    • The field that normally pulses at ~1pps on the CDS overview MEDM screen when the models are running normally was stuck
    • Repeated attempts to restart the models kept throwing up the same error in dmesg 
    • We even tried killing all models on all other frontends and restarting just those on c1iscex as detailed earlier in this elog for c1ioo - to no avail.
    • A walk to the end station to do a hard reboot of c1iscex revealed that both green indicator lights on the slave timing card in the expansion chassis were OFF.
    • The corresponding lights on the Master Timing Sequencer (which supplies the synchronization signal to all the front ends via optical fiber) were also off.
    • Sometime ago, Eric and I had noticed a similar problem. Back then, we simply switched the connection on the Master Timing Sequencer to the one unused available port, this fixed the problem. This time, switching the fiber connection on the Master Timing Sequencer had no effect.
    • Power cycling the Master Timing Sequencer had no effect
    • However, switching the optical fiber connections going to the X and Y ends lead to the green LED on the suspect port on the Master Timing Sequencer (originally the X end fiber was plugged in here) turning back ON when the Y end fiber was plugged in.
    • This suggested a problem with the slave timing card, and not the master. 
  5. Koji and I then did the following at the X-end electronics rack:
    • Shutdown c1iscex, toggled the switches in the front and back of the expansion chassis
    • Disconnect AC power from rear of c1iscex as well as the expansion chassis. This meant all LEDs in the expansion chassis went off, except a single one labelled "+5AUX" on the PCB - to make this go off, we had to disconnect a jumper on the PCB (see Attachment #2), and then toggle the power switches on the front and back of the expansion chassis (with the AC power still disconnected). Finally all lights were off.
    • Confident we had completely cut all power to the board, we then started re-connecting AC power. First we re-started the expansion chassis, and then re-booted c1iscex.
    • The lights on the slave timing card came on (including the one that pulses at ~1pps, which indicates normal operation)!
  6. Then we went back to the control room, and essentially repeated bullet points 2 and 3, but starting with c1iscex instead of c1ioo.
  7. The last twist in this tale was that though all the models came back online, the DC errors on c1sus models persisted. No amount of "mxstream restart", "global diag reset", or restarting fb would make these go away.
  8. Eventually, Koji noticed that there was a large discrepancy in the gpstimes indicated in c1x02 (the IOP model on c1sus), compared to all the other IOP models (even though the PDT displayed was correct). There were also a large number or IRIG-B errors indicated on the same c1x02 status screen, and the "TIM" indicator in the status word was red.
  9. Turns out, running ntpdate before restarting all the models somehow doesn't sync the gps time - so this was what was causing the DC errors. 
  10. So we did a hard reboot of c1sus (and for good measure, repeated the bullet points of 5 above on c1sus and its expansion chassis). Then, we tried starting the c1x02 model without running ntpdate first (on startup, there is an 8 hour mismatch between the actual time in Pasadena and the system time - but system time is 8 hours behind, so it isn't even somehow syncing to UTC or any other real timezone?)
    • Model started up smoothly
    • But there was still a 1 second discrepancy between the gpstime on c1x02 and all the other IOPs (and the 8 hour discrepancy between displayed PDT and actual time in Pasadena)
    • So we tried running ntpdate after starting c1x02 - this finally fixed the problem, gpstime and PDT on c1x02 agreed with the other frontends and the actual time in Pasadena.
    • However, the models on c1lsc and c1ioo crashed
    • So we restarted the IOPs on both these machines, and then the rest of the models.
  11. Finally, we ran "mxstream restart", "global diag reset", and restarted fb, to make the CDS overview screen look like it does now.

Why does ntpdate behave this way? And only on one of the frontends? And what is the remaining RFM error? 

Koji then restarted the IMC autolocker and FSS slow processes on megatron. The IMC locked almost immediately. The MC2 transmon indicated a large shift in the spot position, and the PMC transmission is also pretty low (while the lab temperature equilibrates after the AC being off during peak daytime heat). So the MC transmission is ~14500 counts, while we are used to more like 16,500 counts nowadays.

Re-alignment of the IFO remains to be done. I also did not restart the end lasers, or set up the Marconi with nominal params. 

Attachment #3 - Status of the Master Timing Sequencer after various reboots and power cycling of front ends and associated electronics.

Attachment #4 - Warning lights on C1IOO

Quote:

Today's recovery seems to be a lot more complicated than usual.

So current status is that all front-end models except those hosted on C1IOO are back up and running. Further recovery efforts in progress.  

 

  13035 | Fri Jun 2 16:02:34 2017 | gautam | Update | General | Power glitch

Today's recovery seems to be a lot more complicated than usual.

  • The vertex area of the lab is pretty warm - I think the ACs are not running. The wall switch-box (see Attachment #1) shows some red lights which I'm pretty sure are usually green. I pressed the push-buttons above the red light, hopefully this fixed the AC and the lab cools down soon.
  • Related to the above - C1IOO has a bunch of warning orange indicator lights ON that suggest it is feeling the heat. Not sure if that is why, but I am unable to bring any of the C1IOO models back online - the rtcds compilation just fails, after which I am unable to ssh back into the machine as well.
  • C1SUS was problematic as well. I found that the expansion chassis was not powered. Fortunately, this was fixed by simply switching to the one free socket on the power strip that powers a bunch of stuff on 1X4 - this brought the expansion chassis back alive, and after a soft reboot of c1sus, I was able to get these models up and running. Fortunately, none of the electronics seem to have been damaged. Perhaps it is time for surge-protecting power strips inside the lab area as well (if they aren't already)? 
  • I was unable to successfully resolve the dmesg problem alluded to earlier. Looking through some forums, I gather that the output of dmesg should be written to a file in /var/log/. But no such file exists on any of our 5 front-ends (it does on Megatron, for example). So is this way of setting up the front end machines deliberate? Why does this matter? Because it seems that the buffer which we see when we simply run "dmesg" on the console gets periodically cleared. So sometime back, when I was trying to verify that the installed DACs are indeed 16-bit DACs by looking at dmesg, running "dmesg | head" showed a first line that was written to well after the last reboot of the machine. Anyway, this probably isn't a big deal, and I also verified during the model recompilation that all our DACs are indeed 16-bit.
  • I was also trying to set up the Upstart processes on megatron such that the MC autolocker and FSS slow control scripts start up automatically when the machine is rebooted. But since C1IOO isn't co-operating, I wasn't able to get very far on this front either...

So current status is that all front-end models except those hosted on C1IOO are back up and running. Further recovery efforts in progress.  

GV Jun 5 6pm: From my discussion with jamie, I gather that the fact that the dmesg output is not written to file is because our front-ends are diskless (this is also why the ring buffer, which is what we are reading from when running "dmesg", gets cleared periodically)

 

Quote:

Looks like there was a power glitch at around 10am today.

All frontends, FB, Megatron, Optimus were offline. Chiara reports an uptime of 666 days, so it looks like its UPS works fine. PSL was tripped, probably the end lasers too (yet to check). Slow machines seem alright (they respond to ping, and I can also telnet into them).

Since all the frontends have to be re-started manually, I am taking this opportunity to investigate some cds issues like the lack of a dmesg log file on some of the frontends. So the IFO will be offline for sometime.

 

  13034 | Fri Jun 2 12:32:16 2017 | gautam | Update | General | Power glitch

Looks like there was a power glitch at around 10am today.

All frontends, FB, Megatron, Optimus were offline. Chiara reports an uptime of 666 days, so it looks like its UPS works fine. PSL was tripped, probably the end lasers too (yet to check). Slow machines seem alright (they respond to ping, and I can also telnet into them).

Since all the frontends have to be re-started manually, I am taking this opportunity to investigate some cds issues like the lack of a dmesg log file on some of the frontends. So the IFO will be offline for sometime.

  13033 | Fri Jun 2 01:22:50 2017 | gautam | Update | ASS | ASS restoration work

I started by checking if shaking an optic in pitch really moves it in pitch - i.e. how much PIT to YAW coupling is there. The motivation being if we aren't really dithering the optics in orthogonal DoFs, the demodulated error signals carry mixed information which the dither alignment servos get confused by. First, I checked with a low frequency dither (~4Hz) and looked at the green transmission on the video monitors. The spot seemed to respond reasonably orthogonally to both pitch and yaw excitations on either ITMY or ETMY. But looking at the Oplev control signal spectra, there seems to be a significant amount of cross coupling. ITMY YAW, ETMY PIT, and ETMY YAW have the peak in the orthogonal degree of freedom at the excitation frequency roughly 20% of the height of the DoF being driven. But for ITMY PIT, the peaks in the orthogonal DoFs are almost of equal height. This remains true even when I changed the excitation frequencies to the nominal dither alignment servo frequencies.

I then tried to see if I could get parts of the ASS working. I tried to manually align the ITM, ETM and TTs as best as I could. There are many "alignment references" - prior to the coil driver board removal, I had centered all Oplevs and also checked that both X and Y green beams had nominal transmission levels (~0.4 for GTRY, ~0.5 for GTRX). Then there are the Transmon QPDs. After trying various combinations, I was able to get good IR transmission, and reasonable GTRY.

Next, I tried running the ASS loops that use error signals demodulated at the ETM dither frequencies (so actuation is on the ITM and TT1 as per the current output matrix which I did not touch for tonight). This worked reasonably well - Attachment #1 shows that the servos were able to recover good IR transmission when various optics in the Y arm were disturbed. I used the same oscillator frequencies as in the existing burt snapshot. But the amplitudes were tweaked.

Unfortunately I had no luck enabling the servos that demodulate the ITM dithers.

The plan for daytime work tomorrow is to check the linearity of the error signals in response to static misalignment of some optics, and then optimize the elements of the output matrix.

I am uploading a .zip file with Sensoray screen-grabs of all the test-masses in their best aligned state from tonight (except ITMX face, which for some reason I can't grab).

And for good measure, the Oplev spot positions - Attachment #3.

Quote:

While Gautam was working on the restoration of the Y-arm ASS, I worked on the X-arm.

 

  13032 | Fri Jun 2 00:54:08 2017 | Koji | Update | ASS | Xarm ASS restoration work

While Gautam was working on the restoration of the Y-arm ASS, I worked on the X-arm.

Basically, I have changed the oscillator freqs and amps so as to have linear signals to the misalignment of the mirrors.
Also reduced the complexity of the input/output matrices to avoid any confusion.

Now the ITM dither takes care of the ITM alignment, and the ETM dither takes care of the ETM alignment.
The cavity alignment servos (4dofs) are running fine although the control band widths are still low (<0.1Hz).
The ETM spot positions should be controlled by the BS alignment, but the signal quality in those loops seems suspect.

While Gautam was touching the input TTs, we occasionally saw anomalously high transmission of the arm cavities (~1.2).
We decided to use this beam, as the difference could have indicated partial clipping of the beam somewhere in the input optics chain.

Then the arm cavity was aligned to have reasonably high transmission for the green beam, i.e. using the green power monitor PD as a part of the alignment reference.

This resulted in very stable transmission of both the IR and green beams. We liked them. We decided to use this as the reference beam, at least for now.

Attachment 1: GTRX image at the end of the work.

Attachment 2: ASSX screen shot

Attachment 3: ASSX servo screen shot

Attachment 4: Green ASX servo screen shot

Attachment 5: Screen shot of the ASS X strip tool

Attachment 6: Screen shot of the ASS X input matrix

Attachment 7: Screen shot of the ASS X output matrix

  13031 | Thu Jun 1 20:16:11 2017 | rana | Update | Cameras | GigE installation in the IFO area

Good installation. I think the images are still out of focus, so try to resolve into some small dots at the low exposure setting.

  13030 | Thu Jun 1 16:21:55 2017 | Steve | Update | SUS | wire standoffs update

Ruby wire standoffs received from China. I looked at one of them with our small USB camera. They did a good job. The long edges of the prism are chipped.

The v-groove cutter must avoid them. Pictures will follow.

 
