  14532   Wed Apr 10 23:37:59 2019   gautam | Update | PSL | PSL fan is noisy

Attached is my phone recording of what it sounds like right now in the PSL enclosure - not good for frequency noise measurement! The culprit is the little PC fan that is hacked onto the back of the Innolight controller. 

  1. Is this necessary?
  2. If so, is it sufficient to replace this fan with one from our stock?
  14533   Thu Apr 11 01:10:05 2019   gautam | Update | ALS | Large 2kHz peak (and harmonics) in ALS X

These weren't present last week. The peaks are present in the EX PDH error monitor signal, and so are presumably connected with the green locking system. My goal tonight was to see if the arm length control could be done using the ALS error signal as opposed to POX, but I was not successful.

  14541   Mon Apr 15 10:20:44 2019   gautam | Update | Optical Levers | BS Oplev PIT was oscillating

The AS spot on the camera was oscillating at ~3 Hz. Looking at the Oplevs, the culprit was the BS PIT DoF. Started about 12 hours ago, not sure what triggered it. I disabled Oplev damping, and waited for the angular motion to settle down a bit, and then re-enabled the servo - damps fine now...

  14544   Mon Apr 15 22:39:10 2019   gautam | Update | Frequency noise measurement | Alternate setup with PSL pickoff

[anjali, gautam]

Just the main points here; Anjali is going to fill out the details.

To rule out mode-matching as the reason for non-ideal output from the MZ, I suggested using the setup I have on the NW side of the PSL enclosure for the measurement. This uses two identical fiber collimators, and the distance between collimator and recombination BS is approximately the same, so the spatial modes should be pretty well matched. 

The spooled fiber we found was not suitable for use as it had a wide key connector and I couldn't find any wide-key FC/PC to narrow-key FC/APC adaptors. So we decided to give the fiber going to the Y end and back (~90m estimated length) a shot. We connected the two fibers at the EY table using a fiber mating sleeve (so the fiber usually bringing the IR pickoff from EY to the PSL table was disconnected from its collimator). 

In summary, we cannot explain why the contrast of the MZ is <5%. Spatial mode-overlap is definitely not to blame. Power asymmetry in the two arms of the MZ is one possible explanation, could also be unstable polarization, even though we think the entire fiber chain is PM. Anjali is investigating.

 


We saw today that the Thorlabs PM beam splitters (borrowed from Andrew until our AFW components arrive) do not treat the two special axes (fast and slow) of the fiber on equal footing. When we coupled light into the fast axis, we saw huge asymmetry between the two split arms of the beamsplitter (3:1 ratio in power instead of the expected 1:1 for a 50/50 BS). Looking at the patch cord with an IR viewer, we also saw light leaking through the core along it. Turns out this part is meant to be used with light coupled to the slow axis only.

  14545   Mon Apr 15 22:55:34 2019   gautam | Frogs | Thermal Compensation | Lab thermostat adjusted

It is feeling cold in the office area. According to the digital wall clock near the coffee machine, it is 19C. Rana bumped the thermostat setpoint up by 2F (from 75F to 77F). We need to setup long-term monitoring.

  14546   Tue Apr 16 22:06:51 2019   gautam | Update | VAC | Vac interlock tripped again

This happened again about 30,000 seconds ago (~2:06pm local time, according to the logfile). The cited error was the same -

2019-04-16 14:06:05,538 - C1:Vac-error_status => VA6 closed. AC power loss.

Hard to believe there was any real power loss - nothing else in the lab seems to have been affected - so I am inclined to suspect a buggy UPS communication channel. The PSL shutter was not closed; I believe the condition for closing it is P1a exceeding 3 mtorr (it is at 1 mtorr right now), but perhaps this should be modified to close the PSL shutter in the event of any interlock tripping. It is also probably not a bad idea to send an email alert to the lab mailing list whenever a vac interlock trips.
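If we do add the email alert, something as simple as the following would do (a sketch only - the SMTP host and addresses are placeholders, and it would be called from wherever interlock.py detects the trip):

import smtplib
from email.message import EmailMessage

def send_interlock_alert(error_status,
                         smtp_host='localhost',           # placeholder SMTP relay
                         to_addr='40m@example.edu'):       # placeholder mailing list address
    """Email the lab list when a vac interlock condition trips."""
    msg = EmailMessage()
    msg['Subject'] = 'C1 vacuum interlock tripped'
    msg['From'] = 'c1vac@example.edu'                      # placeholder sender
    msg['To'] = to_addr
    msg.set_content('Vac interlock tripped with status:\n%s' % error_status)
    with smtplib.SMTP(smtp_host) as smtp:
        smtp.send_message(msg)

# e.g. send_interlock_alert('VA6 closed. AC power loss.')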

For tonight, I only plan to work with the EX ALS system anyway, so I'm closing the PSL shutter. I'll work with Chub to restore the vacuum tomorrow if he deems it okay.

  14547   Wed Apr 17 00:43:38 2019   gautam | Update | Frequency noise measurement | MZ interferometer ---> DAQ
  1. Delay fiber was replaced with 5m (~30 nsec delay)
    • The fringing of the MZ was way too large even with the free running NPRO (~3 fringes / sec)
    • Since the V/Hz is proportional to the delay, I borrowed a 5m patch cable from Andrew/ATF lab, wrapped it around a spool, and hooked it up to the setup
    • Much more satisfactory fringing rate (~1 wrap every 20 sec) was observed with no control to the NPRO
  2. MZ readout PDs hooked up to ALS channels
    • To facilitate further quantitative study, I hooked up the two PDs monitoring the two ports of the MZ to the channels normally used for ALS X.
    • The inputs of the ZHL-3A amplifiers were disconnected and the amps were turned off. The cables to their outputs were then hijacked to pipe the DC PD signals to the 1Y3 rack.
    • Unfortunately there isn't a DQ-ed fast version of this data (would require a model restart of c1lsc which can be tricky), but we can already infer the low freq fringing rate from overnight EPICS data and also use short segments of 16k data downloaded "live" for the frequency noise measurement.
    • Channels are C1:ALS-BEATX_FINE_I_IN1 and C1:ALS-BEATX_FINE_Q_IN1 for 16k data, and C1:ALS-BEATX_FINE_I_INMON and C1:ALS-BEATX_FINE_Q_INMON for 16 Hz.

At some point I'd like to reclaim this setup for ALS, but in the meantime Anjali can work on characterization/noise budgeting. Since we have some CDS signals, we can even think of temperature control of the NPRO using pythonPID to keep the fringe in the linear regime for an extended period of time.
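A minimal sketch of what such a slow fringe-holding servo could look like (this is not the existing pythonPID script; the channel names and gains below are placeholders and would need to be replaced with the actual MZ readout and NPRO temperature channels):

import time
from epics import caget, caput

ERR_CH   = 'C1:ALS-BEATX_FINE_I_INMON'   # MZ readout used as the error signal
CTRL_CH  = 'C1:PSL-FSS_SLOWDC'           # placeholder for the NPRO temperature actuator
SETPOINT = 0.0                           # mid-fringe value, to be measured
KP, KI   = 1e-4, 1e-5                    # loop gains, to be tuned

integ = 0.0
while True:
    err = caget(ERR_CH) - SETPOINT
    integ += err
    caput(CTRL_CH, caget(CTRL_CH) - (KP * err + KI * integ))
    time.sleep(1.0)                      # ~1 Hz update is plenty for a thermal loop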

  14548   Wed Apr 17 00:50:17 2019   gautam | Update | ALS | Large 2kHz peak (and harmonics) in ALS X no more

I looked into this issue today. Initially, my thinking was that I'd somehow caused clipping in the beampath somewhere which was causing this 2kHz excitation. However, on looking at the spectrum of the in-loop error signal today (Attachment #1), I found no evidence of the peak anymore!

Since the vacuum system is in a non-nominal state, and also because my IR ALS beat setup has been hijacked for the MZ interferometer, I don't have an ALS spectrum, but the next step is to try single arm locking using the ALS error signal. To investigate whether the 2kHz peak is a time-dependent feature, I left the EX green locked to the arm (with the SLOW temperature offloading servo ON), hopefully it stays locked overnight...

Quote:

These weren't present last week. The peaks are present in the EX PDH error monitor signal, and so are presumably connected with the green locking system. My goal tonight was to see if the arm length control could be done using the ALS error signal as opposed to POX, but I was not successful.

  14549   Wed Apr 17 11:01:49 2019   gautam | Update | ALS | Large 2kHz peak (and harmonics) in ALS X no more

EX green stayed locked to XARM length overnight without a problem. The spectrogram doesn't show any alarming time varying features around 2 kHz (or at any other frequency).

  14550   Wed Apr 17 18:12:06 2019   gautam | Update | VAC | Vac interlock tripped again

After getting the go ahead from Chub and Jon, I restored the Vacuum state to "Vacuum normal", see Attachment #1. Steps:

  1. Interlock code modifications
    • Backed up /opt/target/python/interlocks/interlock_conditions.yaml to /opt/target/python/interlocks/interlock_conditions_UPS.yaml
    • The "power_loss" condition was removed for every valve and pump inside /opt/target/python/interlocks/interlock_conditions.yaml
    • The interlock service was restarted using sudo systemctl restart interlock.service
    • Looking at the status of the service, I saw that it was dying roughly every second.
    • Traced this down to the way the "pump_managers" are initialized from /opt/target/python/interlocks/interlock_conditions.yaml - the code doesn't play nice if no conditions are specified for a pump in the yaml file. For now, I just commented this part out (a more defensive alternative is sketched after the diff below). The git diff:
  2. Restoring vacuum normal:
    • Spun up TP1, TP2 and TP3
    • Opened up foreline of TP1 to TP2, and then opened main volume to TP1
    • Opened up annulus foreline to TP3, and then opened the individual annular volumes to TP3.
controls@c1vac:/opt/target/python/interlocks$ git diff interlock.py
diff --git a/python/interlocks/interlock.py b/python/interlocks/interlock.py
index 28d3366..46a39fc 100755
--- a/python/interlocks/interlock.py
+++ b/python/interlocks/interlock.py
@@ -52,8 +52,8 @@ class Interlock(object):
         self.pumps = []
         for pump in interlocks['pumps']:
             pm = PumpManager(pump['name'])
-            for condition in pump['conditions']:
-                pm.register_condition(*condition)
+            #for condition in pump['conditions']:
+            #    pm.register_condition(*condition)
             self.pumps.append(pm)
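For reference, instead of commenting the loop out, a more defensive version of the same block (an untested sketch, assuming each pump entry in the yaml is a dict that may simply omit 'conditions') would be:

        self.pumps = []
        for pump in interlocks['pumps']:
            pm = PumpManager(pump['name'])
            # Tolerate pumps with a missing or empty 'conditions' list in the yaml
            for condition in pump.get('conditions') or []:
                pm.register_condition(*condition)
            self.pumps.append(pm)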

So far the pressure is coming down smoothly, see Attachment #2. I'll keep an eye on it.

PSL shutter was opened at 645pm local time. IMC locked almost immediately.

Update 11pm: The pressure has reached 8.5e-6 torr without hiccup. 

  14551   Thu Apr 18 22:35:23 2019   gautam | Update | SUS | ETMY actuator diagnosis

[rana, gautam]

Rana did a checkout of my story about oddness of the ETMY suspension. Today, we focused on the actuators - the goal was to find the correct coefficients on the 4 face coils that would result in diagonal actuation (i.e. if we actuate on PIT, it only truly moves the PIT DoF, as witnessed by the Oplev, and so on for the other DoFs). Here are the details:

  1. Ramp times for filter modules:
    • All the filter modules in the output matrix did not have ramp times set.
    • We used python, cdsutils and ezca to script the writing of a 3 second ramp to all the elements of the 5x6 output matrix.
    • The script lives at /opt/rtcds/caltech/c1/scripts/cds/addRampTimes.py and can be used as a template for similar scripts that initialize large numbers of channels (limiters, ramp times etc.); a minimal sketch of this kind of bulk write appears after this list.
  2. Bounce mode checkout:
    • ​The motivation here was to check if there is anomalously large coupling of the bounce mode to any of the other DoFs for ETMY relative to the other optics
    • The ITMs have a different (~15.9 Hz) bounce mode frequency compared to the ETMs (~16.2 Hz).
    • I hypothesize that this is because the ETMs were re-suspended in 2016 using new suspension wire.
    • We should check out specs of the wires, look for either thickness differences or alloying composition variation (Steve has already documented some of this in the elog linked above). Possibly also check out the bounce mode for a 250g load on the table top.
  3. Step responses for PIT and YAW
    • With the Oplevs disabled (but other local damping loops engaged), we applied a step of 100 DAC counts to the PIT and YAW DoFs from the realtime system (one at a time)
    • We saw significant cross-coupling of the YAW step coupling to PIT, at the level of 50%.
  4. OSEM coil coefficient balancing
    • I had done this a couple of months ago looking at the DC gain of the 1/f^2 pendulum response.
    • Rana suggested an alternate methodology 
      • we used the lock-in amplifier infrastructure on the SUS screens to drive a sine wave
      • Frequencies were chosen to be ~10.5 Hz and ~13.5 Hz, to be outside the Oplev loop bandwidth
      • Tests were done with the Oplev loop engaged. The Oplev error signal was used as a diagnostic to investigate the PIT/YAW cross coupling.
      • In the initial tests, we saw coupling at the 20% level. If the Oplev head is rotated by 0.05 rad relative to the "true" horizontal-vertical coordinate system, we'd expect 5% cross coupling. So this was already a red flag (i.e. it is hard to believe that Oplev QPD shenanigans are responsible for our observations). We decided to re-diagonalize the actuation.
      • The output matrix elements for the lock-in-amplifier oscillator signals were adjusted by adding some amount of YAW to the PIT elements (script lives at /opt/rtcds/caltech/c1/scripts/SUS/stepOutMat.py), and vice versa, and we tried to reduce the height of the cross-coupled peaks (viewed on DTT using exponential weighting, 4 avgs, 0.1 Hz BW - note that the DTT cursor menu has a peak find option!). DTT Template saved at /users/Templates/SUS/ETMY-actDiag.xml
      • This worked really well for minimizing PIT response while driving YAW, not as well for minimizing YAW in PIT. 
      • Next, we added some YAW to a POS drive to minimize the any signal at this drive frequency in the Oplev YAW error signal. Once that was done, we minimized the peak in the Oplev PIT error signal by adding some amount of PIT actuation.
      • So we now have matrices that minimize the cross coupling between these DoFs - the idea is to back out the actuation coefficients for the 4 OSEM coils that gives us the most diagonal actuation, at least at AC. 
  5. Next steps:
    • All of our tests tonight were at AC - once the coil balancing has been done at AC, we have to check the cross coupling at DC. If everything is working correctly, the response should also be fairly well decoupled at DC, but if not, we have to come up with a hypothesis as to why the AC and DC responses are different.
    • Can we gain any additional info from driving the pringle mode and minimizing it in the Oplev error signals? Or is the problem overconstrained?
    • After the output matrix diagonalization is done, drive the optic in POS, PIT and YAW, and construct the input matrix this way (i.e. transfer function), as an alternative to the usual free-swinging ringdown method. Look at what kind of an input matrix we get.
    • Repeat the free-swinging ringdown with the ETMY bias voltage adjusted such that all the OSEM PDmons report ~100 um different position from the "nominal" position (i.e. when the Y arm cavity is aligned). Investigate whether the resulting eigenmode frequencies / Qs are radically different. I'm setting the optic free-swinging on my way out tonight. Optic kicked at 1239690286.
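Referring back to item 1 above, here is a minimal sketch of this kind of bulk EPICS write (not the actual addRampTimes.py; it assumes the standard TO_COIL_i_j output-matrix naming and uses pyepics rather than ezca/cdsutils):

from epics import caput

OPTIC, RAMP_S = 'ETMY', 3
# 5 coil outputs x 6 DoF inputs in the SUS output matrix
for i in range(1, 6):
    for j in range(1, 7):
        caput('C1:SUS-%s_TO_COIL_%d_%d_TRAMP' % (OPTIC, i, j), RAMP_S)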
  14552   Thu Apr 18 23:10:12 2019   gautam | Update | Loss Measurement | X arm misaligned

Yehonathan wanted to take some measurements for loss determination. I misaligned the X arm completely and we installed a PD on the AS table so there is no light reaching the AS55 and AS110 PDs. Yehonathan will post the detailed elog.

  14554   Fri Apr 19 11:36:23 2019   gautam | Update | SUS | No consistent solution for output matrix

There isn't a consistent set of OSEM coil gains that explains the best actuation vectors we determined yesterday. Here are the explicit matrices:

  1. POS (tuned to minimize excitation at ~13.5 Hz in the Oplev PIT and YAW error signals): \begin{bmatrix} \text{UL} \\ \text{UR} \\ \text{LL} \\ \text{LR} \end{bmatrix} = \begin{bmatrix} 0.98 \\ 0.96 \\ 1.04 \\ 1.02 \end{bmatrix}
  2. PIT (tuned to minimize the cross-coupled peak in the Oplev YAW error signal at ~10.5 Hz): \begin{bmatrix} \text{UL} \\ \text{UR} \\ \text{LL} \\ \text{LR} \end{bmatrix} = \begin{bmatrix} 0.64 \\ 1.12 \\ -1.12 \\ -0.64 \end{bmatrix}
  3. YAW (tuned to minimize the cross-coupled peak in the Oplev PIT error signal at ~13.5 Hz): \begin{bmatrix} \text{UL} \\ \text{UR} \\ \text{LL} \\ \text{LR} \end{bmatrix} = \begin{bmatrix} 1.5 \\ -0.5 \\ 0.5 \\ -1.5 \end{bmatrix}

There isn't a solution to the matrix equation \mathrm{diag}(\alpha_1, \alpha_2, \alpha_3, \alpha_4) \begin{bmatrix} 1 & 1 & 1 \\ 1 & 1 & -1 \\ 1 & -1 & 1 \\ 1 & -1 & -1 \end{bmatrix} =\begin{bmatrix} 0.98 & 0.64 & 1.5 \\ 0.96 & 1.12 & -0.5 \\ 1.04 & -1.12 & 0.5 \\ 1.02 & -0.64 & -1.5 \end{bmatrix}, i.e. we cannot simply redistribute the actuation vectors we found as gains on the individual coils and preserve the naive actuation matrix. What this means is that in the OSEM coil basis, the actuation eigenvectors aren't the naive ones we would expect for POS, PIT and YAW. We can instead put these custom eigenvectors into the output matrix, but I'm struggling to think of what the physical implication is, i.e. what does it mean for the actuation vectors for PIT, YAW and POS to be not only scaled but also non-orthogonal (though still linearly independent) at ~10 Hz, which is well above the resonant frequencies of the pendulum? The PIT and YAW eigenvectors are the least orthogonal, with the angle between them ~40 degrees rather than the expected 90 degrees.
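A quick numerical way to see this (a sketch; it just fits a single gain per coil to the three measured columns and looks at the residual):

import numpy as np

M_naive = np.array([[1,  1,  1],
                    [1,  1, -1],
                    [1, -1,  1],
                    [1, -1, -1]], dtype=float)   # rows: UL, UR, LL, LR; cols: POS, PIT, YAW
M_meas  = np.array([[0.98,  0.64,  1.5],
                    [0.96,  1.12, -0.5],
                    [1.04, -1.12,  0.5],
                    [1.02, -0.64, -1.5]])

# Best single gain per coil: alpha_i = <naive_i, meas_i> / |naive_i|^2
alpha = np.sum(M_naive * M_meas, axis=1) / np.sum(M_naive**2, axis=1)
resid = M_meas - alpha[:, None] * M_naive
print(alpha)
print(np.abs(resid).max())   # ~0.5, i.e. no single set of coil gains reproduces the measurement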

Quote:

So we now have matrices that minimize the cross coupling between these DoFs - the idea is to back out the actuation coefficients for the 4 OSEM coils that gives us the most diagonal actuation, at least at AC. 

  14556   Fri Apr 19 14:06:36 2019   gautam | Update | PSL | Innolight NPRO shutoff

When I got back from lunch just now, I noticed that the PMC TRANS and REFL cameras were showing no spots. I went onto the PSL table, and saw that the NPRO was in fact turned off. I turned it back on.

The laser was definitely ON when I left for lunch around 1:30pm, and this happened around 1:40pm. Anjali says no one was in the lab in between. None of the FEs are dead, suggesting there wasn't a labwide power outage, and the EX and EY NPROs were not affected. I had previously pulled out the diagnostics connector that is logged by the Acromag; I'm restoring it now in the hope that we can get some more info on what exactly happened if this is a recurring event. So FSS_RMTEMP isn't working from now on. The sooner we get the PSL Acromag crate together, the better...

  14558   Fri Apr 19 16:19:42 2019   gautam | Update | SUS | Actuation matrix still not orthogonal

I repeated the exercise from yesterday, this time driving the butterfly mode [+1 -1 -1 +1] and adding the tuned PIT and YAW vectors from yesterday to it to minimize appearance in the Oplev error signals. 

The measured output matrix is \begin{bmatrix} 0.98 & 0.64 & 1.5 & 1.037 \\ 0.96 & 1.12 & -0.5 & -0.998 \\ 1.04 & -1.12 & 0.5 & -1.002 \\ 1.02 & -0.64 & -1.5 & 0.963 \end{bmatrix}, where rows are the coils in the order [UL,UR,LL,LR] and columns are the DOFs in the order [POS,PIT,YAW,Butterfly]. The conclusions from my previous elog still hold though - the orthogonality between PIT and YAW is poor, so this output matrix cannot be realized by a simple gain scaling of the coil outputs. The "adjustment matrix", i.e. the 4x4 matrix by which we must multiply the "ideal" output matrix to get the measured output matrix, has a condition number of 134 (a condition number of 1 is ideal, signifying closeness to the identity matrix).

Quote:

let us have 3 by 4, nevermore

so that the number of columns is no less

and no more

than the number of rows

so that forevermore we live as 4 by 4

  14560   Fri Apr 19 20:21:52 2019   gautam | Update | PSL | Innolight NPRO shutoff

Happened again at ~730pm.

The NPRO diag channels don't really tell me what happened in a causal way, but the interlock channel seems suspicious. Why is the nominal value 0.04 V? From the manual, it looks like the TGUARD is an indication of deviations between the set temperature and actual diode laser temperature. Is it normal for it to be putting out 11V?

I'm not going to turn it on again right now while I ponder which of my hands I need to chop off.

Quote:
 

I'm restoring it now in the hope we can get some more info on what exactly happened if this is a recurring event.

  14562   Mon Apr 22 22:43:15 2019   gautam | Update | SUS | ETMY sensor diagnosis

Here are the results from this test. The data for 17 April is with the DC bias for ETMY set to the nominal values (which gives good Y arm cavity alignment), while on 18 April, I changed the bias values until all four shadow sensors reported values that were at least 100 cts different from 17 April. The times are indicated in the plot titles in case anyone wants to pull the data (I'll point to the directory where they are downloaded and stored later).

There are 3 visible peaks. There was negligible shift in position (<5 mHz)  / change in Q of any of these with the applied Bias voltage. I didn't attempt to do any fitting as it was not possible to determine which peak corresponds to which DoF by looking at the complex TFs between coils (at each peak, different combinations of 3 OSEMs have the same phase, while the fourth has ~180 deg phase lead/lag). FTR, the wiki leads me to expect the following locations for the various DoFs, and I've included the closest peak in the current measured data in parentheses:

DoF | Frequency [Hz]
POS | 0.982 (0.947)
PIT | 0.86 (0.886)
YAW | 0.894 (0.886)
SIDE | 1.016 (0.996)

However, this particular SOS was re-suspended in 2016, and this elog reports substantially different peak positions, in particular, for the YAW DoF (there were still 4). The Qs of the peaks from last week's measurements are in the range 250-350.

Quote:

Repeat the free-swinging ringdown with the ETMY bias voltage adjusted such that all the OSEM PDmons report ~100 um different position from the "nominal" position (i.e. when the Y arm cavity is aligned). Investigate whether the resulting eigenmode frequencies / Qs are radically different. I'm setting the optic free-swinging on my way out tonight. Optic kicked at 1239690286.

  14566   Wed Apr 24 16:06:44 2019   gautam | Update | PSL | Innolight NPRO shutoff

After discussing with Koji, I turned the NPRO back on again at ~4PM local time. I first dialed the injection current down to 0 A, then switched the control unit to ON, and then ramped the power back up using the front panel dial. Lasing started at 0.5 A, and I saw no abrupt swings in the power (I used PMC REFL as a monitor; there were some mode flashes, which are the dips seen in the power, and the x-axis is in units of time, not pump current). The PMC was relocked and the IMC autolocker locked the IMC almost immediately.

Now we wait and watch I guess.

  14567   Wed Apr 24 17:07:39 2019   gautam | Update | SUS | c1susaux in-situ testing [and future of IFOtest]

[jon, gautam]

For the in-situ test, I decided that we will use the physical SRM to test the c1susaux Acromag replacement crate functionality for all 8 optics (PRM, BS, ITMX, ITMY, SRM, MC1, MC2, MC3). To facilitate this, I moved the backplane connector of the SRM SUS PD whitening board from the P1 connector to P2, per Koji's mods at ~5:10PM local time. Watchdog was shutdown, and the backplane connectors for the SRM coil driver board was also disconnected (this is interfaced now to the Acromag chassis).

I had to remove the backplane connector for the BS coil driver board in order to have access to the SRM backplane connector. Room in the back of these eurocrate boxes is tight in the existing config...

At ~6pm, I manually powered down c1susaux (as I did not know of any way to turn off the EPICS server run by the old VME crate in a software way). The point was to be able to easily interface with the MEDM screens. So the slow channels prefixed C1:SUS-* are now being served by the Supermicro called c1susaux2.

A critical wiring error was found. The channel mapping prepared by Johannes lists the watchdog enable BIO channels as "C1:SUS-<OPTIC>_<COIL>_ENABLE", which go to pins 23A-27A on the P1 connector, with returns on the corresponding C pins. However, we use the "TEST" inputs of the coil driver boards for sending in the FAST actuation signals. The correct BIO channels for switching this input are actually "C1:SUS-<OPTIC>_<COIL>_TEST", which go to pins 28A-32A on the P1 connector. For today's tests, I voted to fix this inside the Acromag crate for the SRM channels, and do our tests. Chub will unfortunately have to fix the remaining 7 optics, see Attachment #1 for the corrections required. I apportion 70% of the blame to Johannes for the wrong channel assignment, and accept 30% for not checking it myself.

The good news: the tests for the SRM channels all passed!

  • Attachment #2: Output of Jon's testing code. My contribution is the colored logs courtesy of python's coloredlogs package, but this needs a bit more work - mainly the PASS message needs to be green (a possible way to do this is sketched after this list). This test applies bias voltages to PIT/YAW, and looks for the response in the PDmon channels. It backs out the correct signs for the four PDs based on the PIT/YAW actuation matrix, and checks that the optic has moved "sufficiently" for the applied bias. You can also see that the PD signals move with consistent signs when PIT/YAW misalignment is applied. Additionally, the DC values of the PDMon channels reported by the Acromag system are close to what they were using the VME system. I propose calling the next iteration of IFOtest "Sherlock".
  • Attachment #3: Confirmation (via spectra) that the SRM OSEM PD whitening can still be switched even after my move of the signals from the P1 connector to the P2 connector. I don't have an explanation right now for the shape of the SIDE coil spectrum.
  • Attachment #4: Applied 100 cts (~ 100*10/2**15/2 ~ 15mV at the monitor point) offset at the bias input of the coil output filters on SRM (this is a fast channel). Looked for the response in the Coil Vmon channels (these are SLOW channels). The correct coil showed consistent response across all 5 channels.
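One possible way to get the green PASS out of coloredlogs (a sketch, not part of the current test code; it registers a custom SUCCESS level and assigns it a green style):

import logging
import coloredlogs

SUCCESS = 25                                  # between INFO (20) and WARNING (30)
logging.addLevelName(SUCCESS, 'SUCCESS')

styles = dict(coloredlogs.DEFAULT_LEVEL_STYLES)
styles['success'] = {'color': 'green', 'bold': True}
coloredlogs.install(level='DEBUG', level_styles=styles)

log = logging.getLogger('ifotest')
log.log(SUCCESS, 'PASS: SRM PIT/YAW actuation test')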

Additionally, I confirmed that the watchdog tripped when the RMS OSEM PD voltage exceeded 200 counts. Ideally we'd have liked to test the stability of the EPICS server, but we have shut it down and brought the crate back out to the electronics bench for Chub to work on tomorrow.

I restarted the old VME c1susaux at 915pm local time as I didn't want to leave the watchdogs in an undefined state. Unsurprisingly, ITMY is stuck. Also, the BS (cable #22) and SRM (cable #40) coil drivers are physically disconnected at the front DB15 output because of the undefined backplane inputs. I also re-opened the PSL shutter.

  14569   Thu Apr 25 00:30:45 2019   gautam | Update | SUS | ETMY BR mode

We briefly talked about the bounce and roll modes of the SOS optic at the meeting today. 

Attachment #1: BR modes for ETMY from my free-swinging run on 17 April. The LL coil has a very different behavior from the others.

Attachment #2: BR modes for ETMY from my free-swinging run on 18 April, which had a macroscopically different bias voltage for the PIT/YAW sliders. Here too, the LL coil has a very different behavior from the others.

Attachment #3: BR modes for ETMX from my free-swinging run on 27 Feb. There are many peaks in addition to the prominent ones visible here, compared to ITMY. The OSEM PD noise floor for UR and SIDE is mysteriously x2 lower than for the other 3 OSEMs???

In all three cases, a bounce mode around 16.4 Hz and a roll mode around 24.0 Hz are visible. The ratio between these is not sqrt(2), but is ~1.46, which is ~3% larger. But when I look at the database, I see that in the past, the bounce and roll modes were in fact close to these frequencies.

In conclusion:

  1. the evidence thus far says that ETMY has 5 resonant modes in the free-swinging data between 0.5 Hz and 25 Hz.
  2. Either two modes are exactly degenerate, or there is a constraint in the system which removes 1 degree of freedom.
  3. How likely is the latter? Any mechanical constraint that removes one degree of freedom would presumably also damp the Qs of the other modes more than what we are seeing.
  4. Can some large piece of debris on the barrel change the PIT/YAW eigenvectors such that the eigenvalues became exactly degenerate?
  5. Furthermore, the AC actuation vectors for PIT and YAW are not close to orthogonal, but are rotated ~45 degrees relative to each other.

Because of my negligence and rushing the closeout procedure, I don't have a great close-out picture of the magnet positions in the face OSEMs, the best I can find is Attachment #4. We tried to replicate the OSEM arrangement (orientation of leads from the OSEM body) from July 2018 as closely as possible.

I will investigate the side coil actuation strength tomorrow, but if anyone can think of more in-air tests we should do, please post your thoughts/poetry here.

  14570   Thu Apr 25 01:03:29 2019   gautam | Update | PSL | MC trans is ~1000 cts (~7%) lower than usual

When dialing up the current, I went up to 2.01 A on the front panel display, which is what I remember it being. The label on the controller is from when the laser was still putting out 2W, and says the pump current should be 2.1 A. Anyhow, the MC transmission is ~7% lower now (14500 cts compared to the usual 15000-15500 cts), even after tweaking the PMC alignment to minimize PMC REFL. Potentially there is less power coming out of the NPRO. I will measure it at the window tomorrow with a power meter.

  14573   Thu Apr 25 10:25:19 2019   gautam | Update | Frequency noise measurement | Homodyne v Heterodyne

If I understand correctly, the Mach-Zehnder readout port power is only a function of the differential phase accumulated between the two interfering light beams. In the homodyne setup, this phase difference can come about because of either fiber length change OR laser frequency change. We cannot directly separate the two effects. Can you help me understand what advantage, if any, the heterodyne setup offers in this regard? Or is the point of going to heterodyne mainly for the feedback control, as there is presumably some easy way to combine the I and Q outputs of the heterodyne measurement to always produce an error signal that is a linear function of the differential phase, as opposed to the sin^2 in the free-running homodyne setup? What is the scheme for doing this operation in a high bandwidth way (i.e. what is supposed to happen to the demodulated outputs in Attachment #3 of your elog)? What is the advantage of the heterodyne scheme over applying temperature feedback to the NPRO with 0.5 Hz tracking bandwidth so that we always stay in the linear regime of the homodyne readout?
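(For my own notes, I presume the linearization is simply that, with the demodulated outputs I \propto \cos\phi and Q \propto \sin\phi, the phase can be reconstructed over the full fringe rather than only near mid-fringe:

\phi = \operatorname{atan2}(Q, I), \qquad \phi = 2\pi\nu\tau \;\Rightarrow\; \delta\phi = 2\pi\tau\,\delta\nu \quad \text{for a fixed differential delay } \tau.

But I'd like this confirmed, particularly how it is supposed to be done at high bandwidth.)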

Also, what is the functional form of the curve labelled "Theory" in Attachment #2? How did you convert from voltage units in Attachment #1 to frequency units in Attachment #2? Does it make sense that you're apparently measuring laser frequency noise above 10 Hz? i.e. where do the "Dark Current Noise" and "Shot Noise" traces for the experiment lie relative to the blue curve in Attachment #2? Can you point to where the data is stored, and also add a photo of the setup?

  14575   Thu Apr 25 11:27:11 2019   gautam | Update | VAC | PSL shutter re-opened

This activity seems to have closed the PSL shutter (actually I'm not sure why that happened - the interlock should only trip if P1a exceeds 3 mtorr, and looking at the time series for the last 2 hours, it did not ever exceed this threshold). I saw no reason for it to remain closed so I re-opened it just now.

I vote for not remotely rebooting any of the vacuum / PSL subsystems. In the event of something going catastrophically wrong, someone should be on hand to take action in the lab.

  14577   Thu Apr 25 17:31:56 2019   gautam | Update | PSL | Innolight NPRO shutoff

NPRO shutoff at ~1517 local time this afternoon. Again, not many clues from the NPRO diagnostics channels, but to my eye, the D1_POW channel shows the first deviation from the "steady state", followed by the other channels. This is ~0.1 sec before the other channels register any change, so I don't know how much we can trust the synchronization of the EPICS data streams. I won't turn it on again for now. I did check that the little fan on the back of the NPRO controller is still rotating.

gautam 10am 4/29: I also added a longer term trend of these diagnostic channels, no clear trends suggesting a fault are visible. The y-axis units for all plots are in Volts, and the data is sampled at 16 Hz.

Quote:

Now we wait and watch I guess.

  14582   Sun Apr 28 16:00:17 2019   gautam | Update | Computer Scripts / Programs | List of suspension tests

Here are some tests we should script.

  1. Checking Coil Vmons, OSEM PDmons, and watchdog enable wiring
    • Disable input to all the coil output filter modules using C1:SUS-<OPTIC>_<COIL>_SWSTAT (this is to prevent the damping loop control signals from being sent to the suspension)
    • Set ramptimes for these filter modules to 0 seconds.
    • Apply a step of 100 cts (~15 mV) using the offset field of this filter module (so the test signal is being generated by the fast CDS system).
    • Confirm that the step shows up in the correct coil Vmon channel with the appropriate size (in volts), and that other Vmons don't show any change (need to check the sign as well, based on the overall gain in this filter module).
    • Confirm that the largest response in the PDmon signals is for the same OSEM. There will be some cross-coupling but I think we always expect the largest response to be in the OSEM whose magnet we pushed provided the transimpedances are the same across all 5 coils (which is true except for PRM side), so this should be a robust criterion.
    • Take the step off using the watchdog enable field, C1:SUS-<OPTIC>_<COIL>_COMM. This allows us to confirm the watchdog signal wiring as well.
    • Reset ramptimes, watchdogs, input states to filter modules, and offsets to their pre-test values.
    • This test should tell us that the wiring assignments are correct, and that the Acromag ADC inputs are behaving as expected and are calibrated to volts.
    • This test should be done one channel at a time to check that the wiring assignments are correct (a minimal scripted sketch of this test follows the list).
  2. Checking the SUS PD whitening
    • Measure spectrum of individual PD input (fast CDS) channels above 30 Hz with the whitening in a particular state
    • Toggle the whitening state.
    • Confirm that the whitened sensor noise above 30 Hz is below the unwhitened case (which is presumably ADC noise).
    • This test should be done one channel at a time to check the wiring assignments are correct.
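Referring back to test 1 above, a minimal sketch of how it could be scripted (pyepics rather than ezca; the coil filter-module names and especially the slow Vmon channel naming are placeholders to be checked against the actual database, and the watchdog/restore steps are omitted):

import time
from epics import caget, caput

OPTIC = 'SRM'
COILS = ['ULCOIL', 'URCOIL', 'LLCOIL', 'LRCOIL', 'SDCOIL']   # placeholder FM names

# Step 1/2 of the test: kill the damping inputs and ramp times on all coil FMs
for coil in COILS:
    caput('C1:SUS-%s_%s_SWSTAT' % (OPTIC, coil), 0)
    caput('C1:SUS-%s_%s_TRAMP' % (OPTIC, coil), 0)

# Step 3/4: step one coil at a time and look for the response in its Vmon
for coil in COILS:
    fm = 'C1:SUS-%s_%s' % (OPTIC, coil)
    vmon = 'C1:SUS-%s_%sVmon' % (OPTIC, coil)   # placeholder slow Vmon channel name
    v0 = caget(vmon)
    caput(fm + '_OFFSET', 100)                  # ~15 mV step at the monitor point
    time.sleep(5)
    dv = caget(vmon) - v0
    caput(fm + '_OFFSET', 0)                    # take the step back off
    print('%s %s: dVmon = %+.4f V' % (OPTIC, coil, dv))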

Checking the Acromag DAC calibration is more complicated, I think. There are measurements of the actuator calibration in units of nm/ct for the fast DACs, but these are only valid above the pendulum resonance frequency, and I'm not sure we can synchronously drive a 10 Hz sine wave using the EPICS channels. The current test, which drives the PIT/YAW DoFs with a DC misalignment and measures the response in the PDmon channels, is a bit ad hoc in the way we set the "expected" response that serves as the PASS/FAIL criterion. Moreover, the cross-coupling between the PDmon channels may be quite high. Needs some thought...

  14583   Mon Apr 29 16:25:22 2019   gautam | Update | PSL | PSL turned on again

I turned the 2W NPRO back on again at ~4pm local time, dialing the injection current up from 0-2A in ~2 mins. I noticed today that the lasing only started at 1A, whereas just last week, it started lasing at 0.5A. After ~5 minutes of it being on, I measured 950 mW after the 11/55 MHz EOM on the PSL table. The power here was 1.06 W in January, so ~💯  mW lower now. 😮 

I found out today that the way the python FSS SLOW PID loop is scripted, if it runs into an EZCA error (due to the c1psl slow machine being dead), it doesn't handle it gracefully (it just gets stuck). I rebooted the crate for now and the MC autolocker is running fine again.

NPRO turned off again at ~8pm local time after Anjali was done with her data taking. I measured the power again, it was still 950mW, so at least the output power isn't degrading over 4 hours by an appreciable amount...

  14584   Mon Apr 29 16:34:27 2019   gautam | Update | Electronics | ITMX/ITMY mis-labelling fixed at 1X4 and 1X5

After the X and Y arm naming conventions were changed, the labelling of the electronics in the eurocrates was not changed 😞 😔 😢 . This meant that when we hooked up the new Acromag crate, all the slow ITMX channels were in fact connected to the physical ITMY optic. I ♦️fixed♦️ the labelling now - Attachments #1 and #2 show the coil driver boards and SUS PD whitening boards correctly labelled. Our electronics racks are in desperate need of new photographs.

The "Y" arm runs in the EW direction, while the "X" arm runs in the NW direction as of April 29 2018.

ITMX was freed. ITMY, which was being worked on, is now also free.

  14587   Thu May 2 10:41:50 2019   gautam | Update | SUS | SOS Magnet polarity

A concern was raised about the two ETMs and ITMX having the opposite response (relative to the other 7 SOS optics) in the OSEM PDmon channels for a given polarity of PIT/YAW offset applied to the coils. Jon has taken into account all the digital gains in the actuation part of the CDS system in reaching this conclusion. I raised the possibility of the OSEM coil winding direction being opposite on the 15 OSEMs of the ETMs and ITMX, but I think it is more likely that the magnets are just glued on opposite to the way they are "supposed" to be. See Attachment #6 of this elog (you'll have to rotate the photo either in your head or in your viewer) and note that it is opposite to what is specified in the assembly procedure, page 8. The net magnetic quadrupole moment is still 0, but the direction of actuation in response to a given direction of current in the coil would be opposite. I can't find magnet polarities for all 10 SOS optics, but this hypothesis fits all the evidence so far.

  14591   Fri May 3 09:12:31 2019   gautam | Update | SUS | All vertex SUS watchdogs were tripped

I found the 8 vertex watchdogs tripped this morning. The ETMs were fine, suggesting this was not an actual earthquake. I suspect it was connected to this remote work? Was there a reason why they were left tripped?

On a side note - I don't think we log the watchdog state explicitly. We can infer whether the optic is damped by looking at the OSEM sensor time series, but do we want to record the watchdog state to frames?

  14592   Fri May 3 12:48:40 2019   gautam | Update | SUS | 1X4/1X5 cable admin

Chub and I crossed off some of these items this morning. The last bullet was addressed by Jon yesterday. I added a couple of new bullets.

The new power connectors will arrive next week, at which point we will install them. Note that there is no 24V Sorensen available, only 20V.

I am running a test on the 2W Mephisto, for which I wanted the diagnostics connector plugged in again and Acromag channels recording it. So we set up the highly non-ideal but temporary setup shown in Attachment #1. This will be cleaned up by Monday evening at the latest.

update 1630 Monday 5/6: the sketchy PSL acromag setup has been disassembled.

Quote:
 
  • Take photos of the new setup, cabling.
  • Remove the old c1susaux crate from the rack to free up space, possibly put the PSL monitoring acromag chassis there.
  • Test that the OSEM PD whitening switching is working for all 8 vertex optics.(verified as of 5/3/19 5pm)
  • New 15V and 24V power cables with standard LIGO connectors need to be run from the Sorensen supplies in 1X5. The chassis is currently powered by bench supplies sitting on a cart behind the rack.
  • All 24 new DB-37 signal cables need to be labeled.
  • New 96-pin DIN connectors need to be put on two ribbon cables (1Y5_80 B, 1Y5_81) in the 1X4 rack. We had to break these connectors to remove them from the back of the eurocrates.
  • General cleanup of any cables, etc. left around the rack. We cleaned up most things this evening.
  • Rename the host computer c1susaux2 --> c1susaux, and update the DNS lookup tables on chiara.
  14593   Fri May 3 12:51:58 2019   gautam | Update | PSL | PSL turned on again

Per instructions from Coherent, I made some changes to the NPRO settings. The value we were operating at is in the column labelled "Operating value", while the value from the Innolight test datasheet is in the rightmost column. I changed the Xtal temp and pump current to the values Innolight tested them at (but not the diode temps, as they were close and require a screwdriver to adjust), and turned the laser on again at ~12:45pm local time. The Acromag channels are recording the diagnostic information.

update 2:30pm - looking at the trend, I saw that D2 TGuard channel was reporting 0V. This wasn't the case before. Suspecting a loose contact, I tightened the DSub connectors at the controller and Acromag box ends. Now it too reports ~10V, which according to the manual signals normal operation. So if one sees an abrupt change in this channel in the long trend since 1245pm, that's me re-seating the connector. According to the manual, an error state would be signalled by a negative voltage at this pin, up to -12V. Also, the Innolight manual says pin 13 of the diagnostics connector is indicating the "Interlock" state, but doesn't say what the "expected" voltage should be. The newer manual Coherent sent me has pin13 listed as "Do not use".

Setting | Operating value | Value Innolight tested at
Diode 1 temp [C] | 20.74 | 21.98
Diode 2 temp [C] | 21.31 | 23.01
Xtal temp [C] | 29.39 | 25.00
Pump current [A] | 2.05 | 2.10

  14594   Fri May 3 15:40:33 2019   gautam | Update | General | CVI 2" beamsplitters delivered

Four new 2" CVI 50/50 beamsplitters (2 for p-pol and 2 for s-pol) were delivered. They have been stored in the optics cabinet, along with the "Test Data" sheets from CVI.

  14595   Mon May 6 10:51:43 2019   gautam | Update | PSL | PSL turned off again

As we have seen in the last few weeks, the laser turned itself off after a few hours of running. So bypassing the lab interlock system / reverting laser crystal temperature to the value from Innolight's test datasheet did not fix the problem.

I do not understand why the "Interlock" and "TGUARD" channels revert, a few minutes after the shutoff, to the values they had while the laser was lasing. Is this just an artefact of the way the diagnostics are set up, or is it telling us something about what is causing the shutoff?

  14599   Thu May 9 19:50:04 2019   gautam | Update | PSL | PSL turned off again

This time, it stayed on for ~24 hours. I am not going to turn it on again today as the crane inspection is tomorrow and we plan to keep the VEA a laser safe area for speedy crane inspection.

But what is the next step? If these diode temps maximize the power output of the NPRO, then it isn't a good idea to raise the TEC setpoint further, so should I just turn it on again with the same settings?

I did not turn the HEPA down on the PSL enclosure. I also turned off the NPROs at EX and EY so now all the four 1064nm lasers in the VEA are turned OFF (for crane inspection).

Quote:

locked PMC at 1900 PT; let's see how long it lasts.

My hunch is that the TECs are working too hard and can't offload the heat onto the heat sinks. As the diodes degrade, more of the electrical power is converted to heat in the diodes rather than 808 nm photons. So hopefully the increased airflow will help.

 
  14602   Fri May 10 15:18:04 2019   gautam | Update | PSL | Some work on/around PSL table
  1. In anticipation of installing the new fan on the PSL, I disconnected the old fan and finally removed the bench power supply from the top shelf.
  2. Moved said bench supply to under the south-west corner of the PSL table.
  3. Installed temporary Acromag crate, now with two ADC cards, under the PSL table and hooked it up to the bench supply (+15 VDC). Also ran an ethernet cable from 1X3 to the box over the overhead cable tray and connected it.
  4. Brought other end of 25-pin D-sub cable used to monitor the NPRO diagnostics channels from 1X4/1X5 to the PSL table. Rolled the excess length up and cable tied it, the excess is sitting on top of the PSL enclosure. Key parts of the setup are shown in Attachments #1-3. This is not an ideal setup and is only meant to get us through to the install of the new c1psl/c1ioo Acromag crate.
  5. Edited the modbus config file at /cvs/cds/caltech/target/c1psl2/npro_config.cmd to add Jon's new ADC card to the list.
  6. Edited EPICS database file at /cvs/cds/caltech/target/c1psl2/psl.db to add entries for the C1:PSL-FSS_RMTEMP and C1:PSL-PMC_PMCTRANSPD channels.
  7. Hooked up said channels to the physical ADC inputs via a DB15 cable and breakout board on the PSL table.
    CH0 --- FSS_RMTEMP (Pins 5/18 of the DB25 connector on the interface box to pins 1/9 of the Acromag DB15 connector)
    CH1 --- PMC TRANS (BNC cable from PD to pomona minigrabber to pins 2/10 of the Acromag DB15 connector)
    CH2-6 are unused currently and are available via the DB15 breakout board shown in Attachment #3. CH7 is not connected at the time of writing.
    The pin-out for the temperature sensor interface box may be found here. Restarted the modbus process. The channels are now being recorded, see Attachment #4 (a quick pyepics sanity check is sketched below), although when checking the status of the modbus process I get an error message (status output further below); I'm not sure what that's about.
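A quick sanity check that the new channels are actually updating after the modbus restart (a pyepics sketch; RMTEMP changes slowly, so give it a minute):

import time
from epics import caget

for ch in ['C1:PSL-FSS_RMTEMP', 'C1:PSL-PMC_PMCTRANSPD']:
    v1 = caget(ch)
    time.sleep(60)
    v2 = caget(ch)
    print('%s: %g -> %g (%s)' % (ch, v1, v2, 'updating' if v1 != v2 else 'static?'))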

So now we can monitor both the temperature of the enclosure (as reported by the sensor on the PSL table) and the NPRO diagnostics channels. The new fan for the controller has not been installed yet, due to us not having a good mounting solution for the new fans, all of which have a bigger footprint than the installed fan. But since the laser isn't running right now, this is probably okay.

modbusPSL.service - ModbusIOC Service via procServ
   Loaded: loaded (/etc/systemd/system/modbusPSL.service; disabled)
   Active:
active (running) since Fri 2019-05-10 13:17:54 PDT; 2h 13min ago
  Process: 8824 ExecStop=/bin/kill -9 ` cat /run/modbusPSL.pid`
(code=exited, status=1/FAILURE)
 Main PID: 8841 (procServ)
   CGroup: /system.slice/modbusPSL.service
           ├─8841 /usr/bin/procServ -f -L /home/controls/modbusPSL.log -p /run/modbusPSL.pid 8009 /cvs/cds/rtapps/epics-3.14.12.2_long/module...
           ├─8842 /cvs/cds/rtapps/epics-3.14.12.2_long/modules/modbus/bin/linux-x86_64/modbusApp /cvs/cds/caltech/target/c1psl2/npro_config.c...
           └─8870 caRepeater

May 10 13:17:54 c1auxex systemd[1]: Started ModbusIOC Service via procServ.

  14603   Fri May 10 18:24:29 2019   gautam | Update | NoiseBudget | aligoNB

I pulled the aligoNB git repo to /ligo/GIT/aligoNB/aligoNB. There isn't a reqs.txt file in the repo so installing the dependencies on individual workstations to get this running is a bit of a pain. I found the easiest thing to do was to setup a virtual environment for the python3 stuff, this way we can run python2 for the cdsutils package (hopefully that gets updated soon). I'm setting up a C1 directory in there, plan is to budget some subsystems like Oplev, ALS for now, and develop the code for the eventual IFO locking. As a test, I ran the H1 noise budget (./aligonb H1), works, so looks like I got all the dependencies...

  14605   Mon May 13 10:45:38 2019   gautam | Update | PSL | PSL turned ON again

I used some double-sided tape to attach a San Ace 60 9S0612H4011 to the Innolight controller (Attachment #1). This particular fan is rated to run with up to 13.8V, but I'm using a +15V Sorensen output - at best, this shortens the lifespan of the fan, but I don't have a better solution for now. Then I turned the laser on again (~1040 local time), using the same settings Rana configured earlier in this thread. PMC was locked, and the IMC also could be locked but I closed the shutter for now while the laser frequency/intensity stabilizes after startup. The purpose is to facilitate completion of the pre-vent alignment checklist in prep for the planned vent tomorrow. PMC Trans reports 0.63 after alignment was optimized, which is ~15% lower than in Oct 2016.

  14606   Mon May 13 18:48:32 2019   gautam | Update | General | Vent prep
  1. c1auxey and c1aux VME crates were keyed.
  2. EX and EY NPROs were turned on.
  3. Y arm was aligned to the IR - best effort TRY ~0.75.
  4. EY green was aligned to the Y arm cavity. The spot is on the lower right quadrant on the CCD monitor, but GTRY ~0.35.
  5. #3 and #4 were repeated for XARM.
  6. All beams were centered on the Oplev and IP POS QPDs with this reference alignment - see Attachment #1. SOS Optic and TT DC bias positions were saved to burt snap files.
  7. I've never really used it but I updated all the SUS "driftmon" values - Attachment #2.
  8. Power going into the IMC was cut from 945 mW to 100 mW (both numbers measured with the FieldMate power meter) by rotating the HWP installed last time for this purpose from 244 degrees (OLD) to 208 degrees (NEW); a quick consistency check on this follows the list. There was no beam dump for the reflected port of the PBS used to cut the power, so I installed one, see Attachment #4.
  9. The T=90% BeamSplitter in the MC REFL path was replaced with a 2" HR mirror as is the norm for the low power IMC locking. Alignment of the MC REFL beam onto the MC REFL PD was tweaked.
  10. init.d file was edited and the MCautolocker initctl process was restarted on Megatron to adopt the low power settings. It was locked, MCT ~1350 counts, see Attachment #3. Also adjusted the threshold level (above which the slow PID offloading of the FSS PZT voltage is active) from 10000 to 1000.
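As a quick consistency check on the power reduction (assuming the old 244 degree setting was close to maximum transmission through the PBS, so the relevant rotation is \Delta\theta_{\mathrm{HWP}} = 36^\circ):

P_{\mathrm{trans}} \approx P_0 \cos^2(2\Delta\theta_{\mathrm{HWP}}) = 945\,\mathrm{mW} \times \cos^2(72^\circ) \approx 90\,\mathrm{mW},

which is roughly consistent with the measured 100 mW.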

I believe this completes the non-Chub portions of the pre-vent checklist, we will start letting air into the main volume ASAP tomorrow morning after crossing off the remaining items.

Main goal of this vent is to investigate the oddness of the YARM suspensions. I leave the PSL NPRO on overnight in the interest of data gathering, it's been running ~10 hrs now - I suspect it'll turn itself off before we are ready to vent in the AM.

  14607   Tue May 14 10:35:58 2019   gautam | Update | General | Vent underway
  1. PSL had stayed on overnight. There was an EQ (M 4.6 near Costa Rica) which showed up on the Seis BLRMS, and I noticed that several optics were reporting Oplev spots off their QPDs (I had just centered these yesterday). So I did a quick alignment check:
    • IMC was readily locked
    • After moving test mass bias sliders to bring Oplev spots back to the center, the EX and EY green beams were readily locked to a TEM00 mode
    • IR flashes could be seen in TRX and TRY (though their levels are low, since we are operating with 1/10th the nominal power).
    • The IP-POS QPD channels were reporting a "segmentation fault" so I keyed the c1iscaux crate and they came back. Still the QPD was reporting a low SUM value, but this too is because of the lower power. Conveniently, there was an ND2.0 filter in the beam path on a flip mount which I just flipped out of the way for the low-power tracking.
    • Then, PSL and green shutters were closed and Oplev loops were disengaged.
  2. Checked that we have an RGA scan from today
  3. During the walkthrough to check the jam nuts, Chub noticed that the outer nuts on the bellows between the OMC chamber and the IMC chamber were loose to the finger! He is tightening them now and checking the remaining jam nuts. AFAIK, Steve made it sound like this was always a formality. Should we be concerned? The other jam nuts are fine according to Chub.
  4. We valved off the pumpspool from the main volume and annuli, and started letting Nitrogen into the main volume at ~1045am.
  5. Started letting instrument grade air into the main volume at ~1130am. We are aiming for a pressure increase of 3 torr/min
  6. 4 cylinders of dry air were exhausted by ~3:30pm. It actually looks like we over-pressured the main volume by ~20 torr - this is bad; we should've stopped the air inlet at 700 psi and then let the volume equilibrate to lab air pressure.
  7. At some point during the vent, the main volume pressure exceeded the working range of the cold cathode gauge CC1. It reports "Current Fail" on its LED display, which I'm assuming means it auto-shut off its HV to protect itself; Jon tells me the vacuum code isn't responsible for initiating any such shutoff.
  8. A new vacuum state was added to reflect these conditions (pumpspool under vacuum, main volume at atmosphere).
  9. The annuli remain under vacuum for now. Tomorrow, when we remove the EY door, we will vent the EY annulus.

IMC was locked, MC2T ~ 1200 cts after some alignment touch-ups. The test mass oplevs indicate some drift, ~100 urad. I didn't realign them.

The EY door removal will only be done tomorrow. I will take some free-swinging ETMY data today (suspension was kicked at 1241919438) to see if anything has changed (it shouldn't have). I need to think up a systematic debugging plan in the meantime.

  14608   Wed May 15 00:40:19 2019   gautam | Update | SUS | ETMY diagnosis plan

I collected some free-swinging data from earlier today evening. There are still only 3 peaks visible in the ASDs, see Attachment #1.

Plan for tomorrow:

TBH, I don't have any clear ideas as to what we are supposed to do to fix the problem (or even what the problem is). So here is my plan for now:

  1. Take pictures of relative position of magnet and OSEM coil for all five coils
  2. Inspect positions of all EQ stops - back them well out if any look suspiciously close
  3. Inspect suspension wire for any kinks
  4. Inspect position of suspension wire in standoff

I anticipate that these will throw up some more clues.

  14609   Wed May 15 10:56:47 2019   gautam | Update | PSL | PSL turned ON again

To test whether the fan replacement (rather than the increased HEPA airflow) is what has had an effect on the NPRO shutoff phenomenon, I turned the HEPA on the PSL table down to the nominal 30% setting at ~10am.

Tomorrow I will revert the laser crystal temperature to whatever the nominal value was. If the NPRO runs in that configuration (i.e. the only change from March 2019 are the diode TEC setpoints and the new fan on the back of the controller), then hurray.

  14610   Wed May 15 10:57:57 2019   gautam | Update | SUS | EY chamber opened

[chub, gautam]

  1. Vented the EY annulus.
  2. Took the heavy door off, put it on the wooden rack, put a light door on at ~11am.
  14611   Wed May 15 17:46:24 2019   gautam | Update | SUS | ETMY inspection

I setup the usual mini-cleanroom setup around the ETMY chamber. Then I carried out the investigative plan outlined here.

Main finding: I saw a fiber of what looks like first contact on the bottom left (as viewed from the HR side) of ETMY, connecting the optic to the cage. See Attachment #1. I don't know that this can explain the problem with the missing eigenmode, since it's not a hard constraint, but it seems like something that should be addressed in any case. How do we want to remove this? Just use a tweezer and pull it off, or apply a larger FC patch and then pull it off? I'm pretty sure it's first contact and not a piece of PEEK mesh because I can see it is adhered to the HR side of the optic, but I couldn't capture that detail in a photo.

There weren't any obvious problems with the magnet positioning inside the OSEMs, or with the suspension wire. All the EQ stop tips were >3mm away from the optic.

I also backed out the bottom EQ stops on the far (south side) of the optic by ~2 full turns of the screw. Taking another free-swinging dataset now to see if anything has changed. I will upload all the photos I took, with annotations, to the gPhotos later today eve. Light doors back on at ~1730.

Update 10pm: the photos have been uploaded. I've added a "description" to each photo which should convey the message of that particular shot; it shows up in my browser on the bottom left of the photo, but can also be accessed by clicking the "info" icon. Please have a look and comment if something sticks out as odd / requires correction.

Update 1045pm: I looked at the freeswinging data from earlier today. Still only 3 peaks around 1 Hz.

The following optics were kicked:
ETMY
Wed May 15 17:45:51 PDT 2019
1242002769
  14613   Thu May 16 13:07:14 2019   gautam | Update | SUS | First contact residue removal

I  used a pair of tweezers to remove the stray fiber of first contact. As Koji predicted, this was rather dry and so it didn't have the usual elasticity, so while I was able to pull most of it off, there is a small spot remaining on the HR surface of the ETM. We will remove this with a fresh application of a small patch of FC.

In the meantime, I'm curious whether this has actually fixed the suspension woes, so yet another round of free-swinging data collection is ongoing. From the first 5 mins it looks positive - I see 4 peaks around 1 Hz. Cool!

The following optics were kicked:
ETMY
Thu May 16 13:06:39 PDT 2019
1242072418

Update 730pm: There are now four well-defined peaks around 1 Hz. Together with the bounce and roll modes, that makes six. The peak at 0.92 Hz, which I believe corresponds to the YAW eigenmode, is significantly lower than the other three. I want to get some info about the input matrix, but there was an NDS dropout and large segments of data aren't available using the python nds fetch method, so I am trying again; ETMY was kicked at 1828 PDT. It may be that we could benefit from some adjustment of the OSEM positions - the coupling of the bounce mode to LL is high, and the SIDE/POS resonances aren't clearly separated either. The stray first contact has to be removed too. But overall I think it was a successful removal, and the suspension characteristics are more in line with what is "expected".

  14614   Thu May 16 22:58:25 2019   gautam | Update | ASS | In air ASS test with green?

We were wondering yesterday if we can somehow test the ASS system in air. Though the arm cavity can be locked with the low power IMC transmission, I think the dither would render the POY lock unstable. But I wonder if we can use the green beam for a test. The steering PZTs installed by Yuki can serve the role of TT1/TT2 and we can dither the arm cavity mirrors while the green TEM00 mode is locked to the arm no problem. This would at least give us confidence that the actuation of ETMY/ITMY are okay (in addition to the other suspension tests). Then on the sensing side, after pumping down, the only thing we'd be foiled by is in-vacuum clipping or some major gunk on ETMY - everything else should be de-buggable even after pumping down?

I think most of the CDS infrastructure for this is already in place.

  14615   Thu May 16 23:31:55 2019   gautam | Update | SUS | ETMY suspension characterization

Here is my analysis. I think there are still some problems with this suspension.

Attachment #1: Time domain plots of the ringdown. The LL coil has peak response ~half of the other face OSEMs. I checked that the signal isn't being railed, the lowest level is > 100 cts.

Attachment #2: Complex TF from UL to the other coils. While there are four peaks now, looking at the phase information, it isn't possible to clearly disentangle PIT or YAW motion - in fact, for all peaks, there are at least three face shadow sensors which report the same phase. The gains are also pretty poorly balanced - e.g. for the 0.77 Hz peak, the magnitude of UR->UL is ~0.3, while LR->UL is ~3. Is it reasonable that there is a factor of 10 imbalance?

Attachment #3: Nevertheless, I assumed the following mapping of the peaks (quoted f0 is from a Lorentzian fit) and attempted to find the input matrix that best converts the sensor basis into the Euler basis.

DoF | f0 [Hz]
POS | 1.004
PIT | 0.771
YAW | 0.920
SIDE | 0.967

Unsurprisingly, the elements of this matrix are very different from unity (I have to fix the normalization of the rows).
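For reference, a bare-bones version of the peak-response inversion used here (a sketch only - the actual analysis uses the fitted complex TFs, and the numbers below are placeholders, not the measured ETMY values):

import numpy as np

# Rows: sensors (UL, UR, LR, LL, SD); columns: eigenmodes (POS, PIT, YAW, SIDE).
# Entries are the signed peak responses of each sensor at each eigenfrequency.
A = np.array([[ 1.0,  1.0,  1.0,  0.1],
              [ 1.0,  1.0, -1.0,  0.1],
              [ 1.0, -1.0, -1.0,  0.1],
              [ 1.0, -1.0,  1.0,  0.1],
              [ 0.1,  0.0,  0.0,  1.0]])

M_in = np.linalg.pinv(A)                            # maps sensor signals -> Euler DoFs
M_in /= np.abs(M_in).max(axis=1, keepdims=True)     # row normalization (still to be fixed up properly)
print(np.round(M_in, 3))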

Attachment #4: Pre and post diagonalization spectra. The null stream certainly looks cleaner, but then again, this is by design so I'm not sure if this matrix is useful to implement.

Next steps:

  1. Repeat the actuator diagonality test detailed here.
  2. ???

In case anyone wants to repeat the analysis, the suspension was kicked at 1828 PDT today and this analysis uses 15000 seconds of data from then onwards.

Update 18 May 3pm: Attachment #5 is a better presentation of the data shown in Attachment #2; the remark about the odd phasing of the coils is more clearly seen in this zoomed-in view. Attachment #6 shows Lorentzian fits to the peaks - the Qs are comparable to those seen for the other optics, although the Q of the 0.77 Hz peak is rather low.

  14617   Fri May 17 10:57:01 2019   gautam | Update | SUS | IY chamber opened

At ~9:30am, I vented the IY annulus by opening VAEV. I checked the particle count; it seemed within the guidelines to allow door opening, so I went ahead and loosened the bolts on the ITMY chamber.

Chub and I took the heavy door off with the vertex crane at ~1015am, and put the light door on.

Diagnosis plan is mainly inspection for now: take pictures of all OSEM/magnet positionings. Once we analyze those, we can decide which OSEMs we want to adjust in the holders (if any). I shut down the ITMY and SRM watchdogs in anticipation of in-chamber work.

Not related to this work: Since the annuli aren't being pumped on, the pressure has been slowly rising over the week. The unopened annuli are still at <1 torr, and the PAN region is at ~2 mtorr.

  14618   Fri May 17 16:07:25 2019   gautam | Summary | Equipment loan | Borrowed component

ZHL-3A (2 units) ---> QIL

Quote:

I borrowed one Marconi (2023 B) from 40 m lab to QIL lab.

  14620   Fri May 17 17:01:08 2019   gautam | Update | SUS | ETMY suspension characterization

To investigate my mapping of the eigenfrequencies to eigenmodes, I checked the Oplev spectra for the last few hours, when the Oplev spot has been on the QPD (but the optic is undamped).

  1. Based on Attachment #1, I can't figure out which peak corresponds to what motion.
    • The most prominent peak (judged by peak height) is at 0.771 Hz for both PITCH and YAW
    • Assuming the peak at 0.92 Hz is the other angular mode, the PIT/YAW decoupling is poor in both peaks, only ~factor of 2 in both cases.
  2. Why are the POS and SIDE resonances sensed so asymmetrically in the PIT and YAW channels? There's a factor of 10 difference there...

So, while I conclude that my first-contact residue removal removed a constraint from the system (hence the pendulum now shows the expected 6 eigenmodes), more thought is needed in judging what the appropriate course of action is.

  14623   Mon May 20 11:33:46 2019   gautam | Update | SUS | ITMY inspection

With Chub providing illumination via the camera viewport, I was able to take photos of ITMY this morning. All the magnets look well clear of the OSEMs, with the possible exception of UR. I will adjust the position of this OSEM slightly. To test if this fix is effective, I will then cycle the bias voltage to the ITM between 0 and the maximum allowed, and check if the optic gets stuck.
