  14558   Fri Apr 19 16:19:42 2019  gautam  Update  SUS  Actuation matrix still not orthogonal

I repeated the exercise from yesterday, this time driving the butterfly mode [+1 -1 -1 +1] and adding the tuned PIT and YAW vectors from yesterday to it to minimize appearance in the Oplev error signals. 

The measured output matrix is \begin{bmatrix} 0.98 & 0.64 & 1.5 & 1.037 \\ 0.96 & 1.12 & -0.5 & -0.998 \\ 1.04 & -1.12 & 0.5 & -1.002 \\ 1.02 & -0.64 & -1.5 & 0.963 \end{bmatrix}, where rows are the coils in the order [UL, UR, LL, LR] and columns are the DOFs in the order [POS, PIT, YAW, Butterfly]. The conclusions from my previous elog still hold though - the orthogonality between PIT and YAW is poor, so this output matrix cannot be realized by a simple scaling of the coil output gains. The "adjustment matrix", i.e. the 4x4 matrix by which we must multiply the "ideal" output matrix to get the measured output matrix, has a condition number of 134 (a condition number of 1 would signify closeness to the identity matrix).
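For reference, this computation can be sketched in a few lines of numpy. The "ideal" matrix below and the convention M_ideal @ A = M_meas are my assumptions; the exact condition number depends on these choices.

import numpy as np

# Measured output matrix: rows = coils [UL, UR, LL, LR],
# columns = DOFs [POS, PIT, YAW, Butterfly]
M_meas = np.array([[0.98,  0.64,  1.5,  1.037],
                   [0.96,  1.12, -0.5, -0.998],
                   [1.04, -1.12,  0.5, -1.002],
                   [1.02, -0.64, -1.5,  0.963]])

# "Ideal" output matrix in the same ordering (all entries +/-1)
M_ideal = np.array([[1,  1,  1,  1],
                    [1,  1, -1, -1],
                    [1, -1,  1, -1],
                    [1, -1, -1,  1]])

# Adjustment matrix A defined by M_ideal @ A = M_meas
A = np.linalg.solve(M_ideal, M_meas)
print(np.round(A, 3))
print('condition number: %.1f' % np.linalg.cond(A))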

Quote:

let us have 3 by 4, nevermore

so that the number of columns is no less

and no more

than the number of rows

so that forevermore we live as 4 by 4

  14559   Fri Apr 19 19:22:15 2019  rana  Update  SUS  Actuation matrix still not orthogonal

If thy left hand troubles thee

then let the mirror show the right

for if it troubles enough to cut it off

it would not offend thy sight

  14561   Mon Apr 22 21:33:17 2019  Jon  Update  SUS  Bench testing of c1susaux replacement

Today I bench-tested most of the Acromag channels in the replacement c1susaux. I connected a DB37 breakout board to each chassis feedthrough connector in turn and tested channels using a multimeter and calibrated voltage source. Today I got through all the digital output channels and analog input channels. Still remaining are the analog output channels, which I will finish tomorrow.

There have been a few wiring issues found so far, which are noted below.

Channel Type Issue
C1:SUS2-PRM_URVMon Analog input No response
C1:SUS2-PRM_LRVMon Analog input No response
C1:SUS2-BS_UL_ENABLE Digital output Crossed with LR
C1:SUS2-BS_LL_ENABLE Digital output Crossed with UR
C1:SUS2-BS_UR_ENABLE Digital output Crossed with LL
C1:SUS2-BS_LR_ENABLE Digital output Crossed with UL
C1:SUS2-ITMY_SideVMon Analog input Polarity reversed
C1:SUS2-MC2_UR_ENABLE Digital output Crossed with LR
C1:SUS2-MC2_LR_ENABLE Digital output Crossed with UR

  14562   Mon Apr 22 22:43:15 2019  gautam  Update  SUS  ETMY sensor diagnosis

Here are the results from this test. The data for 17 April is with the DC bias for ETMY set to the nominal values (which gives good Y arm cavity alignment), while on 18 April, I changed the bias values until all four shadow sensors reported values that were at least 100 cts different from 17 April. The times are indicated in the plot titles in case anyone wants to pull the data (I'll point to the directory where they are downloaded and stored later).

There are 3 visible peaks. There was negligible shift in position (<5 mHz)  / change in Q of any of these with the applied Bias voltage. I didn't attempt to do any fitting as it was not possible to determine which peak corresponds to which DoF by looking at the complex TFs between coils (at each peak, different combinations of 3 OSEMs have the same phase, while the fourth has ~180 deg phase lead/lag). FTR, the wiki leads me to expect the following locations for the various DoFs, and I've included the closest peak in the current measured data in parentheses:

DoF Frequency [Hz]
POS 0.982 (0.947)
PIT 0.86 (0.886)
YAW 0.894 (0.886)
SIDE 1.016 (0.996)

However, this particular SOS was re-suspended in 2016, and this elog reports substantially different peak positions, in particular for the YAW DoF (there were still 4 peaks then). The Qs of the peaks from last week's measurements are in the range 250-350.
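For anyone repeating the phase-based mode identification described above, a minimal sketch (assuming the free-swinging OSEM time series and sample rate fs are already in hand):

import numpy as np
import scipy.signal as sig

def coil_tf(ref, other, fs, nfft=2**14):
    # Transfer function estimate (CSD/PSD) from a reference OSEM to another
    f, Pxy = sig.csd(ref, other, fs=fs, nperseg=nfft)
    _, Pxx = sig.welch(ref, fs=fs, nperseg=nfft)
    return f, Pxy / Pxx

# At each peak f0, the relative phases identify the mode: e.g. for pure PIT
# one expects UL/UR in phase, with LL/LR ~180 deg out of phase.
# f, tf = coil_tf(ul, ur, fs)
# print(np.angle(tf[np.argmin(np.abs(f - f0))], deg=True))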

Quote:

Repeat the free-swinging ringdown with the ETMY bias voltage adjusted such that all the OSEM PDmons report ~100 um different position from the "nominal" position (i.e. when the Y arm cavity is aligned). Investigate whether the resulting eigenmode frequencies / Qs are radically different. I'm setting the optic free-swinging on my way out tonight. Optic kicked at 1239690286.

Attachment 1: ETMY_sensorSpectra_consolidated.pdf
  14563   Tue Apr 23 18:48:25 2019  Jon  Update  SUS  c1susaux bench testing completed

Today I tested the remaining Acromag channels and retested the non-functioning channels found yesterday, which Chub repaired this morning. We're still not quite ready for an in situ test. Here are the issues that remain.

Analog Input Channels

Channel Issue
C1:SUS-MC2_URPDMon No response
C1:SUS-MC2_LRPDMon No response

I further diagnosed these channels by connecting a calibrated DC voltage source directly to the ADC terminals. The EPICS channels do sense this voltage, so the problem is isolated to the wiring between the ADC and DB37 feedthrough.

Analog Output Channels

Channel Issue
C1:SUS-ITMX_ULBiasAdj No output signal
C1:SUS-ITMX_LLBiasAdj No output signal
C1:SUS-ITMX_URBiasAdj No output signal
C1:SUS-ITMX_LRBiasAdj No output signal
C1:SUS-ITMY_ULBiasAdj No output signal
C1:SUS-ITMY_LLBiasAdj No output signal
C1:SUS-ITMY_URBiasAdj No output signal
C1:SUS-ITMY_LRBiasAdj No output signal
C1:SUS-MC1_ULBiasAdj No output signal
C1:SUS-MC1_LLBiasAdj No output signal
C1:SUS-MC1_URBiasAdj No output signal
C1:SUS-MC1_LRBiasAdj No output signal

To further diagnose these channels, I connected a voltmeter directly to the DAC terminals and toggled each channel output. The DACs are outputting the correct voltage, so these problems are also isolated to the wiring between DAC and feedthrough.

In testing the DC bias channels, I did not check the sign of the output signal, but only that the output had the correct magnitude. As a result my bench test is insensitive to situations where either two degrees of freedom are crossed or there is a polarity reversal. However, my susPython scripting tests for exactly this, fetching and applying all the relevant signal gains between pitch/yaw input and coil bias output. It would be very time consuming to propagate all these gains by hand, so I've elected to wait for the automated in situ test.

Digital Output Channels

Everything works.

  14564   Tue Apr 23 19:31:45 2019  Jon  Update  SUS  Watchdog channels separated from autoBurt.req

For the new c1susaux, Gautam and I moved the watchdog channels from autoBurt.req to a new file named autoBurt_watchdogs.req. When the new modbus service starts, it loads the state contained in autoBurt.snap. We thought it best for the watchdogs to not be automatically enabled at this stage, but for an operator to manually have to do this. By moving the watchdog channels to a separate snap file, the entire SUS state can be loaded while leaving just the watchdogs disabled.

This same modification should be made to the ETMX and ETMY machines.

  14567   Wed Apr 24 17:07:39 2019  gautam  Update  SUS  c1susaux in-situ testing [and future of IFOtest]

[jon, gautam]

For the in-situ test, I decided that we will use the physical SRM to test the c1susaux Acromag replacement crate functionality for all 8 optics (PRM, BS, ITMX, ITMY, SRM, MC1, MC2, MC3). To facilitate this, I moved the backplane connector of the SRM SUS PD whitening board from the P1 connector to P2, per Koji's mods, at ~5:10PM local time. The watchdog was shut down, and the backplane connectors for the SRM coil driver board were also disconnected (this is interfaced now to the Acromag chassis).

I had to remove the backplane connector for the BS coil driver board in order to have access to the SRM backplane connector. Room in the back of these eurocrate boxes is tight in the existing config...

At ~6pm, I manually powered down c1susaux (as I did not know of any way to turn off the EPICS server run by the old VME crate in a software way). The point was to be able to easily interface with the MEDM screens. So the slow channels prefixed C1:SUS-* are now being served by the Supermicro called c1susaux2.

A critical wiring error was found. The channel mapping prepared by Johannes lists the watchdog enable BIO channels as "C1:SUS-<OPTIC>_<COIL>_ENABLE", which go to pins 23A-27A on the P1 connector, with returns on the corresponding C pins. However, we use the "TEST" inputs of the coil driver boards for sending in the FAST actuation signals. The correct BIO channels for switching this input are actually "C1:SUS-<OPTIC>_<COIL>_TEST", which go to pins 28A-32A on the P1 connector. For today's tests, I voted to fix this inside the Acromag crate for the SRM channels, and do our tests. Chub will unfortunately have to fix the remaining 7 optics, see Attachment #1 for the corrections required. I apportion 70% of the blame to Johannes for the wrong channel assignment, and accept 30% for not checking it myself.

The good news: the tests for the SRM channels all passed!

  • Attachment #2: Output of Jon's testing code. My contribution is the colored logs courtesy of python's coloredlogs package, but this needs a bit more work - mainly the PASS message needs to be green. This test applies bias voltages to PIT/YAW, and looks for the response in the PDmon channels. It backs out the correct signs for the four PDs based on the PIT/YAW actuation matrix, and checks that the optic has moved "sufficiently" for the applied bias (a minimal sketch of this sign check follows this list). You can also see that the PD signals move with consistent signs when PIT/YAW misalignment is applied. Additionally, the DC values of the PDMon channels reported by the Acromag system are close to what they were using the VME system. I propose calling the next iteration of IFOtest "Sherlock".
  • Attachment #3: Confirmation (via spectra) that the SRM OSEM PD whitening can still be switched even after my move of the signals from the P1 connector to the P2 connector. I don't have an explanation right now for the shape of the SIDE coil spectrum.
  • Attachment #4: Applied 100 cts (~ 100*10/2**15/2 ~ 15mV at the monitor point) offset at the bias input of the coil output filters on SRM (this is a fast channel). Looked for the response in the Coil Vmon channels (these are SLOW channels). The correct coil showed consistent response across all 5 channels.
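The sign check mentioned in the Attachment #2 bullet above can be sketched as follows. The channel names follow the conventions in this elog, but the PIT bias channel name and the sign pattern are illustrative assumptions, not the actual values used by Jon's code.

import time
import numpy as np
from epics import caget, caput  # pyepics

OPTIC = 'SRM'
PDS = ['UL', 'UR', 'LL', 'LR']
PIT_SIGNS = np.array([+1, +1, -1, -1])   # assumed PDmon sign pattern for +PIT

def pdmon():
    return np.array([caget('C1:SUS-%s_%sPDMon' % (OPTIC, pd)) for pd in PDS])

bias = 'C1:SUS-%s_PIT_COMM' % OPTIC      # hypothetical bias channel name
ref, v0 = pdmon(), caget(bias)
caput(bias, v0 + 0.1)                    # apply a small PIT misalignment
time.sleep(5)                            # let the slow channels settle
delta = pdmon() - ref
caput(bias, v0)                          # restore the alignment
ok = np.all(np.sign(delta) == PIT_SIGNS) and np.all(np.abs(delta) > 0.01)
print('PIT bias test:', 'PASS' if ok else 'FAIL')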

Additionally, I confirmed that the watchdog tripped when the RMS OSEM PD voltage exceeded 200 counts. Ideally we'd have liked to test the stability of the EPICS server, but we have shut it down and brought the crate back out to the electronics bench for Chub to work on tomorrow.

I restarted the old VME c1susaux at 915pm local time as I didn't want to leave the watchdogs in an undefined state. Unsurprisingly, ITMY is stuck. Also, the BS (cable #22) and SRM (cable #40) coil drivers are physically disconnected at the front DB15 output because of the undefined backplane inputs. I also re-opened the PSL shutter.

Attachment 1: 2019-04-24_20-29.pdf
Attachment 2: Screenshot_from_2019-04-24_20-05-54.png
Attachment 3: SRM_OSEMPD_WHT_ACROMAG.pdf
Attachment 4: DCVmon.png
  14569   Thu Apr 25 00:30:45 2019  gautam  Update  SUS  ETMY BR mode

We briefly talked about the bounce and roll modes of the SOS optic at the meeting today. 

Attachment #1: BR modes for ETMY from my free-swinging run on 17 April. The LL coil has a very different behavior from the others.

Attachment #2: BR modes for ETMY from my free-swinging run on 18 April, which had a macroscopically different bias voltage for the PIT/YAW sliders. Here too, the LL coil has a very different behavior from the others.

Attachment #3: BR modes for ETMX from my free-swinging run on 27 Feb. There are many peaks in addition to the prominent ones visible here, compared to ITMY. The OSEM PD noise floor for UR and SIDE is mysteriously x2 lower than for the other 3 OSEMs???

In all three cases, a bounce mode around 16.4 Hz and a roll mode around 24.0 Hz are visible. The ratio between these is not sqrt(2), but is ~1.46, which is ~3% larger. But when I look at the database, I see that in the past, the bounce and roll modes were in fact close to these frequencies.
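A quick check of that arithmetic (for an ideal suspension the roll mode is expected to sit sqrt(2) above the bounce mode):

import numpy as np
f_bounce, f_roll = 16.4, 24.0
print(f_roll / f_bounce)                       # ~1.463
print((f_roll / f_bounce) / np.sqrt(2) - 1)    # ~0.035, i.e. a few percent high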

In conclusion:

  1. the evidence thus far says that ETMY has 5 resonant modes in the free-swinging data between 0.5 Hz and 25 Hz.
  2. Either two modes are exactly degenerate, or there is a constraint in the system which removes 1 degree of freedom.
  3. How likely is the latter? Any mechanical constraint that removes one degree of freedom would presumably also damp the Qs of the other modes more than what we are seeing.
  4. Can some large piece of debris on the barrel change the PIT/YAW eigenvectors such that the eigenvalues became exactly degenerate?
  5. Furthermore, the AC actuation vectors for PIT and YAW are not close to orthogonal, but are rotated ~45 degrees relative to each other.

Because of my negligence and rushing the closeout procedure, I don't have a great close-out picture of the magnet positions in the face OSEMs, the best I can find is Attachment #4. We tried to replicate the OSEM arrangement (orientation of leads from the OSEM body) from July 2018 as closely as possible.

I will investigate the side coil actuation strength tomorrow, but if anyone can think of more in-air tests we should do, please post your thoughts/poetry here.

Attachment 1: ETMY_sensorSpectra_BRmode.pdf
Attachment 2: ETMY_sensorSpectra_BRmode.pdf
Attachment 3: ETMX_sensorSpectra_BRmode.pdf
Attachment 4: IMG_5993.JPG
  14581   Fri Apr 26 19:35:16 2019  Jon  Update  SUS  New c1susaux installed, passed first round of scripted testing

[Jon, Gautam]

Today we installed the c1susaux Acromag chassis and controller computer in the 1X4 rack. As noted in 14580 the prototype Acromag chassis had to first be removed to make room in the rack. The signal feedthroughs were connected to the eurocrates by 10' DB-37 cables via adapters to 96-pin DIN.

Once installed, we ran a scripted set of suspension actuation tests using PyIFOTest. BS, PRM, SRM, MC1, MC2, and MC3 all passed these tests. We were unable to test ITMX and ITMY because both appear to be stuck. Gautam will shake them loose on Monday.

Although the new c1susaux is now mounted in the rack, there is more that needs to be done to make the installation permanent:

  • New 15V and 24V power cables with standard LIGO connectors need to be run from the Sorensen supplies in 1X5. The chassis is currently powered by bench supplies sitting on a cart behind the rack.
  • All 24 new DB-37 signal cables need to be labeled.
  • New 96-pin DIN connectors need to be put on two ribbon cables (1Y5_80 B, 1Y5_81) in the 1X4 rack. We had to break these connectors to remove them from the back of the eurocrates.
  • General cleanup of any cables, etc. left around the rack. We cleaned up most things this evening.
  • Rename the host computer c1susaux2 --> c1susaux, and update the DNS lookup tables on chiara.

On Monday we plan to continue with additional scripted tests of the suspensions.


gautam - some more notes:

  • Backplane connectors for the SUS PD whitening boards, which now only serve the purpose of carrying the fast BIO signals used for switching the whitening, were moved from the P1 connector to P2 connector for MC1, MC2, MC3, ITMX, ITMY, BS and PRM.
  • In the process, the connectors for BS and PRM were detached from the ribbon cable (there wasn't any good way to unseat the connector from the shell that I know of). These will have to be repaired by Chub, and the signal integrity will have to be checked (as it has to be for the connectors that are allegedly intact).
  • While we were doing the wiring, I disconnected the outputs of the coil driver board going to the satellite box (front panel DB15 connector on D010001). These were restored after our work for the testing phase.
  • The backplane cables to the eurocrate housing the coil driver boards were also disconnected. They are currently just dangling, but we will have to clean it up if the new crate is performing alright.
  • In general the cable routing cleanliness has to be checked and approved by Chub or someone else qualified. In particular, the power leads to the eurocrate are in the way of the DIN96-DB37 adaptor board of Johannes' design, particularly on the SUS PD eurocrate.
  • Tapping new power rails for the Acromag chassis will have to be done carefully. Ideally we shouldn't have to turn off the Sorensens.
  • There are some software issues we encountered today connected with the networking that have to be understood and addressed in a permanent way.
  • Sooner rather than later, we want to reconnect the Acromag crate that was monitoring the PSL channels, particularly given the NPRO's recent flakiness.
  • The NPRO was turned back on (following the same procedure of slowly dialing up the injection current). Primary motivation to see if the mode cleaner cavity could be locked with the new SUS electronics. Looks like it could. I'm leaving it on over the weekend...
Attachment 1: IMG_3254.jpg
Attachment 2: IMG_3256.jpg
  14587   Thu May 2 10:41:50 2019  gautam  Update  SUS  SOS Magnet polarity

A concern was raised about the two ETMs and ITMX having the opposite response (relative to the other 7 SOS optics) in the OSEM PDmon channel in response to a given polarity of PIT/YAW offset being applied to the coils. Jon has factored in all the digital gains in the actuation part of the CDS system in reaching this conclusion. I raised the possibility of the OSEM coil winding direction being opposite on the 15 OSEMs of the ETMs and ITMX, but I think it is more likely that the magnets are just glued on opposite to what they are "supposed" to be. See Attachment #6 of this elog (you'll have to rotate the photo either in your head or in your viewer) and note that it is opposite to what is specified in the assembly procedure, page 8. The net magnetic quadrupole moment is still 0, but the direction of actuation in response to current in the coil in a given direction would be opposite. I can't find magnet polarities for all 10 SOS optics, but this hypothesis fits all the evidence so far.

  14588   Thu May 2 10:59:58 2019  Jon  Update  SUS  c1susaux in situ wiring testing completed

Summary

Yesterday Gautam and I ran final tests of the eight suspensions controlled by c1susaux, using PyIFOTest. All of the optics pass a set of basic signal-routing tests, which are described in more detail below. The only issue found was with ITMX having an apparent DC bias polarity reversal (all four front coils) relative to the other seven susaux optics. However, further investigation found that ETMX and ETMY have the same reversal, and there is documentation pointing to the magnets being oppositely-oriented on these two optics. It seems likely that this is the case for ITMX as well. 

I conclude that all the new c1susaux wiring/EPICS interfacing works correctly. There are of course other tests that can still be scripted, but at this point I'm satisfied that the new Acromag machine itself is correctly installed. PyIFOTest has been morphed into a powerful general framework for automating IFO tests. Anything involving fast/slow IO can now be easily scripted. I highly encourage others to think of more applications this may have at the 40m.

Usage and Design

The code is currently located in /users/jon/pyifotest although we should find a permanent location for it. From the root level it is executed as

$ ./IFOTest <PARAMETER_FILE>

where PARAMETER_FILE is the filepath to a YAML config file containing the test parameters. I've created a config file for each of the suspended optics. They are located in the root-level directory and follow the naming convention SUS-<OPTIC>.yaml.
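For illustration, such a file might look like the following. The keys and values here are hypothetical -- the real schema is whatever the code in /users/jon/pyifotest defines.

# SUS-PRM.yaml -- hypothetical example, for illustration only
optic: PRM
coils: [UL, UR, LL, LR, SD]
vmon_test:
  offset_counts: 100     # DC offset applied via the fast coil output filter
  settle_time: 5         # seconds to wait before reading back the VMons
pdmon_test:
  bias_step: 0.1         # DC misalignment applied in PIT, then in YAW
  min_response: 0.01     # minimum acceptable PDMon change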

The code climbs a hierarchical "ladder" of actuation/readback-paired tests, with the test at each level depending on signals validated in the preceding level. At the base is the fast data system, which provides an independent reference against which the slow channels are tested. There are currently three scripted tests for the slow SUS channels, listed in order of execution:

  1. VMon test:  Validates the low-frequency sensing of SUS actuation (VMon channels). A DC offset is applied in the final filter module of the fast coil outputs, one coil at a time. The test confirms that the VMon of the actuated coil, and only this VMon, senses the displacement, and that the response has the correct polarity. The screen output is a matrix showing the change in VMon responses with actuation of each coil. A passing test, roughly, is diagonal values >> 0 and off-diagonal values << diagonal (a minimal sketch of this pass logic appears after this list).

  2. Coil Enable test:  Validates the slow watchdog control of the fast coil outputs (Coil-Enable channels). Analogously to (1), this test also applies a DC offset via the fast system to one coil at a time and analyzes the VMon responses. However, in this case, the offset is applied to all five coils simultaneously and only one coil output is enabled at a time. The screen output is again a \Delta VMon matrix interpreted in the same way as above.

     

  3. PDMon/DC Bias test:  Validates slow alignment control and readback (BiasAdj and PDMon channels). A DC misalignment is introduced first in pitch, then in yaw, with the OSEM PDMon responses measured in both cases. Using the gains from the PIT/YAW---> COIL output coupling matrix, the script verifies that each coil moves in the correct direction and by a sufficiently large magnitude for the applied DC bias. The screen output shows the change in PDMon responses with a pure pitch actuation, and with a pure yaw actuation. The output filter matrix coefficients have already been divided out, so a passing test is a sufficiently large, positive change under both pitch and yaw actuations.
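The pass logic for the \Delta VMon matrices in tests (1) and (2) amounts to a diagonal-dominance check, sketched here with assumed thresholds:

import numpy as np

def vmon_pass(dV, diag_min=0.1, offdiag_ratio=0.1):
    # dV[i][j] = change in VMon of coil i when coil j is actuated
    dV = np.abs(np.asarray(dV, dtype=float))
    diag = np.diag(dV)
    offdiag = dV - np.diag(diag)
    return bool(np.all(diag > diag_min) and
                np.all(offdiag < offdiag_ratio * diag.min()))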

     

  14591   Fri May 3 09:12:31 2019  gautam  Update  SUS  All vertex SUS watchdogs were tripped

I found the 8 vertex watchdogs tripped this morning. The ETMs were fine, suggesting this was not an actual earthquake. I suspect it was connected to this remote work? Was there a reason why they were left tripped?

On a side note - I don't think we log the watchdog state explicitly. We can infer whether the optic is damped by looking at the OSEM sensor time series, but do we want to record the watchdog state to frames?

Attachment 1: SUSwatchdogs.png
  14592   Fri May 3 12:48:40 2019  gautam  Update  SUS  1X4/1X5 cable admin

Chub and I crossed off some of these items this morning. The last bullet was addressed by Jon yesterday. I added a couple of new bullets.

The new power connectors will arrive next week, at which point we will install them. Note that there is no 24V Sorensen available, only 20V.

I am running a test on the 2W Mephisto, for which I wanted the diagnostics connector plugged in again and Acromag channels to record the signals. So we set up the highly non-ideal but temporary setup shown in Attachment #1. This will be cleaned up by Monday evening latest.

update 1630 Monday 5/6: the sketchy PSL acromag setup has been disassembled.

Quote:
 
  • Take photos of the new setup, cabling.
  • Remove the old c1susaux crate from the rack to free up space, possibly put the PSL monitoring acromag chassis there.
  • Test that the OSEM PD whitening switching is working for all 8 vertex optics.(verified as of 5/3/19 5pm)
  • New 15V and 24V power cables with standard LIGO connectors need to be run from the Sorensen supplies in 1X5. The chassis is currently powered by bench supplies sitting on a cart behind the rack.
  • All 24 new DB-37 signal cables need to be labeled.
  • New 96-pin DIN connectors need to be put on two ribbon cables (1Y5_80 B, 1Y5_81) in the 1X4 rack. We had to break these connectors to remove them from the back of the eurocrates.
  • General cleanup of any cables, etc. left around the rack. We cleaned up most things this evening.
  • Rename the host computer c1susaux2 --> c1susaux, and update the DNS lookup tables on chiara.
Attachment 1: D38CC485-1EB6-4B34-9EB1-2CB1E809A21A.jpeg
  14596   Mon May 6 11:05:23 2019  Jon  Update  SUS  All vertex SUS watchdogs were tripped

Yes, this was a consequence of the systemd scripting I was setting up. Unlike the old susaux system, we decided for safety NOT to allow the modbus IOC to automatically enable the coil outputs. Thus when the modbus service starts/restarts, it automatically restores all state except the watchdog channels, which are left in their default disabled state. They then have to be manually enabled by an operator, as I should have done after finishing testing.

Quote:

I found the 8 vertex watchdogs tripped this morning. The ETMs were fine, suggesting this was not an actual earthquake. I suspect it was connected to this remote work? Was there a reason why they were left tripped?

  14608   Wed May 15 00:40:19 2019  gautam  Update  SUS  ETMY diagnosis plan

I collected some free-swinging data from earlier today evening. There are still only 3 peaks visible in the ASDs, see Attachment #1.

Plan for tomorrow:

TBH, I don't have any clear ideas as to what we are supposed to do to fix the problem (or even what the problem is). So here is my plan for now:

  1. Take pictures of relative position of magnet and OSEM coil for all five coils
  2. Inspect positions of all EQ stops - back them well out if any look suspiciously close
  3. Inspect suspension wire for any kinks
  4. Inspect position of suspension wire in standoff

I anticipate that these will throw up some more clues 

Attachment 1: ETMY_sensorSpectra.pdf
  14610   Wed May 15 10:57:57 2019  gautam  Update  SUS  EY chamber opened

[chub, gautam]

  1. Vented the EE annulus.
  2. Took the heavy door off, put it on the wooden rack, put a light door on at ~11am.
  14611   Wed May 15 17:46:24 2019  gautam  Update  SUS  ETMY inspection

I set up the usual mini-cleanroom setup around the ETMY chamber. Then I carried out the investigative plan outlined here.

Main finding: I saw a fiber of what looks like first contact on the bottom left (as viewed from the HR side) of ETMY, connecting the optic to the cage. See Attachment #1. I don't know whether this can explain the problem with the missing eigenmode, since it's not a hard constraint. It seems like something that should be addressed in any case. How do we want to remove this? Just use a tweezer and pull it off, or apply a larger FC patch and then pull it off? I'm pretty sure it's first contact and not a piece of PEEK mesh because I can see it is adhered to the HR side of the optic, but couldn't capture that detail in a photo.

There weren't any obvious problems with the magnet positioning inside the OSEMs, or the suspension wire. All the EQ stop tips were >3mm away from the optic.

I also backed out the bottom EQ stops on the far (south side) of the optic by ~2 full turns of the screw. Taking another free-swinging dataset now to see if anything has changed. I will upload all the photos I took, with annotations, to the gPhotos later today eve. Light doors back on at ~1730.

Update 10pm: the photos have been uploaded. I've added a "description" to each photo which should convey the message of that particular shot; it shows up in my browser on the bottom left of the photo but can also be accessed by clicking the "info" icon. Please have a look and comment if something sticks out as odd / requires correction.

Update 1045pm: I looked at the freeswinging data from earlier today. Still only 3 peaks around 1 Hz.

The following optics were kicked:
ETMY
Wed May 15 17:45:51 PDT 2019
1242002769
Attachment 1: firstContactFiber.JPG
Attachment 2: ETMY_sensorSpectra.pdf
  14612   Wed May 15 19:36:29 2019  Koji  Update  SUS  ETMY inspection

A pair of tweezers is OK as long as there are no magnets around. You need to (somewhat) constrain the mirror with the EQ stops so that you can pull the fiber without dragging the mirror.

  14613   Thu May 16 13:07:14 2019  gautam  Update  SUS  First contact residue removal

I used a pair of tweezers to remove the stray fiber of first contact. As Koji predicted, this was rather dry and didn't have the usual elasticity; while I was able to pull most of it off, there is a small spot remaining on the HR surface of the ETM. We will remove this with a fresh application of a small patch of FC.

In the meantime, I'm curious if this has actually fixed the suspension woes, so yet another round of freeswinging data collection is ongoing. From the first 5 mins it looks positive - I see 4 peaks around 1 Hz!

The following optics were kicked:
ETMY
Thu May 16 13:06:39 PDT 2019
1242072418

Update 730pm: There are now four well-defined peaks around 1 Hz. Together with the bounce and roll modes, that makes six. The peak at 0.92 Hz, which I believe corresponds to the YAW eigenmode, is significantly lower than the other three. I want to get some info about the input matrix, but there was some NDS dropout and large segments of data aren't available using the python nds fetch method, so I am trying again; ETMY was kicked at 1828 PDT. It may be that we could benefit from some adjustment of the OSEM positions; the coupling of the bounce mode to LL is high. Also, the SIDE/POS resonances aren't obviously deconvolved. The stray first contact has to be removed too. But overall I think it was a successful removal, and the suspension characteristics are more in line with what is "expected".
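For anyone pulling the data, a minimal nds2 sketch (the server name, port, and exact channel names here are from memory and may differ):

import nds2

conn = nds2.connection('nds40.ligo.caltech.edu', 31200)
chans = ['C1:SUS-ETMY_SENSOR_%s' % c for c in ('UL', 'UR', 'LL', 'LR', 'SIDE')]
# kick time from above: Thu May 16 13:06:39 PDT 2019 = GPS 1242072418
bufs = conn.fetch(1242072418, 1242072418 + 4096, chans)
data = {b.channel.name: b.data for b in bufs}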

Attachment 1: etmy_sensors.pdf
Attachment 2: etmy_BRmode.pdf
  14615   Thu May 16 23:31:55 2019  gautam  Update  SUS  ETMY suspension characterization

Here is my analysis. I think there are still some problems with this suspension.

Attachment #1: Time domain plots of the ringdown. The LL coil has peak response ~half of the other face OSEMs. I checked that the signal isn't being railed, the lowest level is > 100 cts.

Attachment #2: Complex TF from UL to the other coils. While there are four peaks now, looking at the phase information, it isn't possible to clearly disentangle PIT or YAW motion - in fact, for all peaks, there are at least three face shadow sensors which report the same phase. The gains are also pretty poorly balanced - e.g. for the 0.77 Hz peak, the magnitude of UR->UL is ~0.3, while LR->UL is ~3. Is it reasonable that there is a factor of 10 imbalance?

Attachment #3: Nevertheless, I assumed the following mapping of the peaks (quoted f0 is from a Lorentzian fit) and attempted to find the input matrix that best converts the Sensor basis into the Euler basis.

DoF f0 [Hz]
POS 1.004
PIT 0.771
YAW 0.920
SIDE 0.967

Unsurprisingly, the elements of this matrix are very different from unity (I have to fix the normalization of the rows).
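The construction can be sketched as follows; the response values below are illustrative placeholders, with rows = modes [POS, PIT, YAW, SIDE] and columns = sensors [UL, UR, LL, LR, SD].

import numpy as np

# Signed sensor response at each eigenfrequency (illustrative numbers)
A = np.array([[ 1.0,  0.9,  1.1,  1.0,  0.2],    # POS
              [ 1.0,  1.1, -0.9, -1.0,  0.0],    # PIT
              [ 1.0, -1.0,  1.0, -1.1,  0.1],    # YAW
              [ 0.2,  0.1,  0.1,  0.2,  1.0]])   # SIDE

# Sensor signals s relate to Euler DoFs e via s = A.T @ e,
# so the input matrix is the pseudo-inverse of A.T (shape 4x5)
M = np.linalg.pinv(A.T)

# Normalize the rows for comparison with the "naive" matrix
M /= np.abs(M).max(axis=1, keepdims=True)
print(np.round(M, 2))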

Attachment #4: Pre and post diagonalization spectra. The null stream certainly looks cleaner, but then again, this is by design so I'm not sure if this matrix is useful to implement.

Next steps:

  1. Repeat the actuator diagonality test detailed here.
  2. ???

In case anyone wants to repeat the analysis, the suspension was kicked at 1828 PDT today and this analysis uses 15000 seconds of data from then onwards.

Update 18 May 3pm:  Attachment #5 is a better presentation of the data shown in Attachment #2; the remark about the odd phasing of the coils is more clearly seen in this zoomed-in view.  Attachment #6 shows Lorentzian fits to the peaks - the Qs are comparable to those seen for the other optics, although the Q for the 0.77 Hz peak is rather low.

Attachment 1: ETMY_sensors_timeDomain.pdf
Attachment 2: ETMY_cplxTF.pdf
Attachment 3: matrixDiag.png
Attachment 4: ETMY_diagComp.pdf
Attachment 5: ETMY_cplxTF.pdf
Attachment 6: ETMY_pkFitNaive.pdf
  14617   Fri May 17 10:57:01 2019  gautam  Update  SUS  IY chamber opened

At ~930am, I vented the IY annulus by opening VAEV. I checked the particle count, which seemed within the guidelines to allow door opening, so I went ahead and loosened the bolts on the ITMY chamber.

Chub and I took the heavy door off with the vertex crane at ~1015am, and put the light door on.

Diagnosis plan is mainly inspection for now: take pictures of all OSEM/magnet positionings. Once we analyze those, we can decide which OSEMs we want to adjust in the holders (if any). I shut down the ITMY and SRM watchdogs in anticipation of in-chamber work.

Not related to this work: Since the annuli aren't being pumped on, the pressure has been slowly rising over the week. The unopened annuli are still at <1 torr, and the PAN region is at ~2 mtorr.

  14620   Fri May 17 17:01:08 2019  gautam  Update  SUS  ETMY suspension characterization

To investigate my mapping of the eigenfrequencies to eigenmodes, I checked the Oplev spectra for the last few hours, when the Oplev spot has been on the QPD (but the optic is undamped).

  1. Based on Attachment #1, I can't figure out which peak corresponds to what motion.
    • The most prominent peak (judged by peak height) is at 0.771 Hz for both PITCH and YAW
    • Assuming the peak at 0.92 Hz is the other angular mode, the PIT/YAW decoupling is poor - only a ~factor of 2 in both peaks.
  2. Why are the POS and SIDE resonances sensed so asymmetrically in the PIT and YAW channels? There's a factor of 10 difference there...

So, while I conclude that my first-contact residue removal removed a constraint from the system (hence the pendulum dynamics now behave as expected and there are 6 eigenmodes), more thought is needed in judging what is the appropriate course of action.

Attachment 1: etmy_oplevs.pdf
  14623   Mon May 20 11:33:46 2019  gautam  Update  SUS  ITMY inspection

With Chub providing illumination via the camera viewport, I was able to take photos of ITMY this morning. All the magnets look well clear of the OSEMs, with the possible exception of UR. I will adjust the position of this OSEM slightly. To test if this fix is effective, I will then cycle the bias voltage to the ITM between 0 and the maximum allowed, and check if the optic gets stuck.

  14625   Mon May 20 17:12:57 2019  gautam  Update  SUS  ETMY LL adjustment

Following the observation that the response in the LL shadow sensor was lower than that of the others, I decided to pull it out a little so that the signal level with the nominal DC bias voltage applied was closer to half the open voltage. I also chose to rotate the SIDE OSEM by ~20 degrees CCW in its holder (viewed from the south side of the EY chamber), to match more closely its position from a photo prior to the haphazard vent of the summer of 2018. For the SIDE OSEM, the theoretical "best" alignment in order to be insensitive to POS motion is the shadow sensor beam being horizontal - but without some shimming of the OSEM in the holder, I can't get the magnet clear of the teflon inside the OSEM.

While I was inside the chamber, I attempted to minimize the Bounce/Roll mode coupling to the LL and SIDE OSEM channels, by rotating the Coil inside the holder while keeping the shadow sensor voltage at half-light. To monitor the coupling "live", I set up DTT with 0.3 Hz bandwidth and 3 exponentially weighted averages. For the LL coil, I went through pi radians of rotation either side of the equilibrium, but saw no significant change in the coupling - I don't understand why.

In any case, this wasn't the most important objective so I pushed ahead with recovering half-light levels for all the shadow sensors and closed up with the light doors. I kicked the optic again at 1712:14 PDT, let's see what the matrix looks like now.


Before starting this work, I had to key the unresponsive c1auxey VME crate.

  14627   Mon May 20 22:06:07 2019  gautam  Update  SUS  ITMY also kicked

For good measure:

The following optics were kicked:
ITMY
Mon May 20 22:05:01 PDT 2019
1242450319
  14628   Tue May 21 00:15:21 2019  gautam  Update  SUS  Main objectives of vent achieved (?)

Summary:

  1. ETMY now shows four suspension eigenmodes, with sensible phasing between signals for the angular DoFs. However, the eigenfrequencies have shifted by ~10% compared to 16 May 2019.
  2. PIT and YAW for ETMY as witnessed by the Oplev are now much better separated.
  3. ITMY can have its bias voltage set to zero and back to nominal alignment without it getting stuck.
  4. The sensing matrix for ETMY that I get doesn't make much sense to me. Nevertheless, the optic damps even with the "naive" input matrix.

So the primary vent objectives have been achieved, I think. 


Details:

  1. ETMY free-swinging data after adjusting LL and SIDE coils such that these were closer to half-light values
    • Attachment #1 - oplev witnessing the angular motion of the optic. PIT and YAW are well decoupled.
    • Attachment #2 - complex TF between the suspension coils. There is still considerable imbalance between coils, but at least the phasing of the signals make sense for PIT and YAW now.
    • Attachment #3 - DoFs sensed using the naive and optimized sensing matrices.
    • Attachment #4 - sensing matrix that the free swinging data tells me to implement. If the local damping works with the naive input matrix but we get better diagonality in the actuation matrix, I think we may as well stick to the naive input matrix.
  2. BR mode coupling minimization:
    • As alluded to in my previous elog, I tried to reduce the bounce mode coupling into the shadow sensor by rotating the OSEM in its holder.
    • However, I saw negligible change in the coupling, even going through a full pi radian rotation. I imagine the coupling will change smoothly so we should have seen some change in one of the ~15 positions I sampled in between, but I saw none.
    • The anomalously high coupling of the bounce mode to the shadow sensor readout is telling us something - I'm just not sure what yet.
  3. ITMY:
    • The offender was the LL OSEM, whose rotational orientation was causing the magnet to get stuck to the teflon part of the OSEM coil when the bias voltage was changed by a sufficiently large amount.
    • I rectified this (required adjustment of all 5 OSEMs to get everything back to half light again).
    • After this, I was able to zero the bias voltage to the PIT/YAW DoFs and not have the optic get stuck - huzzah 😀 
    • While I have the chance, I'm collecting the free-swinging data to see what kind of sensing matrix this optic yields.

Tomorrow and later this week:

  1. Prepare ETMY for first contact cleaning to remove the residual piece. 
    • Drag wipe the HR surface with dehydrated acetone 
    • Apply F.C. as usual, inspect the HR face after peeling for improvement if any.
    • This will give us a chance to practise the F.C.ing with the optic EQ-stopped (moving cage etc).
  2. Confirm ETMY actuation makes sense.
    • Use the green beam for an ASS proxy implementation?
  3. High quality close out pictures of OSEMs and general chamber layout.
  4. Anything else? Any other tests we can do to convince ourselves the suspensions are well-behaved?

While we have the chance:

  1. Fix the IPANG alignment? Because the TT drift/hysteresis problem is still of unknown cause.
  2. Check that the AS beam is centered on OMs 1-6?
  3. Recover the 70% AS light that is being diverted to the OMC?

Unrelated to this work: megatron is responding to ping but isn't ssh-able. I also noticed earlier today that the IMC autolocker blinky wasn't blinking. So it probably requires a hard reboot. I left the lab for tonight so I'll reboot it tomorrow, but no nds data access in the meantime...

Attachment 1: etmy_oplevs_20190520.pdf
Attachment 2: ETMY_cplxTF.pdf
Attachment 3: ETMY_diagComp.pdf
Attachment 4: Screen_Shot_2019-05-21_at_12.37.08_AM.png
  14629   Tue May 21 21:33:27 2019  gautam  Update  SUS  ETMY HR face cleaned

[koji, gautam]

We executed this plan. Photos are here. Summary:

  1. Optic was EQ-stopped (face stops only), with the OSEMs in situ. We tried to do this as evenly as possible to avoid any magnets getting stuck on OSEMs.
  2. We used the specially procured acetone from Chub to drag wipe the HR face. This was a definite improvement, we should always get the correct grade of solvents when we attempt cleaning optics.
  3. It was observed that drag-wiping did not really have the desired cleaning effect. So Koji went in with hemostat / lens tissue soaked in acetone and wiped the HR face. This improved the situation.
  4. Applied a layer of F.C. Waited for it to dry, and then peeled it off. Under the green flashlight, the optic still looks horrific - but we decided against further drag-wiping/first-contacting. If the loss is truly 50 ppm, this is totally not a show-stopper for now.
  5. Suspension cage was replaced. EQ stops were released. Bias voltages were adjusted to bring the Oplev spot back to the center of the QPD. Now a free-swinging data collection is ongoing...
The following optics were kicked:
ETMY
Tue May 21 22:58:18 PDT 2019
1242539916

So if nothing else, we got to practise this new wiping technique with OSEMs in situ successfully.

Quote:
 
  1. Prepare ETMY for first contact cleaning to remove the residual piece. 
    • Drag wipe the HR surface with dehydrated acetone 
    • Apply F.C. as usual, inspect the HR face after peeling for improvement if any.
    • This will give us a chance to practise the F.C.ing with the optic EQ-stopped (moving cage etc).
  14630   Wed May 22 11:53:50 2019  gautam  Update  SUS  ETMY EQ stops backed out

Yesterday we noticed that the POS and SIDE eigenmodes were degenerate (with 1mHz spectral resolution). Moreover, the YAW peak had shifted down by ~500 mHz compared to earlier this week, although there was still good separation between PIT and YAW in the Oplev error signals. Ideas were (i) check if EQ stops were not backed out sufficiently, and (ii) look for any fibers/other constraints in the system. Today morning, I inspected the optic again. I felt the EQ stop viton tips were a bit close to the optic, so I backed them out further. Apart from this, I adjusted the LR and SIDE OSEM position in their respective holders to make the sensor voltages closer to half-light. Kicked the optic again just now, let's see if there is any change.

Remaining tasks:

  1. Check EY table leveling.
  2. Check EY actuation matrix diagonality using this technique.
  3. Check that IR resonances are seen (and all the usual pre-pumpdown alignment checks).
  4. Take close out pictures.
  5. Heavy doors on, pump down.

If everything goes smoothly, I think we should plan for the heavy doors going back on and commencing the pumpdown tomorrow. After discussion with Koji, we came to the conclusion that it isn't necessary to investigate IPANG (high likelihood of it falling off the steering optics during the pumpdown) / AS beam clipping (no strong evidence that this is a problem) for this vent.

Update 1235: Indeed, the eigenmodes are back to their positions from earlier this week. In fact, the POS and SIDE modes are now better separated! So the OSEM/magnet and EQ stop/optic interactions are non-negligible in the analysis of the dynamics of the pendulum.

Attachment 1: ETMY_eigenmodes.pdf
  14725   Thu Jul 4 10:54:21 2019  Koji  Summary  SUS  Suspension damping recovered, ITMX stuck

So Cal Earthquake. All suspension watchdogs tripped.

Tried to recover the OSEM damping. 

=> The watchdogs for all suspensions except for ITMX were restored. ITMX seems to be stuck. No further action by me for now.

  14727   Fri Jul 5 20:57:04 2019  Koji  Update  SUS  Another M7.1 EQ

[Kruthi, Koji]

Koji came to the lab to align the IMC/IFO, but found the mirrors dancing around. Kruthi told me that there was an M7.1 EQ at Ridgecrest. Looks like there are aftershocks of this EQ going on, so we need to wait for an hour to start the alignment work.

ITMX and ETMX are stuck.

Attachment 1: Screenshot_from_2019-07-05_21-03-06.png
  14728   Fri Jul 5 21:53:10 2019  Koji  Update  SUS  Another M7.1 EQ

- ITM unstuck now
- IMC briefly locked at TEM00

A series of aftershocks came. I could unstick ITMX by turning on the damping during one of the aftershocks.
Between the aftershocks, MC1~3 were aligned to the previous dof values. This allowed the IMC flashing. Once I got the lock of a low order TEM mode, it was easy to recover the alignment to have a weak TEM00.
Now at least temporarily the full alignment of the IMC was recovered.

  14729   Fri Jul 5 22:21:13 2019  Koji  Update  SUS  Another M7.1 EQ

In fact, ETMX was not stuck until the M7.1 EQ today. After that it got stuck, but during the aftershocks, all the OSEMs occasionally showed the full swing of the light levels. So I believe the magnets are OK.

Attachment 1: Screenshot_from_2019-07-05_22-19-57.png
  14730   Fri Jul 5 23:28:52 2019  rana, kruthi  Summary  SUS  ETMX unstuck by shaking the stack

We unstuck ETMX by shaking the stack. Most effective was to apply large periodic human sized force to the north STACIS mounts.

At first, we noticed that the face OSEMs showed nearly zero variation.

We tried unsticking it through the usual ways of putting large excitations through AWG into the pit/yaw/side DOFs. This produced only ~0.2 microns of motion as seen by the OSEMs.

After the stack shake, we used the IFO ALIGN sliders to get the oplev beam back on the QPD.

The ETMX sensor trends observed before and after the earthquake are attached.

** plots deleted; SOMEONE tried to take raster images and turn them into PDF as if this would somehow satisfy our vector graphics requirement. Boo. Plots must be actual vector graphics PDF.

  14736   Tue Jul 9 08:33:31 2019  gautam  Summary  SUS  ETMX PIT bias voltage changed by ~1V

After this activity, the DC bias voltage required on ETMX to restore good X arm cavity alignment has changed by ~1.3 V. Assuming a full actuation range of 30 mrad for +/- 10 V, this implies that the pitch alignment of the stack has changed by ~2 mrad? Or maybe the suspension wires shifted in the standoff grooves by a small amount? This is ~x10 larger than the typical change imparted while working on the table, e.g. during a vent.
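For the record, the arithmetic behind the ~2 mrad estimate:

range_mrad, range_V = 30.0, 20.0   # 30 mrad of actuation over +/-10 V
dV = 1.3                           # observed shift in DC bias voltage
print(dV * range_mrad / range_V)   # ~1.95, i.e. ~2 mrad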

Main point is that this kind of range requirement should probably be factored in when thinking about the high-voltage coil driver actuation.

Quote:

We unstuck ETMX by shaking the stack. Most effective was to apply large periodic human sized force to the north STACIS mounts.

  14742   Wed Jul 10 10:04:09 2019  gautam  Update  SUS  Tip-Tilt moved from South clean cabinet to bake lab cleanroom

Arnaud and I moved one of the two spare TT suspensions from the south clean cabinet to the bake lab clean room. The main purpose was to inspect the contents of the packaging. According to the label, this suspension was cleaned to Class A standards, so we tried to be clean while handling it (frocks, gloves, masks etc). We found that the foil wrapping contained one suspension cage, with what looked like all the parts in a semi-assembled state. There were no OSEMs or electronics together with the suspension cage. Pictures were taken and uploaded to gPhoto. Arnaud is going to plan his tests, so in the meantime, this unit has been stored in Cabinet #6 in the bake lab cleanroom.

  14745   Wed Jul 10 16:53:22 2019  gautam  Update  SUS  PRM watchdog condition modified

[koji, gautam]

We noticed that the PRM watchdog was tripping frequently. This is a period of enhanced seismic activity. The reason PRM in particular trips often is because the SIDE OSEM has 5x increased transimpedance. We implemented a workaround by modifying the watchdog tripping condition to scale the SD channel RMS by a factor of 0.2 (relative to the UL and LL channels). We restarted the modbus process on c1susaux and tested that the new logic works. Here is the relevant snippet of code:

# Disable fast DAC if variation tests too high
# PRM Side is special, see elog 14745
record(calc,"C1:SUS-PRM_LOGIC")
{
    field(DESC,"Tests whether RMS too high")
    field(SCAN,"1 second")
    field(PHAS,"1")
    field(PREC,"0")
    field(HOPR,"1")
    field(LOPR,"0")
    field(CALC,"(A<B)&(C<B)&(0.2*D<B)")
    field(INPA,"C1:SUS-PRM_ULPD_VAR  NPP  NMS")
    field(INPB,"C1:SUS-PRM_PD_MAX_VAR  NPP  NMS")
    field(INPC,"C1:SUS-PRM_LLPD_VAR  NPP  NMS")
    field(INPD,"C1:SUS-PRM_SDPD_VAR  NPP  NMS")
}

The db file has a note about this as well so that future debuggers aren't mystified by a factor of 0.2.

  14755   Fri Jul 12 07:37:48 2019  gautam  Update  SUS  M4.9 EQ in Ridgecrest

All suspension watchdogs were tripped ~90mins ago. I restored the damping. IMC is locked.

ITMX was stuck. I set it free. But notice that the UL Sensor RMS is higher than the other 4? I thought ITMY UL was problematic, but maybe ITMX has also failed, or maybe it's coincidence? Something for IFOtest to figure out I guess. I don't think there is a cable switch between ITMX/ITMY as when I move the ITMX actuators, the ITMX sensors respond and I can also see the optic moving on the camera.

Took me a while to figure out what's going on because we don't have the seis BLRMS - I moved the usual projector striptool traces to the TV screen for better diagnostic ability.

Update 16 July 1515: Even though the RMS is computed from the slow readback channels, for diagnosis, I looked at the spectra of the fast PD monitoring channels (i.e. *_SENSOR_*) for ITMX - looks like the increased UL RMS is coming from enhanced BR-mode coupling and not from any issues with the whitening switching (which seems to work as advertised, see Attachment #3, where the LL traces are meant to be representative of LL, LR, SD and UR channels).

Attachment 1: 56.png
Attachment 2: ITMXunstick.png
Attachment 3: ITMX_UL.pdf
  14763   Tue Jul 16 15:00:03 2019  gautam  Update  SUS  Multiple small EQs

There were several small/medium earthquakes in Ridgecrest and one medium one in Blackhawk CA at about 2000 UTC (i.e. ~ 2 hours ago), one of which caused BS, ITMY, and ETM watchdogs to trip. I restored the damping just now.

  14776   Fri Jul 19 12:50:10 2019  gautam  Update  SUS  DC bias actuation options for SOS

Rana and I talked about some (genius) options for the large range DC bias actuation on the SOS, which do not require us to supply high-voltage to the OSEMs from outside the vacuum.

What we came up with (these are pretty vague ideas at the moment):

  1. Some kind of thermal actuation.
  2. Some kind of electrical actuation where we supply normal (+/- 10 V) from outside the vacuum, and some mechanism inside the chamber integrates (and hence also low-pass filters) the applied voltage to provide a large DC force without injecting a ton of sensor noise.
  3. Use the blue piers as a DC actuator to correct for the pitch imbalance --- Kruthi and Milind are going to do some experiments to investigate this possibility later today.

For the thermal option, I remembered that (exactly a year ago to the day!) when we were doing cavity mode scans, once the heaters were turned on, I needed to apply significant correction to the DC bias voltage to bring the cavity alignment back to normal. The mechanism of this wasn't exactly clear to me - furthermore, we don't have a FLIRcam picture of where the heater radiation pattern was centered prior to my re-centering of it on the optic earlier this year, so we don't know what exactly we were heating. Nevertheless, I decided to look at the trend data from that night's work - see Attachment #1. This is a minute trend of some ETMY channels from 0000 UTC on 18 July 2018, for 24 hours. Some remarks:

  1. We did multiple trials that night, both with the elliptical reflector and the cylindrical setup that Annalisa and Terra implemented. I think the most relevant part of this data is starting at 1500 UTC (i.e. ~8am PDT, which is around when we closed shop and went home). So that's when the heaters were turned off, and the subsequent drift of PIT/YAW is, I claim, due to whatever thermal transients were at play.
  2. Just prior to that time, we were running the heater at close to its maximum rated current - so this relaxation is indicative of the range we can get out of this method of actuation.
  3. I had wrongly claimed in my discussion with Rana this morning that the change in alignment was mostly in pitch - in fact, the data suggests the change is almost equal in the two DoFs. Oplev and OSEMs report different changes though, by almost a factor of 2....
  4. The timescale of the relaxation is ~20 minutes - what part(s) of the suspension take this timescale to heat up/cool down? Unlikely to be the wire/any metal parts because the thermal conductivity is high? 
  5. In the optimistic scenario, let's say we get 100 urad of actuation range - over 40m, this corresponds to a beam spot motion of ~8mm, which isn't a whole lot. Since the mechanism of what is causing this misalignment is unclear, we may end up with significantly less actuation range as well.
  6. I will repeat the test (i.e. drive the heater and look for drift in the suspension alignment using OSEMs/Oplev) in the afternoon - now I claim the radiation pattern is better centered on the optic, so maybe we will have a better understanding of what mechanisms are at play.

Also see this elog by Terra.

Attachment #2 shows the results from today's heating. I did 4 steps, which are obvious in the data - I=0.6A, I=0.76A, I=0.9A, and I=1.05A.


In science, one usually tries to implement some kind of interpretation, so as to translate the natural world into meaning.

Attachment 1: heaterPitch_2018.pdf
Attachment 2: Screenshot_from_2019-07-19_16-39-21.png
  14798   Mon Jul 22 13:32:55 2019  Kruthi  Update  SUS  Test mass pitch adjustment test

[Kruthi, Milind]

On Friday, Milind and I performed the pitch adjustment test Rana had asked us to do. Only one blue beam was accessible in the case of ITMX, and two each in the case of ETMX, ETMY and ITMY. Milind (of mass 72 kg as of 10 May 2019) stood on each of the accessible blue beams of the test mass chambers for one minute, and I recorded the corresponding GPS times. Before moving to the next beam, we allowed more than a minute for relaxation after the standing end time. The recorded GPS times are below.

 

Optic / Beam    Standing start time (GPS)    Standing end time (GPS)
ETMX Beam 1     1247620911                   1247620974
ETMX Beam 2     1247621055                   1247621118
ITMX Beam 1     1247621984                   1247622058
ETMY Beam 1     1247622394                   1247622459
ETMY Beam 2     1247622585                   1247622647
ITMY Beam 1     1247622180                   1247622250
ITMY Beam 2     1247622814                   1247622880

PS: For each blue beam, the relaxation time was ~1 min after the standing end time.

Attachment 1: ETMX.pdf
Attachment 2: itmx.pdf
Attachment 3: ETMY.pdf
Attachment 4: ITMY.pdf
Attachment 5: 3f1a82f2-b86a-469e-8914-9278a216c5f9.jpg
Attachment 6: 1d174307-d940-42e6-812b-83417d0f5f6a.jpg
  14977   Fri Oct 18 17:35:07 2019  gautam  Update  SUS  ETMX sat box disconnected

Koji suggested systematic investigation of the ETMX suspension electronics. The tests to be done are:

  1. Characterization of PD whitening amplifiers - with the satellite box disconnected, we will look for glitches in the OSEM channels.
  2. Characterization of LT1125s in the PD chain of the amplifiers - with the in-vacuum OSEMs disconnected, we will look for glitches due to the on-board transimpedance amplifiers of the satellite box.
  3. Characterization using the satellite box tester - this will signal problems with the physical OSEMs.
  4. Characterization of the suspension coil driver electronics - this will happen later.

So the ETMX satellite box is unplugged now, starting 530 pm PDT.

The satellite box was reconnected and the suspension was left with watchdog off but OSEM roughly centered. We will watch for glitches over the weekend.

  14982   Mon Oct 21 16:02:21 2019  gautam  Update  SUS  ETMX over the weekend

Looking at the sensor and oplev trends over the weekend, there was only one event where the optic seems to have been macroscopically misaligned, at ~11:05:00 UTC on Oct 19 (early Saturday morning PDT). I attach a plot of the 2kHz time series data that has the mean value subtracted and a 0.6-1.2 Hz notch filter applied to remove the pendulum motion for better visualization. The y-axis calibration for the top plot assumes 1 ct ~= 1 um. This "glitch" seems to have a timescale of a few seconds, which is consistent with what we see on the CCD monitors when the cavity is locked - the alignment drifts away over a few seconds.
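The filtering can be reproduced along these lines (a sketch; the sample rate and the synthetic time series stand in for the real data):

import numpy as np
import scipy.signal as sig

fs = 2048.0                             # "2 kHz" fast data
t = np.arange(0, 60, 1 / fs)
data = np.sin(2 * np.pi * 0.9 * t) + 0.1 * np.random.randn(t.size)  # stand-in

# 0.6-1.2 Hz band-stop to suppress the pendulum motion, applied zero-phase
sos = sig.butter(4, [0.6, 1.2], btype='bandstop', fs=fs, output='sos')
filtered = sig.sosfiltfilt(sos, data - data.mean())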

As usual, this tells us nothing conclusive. Anyways, I am re-enabling the watchdog and pushing on with locking activity and hope the suspension cooperates.

Quote:
 

The satellite box was reconnected and the suspension was left with watchdog off but OSEM roughly centered. We will watch for glitches over the weekend.

Attachment 1: filteredData.pdf
  15002   Wed Oct 30 19:20:27 2019  gautam  Update  SUS  PRM suspension issues

While I was trying to lock the PRMI this evening, I noticed that I couldn't move the REFL beamspot on the CCD field of view by adjusting the slow bias voltages to the PRM. Other suspensions controlled by c1susaux seem to respond okay, so at first glance it isn't a problem with the Acromag. Looking at the OSEM sensor input levels, I noticed that UL is much lower than the others - see Attachment #1; this seems to have happened ~100 days ago. I plugged the tester box in to check if the problem is with the electronics or if this is an actual shorting of some pins on the physical OSEM, as we had in the past. So the PRM watchdog is shut down for now and there is no control of the optic available as the cables are detached. I will replace the connections later in the evening.

Update 10pm:

  1. Measured coil inductances with breakout board and LCR meter - all 5 coils returned ~3.28-3.32 mH.
  2. Measured coil resistances with breakout board and DMM - all 5 coils returned ~16-17 ohms.
  3. Checked OSEM PD capacitance (with no bias voltage) using the LCR meter - each PD returned ~1nF.
  4. Checked resistance between LED Cathode and Anode for all 5 LEDs using DMM - each returned Hi-Z.
  5. Checked resistance between PD Cathode and Anode for all 5 PDs using DMM - each returned ~430 kohms.
  6. Checked that I could change the slow bias voltages and see a response at the expected pins (with the suspension disconnected).

Since I couldn't find anything wrong, I plugged the suspension back in - and voila, the suspect UL PD voltage level came back to a level consistent with the others! See Attachment #2.

Anyway, I had some hours of data with the tester box plugged in - see Attachment #3 for a comparison of the shadow sensor readout with the tester box (all black traces) vs with the suspension plugged in, local damping loops active (coloured traces). The sensing noise re-injection will depend on the specifics of the  local damping loop shapes but I suspect it will limit feedforward subtraction possibilities at low frequencies.

However, I continue to have problems aligning the optic using the slow bias sliders (but the fast ones work just fine) - problem seems to be EPICS related. In Attachment #4, I show that even though I change the soft PITCH bias voltage adjust channel for the PRM, the linked channels which control the actual voltages to the coils take several seconds to show any response, and do so asynchronously. I tried restarting the modbus process on c1susaux, but the problem persists. Perhaps it needs a reboot of the computer and/or the acromag chassis? I note that the same problem exists for the BS and PRM suspensions, but not for ITMX or ITMY (didn't check the IMC optics). Perhaps a particular Acromag DAC unit is faulty / has issues with the internal subnet?

Attachment 1: PRMUL.pdf
Attachment 2: PRMnormal.pdf
Attachment 3: PRM-Sensors_noise.pdf
Attachment 4: PRMsuspensionWonky.png
  15003   Wed Oct 30 23:12:27 2019  Koji  Update  SUS  PRM suspension issues

Sigh... hard loch

  15155   Sun Jan 26 13:30:19 2020  gautam  Update  SUS  All watchdogs tripped, now restored

Looks like an M=4.6 earthquake in Barstow, CA tripped all the suspensions. ITMX got stuck. I restored the local damping on all the suspensions just now, and freed ITMX. Looks like all the suspensions damp okay, so I think we didn't suffer any lasting damage. IMC was re-aligned and is now locked.

Attachment 1: EQ_Jan25.pdf
  15173   Wed Jan 29 03:05:47 2020  rana, gautam  Update  SUS  MC misalignments / sat box games

In the last couple days, as the IMC ringdowns have been going on, we have noticed that the MC is behaving bad. Misaligning, drifting, etc.

Gautam told me a horror story about him, Koji, and melted wires inside the sat boxes.

I said, "Its getting too hot in there. So let's take the lids off!"

So then we:

  1. Removed the lid (only 4 screws were still there)
  2. cut off some of the shield - ground wires and insulated them with electrical tape
  3. squished the IDC connectors on tightly
  4. left it this way to see if MC would get better - certainly the painfully hot heatsinks inside the box were now just 110 F or so

After some minutes, we saw no drifting. So maybe my theory of "hot heatsink partially shorting a coil current to GND through partially melted ribbon cable" makes sense? If this seems better after a month, let's de-lid all the optics.

Let's look at some longer trends and be very careful next to MC2 for the next 3 days! I have put a dangerous mousetrap there to catch anyone who walks near the vacuum chamber.

gautam: the grounding situation per my assessment is that the shields of all the IDC cables are connected to a common metal strip at 1X5 - but in my survey, I didn't see any grounding of this strip to a common ground.

Attachment 1: IMG_8366.JPG
  15261   Sat Mar 7 15:18:30 2020  gautam  Update  SUS  EQ tripped some suspensions

An earthquake around 330 UTC (=730pm yesterday eve) tripped ITMX, ITMY and ETMX watchdogs. ITMX got stuck. I released the stuck optic and re-enabled the local damping loops just now.

Attachment 1: EQ_6Mar.png
  15262   Tue Mar 10 14:30:16 2020  yehonathan  Update  SUS

ETMX was grossly misaligned.

I re-aligned it and the X arm now locks.

7:00PM with Koji

Both the alignment of the X and Y arms was recovered.

~>z avg 10 C1:LSC-TRX_OUT C1:LSC-TRY_OUT
C1:LSC-TRX_OUT 0.9914034307003021
C1:LSC-TRY_OUT 0.9690877735614777

We are running ass for the X arm to recover the X arm alignment.

Meanwhile, I want to block the Y arm trans PD (Thorlabs). To do it, the PD<->QPD thresholds were changed from 5.0/3.0 to 0.5/0.3.

Attachment 1: Screenshot_from_2020-03-10_19-02-31.png
  15263   Tue Mar 10 19:58:16 2020  yehonathan  Update  SUS

I returned the triggering threshold to normal values (5/3).

Quote:

Meanwhile, I want to block the Y arm trans PD (Thorlabs). To do it, the PD<->QPD thresholds were changed from 5.0/3.0 to 0.5/0.3.

  15335   Fri May 15 19:10:42 2020  gautam  Update  SUS  All watchdogs tripped, now restored

This EQ in Nevada seems to have tripped all watchdogs. ITMX was stuck. It was released, and all the watchdogs were restored. Now the IMC is locked.
