Squishing cables at the ITMX satellite box seems to have fixed the wandering ITM that I observed yesterday - the sooner we are rid of these evil connectors the better.
I had changed the input pointing of the green injection from EX to mark a "good" alignment of the cavity axis, so I used the green beam to try and recover the X arm alignment. After some tweaking of the ITM and ETM angle bias voltages, I was able to get good GTRX values [Attachment #1], and also see clear evidence of (admittedly weak) IR resonances in TRX [Attachment #2]. I can't see the reflection from ITMX on the AS camera, but I suspect this is because the ITMY cage is in the way. This will likely have to be redone tomorrow after setting the input pointing for the Y arm cavity axis, but hopefully things will converge faster and we can close up sooner. Closing the PSL shutter for now...
I also rebooted the unresponsive c1susaux to facilitate the alignment work tomorrow.
[koji, chub, jon, rana, gautam]
Full story tomorrow, but we went through most of the required pre-close-up checks/tasks (i.e. both arms were locked, PRC and SRC cavity flashes were observed). Tomorrow, it remains to:
The ETMY suspension chain needs to be re-characterized (neither the old settings, nor a +/- 1 gain setting worked well for us tonight), but this can be done once we are back under vacuum.
[chub, bob, gautam]
[Attachment #1]: ITMY HR face after cleaning. I determined this to be sufficiently clean and re-installed the optic.
[Attachment #2]: ETMY HR face after cleaning. This is what the HR face looks like after 3 rounds of First-Contact application. After the first round, we noticed some arc-shaped lines near the center of the optic's clear aperture. We were worried this was a scratch, but we now believe it to be First-Contact residue, because we were able to remove it after drag wiping with acetone and isopropanol. However, we mistrust the quality of the solvents used - they are not any special dehydrated kind, and we are looking into acquiring some dehydrated solvents for future cleaning efforts.
[Attachment #3]: Top view of ETMY cage meant to show increased clearance between the IFO axis and the elliptical reflector.
Many more photos (including table leveling checks) on the google-photos page for this vent. The estimated time between F.C. peeling and pumpdown is ~24 hours for ITMY and ~15 hours for ETMY, but for the former, the heavy doors were put on ~1 hour after the peeling.
The first task is to fix the damping of ETMY.
[jon, koji, gautam]
I'm leaving all suspension watchdogs tripped over the weekend as part of the suspension diagonalization campaign...
I looked at the free-swinging sensor data from two nights ago, and am struggling with the interpretation.
[Attachment #1] - Fine resolution spectral densities of the 5 shadow sensor signals (y-axis assumes 1 ct ~ 1 um). The puzzling feature is that there are only 3 resonant peaks visible around the 1 Hz region, whereas we would expect 4 (PIT, YAW, POS and SIDE). AFAIK, Lydia looked into the ETMY suspension diagonalization last, in 2016. Compared to her plots (which are in the Euler basis while mine are in the OSEM basis), the ~0.73 Hz peak is nowhere to be seen. I also think the frequency resolution (<1 mHz) is good enough to resolve two closely spaced peaks, so it looks like, for some reason (mechanical or otherwise), there are only 3 independent modes being sensed around 1 Hz.
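For reference, the spectral estimate and peak count behind this kind of statement can be sketched as follows (scipy-based; the sampling rate, band, and peak threshold are placeholders, not the settings actually used for the attachment):

```python
import numpy as np
from scipy import signal

def shadow_sensor_asd(x, fs, res_hz=1e-3):
    """ASD of a free-swinging shadow sensor time series, with the Welch
    segment length chosen to give roughly res_hz frequency resolution."""
    nperseg = min(int(fs / res_hz), len(x))
    f, pxx = signal.welch(x, fs=fs, nperseg=nperseg)
    return f, np.sqrt(pxx)  # PSD -> ASD

def count_resonances(f, asd, band=(0.4, 1.5), rel_height=5.0):
    """Count peaks in `band` standing rel_height above the median floor."""
    m = (f >= band[0]) & (f <= band[1])
    floor = np.median(asd[m])
    peaks, _ = signal.find_peaks(asd[m], height=rel_height * floor)
    return len(peaks)
```

With <1 mHz resolution, two modes split by a few mHz would show up as two entries in the peak count, which is why the 3-peak result is suspicious.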
[Attachment #2] - Koji arrived and we looked at some transfer functions to see if we could make sense of all this. During this investigation, we also came to suspect that the UL coil actuator electronics chain has some problem. This test was done by driving the individual coils and looking for the 1/f^2 pendulum transfer function shape in the Oplev error signals. The ~4 dB difference between UR/LL and LR is due to a gain imbalance in the coil output filter bank; once we have solved the other problems, we can reset the individual coil balancing using this measurement technique.
[Attachment #3] - Downsampled time-series of the data used to make Attachment #1. The ringdown looks pretty clean, I don't see any evidence of any stuck magnets looking at these signals. The X-axis is in kilo-seconds.
We found that the POS and SIDE local damping loops do not result in instability building up. So one option is to use only Oplevs for angular control, while using shadow-sensor damping for POS and SIDE.
I did some tests of the electronics chain today.
I hypothesized a bad connection between the sat box output J1 and the flange connection cable. Indeed, measuring the OSEM inductance from the DSUB end at the coil-driver board, the UL coil pins showed no inductance reading on the LCR meter, whereas the other 4 coils showed numbers between 3.2-3.3 mH. Suspecting the satellite box, I swapped it out for the spare (S/N 100). This seemed to do the trick - all 5 coil channels read out ~3.3 mH on the LCR meter when measured from the coil driver board end. What's more, the damping behavior seemed more predictable - in fact, Rana found that all the loops were heavily overdamped. For our suspensions, I guess we want the damping close to critical - overdamping imparts excess displacement noise to the optic, while underdamping leaves residual motion at the resonances. In past elogs, I've seen a directive to aim for Q~5 for the pendulum resonances, so when someone does a systematic investigation of the suspensions, this will be something to look out for. These flaky connectors are proving pretty troublesome - let's start testing out some prototype new Sat Boxes with a better connector solution. I think it's equally important to have a properly thought out monitoring connector scheme, so that we don't have to frequently plug/unplug connectors in the main electronics chain, which may lead to wear and tear.
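As a quick way to quantify where a damped loop sits relative to the Q~5 target, the Q can be estimated from a kicked ringdown (a sketch; the crude rectified-peak envelope extraction here is my own choice, not an established analysis script):

```python
import numpy as np

def ringdown_q(t, x, f0):
    """Estimate Q of a damped oscillation x(t) ~ exp(-t/tau)*cos(2*pi*f0*t)
    by fitting the log of the peak envelope; Q = pi * f0 * tau."""
    env = np.abs(x)
    # local maxima of |x| track the decay envelope
    idx = np.where((env[1:-1] > env[:-2]) & (env[1:-1] > env[2:]))[0] + 1
    slope, _ = np.polyfit(t[idx], np.log(env[idx]), 1)
    return np.pi * f0 * (-1.0 / slope)
```

A 1 Hz mode with Q~5 rings down with tau ~ 1.6 s, so a few tens of seconds of data after a kick is plenty.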
The input and output matrices were reset to their "naive" values - unfortunately, two eigenmodes still seem to be degenerate to within 1 mHz, as can be seen from the below spectra (Attachment #1). Next step is to identify which modes these peaks actually correspond to, but if I can lock the arm cavities in a stable way and run the dither alignment, I may prioritize measurement of the loss. At least all the coils show the expected 1/f**2 response at the Oplev error point now. The coil output filter gains varied by ~ factor of 2 among the 4 coils, but after balancing the gains, they show identical responses in the Oplev - Attachment #2.
As it turns out, now ITMY has a tendency to get stuck. I found it MUCH more difficult to release the optic using the bias jiggling technique, it took me ~ 2 hours. Best to avoid c1susaux reboots, and if it has to be done, take precautions that were listed for ITMX - better yet, let's swap out the new Acromag chassis ASAP. I will do the arm locking tests tomorrow.
The forthcoming Acromag c1susaux is supposed to use the backplane connectors of the sus euro card modules.
However, the backplane connectors of the vertex sus coil drivers were already used by the fast switches (dewhitening) of c1sus.
Our plan is to connect the Acromag cables to the upper connectors, while the switch channels are wired to the lower connector by soldering jumper wires between the upper and lower connectors on board.
To make the lower 96-pin DIN connector available for this, we needed a DIN 41612 (96-pin) shroud. Tyco Electronics 535074-2 is the correct component for this purpose. The shrouds have been installed on the backplane pins of the coil driver circuit D010001. The shroud can be installed in either of two orientations (180 deg rotation), so its direction was matched to the ones on the upper connectors.
Now the sus PD whitening boards are ready for the backplane connectors to be moved to the lower row, and for the Acromag interface board to be plugged into the upper row.
The sus PD whitening boards on the 1X5 rack (D000210-A1) had slow and fast channels mixed in a single DIN96 connector. As we are going to use the rear-side backplane connector for Acromag access, we wanted to migrate the fast channels somewhere else. For this purpose, the boards were modified to duplicate the fast signals onto the lower DIN96 connector.
The modification was done on the back layer of the board (Attachment 1).
Pins 28A-32A and 28C-32C of P1 are connected to the corresponding pins of P2 (Attachment 2). The connections were thoroughly checked with a multimeter.
After the modification the boards were returned to the same place of the crate. The cables, which had been identified and noted before disconnection, were returned to the connectors.
The functionality of the 40 (8 sus x 5 ch) whitening switches was confirmed one by one using DTT, by looking at the transfer functions from SUS LSC EXC to the PD input filter IN1. All the switches showed the proper whitening in the measurements.
The PD slow mon channels (like C1:SUS-XXX_xxPDMon) were also checked, and they returned to their pre-modification values, except for the BS UL PD. As the fast version of that signal did return to its previous value, the monitor circuit was suspect. The op-amp of the monitor channels (LT1125) was therefore replaced, and the readback came back to its previous value (Attachment 3).
Will advise when I'm finished, will be by 1 pm for ALS work to begin.
Testing is finished.
In anticipation of needing to test hundreds of suspension signals after the c1susaux upgrade, I've started developing a Python package to automate these tests: susPython
The core of this package is not any particular test, but a general framework within which any scripted test can be "nested." Built into this framework is extensive signal trapping and exception handling, allowing actuation tests to be performed safely. Namely it protects against crashes of the test scripts that would otherwise leave the suspensions in an arbitrary state (e.g., might destroy alignment).
The package is designed to be used as a standalone from the command line. From within the root directory, it is executed with a single positional argument specifying the suspension to test:
$ python -m suspython ITMY
Currently the package requires Python 2 due to its dependence on the cdsutils package, which does not yet exist for Python 3.
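A minimal sketch of what the command-line entry point (suspython/__main__.py) might look like — the optic list and the argument handling here are illustrative, not the actual package code:

```python
import argparse

# illustrative optic list; the real package may discover these differently
KNOWN_OPTICS = ["ITMX", "ITMY", "ETMX", "ETMY", "BS",
                "PRM", "SRM", "MC1", "MC2", "MC3"]

def parse_args(argv=None):
    """Parse the single positional OPTIC argument."""
    parser = argparse.ArgumentParser(
        prog="python -m suspython",
        description="Scripted suspension signal tests")
    parser.add_argument("optic", choices=KNOWN_OPTICS, metavar="OPTIC",
                        help="suspension to test, e.g. ITMY")
    return parser.parse_args(argv)
```

Using `choices` means a typo'd optic name fails immediately at the parser, before any channels are touched.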
So far I've implemented a cross-consistency test between the DC-bias outputs to the coils and the shadow sensor readbacks. The suspension is actuated in pitch, then in yaw, and the changes in PDMon signals are measured. The expected sign of the change in each coil's PDMon is inferred from the output filter matrix coefficients. I believe this test is sensitive to two types of signal-routing errors: no change in PDMon response (an actuator is not connected), and an incorrect sign in the pitch response, the yaw response, or both (two actuators are cross-wired).
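The sign-consistency logic could be sketched along these lines, with the EPICS read/write functions injected so it can be exercised offline. The channel names and the sign matrix below are illustrative assumptions, not the actual susPython code:

```python
import numpy as np

# illustrative sign convention: expected PDMon sign per coil for a positive
# PIT/YAW bias step -- on the real system, infer this from the output matrix
EXPECTED_SIGNS = {"PIT": {"UL": +1, "UR": +1, "LL": -1, "LR": -1},
                  "YAW": {"UL": +1, "UR": -1, "LL": +1, "LR": -1}}

def dc_bias_sign_test(get, put, optic, step=50.0, signs=EXPECTED_SIGNS):
    """Offset the PIT then YAW DC bias and check that each face coil's
    PDMon moves with the sign predicted by the output matrix.
    `get`/`put` are EPICS read/write callables (e.g. from cdsutils or
    pyepics); returns a list of (dof, coil) failures."""
    coils = ("UL", "UR", "LL", "LR")
    failures = []
    for dof in ("PIT", "YAW"):
        bias_ch = "C1:SUS-%s_%s_COMM" % (optic, dof)
        ref = {c: get("C1:SUS-%s_%sPDMon" % (optic, c)) for c in coils}
        nominal = get(bias_ch)
        try:
            put(bias_ch, nominal + step)
            for c in coils:
                delta = get("C1:SUS-%s_%sPDMon" % (optic, c)) - ref[c]
                if np.sign(delta) != signs[dof][c]:
                    failures.append((dof, c))
        finally:
            put(bias_ch, nominal)  # always restore the alignment bias
    return failures
```

The try/finally mirrors the package's central design point: even if a channel access call raises, the bias is restored and the alignment is not left in an arbitrary state.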
The next test I plan to implement is a test of the slow system using the fast system. My idea is to inject a 3-8 Hz excitation into the coil output filter modules (either bandlimited noise or a sine wave), with all coil outputs initially disabled. One coil at a time will be enabled and the change in all VMon signals monitored, to verify the correct coil readback senses the excitation. In this way, a signal injected from the independent and unchanged fast system provides an absolute reference for the slow system.
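The enable-one-coil-at-a-time logic could look like the following (a sketch; the coil set, the RMS threshold, and the assumption that the excitation is already running in every filter module are all illustrative):

```python
def coil_vmon_test(set_output, vmon_rms, threshold=3.0,
                   coils=("UL", "UR", "LL", "LR", "SD")):
    """With a 3-8 Hz excitation already injected into every coil output
    filter module, enable one coil output at a time and check that only
    that coil's VMon RMS rises above threshold * its quiescent level.
    Returns a list of (enabled_coil, responding_coil) failures."""
    for c in coils:
        set_output(c, False)                 # start with all outputs off
    quiet = {c: vmon_rms(c) for c in coils}
    failures = []
    for c in coils:
        set_output(c, True)
        for other in coils:
            excited = vmon_rms(other) > threshold * quiet[other]
            if excited != (other == c):      # only `c` should respond
                failures.append((c, other))
        set_output(c, False)
    return failures
```

Checking every VMon at each step, not just the enabled coil's, is what catches cross-wirings as well as dead channels.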
I'm also aware of ideas for more advanced tests, which go beyond testing the basic signal routing. These too can be added over time within the susPython framework.
Rana did a checkout of my story about oddness of the ETMY suspension. Today, we focused on the actuators - the goal was to find the correct coefficients on the 4 face coils that would result in diagonal actuation (i.e. if we actuate on PIT, it only truly moves the PIT DoF, as witnessed by the Oplev, and so on for the other DoFs). Here are the details:
There isn't a consistent set of OSEM coil gains that explains the best actuation vectors we determined yesterday. Here are the explicit matrices:
There isn't a solution to the matrix equation, i.e. we cannot simply redistribute the actuation vectors we found as gains to the coils and preserve the naive actuation matrix. What this means is that in the OSEM coil basis, the actuation eigenvectors aren't the naive ones we would expect for PIT, YAW and POS. Instead, we can put these custom eigenvectors into the output matrix, but I'm struggling to think of what the physical implication is. I.e., what does it mean for the actuation vectors for PIT, YAW and POS to be not only scaled, but also non-orthogonal (though still linearly independent) at ~10 Hz, which is well above the resonant frequencies of the pendulum? The PIT and YAW eigenvectors are the least orthogonal, with the angle between them ~40 degrees rather than the expected 90 degrees.
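For concreteness, the non-orthogonality quoted above can be computed directly from the actuation vectors in the coil basis (a sketch; only the naive vectors below come from the standard convention, any "measured" vectors would be plugged in by the user):

```python
import numpy as np

def vector_angle_deg(u, v):
    """Angle (degrees) between two actuation vectors in the OSEM coil basis."""
    u, v = np.asarray(u, float), np.asarray(v, float)
    cosang = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0)))

# naive PIT/YAW vectors in the [UL, UR, LL, LR] basis are exactly orthogonal
NAIVE_PIT = [+1, +1, -1, -1]
NAIVE_YAW = [+1, -1, +1, -1]
```

A measured PIT/YAW pair returning ~40 degrees from this function is exactly the pathology described above.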
So we now have matrices that minimize the cross coupling between these DoFs - the idea is to back out the actuation coefficients for the 4 OSEM coils that gives us the most diagonal actuation, at least at AC.
let us have 3 by 4, nevermore
so that the number of columns is no less
and no more
than the number of rows
so that forevermore we live as 4 by 4
I'm struggling to think
I repeated the exercise from yesterday, this time driving the butterfly mode [+1 -1 -1 +1] and adding the tuned PIT and YAW vectors from yesterday to it to minimize appearance in the Oplev error signals.
The measured output matrix is , where rows are the coils in the order [UL,UR,LL,LR] and columns are the DOFs in the order [POS,PIT,YAW,Butterfly]. The conclusions from my previous elog still hold though - the orthogonality between PIT and YAW is poor, so this output matrix cannot be realized by a simple gain scaling of the coil output gains. The "adjustment matrix", i.e. the 4x4 matrix that we must multiply the "ideal" output matrix by to get the measured output matrix, has a condition number of 134 (a well-conditioned matrix would have a condition number close to 1).
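The adjustment-matrix computation can be sketched as below. Whether the adjustment matrix was defined by left- or right-multiplication isn't stated here, so this sketch assumes right-multiplication of the ideal matrix:

```python
import numpy as np

# ideal output matrix: rows = coils [UL, UR, LL, LR],
# columns = DOFs [POS, PIT, YAW, Butterfly]
M_IDEAL = np.array([[1, +1, +1, +1],
                    [1, +1, -1, -1],
                    [1, -1, +1, -1],
                    [1, -1, -1, +1]], dtype=float)

def adjustment_matrix(m_meas, m_ideal=M_IDEAL):
    """Solve m_ideal @ a = m_meas for the 4x4 adjustment matrix a and
    return (a, cond(a)); a condition number near 1 means the measured
    matrix is close to an orthogonal rescaling of the ideal one."""
    a = np.linalg.solve(m_ideal, m_meas)
    return a, np.linalg.cond(a)
```

If the measured matrix were just the ideal one with per-coil gain errors, the adjustment matrix would come out diagonal with a modest condition number, rather than the 134 found here.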
so that forevermore we live as 4 by 4
If thy left hand troubles thee
then let the mirror show the right
for if it troubles enough to cut it off
it would not offend thy sight
Today I bench-tested most of the Acromag channels in the replacement c1susaux. I connected a DB37 breakout board to each chassis feedthrough connector in turn and tested channels using a multimeter and calibrated voltage source. Today I got through all the digital output channels and analog input channels. Still remaining are the analog output channels, which I will finish tomorrow.
There have been a few wiring issues found so far, which are noted below.
Crossed with LR
Crossed with UR
Crossed with LL
Crossed with UL
Here are the results from this test. The data for 17 April is with the DC bias for ETMY set to the nominal values (which gives good Y arm cavity alignment), while on 18 April, I changed the bias values until all four shadow sensors reported values that were at least 100 cts different from 17 April. The times are indicated in the plot titles in case anyone wants to pull the data (I'll point to the directory where they are downloaded and stored later).
There are 3 visible peaks. There was negligible shift in position (<5 mHz) / change in Q of any of these with the applied Bias voltage. I didn't attempt to do any fitting as it was not possible to determine which peak corresponds to which DoF by looking at the complex TFs between coils (at each peak, different combinations of 3 OSEMs have the same phase, while the fourth has ~180 deg phase lead/lag). FTR, the wiki leads me to expect the following locations for the various DoFs, and I've included the closest peak in the current measured data in parentheses:
However, this particular SOS was re-suspended in 2016, and this elog reports substantially different peak positions, in particular for the YAW DoF (though there were still 4 peaks). The Qs of the peaks from last week's measurements are in the range 250-350.
Repeat the free-swinging ringdown with the ETMY bias voltage adjusted such that all the OSEM PDmons report ~100 um different position from the "nominal" position (i.e. when the Y arm cavity is aligned). Investigate whether the resulting eigenmode frequencies / Qs are radically different. I'm setting the optic free-swinging on my way out tonight. Optic kicked at 1239690286.
Today I tested the remaining Acromag channels and retested the non-functioning channels found yesterday, which Chub repaired this morning. We're still not quite ready for an in situ test. Here are the issues that remain.
I further diagnosed these channels by connecting a calibrated DC voltage source directly to the ADC terminals. The EPICS channels do sense this voltage, so the problem is isolated to the wiring between the ADC and DB37 feedthrough.
No output signal
To further diagnose these channels, I connected a voltmeter directly to the DAC terminals and toggled each channel output. The DACs are outputting the correct voltage, so these problems are also isolated to the wiring between DAC and feedthrough.
In testing the DC bias channels, I did not check the sign of the output signal, but only that the output had the correct magnitude. As a result my bench test is insensitive to situations where either two degrees of freedom are crossed or there is a polarity reversal. However, my susPython scripting tests for exactly this, fetching and applying all the relevant signal gains between pitch/yaw input and coil bias output. It would be very time consuming to propagate all these gains by hand, so I've elected to wait for the automated in situ test.
For the new c1susaux, Gautam and I moved the watchdog channels from autoBurt.req to a new file named autoBurt_watchdogs.req. When the new modbus service starts, it loads the state contained in autoBurt.snap. We thought it best for the watchdogs to not be automatically enabled at this stage, but for an operator to manually have to do this. By moving the watchdog channels to a separate snap file, the entire SUS state can be loaded while leaving just the watchdogs disabled.
This same modification should be made to the ETMX and ETMY machines.
For the in-situ test, I decided that we will use the physical SRM to test the c1susaux Acromag replacement crate functionality for all 8 optics (PRM, BS, ITMX, ITMY, SRM, MC1, MC2, MC3). To facilitate this, I moved the backplane connector of the SRM SUS PD whitening board from the P1 connector to P2, per Koji's mods, at ~5:10PM local time. The watchdog was shut down, and the backplane connector for the SRM coil driver board was also disconnected (this is now interfaced to the Acromag chassis).
I had to remove the backplane connector for the BS coil driver board in order to have access to the SRM backplane connector. Room in the back of these eurocrate boxes is tight in the existing config...
At ~6pm, I manually powered down c1susaux (as I did not know of any way to shut down the EPICS server run by the old VME crate in software). The point was to be able to easily interface with the MEDM screens. The slow channels prefixed C1:SUS-* are now being served by the Supermicro called c1susaux2.
A critical wiring error was found. The channel mapping prepared by Johannes lists the watchdog enable BIO channels as "C1:SUS-<OPTIC>_<COIL>_ENABLE", which go to pins 23A-27A on the P1 connector, with returns on the corresponding C pins. However, we use the "TEST" inputs of the coil driver boards for sending in the FAST actuation signals. The correct BIO channels for switching this input are actually "C1:SUS-<OPTIC>_<COIL>_TEST", which go to pins 28A-32A on the P1 connector. For today's tests, I opted to fix this inside the Acromag crate for the SRM channels only, and run our tests. Chub will unfortunately have to fix the remaining 7 optics; see Attachment #1 for the corrections required. I apportion 70% of the blame to Johannes for the wrong channel assignment, and accept 30% for not checking it myself.
The good news: the tests for the SRM channels all passed!
Additionally, I confirmed that the watchdog tripped when the RMS OSEM PD voltage exceeded 200 counts. Ideally we'd have liked to test the stability of the EPICS server, but we have shut it down and brought the crate back out to the electronics bench for Chub to work on tomorrow.
I restarted the old VME c1susaux at 915pm local time as I didn't want to leave the watchdogs in an undefined state. Unsurprisingly, ITMY is stuck. Also, the BS (cable #22) and SRM (cable #40) coil drivers are physically disconnected at the front DB15 output because of the undefined backplane inputs. I also re-opened the PSL shutter.
We briefly talked about the bounce and roll modes of the SOS optic at the meeting today.
Attachment #1: BR modes for ETMY from my free-swinging run on 17 April. The LL coil has a very different behavior from the others.
Attachment #2: BR modes for ETMY from my free-swinging run on 18 April, which had a macroscopically different bias voltage for the PIT/YAW sliders. Here too, the LL coil has a very different behavior from the others.
Attachment #3: BR modes for ETMX from my free-swinging run on 27 Feb. There are many peaks in addition to the prominent ones visible here, compared to ITMY. The OSEM PD noise floor for UR and SIDE is mysteriously x2 lower than for the other 3 OSEMs???
In all three cases, a bounce mode around 16.4 Hz and a roll mode around 24.0 Hz are visible. The ratio between these is not sqrt(2) but ~1.46, which is ~3.5% larger. But when I look at the database, I see that in the past, the bounce and roll modes were in fact close to these frequencies.
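The arithmetic behind the sqrt(2) comparison, for anyone repeating it with other mode frequencies (the sqrt(2) figure is the nominal expectation for these suspensions, per the source; the snippet just does the ratio check):

```python
import numpy as np

f_bounce = 16.4   # Hz, measured
f_roll = 24.0     # Hz, measured

ratio = f_roll / f_bounce
excess = ratio / np.sqrt(2.0) - 1.0   # fractional deviation from sqrt(2)
```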
Because of my negligence and rushing the closeout procedure, I don't have a great close-out picture of the magnet positions in the face OSEMs, the best I can find is Attachment #4. We tried to replicate the OSEM arrangement (orientation of leads from the OSEM body) from July 2018 as closely as possible.
I will investigate the side coil actuation strength tomorrow, but if anyone can think of more in-air tests we should do, please post your thoughts/poetry here.
Today we installed the c1susaux Acromag chassis and controller computer in the 1X4 rack. As noted in 14580 the prototype Acromag chassis had to first be removed to make room in the rack. The signal feedthroughs were connected to the eurocrates by 10' DB-37 cables via adapters to 96-pin DIN.
Once installed, we ran a scripted set of suspension actuation tests using PyIFOTest. BS, PRM, SRM, MC1, MC2, and MC3 all passed these tests. We were unable to test ITMX and ITMY because both appear to be stuck. Gautam will shake them loose on Monday.
Although the new c1susaux is now mounted in the rack, there is more that needs to be done to make the installation permanent:
On Monday we plan to continue with additional scripted tests of the suspensions.
gautam - some more notes:
A concern was raised about the two ETMs and ITMX having the opposite response (relative to the other 7 SOS optics) in the OSEM PDMon channel for a given polarity of PIT/YAW offset applied to the coils. Jon has taken into account all the digital gains in the actuation part of the CDS system in reaching this conclusion. I raised the possibility of the OSEM coil winding direction being opposite on the 15 OSEMs of the ETMs and ITMX, but I think it is more likely that the magnets are just glued on opposite to what they are "supposed" to be. See Attachment #6 of this elog (you'll have to rotate the photo either in your head or in your viewer) and note that it is opposite to what is specified in the assembly procedure, page 8. The net magnetic quadrupole moment is still 0, but the direction of actuation in response to a given current in the coil would be opposite. I can't find magnet polarities for all 10 SOS optics, but this hypothesis fits all the evidence so far.
Yesterday Gautam and I ran final tests of the eight suspensions controlled by c1susaux, using PyIFOTest. All of the optics pass a set of basic signal-routing tests, which are described in more detail below. The only issue found was with ITMX having an apparent DC bias polarity reversal (all four front coils) relative to the other seven susaux optics. However, further investigation found that ETMX and ETMY have the same reversal, and there is documentation pointing to the magnets being oppositely-oriented on these two optics. It seems likely that this is the case for ITMX as well.
I conclude that all the new c1susaux wiring/EPICS interfacing works correctly. There are of course other tests that can still be scripted, but at this point I'm satisfied that the new Acromag machine itself is correctly installed. PyIFOTest has been morphed into a powerful general framework for automating IFO tests. Anything involving fast/slow IO can now be easily scripted. I highly encourage others to think of more applications this may have at the 40m.
The code is currently located in /users/jon/pyifotest although we should find a permanent location for it. From the root level it is executed as
$ ./IFOTest <PARAMETER_FILE>
where PARAMETER_FILE is the filepath to a YAML config file containing the test parameters. I've created a config file for each of the suspended optics. They are located in the root-level directory and follow the naming convention SUS-<OPTIC>.yaml.
The code climbs a hierarchical "ladder" of actuation/readback-paired tests, with the test at each level depending on signals validated in the preceding level. At the base is the fast data system, which provides an independent reference against which the slow channels are tested. There are currently three scripted tests for the slow SUS channels, listed in order of execution:
I found the 8 vertex watchdogs tripped this morning. The ETMs were fine, suggesting this was not an actual earthquake. I suspect it was connected to this remote work? Was there a reason why they were left tripped?
On a side note - I don't think we log the watchdog state explicitly. We can infer whether the optic is damped by looking at the OSEM sensor time series, but do we want to record the watchdog state to frames?
Chub and I crossed off some of these items this morning. The last bullet was addressed by Jon yesterday. I added a couple of new bullets.
The new power connectors will arrive next week, at which point we will install them. Note that there is no 24V Sorensen available, only 20V.
I am running a test on the 2W Mephisto for which I wanted the diagnostics connector plugged in again and Acromag channels to record them. So we set up the highly non-ideal but temporary setup shown in Attachment #1. This will be cleaned up by Monday evening at the latest.
update 1630 Monday 5/6: the sketchy PSL acromag setup has been disassembled.
Yes, this was a consequence of the systemd scripting I was setting up. Unlike the old susaux system, we decided for safety NOT to allow the modbus IOC to automatically enable the coil outputs. Thus when the modbus service starts/restarts, it automatically restores all state except the watchdog channels, which are left in their default disabled state. They then have to be manually enabled by an operator, as I should have done after finishing testing.
I collected some free-swinging data from earlier this evening. There are still only 3 peaks visible in the ASDs, see Attachment #1.
Plan for tomorrow:
TBH, I don't have any clear ideas as to what we are supposed to do to fix the problem (or even what the problem is). So here is my plan for now:
I anticipate that these will throw up some more clues
I setup the usual mini-cleanroom setup around the ETMY chamber. Then I carried out the investigative plan outlined here.
Main finding: I saw a fiber of what looks like first contact on the bottom left (as viewed from the HR side) of ETMY, connecting the optic to the cage. See Attachment #1. I don't know whether this can explain the missing eigenmode, since it's not a hard constraint, but it seems like something that should be addressed in any case. How do we want to remove this? Just use a tweezer and pull it off, or apply a larger FC patch and then pull it off? I'm pretty sure it's first contact and not a piece of PEEK mesh because I can see it is adhered to the HR side of the optic, but I couldn't capture that detail in a photo.
There weren't any obvious problems with the magnet positioning inside the OSEMs, or with the suspension wire. All the EQ stop tips were >3 mm away from the optic.
I also backed out the bottom EQ stops on the far (south) side of the optic by ~2 full turns of the screw. Taking another free-swinging dataset now to see if anything has changed. I will upload all the photos I took, with annotations, to the gPhotos later this evening. Light doors back on at ~1730.
Update 10pm: the photos have been uploaded. I've added a "description" to each photo which should convey the message of that particular shot; it shows up in my browser on the bottom left of the photo, but can also be accessed by clicking the "info" icon. Please have a look and comment if something sticks out as odd / requires correction.
Update 1045pm: I looked at the freeswinging data from earlier today. Still only 3 peaks around 1 Hz.
A pair of tweezers is OK as long as there are no magnets around. You need to (somewhat) constrain the mirror with the EQ stops so that you can pull the fiber off without dragging the mirror.
I used a pair of tweezers to remove the stray fiber of first contact. As Koji predicted, it was rather dry and didn't have the usual elasticity, so while I was able to pull most of it off, a small spot remains on the HR surface of the ETM. We will remove this with a fresh application of a small patch of FC.
In the meantime, I'm curious whether this has actually fixed the suspension woes, so yet another round of free-swinging data collection is ongoing. From the first 5 mins, it looks positive - I see 4 peaks around 1 Hz!
Update 730pm: There are now four well-defined peaks around 1 Hz. Together with the bounce and roll modes, that makes six. The peak at 0.92 Hz, which I believe corresponds to the YAW eigenmode, is significantly lower than the other three. I want to get some info about the input matrix, but there was some NDS dropout and large segments of data aren't available using the python nds fetch method, so I am trying again; kicked ETMY at 1828 PDT. It may be that we could benefit from some adjustment of the OSEM positions - the coupling of the bounce mode to LL is high, and the SIDE/POS resonances aren't obviously decoupled. The stray first contact has to be removed too. But overall I think it was a successful removal, and the suspension characteristics are more in line with what is "expected".
Here is my analysis. I think there are still some problems with this suspension.
Attachment #1: Time domain plots of the ringdown. The LL coil has a peak response ~half that of the other face OSEMs. I checked that the signal isn't being railed; the lowest level is > 100 cts.
Attachment #2: Complex TF from UL to the other coils. While there are four peaks now, looking at the phase information, it isn't possible to clearly disentangle PIT or YAW motion - in fact, for all peaks, there are at least three face shadow sensors which report the same phase. The gains are also pretty poorly balanced - e.g. for the 0.77 Hz peak, the magnitude of UR->UL is ~0.3, while LR->UL is ~3. Is it reasonable that there is a factor of 10 imbalance?
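The coil-to-coil transfer functions quoted above can be estimated from the free-swinging data as CSD/PSD (a sketch using scipy; the segment length, and the choice of UL as the reference sensor, are placeholders):

```python
import numpy as np
from scipy import signal

def osem_tf(ref, x, fs, nperseg=4096):
    """Complex transfer function ref -> x, estimated as csd(ref, x)/psd(ref)."""
    f, pxy = signal.csd(ref, x, fs=fs, nperseg=nperseg)
    _, pxx = signal.welch(ref, fs=fs, nperseg=nperseg)
    return f, pxy / pxx

def tf_at(f, tf, f0):
    """Magnitude and phase (degrees) of the TF at the bin nearest f0."""
    i = np.argmin(np.abs(f - f0))
    return np.abs(tf[i]), np.degrees(np.angle(tf[i]))
```

Evaluating `tf_at` at each eigenfrequency gives exactly the magnitude/phase numbers discussed above (e.g. a 180 deg phase with magnitude ~3 for LR->UL at 0.77 Hz).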
Attachment #3: Nevertheless, I assumed the following mapping of the peaks (quoted f0 is from a Lorentzian fit) and attempted to find the input matrix that best converts the sensor basis into the Euler basis.
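The input-matrix step can be sketched as a pseudo-inverse of the sensing matrix built from the per-peak OSEM responses (a sketch; the row normalization used here is an arbitrary illustrative convention, not the one actually used):

```python
import numpy as np

def input_matrix(sensing, normalize=True):
    """sensing: rows = OSEMs [UL, UR, LL, LR, SD], columns = eigenmodes
    (each sensor's response at each fitted peak). Returns the matrix that
    maps sensor signals to the Euler basis, via the pseudo-inverse."""
    m = np.linalg.pinv(sensing)
    if normalize:
        # illustrative convention: scale each row to unit peak magnitude
        m = m / np.abs(m).max(axis=1, keepdims=True)
    return m
```

For an ideally-sensed suspension the sensing matrix is just the naive sign pattern and the pseudo-inverse reduces to the standard input matrix; large departures from that are what the measurement here is flagging.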
Unsurprisingly, the elements of this matrix are very different from unity (I have to fix the normalization of the rows).
Attachment #4: Pre and post diagonalization spectra. The null stream certainly looks cleaner, but then again, this is by design so I'm not sure if this matrix is useful to implement.
In case anyone wants to repeat the analysis, the suspension was kicked at 1828 PDT today and this analysis uses 15000 seconds of data from then onwards.
Update 18 May 3pm: Attachment #5 is a better presentation of the data shown in Attachment #2; the remark about the odd phasing of the coils is more clearly seen in this zoomed-in view. Attachment #6 shows Lorentzian fits to the peaks - the Qs are comparable to those seen for the other optics, although the Q of the 0.77 Hz peak is rather low.
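The Lorentzian fits mentioned above can be reproduced along these lines (a sketch with scipy; the exact lineshape convention used for the elog fits isn't stated, so this assumes a standard peak with FWHM = f0/Q on a flat floor):

```python
import numpy as np
from scipy.optimize import curve_fit

def lorentzian(f, f0, q, a, c):
    """Peak of height a, center f0, FWHM f0/q, on a flat floor c."""
    return a / (1.0 + 4.0 * q**2 * ((f - f0) / f0)**2) + c

def fit_peak(f, asd, guess):
    """Fit one resonance; guess = (f0, q, a, c). Returns best-fit params."""
    popt, _ = curve_fit(lorentzian, f, asd, p0=guess)
    return popt
```

With Qs in the 250-350 range and mode frequencies near 1 Hz, the FWHM is only a few mHz, so the <1 mHz resolution spectra are needed for the fit to be meaningful.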
At ~930am, I vented the IY annulus by opening VAEV. I checked the particle count, seemed within the guidelines to allow door opening so I went ahead and loosened the bolts on the ITMY chamber.
Chub and I took the heavy door off with the vertex crane at ~1015am, and put the light door on.
Diagnosis plan is mainly inspection for now: take pictures of all OSEM/magnet positionings. Once we analyze those, we can decide which OSEMs we want to adjust in the holders (if any). I shut down the ITMY and SRM watchdogs in anticipation of in-chamber work.
Not related to this work: Since the annuli aren't being pumped on, the pressure has been slowly rising over the week. The unopened annuli are still at <1 torr, and the PAN region is at ~2 mtorr.
To investigate my mapping of the eigenfrequencies to eigenmodes, I checked the Oplev spectra for the last few hours, when the Oplev spot has been on the QPD (but the optic is undamped).
So, while I conclude that my first-contact residue removal removed a constraint from the system (hence the pendulum dynamics are now as expected, with 6 eigenmodes), more thought is needed to judge the appropriate course of action.
With Chub providing illumination via the camera viewport, I was able to take photos of ITMY this morning. All the magnets look well clear of the OSEMs, with the possible exception of UR. I will adjust the position of this OSEM slightly. To test if this fix is effective, I will then cycle the bias voltage to the ITM between 0 and the maximum allowed, and check if the optic gets stuck.
Following the observation that the response in the LL shadow sensor was lower than that of the others, I decided to pull it out a little so that the signal level with the nominal DC bias voltage applied was closer to half the open voltage. I also chose to rotate the SIDE OSEM by ~20 degrees CCW in its holder (viewed from the south side of the EY chamber), to more closely match its position in a photo taken prior to the haphazard vent of summer 2018. For the SIDE OSEM, the theoretical "best" alignment for insensitivity to POS motion has the shadow sensor beam horizontal - but without some shimming of the OSEM in the holder, I can't get the magnet clear of the teflon inside the OSEM.
While I was inside the chamber, I attempted to minimize the Bounce/Roll mode coupling to the LL and SIDE OSEM channels, by rotating the Coil inside the holder while keeping the shadow sensor voltage at half-light. To monitor the coupling "live", I set up DTT with 0.3 Hz bandwidth and 3 exponentially weighted averages. For the LL coil, I went through pi radians of rotation either side of the equilibrium, but saw no significant change in the coupling - I don't understand why.
In any case, this wasn't the most important objective so I pushed ahead with recovering half-light levels for all the shadow sensors and closed up with the light doors. I kicked the optic again at 1712:14 PDT, let's see what the matrix looks like now.
Before starting this work, I had to key the unresponsive c1auxey VME crate.
For good measure:
So the primary vent objectives have been achieved, I think.
Tomorrow and later this week:
While we have the chance:
Unrelated to this work: megatron is responding to ping but isn't ssh-able. I also noticed earlier today that the IMC autolocker blinky wasn't blinking. So it probably requires a hard reboot. I left the lab for tonight, so I'll reboot it tomorrow - no nds data access in the meantime...
We executed this plan. Photos are here. Summary:
So if nothing else, we got to practise this new wiping technique with OSEMs in situ successfully.
Yesterday we noticed that the POS and SIDE eigenmodes were degenerate (with 1mHz spectral resolution). Moreover, the YAW peak had shifted down by ~500 mHz compared to earlier this week, although there was still good separation between PIT and YAW in the Oplev error signals. Ideas were (i) check if EQ stops were not backed out sufficiently, and (ii) look for any fibers/other constraints in the system. Today morning, I inspected the optic again. I felt the EQ stop viton tips were a bit close to the optic, so I backed them out further. Apart from this, I adjusted the LR and SIDE OSEM position in their respective holders to make the sensor voltages closer to half-light. Kicked the optic again just now, let's see if there is any change.
If everything goes smoothly, I think we should plan for the heavy doors going back on and commencing the pumpdown tomorrow. After discussion with Koji, we came to the conclusion that it isn't necessary to investigate IPANG (high likelihood of it falling off the steering optics during the pumpdown) / AS beam clipping (no strong evidence that this is a problem) for this vent.
Update 1235: Indeed, the eigenmodes are back to their positions from earlier this week. In fact, the POS and SIDE modes are now better separated! So the OSEM/magnet and EQ-stop/optic interactions are non-negligible in the analysis of the dynamics of the pendulum.
SoCal earthquake. All suspension watchdogs tripped.
Tried to recover the OSEM damping.
=> The watchdogs for all suspensions except for ITMX were restored. ITMX seems to be stuck. No further action by me for now.
Koji came to the lab to align the IMC/IFO, but found the mirrors are dancing around. Kruthi told me that there was M7.1 EQ at Ridgecrest. Looks like there are aftershocks of this EQ going on. So we need to wait for an hour to start the alignment work.
ITMX and ETMX are stuck.
- ITMX unstuck now
- IMC briefly locked at TEM00
A series of aftershocks came. I could unstick ITMX by turning on the damping during one of the aftershocks.
Between the aftershocks, MC1-3 were aligned to the previous DOF values. This allowed the IMC to flash. Once I locked a low-order TEM mode, it was easy to recover the alignment to a weak TEM00.
Now at least temporarily the full alignment of the IMC was recovered.
In fact, ETMX was not stuck until the M7.1 EQ today. After that it got stuck, but during the aftershocks, all the OSEMs occasionally showed full swings of the light levels. So I believe the magnets are OK.
We unstuck ETMX by shaking the stack. Most effective was applying a large, periodic, human-sized force to the north STACIS mounts.
At first, we noticed that the face OSEMs showed nearly zero variation.
We tried unsticking it through the usual ways of putting large excitations through AWG into the pit/yaw/side DOFs. This produced only ~0.2 microns of motion as seen by the OSEMs.
After the stack shake, we used the IFO ALIGN sliders to get the oplev beam back on the QPD.
The ETMX sensor trends observed before and after the earthquake are attached.
** plots deleted; SOMEONE tried to take raster images and turn them into PDF as if this would somehow satisfy our vector graphics requirement. Boo. Plots must be actual vector-graphics PDF
After this activity, the DC bias voltage required on ETMX to restore good X arm cavity alignment has changed by ~1.3 V. Assuming a full actuation range of 30 mrad for +/- 10 V, this implies that the pitch alignment of the stack has changed by ~2 mrad? Or maybe the suspension wires shifted in the standoff grooves by a small amount? This is ~x10 larger than the typical change imparted while working on the table, e.g. during a vent.
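The arithmetic behind the ~2 mrad estimate, using the numbers quoted above (1.3 V bias shift, an assumed 30 mrad of actuation over the full +/-10 V range):

```python
# Cross-check of the alignment shift implied by the bias voltage change.
full_range_mrad = 30.0   # assumed total actuation range over the full bias span
full_range_V = 20.0      # -10 V to +10 V
dV = 1.3                 # observed change in DC bias on ETMX

dtheta_mrad = dV * full_range_mrad / full_range_V
print(f"implied alignment shift: {dtheta_mrad:.2f} mrad")  # ~2 mrad
```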
Main point is that this kind of range requirement should probably be factored in when thinking about the high-voltage coil driver actuation.