ID | Date | Author | Type | Category | Subject
14588 | Thu May 2 10:59:58 2019 | Jon | Update | SUS | c1susaux in situ wiring testing completed

Summary
Yesterday Gautam and I ran final tests of the eight suspensions controlled by c1susaux, using PyIFOTest. All of the optics pass a set of basic signal-routing tests, which are described in more detail below. The only issue found was with ITMX having an apparent DC bias polarity reversal (all four front coils) relative to the other seven susaux optics. However, further investigation found that ETMX and ETMY have the same reversal, and there is documentation pointing to the magnets being oppositely-oriented on these two optics. It seems likely that this is the case for ITMX as well.
I conclude that all the new c1susaux wiring/EPICS interfacing works correctly. There are of course other tests that can still be scripted, but at this point I'm satisfied that the new Acromag machine itself is correctly installed. PyIFOTest has been morphed into a powerful general framework for automating IFO tests. Anything involving fast/slow IO can now be easily scripted. I highly encourage others to think of more applications this may have at the 40m.
Usage and Design
The code is currently located in /users/jon/pyifotest, although we should find a permanent location for it. From the root level it is executed as
$ ./IFOTest <PARAMETER_FILE>
where PARAMETER_FILE is the filepath to a YAML config file containing the test parameters. I've created a config file for each of the suspended optics. They are located in the root-level directory and follow the naming convention SUS-<OPTIC>.yaml.
The code climbs a hierarchical "ladder" of actuation/readback-paired tests, with the test at each level depending on signals validated in the preceding level. At the base is the fast data system, which provides an independent reference against which the slow channels are tested. There are currently three scripted tests for the slow SUS channels, listed in order of execution:
- VMon test: Validates the low-frequency sensing of SUS actuation (VMon channels). A DC offset is applied in the final filter module of the fast coil outputs, one coil at a time. The test confirms that the VMon of the actuated coil, and only this VMon, senses the displacement, and that the response has the correct polarity. The screen output is a matrix showing the change in VMon responses with actuation of each coil. A passing test, roughly, is diagonal values >> 0 and off-diagonal values << diagonal.

- Coil Enable test: Validates the slow watchdog control of the fast coil outputs (Coil-Enable channels). Analogously to the VMon test, this test also applies a DC offset via the fast system and analyzes the VMon responses. However, in this case, the offset is applied to all five coils simultaneously and only one coil output is enabled at a time. The screen output is again a \Delta VMon matrix interpreted in the same way as above.

- PDMon/DC Bias test: Validates slow alignment control and readback (BiasAdj and PDMon channels). A DC misalignment is introduced first in pitch, then in yaw, with the OSEM PDMon responses measured in both cases. Using the gains from the PIT/YAW -> COIL output coupling matrix, the script verifies that each coil moves in the correct direction and by a sufficiently large magnitude for the applied DC bias. The screen output shows the change in PDMon responses with a pure pitch actuation, and with a pure yaw actuation. The output filter matrix coefficients have already been divided out, so a passing test is a sufficiently large, positive change under both pitch and yaw actuations. (A minimal sketch of the matrix-style pass/fail logic follows this list.)
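Concretely, the pass/fail check used by the first two tests can be sketched as follows - this is not the actual PyIFOTest code, and the numpy usage and threshold values are my own assumptions:

import numpy as np

def vmon_matrix_passes(dvmon, min_diag=1.0, max_offdiag_ratio=0.1):
    # dvmon[i, j] = change in VMon j when coil i is actuated.
    # Pass criterion, roughly per the description above: diagonal elements
    # well above zero, off-diagonals small compared to the diagonal.
    # The thresholds here are placeholders, not PyIFOTest's actual values.
    d = np.diag(dvmon)
    offdiag = dvmon - np.diagflat(d)
    return np.all(d > min_diag) and np.all(np.abs(offdiag) < max_offdiag_ratio * d.min())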

14591 | Fri May 3 09:12:31 2019 | gautam | Update | SUS | All vertex SUS watchdogs were tripped

I found the 8 vertex watchdogs tripped this morning. The ETMs were fine, suggesting this was not an actual earthquake. I suspect it was connected to this remote work? Was there a reason why they were left tripped?
On a side note - I don't think we log the watchdog state explicitly. We can infer whether the optic is damped by looking at the OSEM sensor time series, but do we want to record the watchdog state to frames?

14592 | Fri May 3 12:48:40 2019 | gautam | Update | SUS | 1X4/1X5 cable admin

Chub and I crossed off some of these items this morning. The last bullet was addressed by Jon yesterday. I added a couple of new bullets.
The new power connectors will arrive next week, at which point we will install them. Note that there is no 24V Sorensen available, only 20V.
I am running a test on the 2W Mephisto for which I wanted the diagnostics connector plugged in again, with Acromag channels to record its signals. So we set up the highly non-ideal but temporary arrangement shown in Attachment #1. This will be cleaned up by Monday evening at the latest.
Update 1630 Monday 5/6: the sketchy PSL acromag setup has been disassembled.
Quote:
- Take photos of the new setup, cabling.
- Remove the old c1susaux crate from the rack to free up space, possibly put the PSL monitoring acromag chassis there.
- Test that the OSEM PD whitening switching is working for all 8 vertex optics. (verified as of 5/3/19 5pm)
- New 15V and 24V power cables with standard LIGO connectors need to be run from the Sorensen supplies in 1X5. The chassis is currently powered by bench supplies sitting on a cart behind the rack.
- All 24 new DB-37 signal cables need to be labeled.
- New 96-pin DIN connectors need to be put on two ribbon cables (1Y5_80 B, 1Y5_81) in the 1X4 rack. We had to break these connectors to remove them from the back of the eurocrates.
- General cleanup of any cables, etc. left around the rack. We cleaned up most things this evening.
- Rename the host computer c1susaux2 --> c1susaux, and update the DNS lookup tables on chiara.
14596 | Mon May 6 11:05:23 2019 | Jon | Update | SUS | All vertex SUS watchdogs were tripped

Yes, this was a consequence of the systemd scripting I was setting up. Unlike the old susaux system, we decided for safety NOT to allow the modbus IOC to automatically enable the coil outputs. Thus when the modbus service starts/restarts, it automatically restores all state except the watchdog channels, which are left in their default disabled state. They then have to be manually enabled by an operator, as I should have done after finishing testing.
Quote:
I found the 8 vertex watchdogs tripped this morning. The ETMs were fine, suggesting this was not an actual earthquake. I suspect it was connected to this remote work? Was there a reason why they were left tripped?
14608 | Wed May 15 00:40:19 2019 | gautam | Update | SUS | ETMY diagnosis plan

I collected some free-swinging data earlier this evening. There are still only 3 peaks visible in the ASDs, see Attachment #1.
Plan for tomorrow:
TBH, I don't have any clear ideas as to what we are supposed to do to fix the problem (or even what the problem is). So here is my plan for now:
- Take pictures of relative position of magnet and OSEM coil for all five coils
- Inspect positions of all EQ stops - back them well out if any look suspiciously close
- Inspect suspension wire for any kinks
- Inspect position of suspension wire in standoff
I anticipate that these will throw up some more clues.

14610 | Wed May 15 10:57:57 2019 | gautam | Update | SUS | EY chamber opened

[chub, gautam]
- Vented the EE annulus.
- Took the heavy door off, put it on the wooden rack, put a light door on at ~11am.
14611 | Wed May 15 17:46:24 2019 | gautam | Update | SUS | ETMY inspection

I set up the usual mini-cleanroom around the ETMY chamber. Then I carried out the investigative plan outlined here.
Main finding: I saw a fiber of what looks like first contact on the bottom left (as viewed from the HR side) of ETMY, connecting the optic to the cage. See Attachment #1. I don't know that this can explain the problem with the missing eigenmode, since it's not a hard constraint, but it seems like something that should be addressed in any case. How do we want to remove it? Just use a tweezer and pull it off, or apply a larger FC patch and then pull it off? I'm pretty sure it's first contact and not a piece of PEEK mesh because I can see it is adhered to the HR side of the optic, but I couldn't capture that detail in a photo.
There weren't any obvious problems with the magnet positioning inside the OSEMs, or with the suspension wire. All the EQ stop tips were >3mm away from the optic.
I also backed out the bottom EQ stops on the far (south side) of the optic by ~2 full turns of the screw. Taking another free-swinging dataset now to see if anything has changed. I will upload all the photos I took, with annotations, to the gPhotos later today eve. Light doors back on at ~1730.
Update 10pm: the photos have been uploaded. I've added a "description" to each photo which should convey the message of that particular shot; it shows up in my browser on the bottom left of the photo but can also be accessed by clicking the "info" icon. Please have a look and comment if something sticks out as odd / requires correction.
Update 1045pm: I looked at the freeswinging data from earlier today. Still only 3 peaks around 1 Hz.
The following optics were kicked:
ETMY
Wed May 15 17:45:51 PDT 2019
1242002769

14612 | Wed May 15 19:36:29 2019 | Koji | Update | SUS | ETMY inspection

A pair of tweezers is OK as long as there are no magnets around. You need to (somewhat) constrain the mirror with the EQ stops so that you can pull the fiber without dragging the mirror.
14613 | Thu May 16 13:07:14 2019 | gautam | Update | SUS | First contact residue removal

I used a pair of tweezers to remove the stray fiber of first contact. As Koji predicted, it was rather dry and didn't have the usual elasticity, so while I was able to pull most of it off, there is a small spot remaining on the HR surface of the ETM. We will remove this with a fresh application of a small patch of FC.
In the meantime, I'm curious whether this has actually fixed the suspension woes, so yet another round of freeswinging data collection is ongoing. From the first 5 mins, it looks positive - I see 4 peaks around 1 Hz!
The following optics were kicked:
ETMY
Thu May 16 13:06:39 PDT 2019
1242072418
Update 730pm: There are now four well-defined peaks around 1 Hz. Together with the Bounce and Roll modes, that makes six. The peak at 0.92 Hz, which I believe corresponds to the Yaw eigenmode, is significantly lower than the other three. I want to get some info about the input matrix, but there was some NDS dropout and large segments of data aren't available using the python nds fetch method, so I am trying again; kicked ETMY at 1828 PDT. It may be that we could benefit from some adjustment of the OSEM positions, since the coupling of the bounce mode to LL is high. Also, the SIDE/POS resonances aren't obviously deconvolved. The stray first contact has to be removed too. But overall I think it was a successful removal, and the suspension characteristics are more in line with what is "expected".

14615 | Thu May 16 23:31:55 2019 | gautam | Update | SUS | ETMY suspension characterization

Here is my analysis. I think there are still some problems with this suspension.
Attachment #1: Time domain plots of the ringdown. The LL coil has a peak response ~half that of the other face OSEMs. I checked that the signal isn't being railed; the lowest level is > 100 cts.
Attachment #2: Complex TF from UL to the other coils. While there are four peaks now, looking at the phase information, it isn't possible to clearly disentangle PIT or YAW motion - in fact, for all peaks, there are at least three face shadow sensors which report the same phase. The gains are also pretty poorly balanced - e.g. for the 0.77 Hz peak, the magnitude of UR->UL is ~0.3, while LR->UL is ~3. Is it reasonable that there is a factor of 10 imbalance?
Attachment #3: Nevertheless, I assumed the following mapping of the peaks (quoted f0 is from a Lorentzian fit) and attempted to find the input matrix that best converts the Sensor basis into the Euler basis.
DoF | f0 [Hz]
POS | 1.004
PIT | 0.771
YAW | 0.920
SIDE | 0.967
Unsurprisingly, the elements of this matrix are very different from unity (I have to fix the normalization of the rows).
Attachment #4: Pre and post diagonalization spectra. The null stream certainly looks cleaner, but then again, this is by design so I'm not sure if this matrix is useful to implement.
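As an aside, here is a rough sketch of how such an input matrix can be estimated from free-swinging data - this is not the actual script used, and the function signature, phase convention and row normalization are my own assumptions:

import numpy as np

def estimate_input_matrix(f, spec, peaks, sensors=("UL", "UR", "LR", "LL", "SD")):
    # f     : frequency vector [Hz] of the FFTs
    # spec  : dict sensor name -> complex spectrum of the free-swinging signal
    # peaks : dict DoF -> eigenfrequency [Hz], e.g. the table above
    # Sensing matrix: each sensor's response at each eigenfrequency,
    # phase-referenced to the first sensor so the rows are (signed) real.
    A = np.array([[spec[s][np.argmin(np.abs(f - f0))] for s in sensors]
                  for f0 in peaks.values()])
    A = np.real(A * np.exp(-1j * np.angle(A[:, :1])))
    # Input matrix: pseudo-inverse mapping sensor signals -> Euler DoFs,
    # with rows normalized (the normalization step mentioned above).
    M = np.linalg.pinv(A).T
    return M / np.abs(M).sum(axis=1, keepdims=True)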
Next steps:
- Repeat the actuator diagonality test detailed here.
- ???
In case anyone wants to repeat the analysis, the suspension was kicked at 1828 PDT today and this analysis uses 15000 seconds of data from then onwards.
Update 18 May 3pm: Attachment #5 is a better presentation of the data shown in Attachment #2; the remark about the odd phasing of the coils is more clearly seen in this zoomed-in view. Attachment #6 shows Lorentzian fits to the peaks - the Qs are comparable to those seen for the other optics, although the Q of the 0.77 Hz peak is rather low.

14617 | Fri May 17 10:57:01 2019 | gautam | Update | SUS | IY chamber opened

At ~930am, I vented the IY annulus by opening VAEV. I checked the particle count; it seemed within the guidelines to allow door opening, so I went ahead and loosened the bolts on the ITMY chamber.
Chub and I took the heavy door off with the vertex crane at ~1015am, and put the light door on.
Diagnosis plan is mainly inspection for now: take pictures of all OSEM/magnet positionings. Once we analyze those, we can decide which OSEMs we want to adjust in the holders (if any). I shut down the ITMY and SRM watchdogs in anticipation of in-chamber work.
Not related to this work: since the annuli aren't being pumped on, the pressure has been slowly rising over the week. The unopened annuli are still at <1 torr, and the PAN region is at ~2 mtorr.

14620 | Fri May 17 17:01:08 2019 | gautam | Update | SUS | ETMY suspension characterization

To investigate my mapping of the eigenfrequencies to eigenmodes, I checked the Oplev spectra for the last few hours, during which the Oplev spot has been on the QPD (but the optic is undamped).
- Based on Attachment #1, I can't figure out which peak corresponds to what motion.
- The most prominent peak (judged by peak height) is at 0.771 Hz for both PITCH and YAW
- Assuming the peak at 0.92 Hz is the other angular mode, the PIT/YAW decoupling is poor in both peaks - only a factor of ~2 in both cases.
- Why are the POS and SIDE resonances sensed so asymmetrically in the PIT and YAW channels? There's a factor of 10 difference there...
So, while I conclude that my first-contact residue removal removed a constraint from the system (hence the pendulum dynamics are now as expected, with 6 eigenmodes), more thought is needed in judging the appropriate course of action.

14623 | Mon May 20 11:33:46 2019 | gautam | Update | SUS | ITMY inspection

With Chub providing illumination via the camera viewport, I was able to take photos of ITMY this morning. All the magnets look well clear of the OSEMs, with the possible exception of UR. I will adjust the position of this OSEM slightly. To test whether this fix is effective, I will then cycle the bias voltage to the ITM between 0 and the maximum allowed, and check whether the optic gets stuck.
14625 | Mon May 20 17:12:57 2019 | gautam | Update | SUS | ETMY LL adjustment

Following the observation that the response in the LL shadow sensor was lower than that of the others, I decided to pull it out a little, so that the signal level with the nominal DC bias voltage applied was closer to half the open-voltage. I also chose to rotate the SIDE OSEM by ~20 degrees CCW in its holder (viewed from the south side of the EY chamber), to match more closely its position in a photo taken prior to the haphazard vent of the summer of 2018. For the SIDE OSEM, the theoretical "best" alignment for insensitivity to POS motion has the shadow sensor beam horizontal - but without some shimming of the OSEM in the holder, I can't get the magnet clear of the teflon inside the OSEM.
While I was inside the chamber, I attempted to minimize the Bounce/Roll mode coupling to the LL and SIDE OSEM channels by rotating the coil inside the holder while keeping the shadow sensor voltage at half-light. To monitor the coupling "live", I set up DTT with 0.3 Hz bandwidth and 3 exponentially weighted averages. For the LL coil, I went through pi radians of rotation on either side of the equilibrium, but saw no significant change in the coupling - I don't understand why.
In any case, this wasn't the most important objective, so I pushed ahead with recovering half-light levels for all the shadow sensors and closed up with the light doors. I kicked the optic again at 1712:14 PDT; let's see what the matrix looks like now.
Before starting this work, I had to key the unresponsive c1auxey VME crate.

14627 | Mon May 20 22:06:07 2019 | gautam | Update | SUS | ITMY also kicked

For good measure:
The following optics were kicked:
ITMY
Mon May 20 22:05:01 PDT 2019
1242450319

14628 | Tue May 21 00:15:21 2019 | gautam | Update | SUS | Main objectives of vent achieved (?)

Summary:
- ETMY now shows four suspension eigenmodes, with sensible phasing between signals for the angular DoFs. However, the eigenfrequencies have shifted by ~10% compared to 16 May 2019.
- PIT and YAW for ETMY as witnessed by the Oplev are now much better separated.
- ITMY can have its bias voltage set to zero and back to nominal alignment without it getting stuck.
- The sensing matrix for ETMY that I get doesn't make much sense to me. Nevertheless, the optic damps even with the "naive" input matrix.
So the primary vent objectives have been achieved, I think.
Details:
- ETMY free-swinging data after adjusting LL and SIDE coils such that these were closer to half-light values
- Attachment #1 - oplev witnessing the angular motion of the optic. PIT and YAW are well decoupled.
- Attachment #2 - complex TF between the suspension coils. There is still considerable imbalance between coils, but at least the phasing of the signals makes sense for PIT and YAW now.
- Attachment #3 - DoFs sensed using the naive and optimized sensing matrices.
- Attachment #4 - sensing matrix that the free swinging data tells me to implement. If the local damping works with the naive input matrix but we get better diagonality in the actuation matrix, I think we may as well stick to the naive input matrix.
- BR mode coupling minimization:
- As alluded to in my previous elog, I tried to reduce the bounce mode coupling into the shadow sensor by rotating the OSEM in its holder.
- However, I saw negligible change in the coupling, even going through a full pi radian rotation. I imagine the coupling will change smoothly so we should have seen some change in one of the ~15 positions I sampled in between, but I saw none.
- The anomalously high coupling of the bounce mode to the shadow sensor readout is telling us something - I'm just not sure what yet.
- ITMY:
- The offender was the LL OSEM, whose rotational orientation was causing the magnet to get stuck to the teflon part of the OSEM coil when the bias voltage was changed by a sufficiently large amount.
- I rectified this (required adjustment of all 5 OSEMs to get everything back to half light again).
- After this, I was able to zero the bias voltage to the PIT/YAW DoFs and not have the optic get stuck - huzzah 😀
- While I have the chance, I'm collecting the free-swinging data to see what kind of sensing matrix this optic yields.
Tomorrow and later this week:
- Prepare ETMY for first contact cleaning to remove the residual piece.
- Drag wipe the HR surface with dehydrated acetone
- Apply F.C. as usual, inspect the HR face after peeling for improvement if any.
- This will give us a chance to practise the F.C.ing with the optic EQ-stopped (moving cage etc).
- Confirm ETMY actuation makes sense.
- Use the green beam for an ASS proxy implementation?
- High quality close out pictures of OSEMs and general chamber layout.
- Anything else? Any other tests we can do to convince ourselves the suspensions are well-behaved?
While we have the chance:
- Fix the IPANG alignment? Because the TT drift/hysteresis problem is still of unknown cause.
- Check that the AS beam is centered on OMs 1-6?
- Recover the 70% AS light that is being diverted to the OMC?
Unrelated to this work: megatron is responding to ping but isn't ssh-able. I also noticed earlier today that the IMC autolocker blinky wasn't blinking. So it probably requires a hard reboot. I left the lab for tonight so I'll reboot it tomorrow, but no nds data access in the meantime...

14629 | Tue May 21 21:33:27 2019 | gautam | Update | SUS | ETMY HR face cleaned

[koji, gautam]
We executed this plan. Photos are here. Summary:
- Optic was EQ-stopped (face stops only), with the OSEMs in situ. We tried to do this as evenly as possible to avoid any magnets getting stuck on OSEMs.
- We used the specially procured acetone from Chub to drag wipe the HR face. This was a definite improvement; we should always get the correct grade of solvents when we attempt to clean optics.
- It was observed that drag-wiping did not really have the desired cleaning effect. So Koji went in with hemostat / lens tissue soaked in acetone and wiped the HR face. This improved the situation.
- Applied a layer of F.C. Waited for it to dry, and then peeled it off. Under the green flashlight, the optic still looks horrific - but we decided against further drag-wiping/first-contacting. If the loss is truly 50 ppm, this is totally not a show-stopper for now.
- Suspension cage was replaced. EQ stops were released. Bias voltages were adjusted to bring the Oplev spot back to the center of the QPD. Now a free-swinging data collection is ongoing...
The following optics were kicked:
ETMY
Tue May 21 22:58:18 PDT 2019
1242539916
So if nothing else, we got to practise this new wiping technique with OSEMs in situ successfully.
Quote:
- Prepare ETMY for first contact cleaning to remove the residual piece.
- Drag wipe the HR surface with dehydrated acetone
- Apply F.C. as usual, inspect the HR face after peeling for improvement if any.
- This will give us a chance to practise the F.C.ing with the optic EQ-stopped (moving cage etc).
14630 | Wed May 22 11:53:50 2019 | gautam | Update | SUS | ETMY EQ stops backed out

Yesterday we noticed that the POS and SIDE eigenmodes were degenerate (with 1mHz spectral resolution). Moreover, the YAW peak had shifted down by ~500 mHz compared to earlier this week, although there was still good separation between PIT and YAW in the Oplev error signals. Ideas were (i) check if EQ stops were not backed out sufficiently, and (ii) look for any fibers/other constraints in the system. This morning, I inspected the optic again. I felt the EQ stop viton tips were a bit close to the optic, so I backed them out further. Apart from this, I adjusted the LR and SIDE OSEM positions in their respective holders to bring the sensor voltages closer to half-light. Kicked the optic again just now; let's see if there is any change.
Remaining tasks:
- Check EY table leveling.
- Check EY actuation matrix diagonality using this technique.
- Check that IR resonances are seen (and all the usual pre-pumpdown alignment checks).
- Take close out pictures.
- Heavy doors on, pump down.
If everything goes smoothly, I think we should plan for the heavy doors going back on and commencing the pumpdown tomorrow. After discussion with Koji, we came to the conclusion that it isn't necessary to investigate IPANG (high likelihood of it falling off the steering optics during the pumpdown) / AS beam clipping (no strong evidence that this is a problem) for this vent.
Update 1235: the eigenmodes are back to their positions from earlier this week. Indeed, the POS and SIDE modes are actually better separated! So the OSEM/magnet and EQ-stop/optic interactions are non-negligible in the analysis of the dynamics of the pendulum.

14725 | Thu Jul 4 10:54:21 2019 | Koji | Summary | SUS | Suspension damping recovered, ITMX stuck

So Cal earthquake. All suspension watchdogs tripped.
Tried to recover the OSEM damping.
=> The watchdogs for all suspensions except for ITMX were restored. ITMX seems to be stuck. No further action by me for now.

14727 | Fri Jul 5 20:57:04 2019 | Koji | Update | SUS | Another M7.1 EQ

[Kruthi, Koji]
Koji came to the lab to align the IMC/IFO, but found the mirrors dancing around. Kruthi told me that there was an M7.1 EQ at Ridgecrest. It looks like aftershocks of this EQ are still going on, so we need to wait for an hour before starting the alignment work.
ITMX and ETMX are stuck.

14728 | Fri Jul 5 21:53:10 2019 | Koji | Update | SUS | Another M7.1 EQ

- ITMX unstuck now
- IMC briefly locked at TEM00
A series of aftershocks came. I could unstick ITMX by turning on the damping during one of the aftershocks.
Between the aftershocks, MC1-3 were aligned to the previous dof values. This allowed the IMC to flash. Once I got a lock on a low-order TEM mode, it was easy to recover the alignment to have a weak TEM00.
Now, at least temporarily, the full alignment of the IMC has been recovered.

14729 | Fri Jul 5 22:21:13 2019 | Koji | Update | SUS | Another M7.1 EQ

In fact, ETMX was not stuck until the M7.1 EQ today. After that it got stuck, but during the aftershocks, all the OSEMs occasionally showed full swings of the light levels. So I believe the magnets are OK.
14730 | Fri Jul 5 23:28:52 2019 | rana, kruthi | Summary | SUS | ETMX unstuck by shaking the stack

We unstuck ETMX by shaking the stack. Most effective was to apply a large periodic human-sized force to the north STACIS mounts.
At first, we noticed that the face OSEMs showed nearly zero variation.
We tried unsticking it through the usual ways of putting large excitations through AWG into the pit/yaw/side DOFs. This produced only ~0.2 microns of motion as seen by the OSEMs.
After the stack shake, we used the IFO ALIGN sliders to get the oplev beam back on the QPD.
The ETMX sensor trends observed before and after the earthquake are attached.
** plots deleted; SOMEONE tried to take raster images and turn them into PDFs as if this would somehow satisfy our vector graphics requirement. Boo. Plots must be actual vector graphics PDFs.

14736 | Tue Jul 9 08:33:31 2019 | gautam | Summary | SUS | ETMX PIT bias voltage changed by ~1V

After this activity, the DC bias voltage required on ETMX to restore good X arm cavity alignment has changed by ~1.3 V. Assuming a full actuation range of 30 mrad for +/- 10 V, this implies that the pitch alignment of the stack has changed by ~2 mrad? Or maybe the suspension wires shifted in the standoff grooves by a small amount? This is ~10x larger than the typical change imparted while working on the table, e.g. during a vent.
Main point is that this kind of range requirement should probably be factored in when thinking about the high-voltage coil driver actuation.
Quote:
We unstuck ETMX by shaking the stack. Most effective was to apply a large periodic human-sized force to the north STACIS mounts.
14742 | Wed Jul 10 10:04:09 2019 | gautam | Update | SUS | Tip-Tilt moved from South clean cabinet to bake lab cleanroom

Arnaud and I moved one of the two spare TT suspensions from the south clean cabinet to the bake lab clean room. The main purpose was to inspect the contents of the packaging. According to the label, this suspension was cleaned to Class A standards, so we tried to be clean while handling it (frocks, gloves, masks etc). We found that the foil wrapping contained one suspension cage, with what looked like all the parts in a semi-assembled state. There were no OSEMs or electronics together with the suspension cage. Pictures were taken and uploaded to gPhoto. Arnaud is going to plan his tests, so in the meantime, this unit has been stored in Cabinet #6 in the bake lab cleanroom.
14745 | Wed Jul 10 16:53:22 2019 | gautam | Update | SUS | PRM watchdog condition modified

[koji, gautam]
We noticed that the PRM watchdog was tripping frequently. This is a period of enhanced seismic activity. The reason PRM in particular trips often is that the SIDE OSEM has a 5x increased transimpedance. We implemented a workaround by modifying the watchdog tripping condition to scale the SD channel RMS by a factor of 0.2 (relative to the UL and LL channels). We restarted the modbus process on c1susaux and tested that the new logic works. Here is the relevant snippet of code:
# Disable fast DAC if variation tests too high
# PRM Side is special, see elog 14745
# A,C,D = UL,LL,SD PD RMS variations; B = trip threshold; SD is scaled by 0.2
record(calc,"C1:SUS-PRM_LOGIC")
{
    field(DESC,"Tests whether RMS too high")
    field(SCAN,"1 second")
    field(PHAS,"1")
    field(PREC,"0")
    field(HOPR,"1")
    field(LOPR,"0")
    field(CALC,"(A<B)&(C<B)&(0.2*D<B)")
    field(INPA,"C1:SUS-PRM_ULPD_VAR NPP NMS")
    field(INPB,"C1:SUS-PRM_PD_MAX_VAR NPP NMS")
    field(INPC,"C1:SUS-PRM_LLPD_VAR NPP NMS")
    field(INPD,"C1:SUS-PRM_SDPD_VAR NPP NMS")
}
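In Python terms, the CALC expression above evaluates as follows (a sketch for clarity only, not code that runs anywhere in the system):

def prm_watchdog_ok(ul_var, ll_var, sd_var, max_var):
    # Returns False (trips the watchdog) if any scaled RMS variation exceeds
    # the threshold; SD is scaled by 0.2 to undo its 5x transimpedance.
    return (ul_var < max_var) and (ll_var < max_var) and (0.2 * sd_var < max_var)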
The db file has a note about this as well so that future debuggers aren't mystified by a factor of 0.2. |
14755 | Fri Jul 12 07:37:48 2019 | gautam | Update | SUS | M4.9 EQ in Ridgecrest

All suspension watchdogs were tripped ~90 mins ago. I restored the damping. IMC is locked.
ITMX was stuck. I set it free. But notice that the UL Sensor RMS is higher than the other 4? I thought ITMY UL was problematic, but maybe ITMX has also failed, or maybe it's a coincidence? Something for IFOtest to figure out, I guess. I don't think there is a cable swap between ITMX/ITMY, because when I move the ITMX actuators, the ITMX sensors respond, and I can also see the optic moving on the camera.
It took me a while to figure out what's going on because we don't have the seis BLRMS - I moved the usual projector striptool traces to the TV screen for better diagnostic ability.
Update 16 July 1515: Even though the RMS is computed from the slow readback channels, for diagnosis, I looked at the spectra of the fast PD monitoring channels (i.e. *_SENSOR_*) for ITMX - it looks like the increased UL RMS is coming from enhanced BR-mode coupling and not from any issues with the whitening switching (which seems to work as advertised, see Attachment #3, where the LL traces are meant to be representative of LL, LR, SD and UR channels).
14763 | Tue Jul 16 15:00:03 2019 | gautam | Update | SUS | Multiple small EQs

There were several small/medium earthquakes in Ridgecrest and one medium one in Blackhawk, CA at about 2000 UTC (i.e. ~2 hours ago), one of which caused the BS, ITMY, and ETM watchdogs to trip. I restored the damping just now.
14776 | Fri Jul 19 12:50:10 2019 | gautam | Update | SUS | DC bias actuation options for SOS

Rana and I talked about some (genius) options for the large-range DC bias actuation on the SOS, which do not require us to supply high voltage to the OSEMs from outside the vacuum.
What we came up with (these are pretty vague ideas at the moment):
- Some kind of thermal actuation.
- Some kind of electrical actuation where we supply normal (+/- 10 V) from outside the vacuum, and some mechanism inside the chamber integrates (and hence also low-pass filters) the applied voltage to provide a large DC force without injecting a ton of sensor noise.
- Use the blue piers as a DC actuator to correct for the pitch imbalance --- Kruthi and Milind are going to do some experiments to investigate this possibility later today.
For the thermal option, I remembered that (exactly a year ago to the day!) when we were doing cavity mode scans, once the heaters were turned on, I needed to apply a significant correction to the DC bias voltage to bring the cavity alignment back to normal. The mechanism of this wasn't exactly clear to me - furthermore, we don't have a FLIR-cam picture of where the heater radiation pattern was centered prior to my re-centering of it on the optic earlier this year, so we don't know what exactly we were heating. Nevertheless, I decided to look at the trend data from that night's work - see Attachment #1. This is a minute trend of some ETMY channels from 0000 UTC on 18 July 2018, for 24 hours. Some remarks:
- We did multiple trials that night, both with the elliptical reflector and the cylindrical setup that Annalisa and Terra implemented. I think the most relevant part of this data is starting at 1500 UTC (i.e. ~8am PDT, which is around when we closed shop and went home). So that's when the heaters were turned off, and the subsequent drift of PIT/YAW are, I claim, due to whatever thermal transients were at play.
- Just prior to that time, we were running the heater at close to its maximum rated current - so this relaxation is indicative of the range we can get out of this method of actuation.
- I had wrongly claimed in my discussion with Rana this morning that the change in alignment was mostly in pitch - in fact, the data suggests the change is almost equal in the two DoFs. Oplev and OSEMs report different changes though, by almost a factor of 2....
- The timescale of the relaxation is ~20 minutes - what part(s) of the suspension take this timescale to heat up/cool down? Unlikely to be the wire/any metal parts because the thermal conductivity is high?
- In the optimistic scenario, let's say we get 100 urad of actuation range - over 40m, this corresponds to a beam spot motion of ~8mm, which isn't a whole lot. Since the mechanism of what is causing this misalignment is unclear, we may end up with significantly less actuation range as well.
- I will repeat the test (i.e. drive the heater and look for drift in the suspension alignment using OSEMs/Oplev) in the afternoon - now I claim the radiation pattern is better centered on the optic, so maybe we will have a better understanding of what mechanisms are at play.
Also see this elog by Terra.
Attachment #2 shows the results from today's heating. I did 4 steps, which are obvious in the data - I=0.6A, I=0.76A, I=0.9A, and I=1.05A.
In science, one usually tries to implement some kind of interpretation, so as to translate the natural world into meaning.

14798 | Mon Jul 22 13:32:55 2019 | Kruthi | Update | SUS | Test mass pitch adjustment test

[Kruthi, Milind]
On Friday, Milind and I performed the pitch adjustment test Rana had asked us to do. Only one blue beam was accessible in the case of ITMX, and two in the cases of ETMY, ETMX and ITMY. Milind (of mass 72 kg as of 10 May 2019) stood on each of the accessible blue beams of the test mass chambers for one minute, and I recorded the corresponding gps time. Before moving to the next beam, we allowed more than a minute for relaxation after the standing end time. Following are the recorded gps times.
Optic, beam          | ETMX Beam 1 | ETMX Beam 2 | ITMX Beam 1 | ETMY Beam 1 | ETMY Beam 2 | ITMY Beam 1 | ITMY Beam 2
Standing start (gps) | 1247620911  | 1247621055  | 1247621984  | 1247622394  | 1247622585  | 1247622180  | 1247622814
Standing end (gps)   | 1247620974  | 1247621118  | 1247622058  | 1247622459  | 1247622647  | 1247622250  | 1247622880

PS: For each blue beam, relaxation time ~1 min after the standing end time
14977 | Fri Oct 18 17:35:07 2019 | gautam | Update | SUS | ETMX sat box disconnected

Koji suggested a systematic investigation of the ETMX suspension electronics. The tests to be done are:
- Characterization of PD whitening amplifiers - with the satellite box disconnected, we will look for glitches in the OSEM channels.
- Characterization of LT1125s in the PD chain of the amplifiers - with the in-vacuum OSEMs disconnected, we will look for glitches due to the on-board transimpedance amplifiers of the satellite box.
- Characterization using the satellite box tester - this will signal problems with the physical OSEMs.
- Characterization of the suspension coil driver electronics - this will happen later.
So the ETMX satellite box is unplugged now, starting 530 pm PDT.
The satellite box was reconnected and the suspension was left with the watchdog off but the OSEMs roughly centered. We will watch for glitches over the weekend.

14982 | Mon Oct 21 16:02:21 2019 | gautam | Update | SUS | ETMX over the weekend

Looking at the sensor and oplev trends over the weekend, there was only one event where the optic seems to have been macroscopically misaligned, at ~11:05:00 UTC on Oct 19 (early Saturday morning PDT). I attach a plot of the 2 kHz time series data with the mean value subtracted and a 0.6-1.2 Hz notch filter applied to remove the pendulum motion for better visualization. The y-axis calibration for the top plot assumes 1 ct ~= 1 um. This "glitch" has a timescale of a few seconds, which is consistent with what we see on the CCD monitors when the cavity is locked - the alignment drifts away over a few seconds.
As usual, this tells us nothing conclusive. Anyway, I am re-enabling the watchdog and pushing on with locking activity, hoping the suspension cooperates.
Quote:
The satellite box was reconnected and the suspension was left with the watchdog off but the OSEMs roughly centered. We will watch for glitches over the weekend.
15002 | Wed Oct 30 19:20:27 2019 | gautam | Update | SUS | PRM suspension issues

While I was trying to lock the PRMI this evening, I noticed that I couldn't move the REFL beamspot in the CCD field of view by adjusting the slow bias voltages to the PRM. Other suspensions controlled by c1susaux seem to respond okay, so at first glance it isn't a problem with the Acromag. Looking at the OSEM sensor input levels, I noticed that UL is much lower than the others - see Attachment #1; this seems to have happened ~100 days ago. I plugged the tester box in to check whether the problem is with the electronics or an actual shorting of some pins on the physical OSEM, as we have had in the past. So the PRM watchdog is shutdown for now and there is no control of the optic available as the cables are detached. I will replace the connections later in the evening.
Update 10pm:
- Measured coil inductances with breakout board and LCR meter - all 5 coils returned ~3.28-3.32 mH.
- Measured coil resistances with breakout board and DMM - all 5 coils returned ~16-17 ohms.
- Checked OSEM PD capacitance (with no bias voltage) using the LCR meter - each PD returned ~1nF.
- Checked resistance between LED Cathode and Anode for all 5 LEDs using DMM - each returned Hi-Z.
- Checked resistance between PD Cathode and Anode for all 5 PDs using DMM - each returned ~430 kohms.
- Checked that I could change the slow bias voltages and see a response at the expected pins (with the suspension disconnected).
Since I couldn't find anything wrong, I plugged the suspension back in - and voila, the suspect UL PD voltage level came back to a level consistent with the others! See Attachment #2.
Anyway, I had some hours of data with the tester box plugged in - see Attachment #3 for a comparison of the shadow sensor readout with the tester box (all black traces) vs with the suspension plugged in, local damping loops active (coloured traces). The sensing noise re-injection will depend on the specifics of the local damping loop shapes but I suspect it will limit feedforward subtraction possibilities at low frequencies.
However, I continue to have problems aligning the optic using the slow bias sliders (the fast ones work just fine) - the problem seems to be EPICS related. In Attachment #4, I show that even though I change the soft PITCH bias voltage adjust channel for the PRM, the linked channels which control the actual voltages to the coils take several seconds to show any response, and do so asynchronously. I tried restarting the modbus process on c1susaux, but the problem persists. Perhaps it needs a reboot of the computer and/or the acromag chassis? I note that the same problem exists for the BS and PRM suspensions, but not for ITMX or ITMY (didn't check the IMC optics). Perhaps a particular Acromag DAC unit is faulty / has issues with the internal subnet?

15003 | Wed Oct 30 23:12:27 2019 | Koji | Update | SUS | PRM suspension issues

Sigh... hard loch
15155 | Sun Jan 26 13:30:19 2020 | gautam | Update | SUS | All watchdogs tripped, now restored

Looks like an M=4.6 earthquake in Barstow, CA tripped all the suspensions. ITMX got stuck. I restored the local damping on all the suspensions just now, and freed ITMX. Looks like all the suspensions damp okay, so I think we didn't suffer any lasting damage. IMC was re-aligned and is now locked.
15173 | Wed Jan 29 03:05:47 2020 | rana, gautam | Update | SUS | MC misalignments / sat box games

In the last couple of days, as the IMC ringdowns have been going on, we have noticed that the MC is behaving badly. Misaligning, drifting, etc.
Gautam told me a horror story about him, Koji, and melted wires inside the sat boxes.
I said, "Its getting too hot in there. So let's take the lids off!"
So then we:
- Removed the lid (only 4 screws were still there)
- cut off some of the shield - ground wires and insulated them with electrical tape
- squished the IDC connectors on tightly
- left it this way to see if the MC would get better - certainly the painfully hot heatsinks inside the box were now just 110 F or so
After some minutes, we saw no drifting. So maybe my theory of "hot heatsink partially shorting a coil current to GND through partially melted ribbon cable" makes sense? IF this seems better after a month, let's de-lid all the optics.
Let's look at some longer trends and be very careful next to MC2 for the next 3 days! I have put a dangerous mousetrap there to catch anyone who walks near the vacuum chamber.
gautam: the grounding situation per my assessment is that the shields of all the IDC cables are connected to a common metal strip at 1X5 - but in my survey, I didn't see any grounding of this strip to a common ground.

15261 | Sat Mar 7 15:18:30 2020 | gautam | Update | SUS | EQ tripped some suspensions

An earthquake around 330 UTC (=730pm yesterday eve) tripped the ITMX, ITMY and ETMX watchdogs. ITMX got stuck. I released the stuck optic and re-enabled the local damping loops just now.
15262 | Tue Mar 10 14:30:16 2020 | yehonathan | Update | SUS |

ETMX was grossly misaligned.
I re-aligned it and the X arm now locks.
7:00PM with Koji
The alignment of both the X and Y arms was recovered.
~>z avg 10 C1:LSC-TRX_OUT C1:LSC-TRY_OUT
C1:LSC-TRX_OUT 0.9914034307003021
C1:LSC-TRY_OUT 0.9690877735614777
We are running ASS for the X arm to recover the X arm alignment.
Meanwhile, I want to block the Y arm trans PD (Thorlabs). To do it, the PD<->QPD thresholds were changed from 5.0/3.0 to 0.5/0.3.

15263 | Tue Mar 10 19:58:16 2020 | yehonathan | Update | SUS |

I returned the triggering threshold to normal values (5/3).
Quote:
Meanwhile, I want to block the Y arm trans PD (Thorlabs). To do it, the PD<->QPD thresholds were changed from 5.0/3.0 to 0.5/0.3.

15335 | Fri May 15 19:10:42 2020 | gautam | Update | SUS | All watchdogs tripped, now restored

This EQ in Nevada seems to have tripped all watchdogs. ITMX was stuck. It was released, and all the watchdogs were restored. Now the IMC is locked.
15373 | Wed Jun 3 19:19:11 2020 | gautam | Update | SUS | All watchdogs tripped

This EQ seems to have knocked all suspensions out. ITMX was stuck. It is now released, and the IMC is locked again. It looks like there are some serious aftershocks going on, so let's keep an eye on things.

15376 | Thu Jun 4 20:54:40 2020 | gautam | Update | SUS | MC1 Slow Bias issues

Summary:
I found that there is an issue with the MC1 slow bias voltages.
Details:
I usually offload the DC part of the output voltage from the WFS servos to the slow bias voltage sliders, so as to preserve maximum actuation range from the fast system. However, today, I found that this servo wasn't working well at all. So I dug a little deeper. Looking at the EPICS database records:
- The user-facing channels are "PIT" and "YAW" bias voltages.
- These are converted to voltages to be sent to individual coils by some calc channels in the EPICS database record. So, for example, the voltage to be sent to the "UL" coil (Upper Left, as viewed from the AR side of the optic), is A+B, where A is the "PIT" voltage and B is the "YAW" voltage. Similar combinations of A and B are used for the other 3 face coils.
- The problem is obvious - if the requested combination A+B exceeds 10 V (as happens when both "PIT" and "YAW" are near 5 V), the voltage requested of the UL coil exceeds the 10 V maximum that the Acromag DACs can put out (see the sketch after this list).
- As it happens, with the IFO currently aligned, MC1 is the only optic which faces this problem.
- Why has this not been an issue before? In fact, looking at some old data, the "PIT" and "YAW" bias voltages to MC1 were both ~1-2 V in 2018. But I confirmed that something in the region of ~5 V is required from each of the "PIT" and "YAW" channels to bring the MCREFL spot back to the center of the camera, so something has changed the DC alignment of MC1, maybe an earthquake or something? Anyway, with these settings, 2/4 coils are basically saturated, and so we can only move the optic diagonally. 😢
- Other coils that have requested output voltages > 5V (so more than half the range of the DAC) include MC2 LL (5.2V), and ETMX LL and LR (5.5 and 5.8 V respectively).
- Either a factor of 0.5 should be included in all the EPICS database records, or else, we should make the "PIT" and "YAW" sliders range only from -5 to +5 V, so that this kind of misleading info isn't wasting time.
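A quick illustration of the saturation (a hypothetical sketch - the sign conventions of the actual database records may differ):

def coil_requests(pit, yaw):
    # each face coil bias is a sum/difference of the PIT (A) and YAW (B) sliders
    return {"UL": pit + yaw, "UR": pit - yaw, "LL": -pit + yaw, "LR": -pit - yaw}

for coil, v in coil_requests(5.2, 5.1).items():
    clipped = max(min(v, 10.0), -10.0)     # the Acromag DAC rails at +/-10 V
    if clipped != v:
        print(f"{coil}: requested {v:+.1f} V, but the DAC clips at {clipped:+.1f} V")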
15377 | Thu Jun 4 21:32:00 2020 | Koji | Update | SUS | MC1 Slow Bias issues

We can limit the EPICS values by giving some parameters to the channels. cf. https://epics.anl.gov/tech-talk/2012/msg00147.php
But this does not solve the MC1 issue. The only thing we can do right now is to halve the output resistor, for example.

15428 | Wed Jun 24 22:33:44 2020 | gautam | Update | SUS | EQ tripped all suspensions

This earthquake tripped all suspensions and ITMX got stuck. The watchdogs were restored and the stuck optic was released. The IFO was re-aligned; POX/POY and PRMI on-carrier locking all work okay.
15431 | Thu Jun 25 15:11:00 2020 | gautam | Update | SUS | MC1 coil driver resistance quartered

I implemented this change today. We only had 100 ohm, 3 W resistors in stock (no 200 ohm with an adequate power rating). Assuming 10 V is dropped across this resistor, the power dissipation is V^2/R ~ 1 W, so we should have sufficient margin. The DCC entry has been updated with the new schematic and a photo of the component side of the board. Note that the series resistance of the fast actuation path was untouched.
As expected, the requested voltage no longer exceeds the Acromag DAC range, it is now more like 2.5 V. However, I still notice that the MC REFL spot moves somewhat diagonally on the camera image - so maybe the coil gains are seriously imbalanced? Anyway, the WFS control signals can once again be safely offloaded to the slow bias voltages once again, preserving the fast ADC range for other actuation.
The Johnson noise of the series resistor has now increased by a factor of 2, from ~6.4 pA/rtHz to ~12.8 pA/rtHz. Assuming a current-to-force coefficient of 1.6 mN/A per coil, the length noise of the cavity is expected to be 12.8e-12 * 0.064/0.25/(2*pi*100)^2 ~ 8e-18 m/rtHz at 100 Hz. In frequency units, this is 80 uHz/rtHz. I think our IMC noise is at least 10 times higher than this at 100 Hz (and in any case, the noise of the coil driver is NOT dominated by the series resistance). Attachment #1 confirms that there isn't any significant MCF noise increase, and I will check with the arm cavity too. Nevertheless, we should, if possible, align the optic better and use as high a series resistance as possible.
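The quoted current noise numbers check out against i_n = sqrt(4*kB*T/R) at room temperature (a quick verification, assuming the old series resistance was 400 ohm, consistent with "quartered"):

import math

kB, T = 1.380649e-23, 300.0
for R in (400.0, 100.0):   # series resistance before/after the change (400 ohm assumed)
    i_n = math.sqrt(4 * kB * T / R)
    print(f"R = {R:.0f} ohm: {i_n * 1e12:.1f} pA/rtHz")
# -> ~6.4 pA/rtHz at 400 ohm, ~12.9 pA/rtHz at 100 ohm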
The watchdog for MC1 was disabled and the board was pulled out for this work. After it was replaced, the IMC re-locks readily.
Quote:
But this does not solve the MC1 issue. The only thing we can do right now is to halve the output resistor, for example.
15434 | Sun Jun 28 15:30:52 2020 | gautam | Update | SUS | MC1 sat-box de-lidded

Judging by the summary pages, some 18 hours after this change was made and the board re-installed, the MC1 shadow sensors began to report frequent glitches. I can't think of a plausible causal connection, especially given the 18 hour time lag, but it's also hard to believe there isn't one. As a result, the IMC is no longer able to stay locked for extended periods of time. I did the usual cable squishing, and also took off the lid to see if that helps the situation.
While the reduced series resistance means there is more current flowing through the slow path,
- There isn't actually an increase in the net current flowing through the satellite box - this change just re-allocates the current from the fast path to the slow path, but by the time it reaches the satellite box, the current is flowing through the same conductor.
- afaik, the current buffers on the coil driver aren't overdriven - they are rated for 300 mA. No individual coil is drawing more than 30 mA.
- the resistors themselves should be running sufficiently below their rated power of 3 W (I estimate (2.5 V)^2 / 100 ohm ~ 60 mW).
- The highest current should be through the UL and LR coils according to the voltage outputs from the Acromag. But the UL coil doesn't show significant glitching, and the LL one does despite drawing negligible DC current.
The attached FLIR camera image reinforces what we already know: the thermal environment inside the satellite box is horrible. The absolute temperature calibration may be off, but it was difficult to touch the components with a bare finger, so I'd say it's definitely > 70 C.
Quote:
I implemented this change today. We only had 100 ohm, 3 W resistors in stock (no 200 ohm with an adequate power rating). Assuming 10 V is dropped across this resistor, the power dissipation is V^2/R ~ 1 W, so we should have sufficient margin. The DCC entry has been updated with the new schematic and a photo of the component side of the board. Note that the series resistance of the fast actuation path was untouched.
15435 | Sun Jun 28 16:29:58 2020 | rana | Update | SUS | MC1 sat-box de-lidded

Does the FLIR have an option to export the image with a colorbar?
How about just leaving the lid open? Or more open? I don't know what else can be done in the near term. Maybe swap with the SRM sat box to see if that helps?

15436 | Sun Jun 28 17:36:35 2020 | gautam | Update | SUS | MC1 sat-box de-lidded

Hmm, I can't seem to export with the colorbar - it might just be my phone though. I tried to add some "cursors" with the temperature at a few spots, but the font color contrast is poor, so you have to squint really hard to see the temperatures in the photo I attached.
I'll leave the MC1 box open overnight and see if that improves the situation, and if not, I'll switch in the SRM satellite box tomorrow.
Quote:
Does the FLIR have an option to export the image with a colorbar?
How about just leaving the lid open? Or more open? I don't know what else can be done in the near term. Maybe swap with the SRM sat box to see if that helps?
15438 | Mon Jun 29 11:55:46 2020 | gautam | Update | SUS | MC1 sat-box de-lidded

There was no improvement to the situation overnight. So, I did the following today:
- Ramped bias voltages for SRM and MC1 to 0, shutdown watchdogs.
- Switched SRM and MC1 satellite boxes. The SRM satellite box lid was opened, while the MC1 lid was left open. The boxes have also been re-labelled lest there be some confusion about which box belongs where.
- Restored watchdogs and bias voltages. Curiously, MC1 now only requires half the bias voltage it did before to have the correct DC alignment. The satellite box is just supposed to be a passive conduit for the drive current, so this is indicative of some PCB traces/cabling being damaged inside what was previously the MC1 satellite box?
IMC is now locked again, I will monitor for glitching/stability.
Update 6pm PDT: as shown in Attachment #1, there is a huge difference in the stability of the lock after the sat box swap. Let's hope it stays this way for a while...
Quote:
I'll leave the MC1 box open overnight and see if that improves the situation, and if not, I'll switch in the SRM satellite box tomorrow.
15440 | Mon Jun 29 20:30:53 2020 | Koji | Update | SUS | MC1 sat-box de-lidded

Sigh. Do we have a spare sat box?