ID   Date   Author   Type   Category   Subject
  14178   Thu Aug 23 08:24:38 2018   Steve   Update   SUS   ETMX trip follow-up

Glitch, small amplitude, 350 counts & no trip.
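For context on the trip mechanism described in the quoted analysis below: the watchdog compares the RMS of the shadow-sensor signals against a set threshold. A minimal sketch of that logic (the 2048 Hz rate matches the "2k data" mentioned below; the window length and trip threshold here are placeholders, not the actual watchdog settings):

    import numpy as np

    def watchdog_trip_time(signal, fs=2048, window_s=0.5, threshold=350.0):
        """Return the time [s] at which the rolling RMS of a shadow-sensor
        signal first exceeds the trip threshold, or None if it never does."""
        n = int(window_s * fs)
        sq = np.cumsum(np.square(signal, dtype=float))
        rms = np.sqrt((sq[n:] - sq[:-n]) / n)      # O(N) rolling RMS
        tripped = np.flatnonzero(rms > threshold)
        return (tripped[0] + n) / fs if tripped.size else None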

Quote:

Here is another big one

Quote:

A brief follow-up on this since we discussed this at the meeting yesterday: the attached DV screenshot shows the full 2k data for a period of 2 seconds starting just before the watchdog tripped. It is clear that the timescale of the glitch in the UL channel is much faster (~50 ms) compared to the (presumably mechanical) timescale seen in the other channels of ~250 ms, with the step also being much smaller (a few counts as opposed to the few thousand counts seen in the UL channel, and I guess 1 OSEM count ~ 1 um). All this supports the hypothesis that the problem is electrical and not mechanical (i.e. I think we can rule out the Acromag sending a glitchy signal to the coil and kicking the optic). The watchdog itself gets tripped because the tripping condition is the RMS of the shadow sensor outputs, which presumably exceeds the set threshold when UL glitches by a few thousand counts.

 

 

  14184   Fri Aug 24 14:58:30 2018   Steve   Update   SUS   ETMX trips again

The second big glitch tripped the ETMX suspension. There were small earthquakes around the glitches. Its damping recovered.

Quote:

Glitch, small amplitude, 350 counts & no trip.

Quote:

Here is another big one

Quote:

A brief follow-up on this since we discussed this at the meeting yesterday: the attached DV screenshot shows the full 2k data for a period of 2 seconds starting just before the watchdog tripped. It is clear that the timescale of the glitch in the UL channel is much faster (~50 ms) compared to the (presumably mechanical) timescale seen in the other channels of ~250 ms, with the step also being much smaller (a few counts as opposed to the few thousand counts seen in the UL channel, and I guess 1 OSEM count ~ 1 um). All this supports the hypothesis that the problem is electrical and not mechanical (i.e. I think we can rule out the Acromag sending a glitchy signal to the coil and kicking the optic). The watchdog itself gets tripped because the tripping condition is the RMS of the shadow sensor outputs, which presumably exceeds the set threshold when UL glitches by a few thousand counts.

 

 

 

  14188   Wed Aug 29 09:20:27 2018   Steve   Update   SUS   local 4.4M earthquake

All suspensions tripped. Their damping was restored. The MC is locked.

ITMX-UL & side magnets are stuck.

 

  14190   Wed Aug 29 11:46:27 2018   Jon   Update   SUS   local 4.4M earthquake

I freed ITMX and coarsely realigned the IFO using the OPLEVs. All the alignments were a bit off from overnight.

The IFO is still only able to lock in MICH mode currently, which was the situation before the earthquake. This morning I additionally tried restoring the burt state of the four machines that had been rebooted in the last week (c1iscaux, c1aux, c1psl, c1lsc) but that did not solve it.

Quote:

All suspensions tripped. Their damping was restored. The MC is locked.

ITMX-UL & side magnets are stuck.

 

 

  14201   Thu Sep 20 08:17:14 2018   Steve   Update   SUS   local 3.4M earthquake

The M3.4 Colton earthquake did not trip the suspensions.

 

  14223   Mon Oct 1 22:20:42 2018   gautam   Update   SUS   Prototyping HV Bias Circuit

Summary:

I've been plugging away in Altium prototyping the high-voltage bias idea; this is meant to be a progress update.

Details:

I need to get footprints for some of the more uncommon parts (e.g. PA95) from Rich before actually laying this out on a PCB, but in the meantime, I'd like feedback on (but not restricted to) the following:

  1. The top-level diagram: this is meant to show how all this fits into the coil driver electronics chain.
    • The way I'm imagining it now, this (2U) chassis will sum the fast coil driver output with the slow bias signal using some D-sub connectors (the existing slow-path series resistance would simply be removed).
    • The overall output connector (DB15) will go to the breakout board which sums in the bias voltage for the OSEM PDs and then to the satellite box.
    • The obvious flaw in summing in the two paths using a piece of conducting PCB track is that if the coil itself gets disconnected (e.g. we disconnect cable at the vacuum flange), then the full HV appears at TP3 (see pg2 of schematic). This gets divided down by the ratio of the series resistance in the fast path to slow path, but there is still the possibility of damaging the fast-path electronics. I don't know of an elegant design to protect against this.
  2. Ground loops: I asked Johannes about the Acromag DACs, and apparently they are single-ended. Hopefully, because the Sorensens power both the Acromags and the eurocrates, we won't have any problems with ground loops between this unit and the fast path.
  3. High-voltage precautions: I think I've taken the necessary precautions in protecting against HV damage to the components / interfaced electronics using dual-diodes and TVSs, but someone more knowledgeable should check this. Furthermore, I wonder if a Molex connector is the best way to bring the +/- HV supply onto the board. I'd have liked to use an SHV connector, but can't find a compatible board-mountable part.
  4.  Choice of HV OpAmp: I've chosen to stick with the PA95, but I think the PA91 has the same footprint so this shouldn't be a big deal.
  5.  Power regulation: I've adapted the power regulation scheme Rich used in D1600122 - note that the HV supply voltage doesn't undergo any regulation on the board, though there are decoupling caps close to the power pins of the PA95. Since the PA95 is inside a feedback loop, the PSRR should not be an issue, but I'll confirm with LTspice model anyways just in case.
  6. Cost: 
    • Each of the metal-film resistors that Rich recommended costs ~$15.
    • The voltage rating on these demands that we have 6 per channel, and if this works well, we need to make this board for 4 optics.
    • The PA95 is ~$150 each, and presumably the high voltage handling resistors and capacitors won't be cheap.
    • Steve will update about his HV supply investigations (on a secure platform, NOT the elog), but it looks like even switching supplies cost north of $1200.
    • However, as I will detail in a separate elog, my modeling suggests that among the various technical noises I've modeled so far, coil driver noise is still the largest contribution; it actually exceeds, by a smidge (~9e-19 m/rtHz), the unsqueezed shot noise of ~8e-19 m/rtHz for 1 W input power and PRG 40 with 20 ppm RT arm losses, once we take into account the fast and slow path noises and the fact that we are not exactly Johnson noise limited.

I also don't have a good idea of what the PCB layer structure (2 layers? 3 layers? or more?) should be for this kind of circuit; I'll try and get some input from Rich.

*Updated with current noise (Attachment #2) at the output for this topology, with a series resistance of 25 kohm in this path. Modeling was done (in LTspice) with a noiseless 25 kohm resistor, and then I included the Johnson noise contribution of the 25k in quadrature. For this choice, we are below 1 pA/rtHz from this path in the band we care about. I've also tried to estimate (Attachment #3) the contribution of (assumed flat in ASD) ripple in the HV power supply (i.e. the voltage rails of the PA95) to the output current noise; it seems totally negligible for any reasonable power supply spec I've seen, switching or linear.
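For reference, the quadrature sum described above can be sanity-checked in a few lines (the op-amp current noise value is a placeholder, not a PA95 spec):

    import numpy as np

    k_B, T = 1.381e-23, 298.0       # Boltzmann constant [J/K], room temp [K]
    R = 25e3                        # slow-path series resistance [ohm]

    i_R = np.sqrt(4 * k_B * T / R)  # Johnson current noise, ~0.81 pA/rtHz
    i_amp = 0.5e-12                 # assumed op-amp current noise [A/rtHz]
    i_tot = np.sqrt(i_R**2 + i_amp**2)
    print("total: %.2f pA/rtHz" % (i_tot / 1e-12))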

  14261   Thu Oct 18 00:27:37 2018   Koji   Update   SUS   SUS PD Whitening board inspection

[Gautam, Koji]

As part of the preparation for the replacement of c1susaux with Acromag, I inspected the coil-OSEM transfer functions for the vertex suspensions.

The TFs showed the typical f^-2 shape with the whitening on, except for ITMY UL (Attachment 1). Gautam told me that this has been a known issue for ~5 years.
We made a thorough inspection/replacement of the components and identified the mechanism of the problem.
It turned out that the inputs to the MAX333s are as listed below.

      Whitening ON   Whitening OFF
UL    ~12 V          ~8.6 V
LL    0 V            15 V
UR    0 V            15 V
LR    0 V            15 V
SD    0 V            15 V

The switching voltage for UL is obviously incorrect. We thought this came from a broken BIO board and thus swapped the corresponding board, but the issue remained. There are 4 BIO boards in total on c1sus, so maybe we replaced the wrong board?

Initially, we thought that the BIO couldn't sink the 3 mA needed to pull the 5 kOhm pull-up resistor from 15 V to 0 V, so I replaced the pull-up resistors with 30 kOhm. This did not help; the 30 kOhm resistors are left on the board.
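The sink-current numbers behind that swap, for reference:

    # Current the BIO channel must sink to pull the MAX333 input to 0 V
    V = 15.0
    for R in (5e3, 30e3):                  # original / replacement pull-up
        print("%2.0fk pull-up: %.1f mA" % (R / 1e3, V / R * 1e3))  # 3.0 / 0.5 mA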
 

  14319   Mon Nov 26 17:16:27 2018   gautam   Update   SUS   EY chamber work

[steve, rana, gautam]

  • PSL and EY 1064 nm laser (physical) shutters on the head were closed so that we and the Sundance crew could work without laser safety goggles. The EY oplev laser was also turned off.
  • Cylindrical heater setup removed:
    • heater wiring meant the heater itself couldn't be easily removed from the chamber
    • two lenses and Al foil cylinder removed from chamber, now placed on the mini-cleanroom table.
  • Parabolic heater is untouched for now. We can re-insert it once the test mass is back in, so that we can be better informed about the clipping situation.
  • ETMY removed from chamber.
    • EQ stops were engaged.
    • Pictures were taken
    • OSEMs were removed from cage, placed in foil holders.
    • Cage clamps were removed after checking that marker clamps were in place.
    • Optic was moved first to NW corner of table, then out of the vacuum onto the mini-cleanroom desk Chub and I had setup last week.
    • Hopefully there isn't an earthquake. EY has been marked as off-limits to avoid accidental bumping / catastrophic wire/magnet/optic breakage.
    • We sealed up the mini cleanroom with tape. F.C. cleaning tomorrow or at another opportune moment.
    • Light door was put back on for the evening.

Rana pointed out that the OSEM cabling, because of lack of a plastic shielding, is grounded directly to the table on which it is resting. A glass baking dish at the base of the seismic stack prevents electrical shorting to the chamber. However, there are some LEMO/BNC cables as well on the east side of the stack, whose BNC ends are just lying on the base of the stack. We should use this opportunity to think about whether anything needs to be done / what the influence of this kind of grounding is (if any) on actuator noise.

Steve also pointed out that we should replace the rubber pads which the vacuum chamber is resting on (Attachment #1, not from this vent, but just to indicate what's what). These serve the purpose of relieving small amounts of strain the chamber may experience relative to the beam tube, thus helping preserve the vacuum joints b/w chamber and tube. But after (~20?) years of being under compression, Steve thinks that the rubber no longer has any elasticity, and so should be replaced.

  14399   Tue Jan 15 10:52:38 2019   gautam   Update   SUS   EY door opened

[chub, bob, gautam]

We took the heavy door off the EY chamber at ~930am.

Chamber work:

  • ETMY suspension cage was returned to its nominal position.
  • Unused hardware from the annular heater setup was removed.
  • The unused heater had its leads snipped close to the heater crimp point, and the exposed parts of the bare wires were covered with Kapton tape (we should remove the source leads as well while we're in air, to avoid any accidental shorting).

Waiting for the table to level off now. Plan for later today / tomorrow is as follows:

  1. Lock the Y arm, recover good cavity alignment.
  2. Position parabolic heater such that clipping issue is resolved.
  3. Move optic to edge of table for FC cleaning
  4. Clean optic
  5. Return suspension cage to nominal position.
  14401   Tue Jan 15 15:49:47 2019   gautam   Update   SUS   EY door opened

While restoring OSEMs on ETMY, I noticed that the open voltages for the UR and LL OSEMs had significantly (>30%) changed from their values from ~2 years ago. The fact that it only occurred in 2 coils seemed to rule out gradual wear and tear, so I looked up the trends from Nov 25 - Nov 28 (Sundance visited on Nov 26 which is when we removed the cage). Not surprisingly, these are the exact two OSEMs that show a decrease in sensor voltage when the OSEMs were pulled out. I suspect that when I placed them in their little Al foil boats, I shorted out some contacts on the rear (this is reminiscent of the problem we had on PRM in 2016). I hope the problem is with the current buffer IC in the satellite box and not the physical diode, I'll test with the tester box and evaluate the problem further.


Chamber work by Chub and gautam:

  1. Table leveling was checked with a clean spirit level
    • Leveling was substantially off in two orthogonal directions, along the beam axis as well as perpendicular to it.
    • We moved almost all the weights available on the table.
    • Managed to get the leveling correct to within 1 tick on the level.
    • We are not too worried about this for now, the final leveling will be after heater repositioning, ETMY cleaning etc.
  2. ETMY OSEM re-insertion
    • OSEMs were re-inserted until their mean voltage was ~half the open values (a quick scripted check is sketched below).
    • Local damping seems to work just fine.
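A minimal sketch of the half-open check (channel names follow the PDMon convention used elsewhere in this log; the open values are placeholders that should come from the most recent fully-open record, and the ezca prefix convention is an assumption):

    from ezca import Ezca

    ezca = Ezca(prefix='C1:')
    OPEN = {'UL': 0.95, 'UR': 1.02, 'LL': 0.98, 'LR': 1.01}  # placeholder volts

    for osem, v_open in OPEN.items():
        v = ezca.read('SUS-ETMY_%sPDMon' % osem)
        frac = v / v_open
        print('%s: %.3f V (%.0f%% of open) %s'
              % (osem, v, 100 * frac, 'ok' if 0.4 < frac < 0.6 else 'ADJUST'))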
  14403   Wed Jan 16 16:25:25 2019   gautam   Update   SUS   Y arm locked

[chub, gautam]

Summary:

Y arm was locked at low power in air.

Details:

  1. ITMY chamber door was removed at ~10am with Bob's help.
  2. ETMY table leveling was found to have drifted significantly (3 ticks on the spirit level, while it was more or less level yesterday, should look up the calib of the spirit level into mrad). Chub moved some weights around on the table, we will check the leveling again tomorrow.
  3. IMC was locked.
  4. TT2 and the brass alignment tool were used to center the beam on ETMY.
  5. TT1 and the brass alignment tool were used to center the beam on ITMY. We had to do a c1susaux reboot to be able to move ITMY. The usual precautions were taken to avoid ITMX getting stuck.
  6. ETMY was used to make the return beam from the ETM overlap with the in-going beam near ITMY, using a holey IR card.
  7. At this point, I was confident we would see IR flashes so I decided to do the fine alignment in the control room.

We are operating with 1/10th the input power we normally have, so we expect the IR transmission of the Y arm to max out at 1 when well aligned. However, it is hovering around 0.05 right now, and the dominant source of instability is the angular motion of ETMY due to the Oplev loop being non-functional. I am hesitant to do in-chamber work without an extra pair of eyes/hands around, so I'll defer that for tomorrow morning when Chub gets in. With the cavity axis well defined, I plan to align the green beam to this axis, and use the two to confirm that we are well clear of the Parabola.

* Paola, our vertex laptop, and indeed most of the laptops inside the VEA, are not ideal for working on this kind of alignment procedure; it would be good to set up some workstations on which we can easily interact with multiple MEDM screens.

  14407   Fri Jan 18 21:34:18 2019   gautam   Update   SUS   Unused optic on EY table

Does anyone know what the purpose of the indicated optic in Attachment #1 is? Can we remove it? It will allow a little more space around the elliptical reflector...

  14408   Sat Jan 19 05:07:45 2019   Koji   Update   SUS   Unused optic on EY table

I don't think it was used. It is not on the diagram either. You can remove it.

  14409   Sat Jan 19 15:33:18 2019   gautam   Update   SUS   ETMY OSEMs faulty

After diagnosis with the tester box, as I suspected, the fully open DC voltages on the two problematic channels, LL and UR, were restored once I replaced the LM6321 ICs in those two channel paths. However, I've been puzzled by the inability to turn on the Oplev loops on ETMY. Furthermore, the DC bias voltages required to get ETMY to line up with the cavity axis seemed excessively large, particularly since we seemed to have improved the table levelling.

I suspected that the problem with the OSEMs hasn't been fully resolved, so on Thursday night, I turned off the ETMY watchdog, kicked the optic, and let it ringdown. Then I looked at the time-series (Attachment #1) and spectra (Attachment #2) of the ringdowns. Clearly, the LL channel seems to saturate at the lower end at ~440 counts. Moreover, in the time domain, it looks like the other channels see the ringdown cleanly, but I don't see the various suspension eigenmodes in any of the sensor signals. I confirmed that all the magnets are still attached to the optic, and that the EQ stops are well clear of the optic, so I'm inclined to think that this behavior is due to an electrical fault rather than a mechanical one.

For now, I'll start by repeating the ringdown with a switched out Satellite Box (SRM) and see if that fixes the problem. 

Quote:

While restoring OSEMs on ETMY, I noticed that the open voltages for the UR and LL OSEMs had significantly (>30%) changed from their values from ~2 years ago. The fact that it only occurred in 2 coils seemed to rule out gradual wear and tear, so I looked up the trends from Nov 25 - Nov 28 (Sundance visited on Nov 26 which is when we removed the cage). Not surprisingly, these are the exact two OSEMs that show a decrease in sensor voltage when the OSEMs were pulled out. I suspect that when I placed them in their little Al foil boats, I shorted out some contacts on the rear (this is reminiscent of the problem we had on PRM in 2016). I hope the problem is with the current buffer IC in the satellite box and not the physical diode, I'll test with the tester box and evaluate the problem further.

  14411   Tue Jan 22 20:36:53 2019   gautam   Update   SUS   ETMY OSEMs faulty

Short update on latest Satellite box woes.

  1. I checked the resistance of all 5 OSEM coils on ETMY using a DB25 breakout board and a multimeter - all were between 16-17 ohms (measured from the cable at the vacuum flange), which I think is consistent with the expected value.
  2. Checked the bias voltage (aka slow path) from the coil driver board was reaching the coils
    • The voltages were indeed being sent out of the coil driver board - I confirmed by driving a slow sine wave and measuring at the output of the coil driver board, with all the fast outputs disabled.
    • The voltage is arriving at the 64 pin IDC connector at the Satellite box - Chub and I verified this using some mini-grabbers and leads from wirewound resistors (we don't have a breakout board for this kind of connector, would be handy to get some!)
    • However, the voltages are not being sent out through the DB25 connectors on the side of the Satellite box, at least for the LL and UR channels. UL seems to work okay.
    • This behavior is consistent with the observation that we had to apply far larger bias voltages than the nominal values to get the cavity axis to line up - if one or more coils weren't getting their signals, it would also explain the large PIT->YAW coupling I observed using the Oplev spot and the slow bias alignment EPICS sliders.
    • This behavior is puzzling - the Sat box is just supposed to be a feed-through for the coil driver signals, and we measured resistances between the 64 pin IDC connector and the corresponding DB25 pins in the range of 0.2-0.3 ohms. However, the voltage fails to make it through - not sure what's going on here. We will investigate further on the electronics bench.

What's more - I did some Sat box switcheroo, swapping the SRM and ETM boxes back and forth in combination with the tester box. In the process, I seem to have broken the SRM sat box - all the shadow sensors are reporting close to 0 volts, and this was confirmed to be an electronic problem as opposed to some magnet skullduggery using the tester box. Once we get to the bottom of the ETMY sat box, we will look at SRM. This is more or less the last thing to look at for this vent - once we are happy the cavity axis can be recovered reliably, we can freeze the position of the elliptical reflector and begin the F.C.ing.

  14413   Wed Jan 23 12:39:18 2019   gautam   Update   SUS   EY chamber work

While Chub is making new cables for the EY satellite box...

  1. I removed the unused optic on the NW corner of the EY table. It is stored in a clean Al-foil lined plastic box, and will be moved to the clean hardware section of the lab (along the South arm, south of MC2 chamber).
  2. Checked table leveling - Attachment #1, looked good, and has been stable over the weekend.
  3. I moved the two oversized washers on the reflector, which I believe are only used because the screw is long and wouldn't go in all the way otherwise. As shown in Attachment #2, this reduces the risk of clipping the main IFO beam axis.
  4. Yesterday, I pulled up the 40m CAD drawing, and played around with a rectangular box that approximates the extents of the elliptical reflector, to see what would be a good place to put it. I chose to go ahead with Attachment #3. Also shown is the eventually realized layout. Note that we'd actually like the dimension marked ~7.6 inches to be more like 7.1 inches, so the optic is actually ~0.5 inch ahead of the second focus of the ellipse, but I think this is good enough. 
  5. Attachment #4 shows the view of the optic as seen from the aperture on the back of the elliptical reflector. Looks good to me.
  6. Having positioned the reflector, I then inserted the heater into the aperture such that it is ~2/3rds the way in, which was the best position found by Annalisa last summer. I then ran 0.9 A of current through the heater for ~ 5 minutes. Attachment #5 shows the optic as seen with the FLIR with no heating, and after 5 minutes of heating. I'd say this is pretty unambiguous evidence that we are indeed heating the mirror. The gradient shown is significantly less pronounced than in Annalisa's simulations (~3K as opposed to 10K), but maybe the FLIR calibration isn't so great.
  7. For completeness, Attachment #6 shows the leveling of the table after this work. Nothing has changed significantly.

While the position of the reflector could possibly be optimized further, since we are already seeing a temperature gradient on the optic, I propose pushing on with other vent activities. I'm almost certain the current positioning places the optic closer to the second focus, and we already saw shifts of the HOM resonances with the old configuration, so I'd say we run with this and revisit if needed.

If Chub gives the Sat. Box the green flag, we will work on F.C.ing the mirrors in the evening, with the aim of closing up tomorrow/Friday. 

All raw images in this elog have been uploaded to the 40m google photos.

  14415   Wed Jan 23 23:12:44 2019   gautam   Update   SUS   Prep for FC cleaning

In preparation for the FC cleaning, I did the following:

  1. Set up mini-cleanroom at EY - this consists of the mobile HEPA unit put up against the chamber door, with films draped around the setup.
  2. After double-checking the table leveling, I EQ-stopped ETMY and moved it to the NE corner of the EY table, where it will be cleaned.
  3. Checked leveling of IY table - see Attachment #1.
  4. Took pictures of IY table, OSEM arrangement on ITMY.
  5. EQ-stopped ITMY and SRM.
  6. Removed the face OSEMs from ITMY (this required clipping off the copper wire used to hold the OSEM wires against the suspension cage). The side OSEM has not yet been removed because I left the allen key that is compatible with that particular screw inside the EY chamber. 
  7. To position ITMY at the edge of the IY table where we can easily clean it, we will need to move the OSEM cabling tower as we did last time. I've taken photos of its current position for now.

Tomorrow, I will start with the cleaning of ETMY HR. While the FC is drying, I will position ITMY at the edge of the IY table for cleaning (Chub will set up the mini-cleanroom at the IY table). The plan is to clean both HR surfaces and have the optics back in place by tomorrow evening. By my count, we have done everything listed for the IY and EY chambers. I'd like to minimize the time between cleaning and pumpdown, so if all goes well (Sat Box problems notwithstanding), we will check the table leveling on Friday morning, then put on the heavy doors and at least rough the main volume down to 1 torr on Friday.

  14416   Thu Jan 24 15:32:31 2019   gautam   Update   SUS   Y arm cavity side first contact applied

EY:

  • A clean cart was set up adjacent to the HEPA-enclosed mini cleanroom area (it cannot be inside the mini cleanroom because of lack of space).
  • The FC tools (first contact, acetone, beakers, brushes, PEEK mesh, clean scissors, clean tweezers, Canon camera, green flashlight) were laid out on this cart for easy access.
  • I inspected the optic - the barrel had a few specks of dust, and the outer 1.5" annular region of the HR face looked to have some streak marks
    • I was advised not to pre-wipe the HR side with any solvents
    • The FC was only applied to the central ~1-1.5" of the optic
  • After applying the FC, I spent a few minutes inspecting the status of the OSEMs 
    • Three out of the four face OSEMs, as well as the side OSEM, did not have a filter in
    • I inserted filters into them.
  • Closed up the chamber with light door, left HEPA unit on and the mini cleanroom setup intact for now. We will dismantle everything after the pumpdown.

IY:

  • Similar setup to EY was implemented
  • Removed side OSEM from ITMY.
  • Double-checked that EQ stops were engaged.
  • Moved the OSEM cable tower to free up some space for accommodating ITMY.
  • Undid the clamps of ITMY, moved it to the NE corner of the IY table.
  • Inspected the optic - it was much cleaner than at the 2016 inspection, although the barrel did have some specks of dust.
  • Once again, I applied first contact to the central ~1.5" of the HR surface.
  • Checked status of filters on OSEMs - this time, only the UL coil required a filter.
    • Attachment #3 shows the sensor voltage DC level before and after the insertion of the filter. There is ~0.1% change.
    • The filters were found in a box that suggests they were made in 2002 - but Steve tells me that it is just stored in a box with that label, and that since there are >100 filters inside that box, he thinks they are the new ones we procured in 2016. The coating specs and type of glass used are different between the two versions.

The attached photo shows the two optics with FC applied.

My original plan was to attempt to close up tomorrow. However, we are still struggling with Satellite box issues. So rather than rush it, we will attempt to recover the Y arm cavity alignment on Monday, satellite box permitting. The main motivation is to reduce the deadtime between peeling off the F.C and starting the pumpdown. We will start working on recovering the cavity alignment once the Sat box issues are solved.

  14422   Tue Jan 29 22:12:40 2019   gautam   Update   SUS   Alignment prep

Since we may want to close up tomorrow, I did the following prep work:

  1. Cleaned up the Y-end suspension electronics setup, connected the Sat Box back to the flange
    • The OSEMs are just sitting on the table right now, so they are just seeing the fully open voltage
    • Post filter insertion, the four face OSEMs report ~3-4% lower open-voltage values compared to before, which is compatible with the transmission spec for the filters (T>95%)
    • The side OSEM is reporting ~10% lower - perhaps I just didn't put the filter on right, something to be looked at inside the chamber
  2. Suspension watchdog restoration
    • I'd shut down all the watchdogs during the Satellite box debacle; these have now been restored.
    • However, I left ITMY, ETMY and SRM tripped, as these optics are EQ-stopped / don't have their OSEMs inserted.
  3. Checked IMC alignment
    • After some hand-alignment of the IMC, it was locked, transmission is ~1200 counts which is what I remember it being
  4. Checked X-arm alignment
    • Strictly speaking, this has to be done after setting the Y-arm alignment as that dictates the input pointing of the IMC transmission to the IFO, but I decided to have a quick look nevertheless
    • Surprisingly, ITMX damping isn't working very well it seems - the optic is clearly swinging around a lot, and the shadow sensor RMS voltage is ~10s of mV, whereas for all the other optics, it is ~1mV.
    • I'll try the usual cable squishing voodoo

Rather than try and rush and close up tomorrow, I propose spending the day tomorrow cleaning the peripheral areas of the optic, suspension cage, and chamber. Then on Thursday morning, we can replace the Y-arm optics, try and recover the cavity alignment, and then aim for a Thursday afternoon pumpdown. The main motivation is to reduce the time the optics spend in air after F.C. peeling and going to vacuum.

  14423   Wed Jan 30 11:54:24 2019   gautam   Update   SUS   More alignment prep

[chub, gautam]

  1. ETMY cage was wiped down
    • Targeted potential areas where dust could drift off from and get attracted to a charged HR surface
    • These areas were surprisingly dusty, even left a grey mark on the wipe [Attachment #1] - we think we did a sufficiently thorough job, but unclear if this helps the loss numbers
    • More pictures are on gPhoto
  2. Filters on SD and LR OSEMs were replaced - the open shadow sensor voltages with filters in/out are consistent with the T>95% coating spec.
  3. IPANG beam position was checked 
    • It is already too high, missing the first steering optic by ~0.5 inch, not the greatest photo but conclusion holds [Attachment #2].
    • I think we shouldn't worry about it for this pumpdown, we can fix it when we put in the new PR3.
  4. Cage wiping procedure was repeated on ITMY
    • The cage was much dustier than ETMY
    • However, the optic itself (barrel and edge of HR face) was cleaner
    • All accessible areas were wiped with isopropanol
    • Before/after pics are on gPhoto (even after cleaning, there are some marks on the suspension that looks like dust, but these are machining marks)

Procedure tomorrow [comments / suggestions welcome]:

  1. Start with IY chamber
    • Peel first contact with TopGun jet flowing
    • Inspect optic face with green flashlight to check for residual First Contact
    • Replace ITMY suspension cage in its position, clamp it down
    • Release ITMY from its EQ stops
    • Replace OSEMs in ITMY cage, best effort to recover previous alignment of OSEMs in their holders (I have a photo before removal of OSEMs), which supposedly minimized the coupling of the B-R modes into the shadow sensor signals
    • Best effort to have shadow sensor PD outputs at half their fully open voltages (with DC bias voltage applied)
    • Quick check that we are hitting the center of the ITM with the alignment tool
    • Check that the Oplev HeNe is reasonably centered on steering mirrors
    • Tie down OSEM cabling to the ITMY cage with clean copper wire
    • Replace the OSEM wiring tower
    • Release the SRM from its EQ stops
    • Check table leveling
    • Take pictures of everything, check that we have not left any tools inside the chamber
    • Heavy doors on
  2. Next, EY chamber
    • Repeat first seven bullets from the IY chamber, :%s/ITMY/ETMY/g
    • Confirm sufficient clearance between IFO beam axis and the elliptical reflector
    • Check Oplev beam path
    • Check table leveling
    • Take pictures of everything, check that we have not left any tools inside the chamber
    • Heavy doors on
  3. IFO alignment checks - basically follow the wiki, we want to be able to lock both arms (or at least see TEM00 resonances), and see that the PRC and SRC mode flashes look reasonable.
  4. Tighten all heavy doors up
  5. Pump down

All photos have been uploaded to google photos.

  14424   Wed Jan 30 19:25:40 2019   gautam   Update   SUS   X arm cavity alignment

Squishing cables at the ITMX satellite box seems to have fixed the wandering ITM that I observed yesterday - the sooner we are rid of these evil connectors the better.

I had changed the input pointing of the green injection from EX to mark a "good" alignment of the cavity axis, so I used the green beam to try and recover the X arm alignment. After some tweaking of the ITM and ETM angle bias voltages, I was able to get good GTRX values [Attachment #1], and also see clear evidence of (admittedly weak) IR resonances in TRX [Attachment #2]. I can't see the reflection from ITMX on the AS camera, but I suspect this is because the ITMY cage is in the way. This will likely have to be redone tomorrow after setting the input pointing for the Y arm cavity axis, but hopefully things will converge faster and we can close up sooner. Closing the PSL shutter for now...


I also rebooted the unresponsive c1susaux to facilitate the alignment work tomorrow.

  14425   Fri Feb 1 01:24:06 2019   gautam   Update   SUS   Almost ready for pumpdown tomorrow

[koji, chub, jon, rana, gautam]

Full story tomorrow, but we went through most of the required pre close-up checks/tasks (i.e. both arms were locked, PRC and SRC cavity flashes were observed). Tomorrow, it remains to 

  1. Confirm clearance between elliptical reflector and ETMY
  2. Confirm leveling of ETMY table
  3. Take pics of ETMY table
  4. Put heavy door on ETMY chamber
  5. Pump down

The ETMY suspension chain needs to be re-characterized (neither the old settings, nor a +/- 1 gain setting worked well for us tonight), but this can be done once we are back under vacuum.

  14426   Fri Feb 1 13:16:50 2019   gautam   Update   SUS   Pumpdown 83 underway

[chub, bob, gautam]

  1. Steps described in previous elog were carried out
  2. EY heavy door was put on at about 1130am.
  3. Pumpdown commenced at ~noon. We are going down at ~3 torr/min.
  14427   Fri Feb 1 14:44:14 2019   gautam   Update   SUS   Y arm FC cleaning and reinstall

[Attachment #1]: ITMY HR face after cleaning. I determined this to be sufficiently clean and re-installed the optic.

[Attachment #2]: ETMY HR face after cleaning. This is what the HR face looks like after 3 rounds of First-Contact application. After the first round, we noticed some arc-shaped lines near the center of the optic's clear aperture. We were worried this was a scratch, but we now believe it to be First-Contact residue, because we were able to remove it after drag wiping with acetone and isopropanol. However, we mistrust the quality of the solvents used - they are not any special dehydrated kind, and we are looking into acquiring some dehydrated solvents for future cleaning efforts.

[Attachment #3]: Top view of ETMY cage meant to show increased clearance between the IFO axis and the elliptical reflector.

Many more photos (including table leveling checks) on the google-photos page for this vent. The estimated time between F.C. peeling and pumpdown is ~24 hours for ITMY and ~15 hours for ETMY, but for the former, the heavy doors were put on ~1 hour after the peeling.

The first task is to fix the damping of ETMY.

  14428   Fri Feb 1 21:52:57 2019   gautam   Update   SUS   Pumpdown 83 underway

[jon, koji, gautam]

  1. IFO is at ~1 mtorr, but the pressure is slowly rising, presumably because of outgassing (we valved off the turbos from the main volume)
  2. Everything went smooth -
    • 760 torr to 500 mtorr took ~7 hours (we deliberately kept a slow pump rate)
    • TP3 current was found to rise above 1 A easily as we opened RV2 during the turbo pumping phase, particularly in going from 500 mtorr to 10 mtorr, so we just ran TP2 more aggressively rather than change the interlock condition.
    • The pumpspool is isolated from the main volume - TP1-3 are running (TP2 and TP3 are in Standby mode) but are only exposed to the small pumpspool and RGA volumes.
    • RP1 and RP3 were turned off, and the manual roughing line was disconnected.
    • We will resume the pumping on Monday.

I'm leaving all suspension watchdogs tripped over the weekend as part of the suspension diagonalization campaign...

  14433   Mon Feb 4 20:13:39 2019   gautam   Update   SUS   ETMY suspension oddness

I looked at the free-swinging sensor data from two nights ago, and am struggling with the interpretation. 

[Attachment #1] - Fine resolution spectral densities of the 5 shadow sensor signals (y-axis assumes 1ct ~1um). The puzzling feature is that there are only 3 resonant peaks visible around the 1 Hz region, whereas we would expect 4 (PIT, YAW, POS and SIDE). afaik, Lydia looked into the ETMY suspension diagonalization last, in 2016. Compared to her plots (which are in the Euler basis while mine are in the OSEM basis), the ~0.73 Hz peak is nowhere to be seen. I also think the frequency resolution (<1 mHz) is good enough to be able to resolve two closely spaced peaks, so it looks like due to some reason (mechanical or otherwise), there are only 3 independent modes being sensed around 1 Hz.

[Attachment #2] - Koji arrived and we looked at some transfer functions to see if we could make sense of all this. During this investigation, we also came to suspect that the UL coil actuator electronics chain has some problem. This test was done by driving the individual coils and looking for the 1/f^2 pendulum transfer function shape in the Oplev error signals. The ~4 dB difference between UR/LL and LR is due to a gain imbalance in the coil output filter bank; once we have solved the other problems, we can reset the individual coil balancing using this measurement technique.

[Attachment #3] - Downsampled time-series of the data used to make Attachment #1. The ringdown looks pretty clean, I don't see any evidence of any stuck magnets looking at these signals. The X-axis is in kilo-seconds.

We found that the POS and SIDE local damping loops do not result in instability building up. So one option is to use only Oplevs for angular control, while using shadow-sensor damping for POS and SIDE.
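For anyone repeating this analysis, a sketch of how the fine-resolution spectra above can be computed (the filename is a placeholder for wherever the free-swinging data get exported; 2048 Hz is assumed for the sensor channels):

    import numpy as np
    from scipy.signal import welch

    fs = 2048
    data = np.loadtxt('etmy_freeswing.txt')     # columns: UL, UR, LL, LR, SD

    nperseg = 2048 * fs                         # ~2000 s segments -> ~0.5 mHz bins
    for i, name in enumerate(['UL', 'UR', 'LL', 'LR', 'SD']):
        f, pxx = welch(data[:, i], fs=fs, nperseg=nperseg)
        asd = np.sqrt(pxx)                      # cts/rtHz (~um/rtHz at 1 ct ~ 1 um)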

  14441   Thu Feb 7 19:34:18 2019   gautam   Update   SUS   ETMY suspension oddness

I did some tests of the electronics chain today.

  1. Drove a sine-wave using awggui to the UL-EXC channel, and monitored using an o'scope and a DB25 breakout board at J1 of the satellite box, with the flange cable disconnected - while driving 3000 cts amplitude signal, I saw a 2 Vpp signal on the scope, which is consistent with expectations.
  2. Checked resistances of the pin pairs corresponding to the OSEMs at the flange end using a breakout board - all 5 pairs read out ~16-17 ohms.
  3. Rana pointed out that the inductance is the unambiguous FoM here: all coils measured between 3.19 and 3.3 mH according to the LCR meter...

I hypothesized a bad connection between the sat box output J1 and the flange connection cable. Indeed, measuring the OSEM inductance from the DSUB end at the coil-driver board, the UL coil pins showed no inductance reading on the LCR meter, whereas the other 4 coils showed 3.2-3.3 mH. Suspecting the satellite box, I swapped it out for the spare (S/N 100). This seemed to do the trick: all 5 coil channels read ~3.3 mH on the LCR meter when measured from the coil driver board end. What's more, the damping behavior seemed more predictable - in fact, Rana found that all the loops were heavily overdamped. For our suspensions, we want the damping close to critical: overdamping imparts excess displacement noise to the optic, while underdamping doesn't work either. Past elogs give a directive to aim for Q~5 for the pendulum resonances, so when someone does a systematic investigation of the suspensions, this will be something to look out for (a sketch relating the ringdown time constant to Q is below). These flaky connectors are proving pretty troublesome; let's start testing some prototype new Sat Boxes with a better connector solution. I think it's equally important to have a properly thought-out monitoring connector scheme, so that we don't have to frequently plug/unplug connectors in the main electronics chain, which may lead to wear and tear.
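On the Q~5 target: the amplitude ringdown time constant relates to Q via Q = pi*f0*tau, so at a ~1 Hz pendulum mode, Q~5 corresponds to tau ~ 1.7 s. A self-contained sketch with synthetic data:

    import numpy as np
    from scipy.optimize import curve_fit

    # Synthetic ringdown envelope of a ~0.95 Hz mode with tau = 1.7 s (Q ~ 5)
    t = np.arange(0, 30, 0.1)
    amp = 100 * np.exp(-t / 1.7) + 0.5 * np.abs(np.random.randn(t.size))

    def env(t, a0, tau):
        return a0 * np.exp(-t / tau)

    (a0, tau), _ = curve_fit(env, t, amp, p0=(amp[0], 10.0))
    f0 = 0.95                         # mode frequency [Hz], read off the spectrum
    print('Q = %.1f' % (np.pi * f0 * tau))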

The input and output matrices were reset to their "naive" values - unfortunately, two eigenmodes still seem to be degenerate to within 1 mHz, as can be seen from the below spectra (Attachment #1). Next step is to identify which modes these peaks actually correspond to, but if I can lock the arm cavities in a stable way and run the dither alignment, I may prioritize measurement of the loss. At least all the coils show the expected 1/f**2 response at the Oplev error point now. The coil output filter gains varied by ~ factor of 2 among the 4 coils, but after balancing the gains, they show identical responses in the Oplev - Attachment #2.

  14443   Fri Feb 8 02:00:34 2019   gautam   Update   SUS   ITMY has tendency of getting stuck

As it turns out, ITMY now has a tendency to get stuck. I found it MUCH more difficult to release the optic using the bias jiggling technique; it took me ~2 hours. Best to avoid c1susaux reboots, and if one has to be done, take the precautions that were listed for ITMX - better yet, let's swap in the new Acromag chassis ASAP. I will do the arm locking tests tomorrow.

  14470   Mon Feb 25 20:20:07 2019   Koji   Update   SUS   DIN 41612 (96pin) shrouds installed to vertex SUS coil drivers

The forthcoming Acromag c1susaux is supposed to use the backplane connectors of the sus euro card modules.

However, the backplane connectors of the vertex sus coil drivers were already used by the fast switches (dewhitening) of c1sus.

Our plan is to connect the Acromag cables to the upper connectors, while the switch channels are wired to the lower connector by soldering jumper wires between the upper and lower connectors on board.

To make the lower 96pin DIN connector available for this, we needed DIN 41612 (96pin) shrouds. Tyco Electronics 535074-2 is the correct component for this purpose. The shrouds have been installed on the backplane pins of the coil driver circuit D010001. The shroud can be installed in either of two orientations (180 deg rotation); its direction was matched with the ones on the upper connectors.

  14499   Thu Mar 28 23:29:00 2019   Koji   Update   SUS   Suspension PD whitening and I/F boards modified for susaux replacement

Now the sus PD whitening boards are ready for moving the backplane connections to the lower row and plugging the Acromag interface board into the upper row.


Sus PD whitening boards on the 1X5 rack (D000210-A1) had slow and fast channels mixed in a single DIN96 connector. As we are going to use the rear-side backplane connector for Acromag access, we wanted to migrate the fast channels somewhere. For this purpose, the boards were modified to duplicate the fast signals onto the lower DIN96 connector.

The modification was done on the back layer of the board (Attachment 1).
Pins 28A-32A and 28C-32C of P1 are connected to the corresponding pins of P2 (Attachment 2). The connections were thoroughly checked with a multimeter.

After the modification the boards were returned to the same place of the crate. The cables, which had been identified and noted before disconnection, were returned to the connectors.

The functionality of the 40 (8 sus x 5 ch) whitening switches was confirmed one by one using DTT, by looking at the transfer functions from SUS LSC EXC to the PD input filter IN1. All the switches showed the proper whitening in the measurements.

The PD slow mon channels (like C1:SUS-XXX_xxPDMon) were also checked, and they returned to their pre-modification values, except for the BS UL PD. As the fast version of the signal had returned to its previous value, the monitor circuit was suspect. Therefore the op-amp of the monitor channels (LT1125) was replaced, and the value came back to the previous one (Attachment 3).

 

  14536   Thu Apr 11 12:04:43 2019   Jon   Update   SUS   Starting some scripted SUS tests on ITMY

Will advise when I'm finished, will be by 1 pm for ALS work to begin.

  14538   Thu Apr 11 12:57:48 2019   Jon   Update   SUS   Starting some scripted SUS tests on ITMY

Testing is finished.

Quote:

Will advise when I'm finished, will be by 1 pm for ALS work to begin.

  14539   Thu Apr 11 17:30:45 2019   Jon   Update   SUS   Automated suspension testing with susPython

Summary

In anticipation of needing to test hundreds of suspension signals after the c1susaux upgrade, I've started developing a Python package to automate these tests: susPython

https://git.ligo.org/40m/suspython

The core of this package is not any particular test, but a general framework within which any scripted test can be "nested." Built into this framework is extensive signal trapping and exception handling, allowing actuation tests to be performed safely. Namely it protects against crashes of the test scripts that would otherwise leave the suspensions in an arbitrary state (e.g., might destroy alignment).

Usage

The package is designed to be used as a standalone tool from the command line. From within the root directory, it is executed with a single positional argument specifying the suspension to test:

$ python -m suspython ITMY

Currently the package requires Python 2 due to its dependence on the cdsutils package, which does not yet exist for Python 3.

Scripted Tests

So far I've implemented a cross-consistency test between the DC-bias outputs to the coils and the shadow sensor readbacks. The suspension is actuated in pitch, then in yaw, and the changes in PDMon signals are measured. The expected sign of the change in each coil's PDMon is inferred from the output filter matrix coefficients. I believe this test is sensitive to two types of signal-routing error: no change in a PDMon response (actuator not connected), and an incorrect sign in the pitch and/or yaw response (two actuators cross-wired). A sketch of the sign check follows.
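The core of the sign check might look like the following (a sketch of the logic only, not the actual suspython code; the naive matrix signs and the minimum-change threshold are illustrative):

    import numpy as np

    # Output-matrix sign columns in the coil basis [UL, UR, LL, LR]
    OUT = {'PIT': np.array([+1, +1, -1, -1]),
           'YAW': np.array([+1, -1, +1, -1])}

    def check_dof(dof, pdmon_before, pdmon_after, min_change=5.0):
        """Flag dead or cross-wired actuators from the PDMon response
        to a DC-bias step in one degree of freedom."""
        delta = np.asarray(pdmon_after) - np.asarray(pdmon_before)
        for coil, d, s in zip(('UL', 'UR', 'LL', 'LR'), delta, OUT[dof]):
            if abs(d) < min_change:
                print('%s: no response (not connected?)' % coil)
            elif np.sign(d) != s:
                print('%s: wrong sign (cross-wired?)' % coil)
            else:
                print('%s: ok' % coil)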

The next test I plan to implement is a test of the slow system using the fast system. My idea is to inject a 3-8 Hz excitation into the coil output filter modules (either bandlimited noise or a sine wave), with all coil outputs initially disabled. One coil at a time will be enabled and the change in all VMon signals monitored, to verify the correct coil readback senses the excitation. In this way, a signal injected from the independent and unchanged fast system provides an absolute reference for the slow system.

I'm also aware of ideas for more advanced tests, which go beyond testing the basic signal routing. These too can be added over time within the susPython framework.

  14551   Thu Apr 18 22:35:23 2019   gautam   Update   SUS   ETMY actuator diagnosis

[rana, gautam]

Rana did a checkout of my story about oddness of the ETMY suspension. Today, we focused on the actuators - the goal was to find the correct coefficients on the 4 face coils that would result in diagonal actuation (i.e. if we actuate on PIT, it only truly moves the PIT DoF, as witnessed by the Oplev, and so on for the other DoFs). Here are the details:

  1. Ramp times for filter modules:
    • All the filter modules in the output matrix did not have ramp times set.
    • We used python, cdsutils and ezca to script the writing of a 3 second ramp to all the elements of the 5x6 output matrix.
    • The script lives at /opt/rtcds/caltech/c1/scripts/cds/addRampTimes.py, and can be used as a template for initializing large numbers of channels (limiters, ramp times, etc.); a sketch of the approach appears after this list.
  2. Bounce mode checkout:
    • The motivation here was to check if there is anomalously large coupling of the bounce mode to any of the other DoFs for ETMY relative to the other optics
    • The ITMs have a different (~15.9 Hz) bounce mode frequency compared to the ETMs (~16.2 Hz).
    • I hypothesize that this is because the ETMs were re-suspended in 2016 using new suspension wire.
    • We should check out specs of the wires, look for either thickness differences or alloying composition variation (Steve has already documented some of this in the elog linked above). Possibly also check out the bounce mode for a 250g load on the table top.
  3. Step responses for PIT and YAW
    • With the Oplevs disabled (but other local damping loops engaged), we applied a step of 100 DAC counts to the PIT and YAW DoFs from the realtime system (one at a time)
    • We saw significant cross-coupling of the YAW step coupling to PIT, at the level of 50%.
  4. OSEM coil coefficient balancing
    • I had done this a couple of months ago looking at the DC gain of the 1/f^2 pendulum response.
    • Rana suggested an alternate methodology 
      • we used the lock-in amplifier infrastructure on the SUS screens to drive a sine wave
      • Frequencies were chosen to be ~10.5 Hz and ~13.5 Hz, to be outside the Oplev loop bandwidth
      • Tests were done with the Oplev loop engaged. The Oplev error signal was used as a diagnostic to investigate the PIT/YAW cross coupling.
      • In the initial tests, we saw coupling at the 20% level. If the Oplev head is rotated by 0.05 rad relative to the "true" horizontal-vertical coordinate system, we'd expect 5% cross coupling. So this was already a red flag (i.e. it is hard to believe that Oplev QPD shenanigans are responsible for our observations). We decided to re-diagonalize the actuation.
      • The output matrix elements for the lock-in-amplifier oscillator signals were adjusted by adding some amount of YAW to the PIT elements (script lives at /opt/rtcds/caltech/c1/scripts/SUS/stepOutMat.py), and vice versa, and we tried to reduce the height of the cross-coupled peaks (viewed on DTT using exponential weighting, 4 avgs, 0.1 Hz BW - note that the DTT cursor menu has a peak find option!). DTT Template saved at /users/Templates/SUS/ETMY-actDiag.xml
      • This worked really well for minimizing PIT response while driving YAW, not as well for minimizing YAW in PIT. 
      • Next, we added some YAW to a POS drive to minimize any signal at this drive frequency in the Oplev YAW error signal. Once that was done, we minimized the peak in the Oplev PIT error signal by adding some amount of PIT actuation.
      • So we now have matrices that minimize the cross coupling between these DoFs - the idea is to back out the actuation coefficients for the 4 OSEM coils that gives us the most diagonal actuation, at least at AC. 
  5. Next steps:
    • All of our tests tonight were at AC - once the coil balancing has been done at AC, we have to check the cross coupling at DC. If everything is working correctly, the response should also be fairly well decoupled at DC, but if not, we have to come up with a hypothesis as to why the AC and DC responses are different.
    • Can we gain any additional info from driving the pringle mode and minimizing it in the Oplev error signals? Or is the problem overconstrained?
    • After the output matrix diagonalization is done, drive the optic in POS, PIT and YAW, and construct the input matrix this way (i.e. transfer function), as an alternative to the usual free-swinging ringdown method. Look at what kind of an input matrix we get.
    • Repeat the free-swinging ringdown with the ETMY bias voltage adjusted such that all the OSEM PDmons report ~100 um different position from the "nominal" position (i.e. when the Y arm cavity is aligned). Investigate whether the resulting eigenmode frequencies / Qs are radically different. I'm setting the optic free-swinging on my way out tonight. Optic kicked at 1239690286.
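A sketch of the ramp-time initialization mentioned in item 1 (the TO_COIL channel naming is an assumption; check the actual filter-module names on the SUS screen before running anything like this):

    from ezca import Ezca

    ezca = Ezca(prefix='C1:')

    # Write a 3 s ramp to every element of the 5x6 ETMY output matrix
    for row in range(1, 7):            # 6 coil/sensor outputs
        for col in range(1, 6):        # 5 DoF inputs
            ezca.write('SUS-ETMY_TO_COIL_%d_%d_TRAMP' % (row, col), 3.0)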
  14554   Fri Apr 19 11:36:23 2019   gautam   Update   SUS   No consistent solution for output matrix

There isn't a consistent set of OSEM coil gains that explains the best actuation vectors we determined yesterday. Here are the explicit matrices:

  1. POS (tuned to minimize excitation at ~13.5 Hz in the Oplev PIT and YAW error signals): \begin{bmatrix} \text{UL} & \text{UR} & \text{LL} & \text{LR} \end{bmatrix}\begin{bmatrix} 0.98 \\ 0.96 \\ 1.04 \\ 1.02 \\ \end{bmatrix}
  2. PIT (tuned to minimize cross coupled peak in the Oplev YAW error signal at ~10.5 Hz): ​\begin{bmatrix} \text{UL} & \text{UR} & \text{LL} & \text{LR} \end{bmatrix}\begin{bmatrix} 0.64 \\ 1.12 \\ -1.12 \\ -0.64 \\ \end{bmatrix}
  3. YAW (tuned to minimize cross coupled peak in the Oplev PIT error signal at ~13.5 Hz): \begin{bmatrix} \text{UL} & \text{UR} & \text{LL} & \text{LR} \end{bmatrix}\begin{bmatrix} 1.5 \\ -0.5 \\ 0.5 \\ -1.5 \\ \end{bmatrix}

There is no solution to the matrix equation \mathrm{diag}(\alpha_1, \alpha_2, \alpha_3, \alpha_4) \begin{bmatrix} 1 & 1 & 1 \\ 1 & 1 & -1 \\ 1 & -1 & 1 \\ 1 & -1 & -1 \end{bmatrix} = \begin{bmatrix} 0.98 & 0.64 & 1.5 \\ 0.96 & 1.12 & -0.5 \\ 1.04 & -1.12 & 0.5 \\ 1.02 & -0.64 & -1.5 \end{bmatrix}, i.e. we cannot simply redistribute the actuation vectors we found as per-coil gains and preserve the naive actuation matrix (a quick numerical check is sketched below). What this means is that in the OSEM coil basis, the actuation eigenvectors aren't the naive ones we would expect for PIT, YAW and POS. Instead, we can put these custom eigenvectors into the output matrix, but I'm struggling to think of what the physical implication is. I.e. what does it mean for the actuation vectors for PIT, YAW and POS to be not only scaled, but also non-orthogonal (though still linearly independent), at ~10 Hz, well above the resonant frequencies of the pendulum? The PIT and YAW eigenvectors are the least orthogonal, with the angle between them ~40 degrees rather than the expected 90 degrees.
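A quick numpy check of this inconsistency (matrix values copied from above): fitting a single per-coil gain row-by-row leaves a large residual for every coil.

    import numpy as np

    # Naive output matrix (rows: UL, UR, LL, LR; columns: POS, PIT, YAW)
    N = np.array([[1,  1,  1],
                  [1,  1, -1],
                  [1, -1,  1],
                  [1, -1, -1]], dtype=float)

    # Measured actuation vectors from the tuning described above
    M = np.array([[0.98,  0.64,  1.5],
                  [0.96,  1.12, -0.5],
                  [1.04, -1.12,  0.5],
                  [1.02, -0.64, -1.5]])

    # If a single gain alpha_i per coil explained the data, row i of M would
    # equal alpha_i times row i of N. Fit alpha_i and inspect the residual.
    for n, m, coil in zip(N, M, ('UL', 'UR', 'LL', 'LR')):
        alpha = n @ m / (n @ n)
        print('%s: alpha = %+.2f, residual = %.2f'
              % (coil, alpha, np.linalg.norm(alpha * n - m)))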

Quote:

So we now have matrices that minimize the cross coupling between these DoFs - the idea is to back out the actuation coefficients for the 4 OSEM coils that gives us the most diagonal actuation, at least at AC. 

  14557   Fri Apr 19 15:13:38 2019   rana   Update   SUS   No consistent solution for output matrix

let us have 3 by 4, nevermore

so that the number of columns is no less

and no more

than the number of rows

so that forevermore we live as 4 by 4 

Quote:

I'm struggling to think

  14558   Fri Apr 19 16:19:42 2019   gautam   Update   SUS   Actuation matrix still not orthogonal

I repeated the exercise from yesterday, this time driving the butterfly mode [+1 -1 -1 +1] and adding the tuned PIT and YAW vectors from yesterday to it to minimize appearance in the Oplev error signals. 

The measured output matrix is \begin{bmatrix} 0.98 & 0.64 & 1.5 & 1.037 \\ 0.96 & 1.12 & -0.5 & -0.998 \\ 1.04 & -1.12 & 0.5 & -1.002 \\ 1.02 & -0.64 & -1.5 & 0.963 \end{bmatrix}, where rows are the coils in the order [UL,UR,LL,LR] and columns are the DOFs in the order [POS,PIT,YAW,Butterfly]. The conclusions from my previous elog still hold though - the orthogonality between PIT and YAW is poor, so this output matrix cannot be realized by a simple gain scaling of the coil output gains. The "adjustment matrix", i.e. the 4x4 matrix that we must multiply the "ideal" output matrix by to get the measured output matrix, has a condition number of 134 (1 is a good condition number; it signifies closeness to the identity matrix).

Quote:

let us have 3 by 4, nevermore

so that the number of columns is no less

and no more

than the number of rows

so that forevermore we live as 4 by 4

  14559   Fri Apr 19 19:22:15 2019   rana   Update   SUS   Actuation matrix still not orthogonal

If thy left hand troubles thee

then let the mirror show the right

for if it troubles enough to cut it off

it would not offend thy sight

  14561   Mon Apr 22 21:33:17 2019   Jon   Update   SUS   Bench testing of c1susaux replacement

Today I bench-tested most of the Acromag channels in the replacement c1susaux. I connected a DB37 breakout board to each chassis feedthrough connector in turn and tested channels using a multimeter and calibrated voltage source. Today I got through all the digital output channels and analog input channels. Still remaining are the analog output channels, which I will finish tomorrow.

There have been a few wiring issues found so far, which are noted below.

Channel                  Type             Issue
C1:SUS2-PRM_URVMon       Analog input     No response
C1:SUS2-PRM_LRVMon       Analog input     No response
C1:SUS2-BS_UL_ENABLE     Digital output   Crossed with LR
C1:SUS2-BS_LL_ENABLE     Digital output   Crossed with UR
C1:SUS2-BS_UR_ENABLE     Digital output   Crossed with LL
C1:SUS2-BS_LR_ENABLE     Digital output   Crossed with UL
C1:SUS2-ITMY_SideVMon    Analog input     Polarity reversed
C1:SUS2-MC2_UR_ENABLE    Digital output   Crossed with LR
C1:SUS2-MC2_LR_ENABLE    Digital output   Crossed with UR

 

  14562   Mon Apr 22 22:43:15 2019   gautam   Update   SUS   ETMY sensor diagnosis

Here are the results from this test. The data for 17 April is with the DC bias for ETMY set to the nominal values (which gives good Y arm cavity alignment), while on 18 April, I changed the bias values until all four shadow sensors reported values that were at least 100 cts different from 17 April. The times are indicated in the plot titles in case anyone wants to pull the data (I'll point to the directory where they are downloaded and stored later).

There are 3 visible peaks. There was negligible shift in position (<5 mHz)  / change in Q of any of these with the applied Bias voltage. I didn't attempt to do any fitting as it was not possible to determine which peak corresponds to which DoF by looking at the complex TFs between coils (at each peak, different combinations of 3 OSEMs have the same phase, while the fourth has ~180 deg phase lead/lag). FTR, the wiki leads me to expect the following locations for the various DoFs, and I've included the closest peak in the current measured data in parentheses:

DoF    Expected frequency [Hz] (closest measured peak)
POS    0.982 (0.947)
PIT    0.86  (0.886)
YAW    0.894 (0.886)
SIDE   1.016 (0.996)

However, this particular SOS was re-suspended in 2016, and this elog reports substantially different peak positions, particularly for the YAW DoF (though there were still 4 distinct peaks then). The Qs of the peaks from last week's measurements are in the range 250-350.
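For reference, a minimal sketch of the sign-pattern test described above, assuming four face-OSEM time series sampled at 2 kHz; the helper function and its thresholds are hypothetical, not part of any existing script.

import numpy as np
from scipy.signal import csd

# Ideal face-OSEM sign patterns for each DoF, in the order [UL, UR, LL, LR].
SIGN_PATTERNS = {
    'POS': (+1, +1, +1, +1),
    'PIT': (+1, +1, -1, -1),
    'YAW': (+1, -1, +1, -1),
}

def classify_peak(osem_ts, fs, f0):
    """Classify a free-swinging peak at f0 [Hz] from the relative phases
    of the 4 face-OSEM signals (rows of osem_ts, order [UL, UR, LL, LR])."""
    ref = osem_ts[0]
    signs = [+1]
    for x in osem_ts[1:]:
        # long segments for fine frequency resolution (~16 mHz at 2 kHz)
        f, Pxy = csd(ref, x, fs=fs, nperseg=2**17)
        phase = np.angle(Pxy[np.argmin(np.abs(f - f0))])
        signs.append(+1 if abs(phase) < np.pi / 2 else -1)
    for dof, pat in SIGN_PATTERNS.items():
        if tuple(signs) == pat:
            return dof
    return 'ambiguous'  # e.g. the 3-in-phase / 1-out-of-phase patterns seen here

A clean POS/PIT/YAW mode would match one of the patterns; the measurement described above returns the 'ambiguous' case for every peak, which is exactly the problem.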

Quote:

Repeat the free-swinging ringdown with the ETMY bias voltage adjusted such that all the OSEM PDmons report ~100 um different position from the "nominal" position (i.e. when the Y arm cavity is aligned). Investigate whether the resulting eigenmode frequencies / Qs are radically different. I'm setting the optic free-swinging on my way out tonight. Optic kicked at 1239690286.

  14563   Tue Apr 23 18:48:25 2019 JonUpdateSUSc1susaux bench testing completed

Today I tested the remaining Acromag channels and retested the non-functioning channels found yesterday, which Chub repaired this morning. We're still not quite ready for an in situ test. Here are the issues that remain.

Analog Input Channels

Channel              Issue
C1:SUS-MC2_URPDMon   No response
C1:SUS-MC2_LRPDMon   No response

I further diagnosed these channels by connecting a calibrated DC voltage source directly to the ADC terminals. The EPICS channels do sense this voltage, so the problem is isolated to the wiring between the ADC and DB37 feedthrough.

Analog Output Channels

Channel                 Issue
C1:SUS-ITMX_ULBiasAdj   No output signal
C1:SUS-ITMX_LLBiasAdj   No output signal
C1:SUS-ITMX_URBiasAdj   No output signal
C1:SUS-ITMX_LRBiasAdj   No output signal
C1:SUS-ITMY_ULBiasAdj   No output signal
C1:SUS-ITMY_LLBiasAdj   No output signal
C1:SUS-ITMY_URBiasAdj   No output signal
C1:SUS-ITMY_LRBiasAdj   No output signal
C1:SUS-MC1_ULBiasAdj    No output signal
C1:SUS-MC1_LLBiasAdj    No output signal
C1:SUS-MC1_URBiasAdj    No output signal
C1:SUS-MC1_LRBiasAdj    No output signal

To further diagnose these channels, I connected a voltmeter directly to the DAC terminals and toggled each channel output. The DACs are outputting the correct voltage, so these problems are also isolated to the wiring between DAC and feedthrough.

In testing the DC bias channels, I checked only that the output had the correct magnitude, not its sign. As a result, the bench test is insensitive to situations where two degrees of freedom are crossed or a polarity is reversed. However, my susPython scripting tests for exactly this, fetching and applying all the relevant signal gains between the pitch/yaw input and the coil bias output (a sketch of the idea is below). It would be very time-consuming to propagate all these gains by hand, so I've elected to wait for the automated in situ test.
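A minimal sketch of the sign propagation involved, assuming the ideal PIT/YAW quadrant pattern for the output matrix; the function and its arguments are illustrative placeholders for the gains held in the real EPICS records.

import numpy as np

# Ideal DOF -> coil output-matrix signs, coils in the order [UL, UR, LL, LR].
OUT_MATRIX = {'PIT': (+1, +1, -1, -1),
              'YAW': (+1, -1, +1, -1)}

def expected_bias_signs(dof, offset, coil_gains):
    """Sign each coil's BiasAdj output should show for a pure PIT or YAW
    offset, folding in the (possibly negative) per-coil gains."""
    return [int(np.sign(offset * m * g))
            for m, g in zip(OUT_MATRIX[dof], coil_gains)]

print(expected_bias_signs('PIT', +1.0, [1, 1, 1, 1]))   # [1, 1, -1, -1]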

Digital Output Channels

Everything works.

  14564   Tue Apr 23 19:31:45 2019 JonUpdateSUSWatchdog channels separated from autoBurt.req

For the new c1susaux, Gautam and I moved the watchdog channels from autoBurt.req to a new file named autoBurt_watchdogs.req. When the new modbus service starts, it loads the state contained in autoBurt.snap. We thought it best for the watchdogs not to be automatically enabled at this stage; an operator must enable them manually. By moving the watchdog channels to a separate snap file, the entire SUS state can be loaded while leaving just the watchdogs disabled (a sketch of the split is below).
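A minimal sketch of the split, using the C1:SUS-<OPTIC>_<COIL>_ENABLE naming that appears elsewhere in this log; the exact record names in the real file may differ.

autoBurt_watchdogs.req (sketch; restored only by an operator, never at service start):

C1:SUS-PRM_UL_ENABLE
C1:SUS-PRM_UR_ENABLE
C1:SUS-PRM_LL_ENABLE
C1:SUS-PRM_LR_ENABLE
C1:SUS-PRM_SD_ENABLE

...and likewise for BS, ITMX, ITMY, SRM, MC1, MC2, MC3.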

This same modification should be made to the ETMX and ETMY machines.

  14567   Wed Apr 24 17:07:39 2019 gautamUpdateSUSc1susaux in-situ testing [and future of IFOtest]

[jon, gautam]

For the in-situ test, I decided that we will use the physical SRM to test the c1susaux Acromag replacement crate functionality for all 8 optics (PRM, BS, ITMX, ITMY, SRM, MC1, MC2, MC3). To facilitate this, I moved the backplane connector of the SRM SUS PD whitening board from the P1 connector to P2, per Koji's mods, at ~5:10PM local time. The watchdog was shut down, and the backplane connector for the SRM coil driver board was also disconnected (this is now interfaced to the Acromag chassis).

I had to remove the backplane connector for the BS coil driver board in order to have access to the SRM backplane connector. Room in the back of these eurocrate boxes is tight in the existing config...

At ~6pm, I manually powered down c1susaux (as I did not know of any software way to turn off the EPICS server run by the old VME crate). The point was to be able to easily interface with the MEDM screens. So the slow channels prefixed C1:SUS-* are now being served by the Supermicro called c1susaux2.

A critical wiring error was found. The channel mapping prepared by Johannes lists the watchdog enable BIO channels as "C1:SUS-<OPTIC>_<COIL>_ENABLE", which go to pins 23A-27A on the P1 connector, with returns on the corresponding C pins. However, we use the "TEST" inputs of the coil driver boards for sending in the FAST actuation signals. The correct BIO channels for switching this input are actually "C1:SUS-<OPTIC>_<COIL>_TEST", which go to pins 28A-32A on the P1 connector. For today's tests, I voted to fix this inside the Acromag crate for the SRM channels and do our tests. Chub will unfortunately have to fix the remaining 7 optics; see Attachment #1 for the corrections required. I apportion 70% of the blame to Johannes for the wrong channel assignment, and accept 30% for not checking it myself.

The good news: the tests for the SRM channels all passed!

  • Attachment #2: Output of Jon's testing code. My contribution is the colored logs, courtesy of python's coloredlogs package, but this needs a bit more work - mainly the PASS message needs to be green. This test applies bias voltages to PIT/YAW and looks for the response in the PDmon channels. It backs out the correct signs for the four PDs based on the PIT/YAW actuation matrix, and checks that the optic has moved "sufficiently" for the applied bias. You can also see that the PD signals move with consistent signs when PIT/YAW misalignment is applied. Additionally, the DC values of the PDMon channels reported by the Acromag system are close to what they were using the VME system. I propose calling the next iteration of IFOtest "Sherlock".
  • Attachment #3: Confirmation (via spectra) that the SRM OSEM PD whitening can still be switched even after my move of the signals from the P1 connector to the P2 connector. I don't have an explanation right now for the shape of the SIDE coil spectrum.
  • Attachment #4: Applied 100 cts (~ 100*10/2**15/2 ~ 15mV at the monitor point) offset at the bias input of the coil output filters on SRM (this is a fast channel). Looked for the response in the Coil Vmon channels (these are SLOW channels). The correct coil showed consistent response across all 5 channels.

Additionally, I confirmed that the watchdog tripped when the RMS OSEM PD voltage exceeded 200 counts. Ideally we'd have liked to test the stability of the EPICS server, but we have shut it down and brought the crate back out to the electronics bench for Chub to work on tomorrow.
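A minimal sketch of the trip condition exercised here; the RMS criterion is as described above, but the windowing and names are illustrative only.

import numpy as np

THRESHOLD = 200   # counts, per the test above

def watchdog_should_trip(osem_counts):
    """osem_counts: (n_osems, n_samples) array of recent shadow sensor
    readings. Trip if any sensor's RMS deviation from its mean exceeds
    the threshold."""
    dev = osem_counts - osem_counts.mean(axis=1, keepdims=True)
    rms = np.sqrt((dev ** 2).mean(axis=1))
    return bool(np.any(rms > THRESHOLD))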

I restarted the old VME c1susaux at 9:15pm local time as I didn't want to leave the watchdogs in an undefined state. Unsurprisingly, ITMY is stuck. Also, the BS (cable #22) and SRM (cable #40) coil drivers are physically disconnected at the front DB15 output because of the undefined backplane inputs. I also re-opened the PSL shutter.

  14569   Thu Apr 25 00:30:45 2019 gautamUpdateSUSETMY BR mode

We briefly talked about the bounce and roll modes of the SOS optic at the meeting today. 

Attachment #1: BR modes for ETMY from my free-swinging run on 17 April. The LL coil has a very different behavior from the others.

Attachment #2: BR modes for ETMY from my free-swinging run on 18 April, which had a macroscopically different bias voltage for the PIT/YAW sliders. Here too, the LL coil has a very different behavior from the others.

Attachment #3: BR modes for ETMX from my free-swinging run on 27 Feb. There are many peaks in addition to the prominent ones visible here, compared to ITMY. The OSEM PD noise floor for UR and SIDE is mysteriously x2 lower than for the other 3 OSEMs???

In all three cases, a bounce mode around 16.4 Hz and a roll mode around 24.0 Hz are visible. The ratio between these is not sqrt(2) but ~1.46, which is ~3.5% larger (arithmetic below). But when I look at the database, I see that in the past, the bounce and roll modes were in fact close to these frequencies.
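For reference, the arithmetic: \frac{f_{\rm roll}}{f_{\rm bounce}} = \frac{24.0\,{\rm Hz}}{16.4\,{\rm Hz}} \approx 1.463, while \sqrt{2} \approx 1.414, so the measured ratio is high by 1.463/1.414 - 1 \approx 3.5\%.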

In conclusion:

  1. the evidence thus far says that ETMY has 5 resonant modes in the free-swinging data between 0.5 Hz and 25 Hz.
  2. Either two modes are exactly degenerate, or there is a constraint in the system which removes 1 degree of freedom.
  3. How likely is the latter? Any mechanical constraint that removes one degree of freedom would presumably also damp the Qs of the other modes more than what we are seeing.
  4. Can some large piece of debris on the barrel change the PIT/YAW eigenvectors such that the eigenvalues become exactly degenerate?
  5. Furthermore, the AC actuation vectors for PIT and YAW are not close to orthogonal, but are rotated ~45 degrees relative to each other.

Because of my negligence and rushing the closeout procedure, I don't have a great close-out picture of the magnet positions in the face OSEMs, the best I can find is Attachment #4. We tried to replicate the OSEM arrangement (orientation of leads from the OSEM body) from July 2018 as closely as possible.

I will investigate the side coil actuation strength tomorrow, but if anyone can think of more in-air tests we should do, please post your thoughts/poetry here.

  14581   Fri Apr 26 19:35:16 2019 JonUpdateSUSNew c1susaux installed, passed first round of scripted testing

[Jon, Gautam]

Today we installed the c1susaux Acromag chassis and controller computer in the 1X4 rack. As noted in 14580, the prototype Acromag chassis first had to be removed to make room in the rack. The signal feedthroughs were connected to the eurocrates by 10' DB-37 cables, via adapters to 96-pin DIN.

Once installed, we ran a scripted set of suspension actuation tests using PyIFOTest. BS, PRM, SRM, MC1, MC2, and MC3 all passed these tests. We were unable to test ITMX and ITMY because both appear to be stuck. Gautam will shake them loose on Monday.

Although the new c1susaux is now mounted in the rack, there is more that needs to be done to make the installation permanent:

  • New 15V and 24V power cables with standard LIGO connectors need to be run from the Sorensen supplies in 1X5. The chassis is currently powered by bench supplies sitting on a cart behind the rack.
  • All 24 new DB-37 signal cables need to be labeled.
  • New 96-pin DIN connectors need to be put on two ribbon cables (1Y5_80 B, 1Y5_81) in the 1X4 rack. We had to break these connectors to remove them from the back of the eurocrates.
  • General cleanup of any cables, etc. left around the rack. We cleaned up most things this evening.
  • Rename the host computer c1susaux2 --> c1susaux, and update the DNS lookup tables on chiara.

On Monday we plan to continue with additional scripted tests of the suspensions.


gautam - some more notes:

  • Backplane connectors for the SUS PD whitening boards, which now only serve the purpose of carrying the fast BIO signals used for switching the whitening, were moved from the P1 connector to P2 connector for MC1, MC2, MC3, ITMX, ITMY, BS and PRM.
  • In the process, the connectors for BS and PRM were detached from the ribbon cable (there wasn't any good way I know of to unseat the connector from the shell). These will have to be repaired by Chub, and the signal integrity will have to be checked (as it must be for the connectors that are allegedly intact).
  • While we were doing the wiring, I disconnected the outputs of the coil driver board going to the satellite box (front panel DB15 connector on D010001). These were restored after our work for the testing phase.
  • The backplane cables to the eurocrate housing the coil driver boards were also disconnected. They are currently just dangling, but we will have to clean them up if the new crate is performing alright.
  • In general, the cable routing cleanliness has to be checked and approved by Chub or someone else qualified. In particular, the power leads to the eurocrate are in the way of the DIN96-DB37 adaptor board of Johannes' design, especially on the SUS PD eurocrate.
  • Tapping new power rails for the Acromag chassis will have to be done carefully. Ideally we shouldn't have to turn off the Sorensens.
  • We encountered some networking-related software issues today that have to be understood and addressed in a permanent way.
  • Sooner rather than later, we want to reconnect the Acromag crate that was monitoring the PSL channels, particularly given the NPRO's recent flakiness.
  • The NPRO was turned back on (following the same procedure of slowly dialing up the injection current). The primary motivation was to see if the mode cleaner cavity could be locked with the new SUS electronics; it looks like it could. I'm leaving it on over the weekend...
  14587   Thu May 2 10:41:50 2019 gautamUpdateSUSSOS Magnet polarity

A concern was raised about the two ETMs and ITMX having the opposite response (relative to the other 7 SOS optics) in the OSEM PDmon channel for a given polarity of PIT/YAW offset applied to the coils. Jon has taken into account all the digital gains in the actuation part of the CDS system in reaching this conclusion. I raised the possibility of the OSEM coil winding direction being opposite on the 15 OSEMs of the ETMs and ITMX, but I think it is more likely that the magnets are just glued on opposite to what they are "supposed" to be. See Attachment #6 of this elog (you'll have to rotate the photo either in your head or in your viewer) and note that it is opposite to what is specified in the assembly procedure, page 8. The net magnetic quadrupole moment is still 0, but the direction of actuation in response to a given current in the coil would be opposite. I can't find magnet polarities for all 10 SOS optics, but this hypothesis fits all the evidence so far.

  14588   Thu May 2 10:59:58 2019 JonUpdateSUSc1susaux in situ wiring testing completed

Summary

Yesterday Gautam and I ran final tests of the eight suspensions controlled by c1susaux, using PyIFOTest. All of the optics pass a set of basic signal-routing tests, which are described in more detail below. The only issue found was with ITMX having an apparent DC bias polarity reversal (all four front coils) relative to the other seven susaux optics. However, further investigation found that ETMX and ETMY have the same reversal, and there is documentation pointing to the magnets being oppositely-oriented on these two optics. It seems likely that this is the case for ITMX as well. 

I conclude that all the new c1susaux wiring/EPICS interfacing works correctly. There are of course other tests that can still be scripted, but at this point I'm satisfied that the new Acromag machine itself is correctly installed. PyIFOTest has been morphed into a powerful general framework for automating IFO tests. Anything involving fast/slow IO can now be easily scripted. I highly encourage others to think of more applications this may have at the 40m.

Usage and Design

The code is currently located in /users/jon/pyifotest although we should find a permanent location for it. From the root level it is executed as

$ ./IFOTest <PARAMETER_FILE>

where PARAMETER_FILE is the filepath to a YAML config file containing the test parameters. I've created a config file for each of the suspended optics. They are located in the root-level directory and follow the naming convention SUS-<OPTIC>.yaml.
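A sketch of what such a file might contain, following the naming convention above; the keys and values here are assumptions, so consult the real SUS-PRM.yaml for the true schema.

# SUS-PRM.yaml (hypothetical contents)
optic: PRM
tests:
  - vmon
  - coil_enable
  - pdmon_dc_bias
actuation:
  dc_offset_counts: 100    # offset applied in the final coil filter module
settle_time_s: 5           # wait between actuation and readback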

The code climbs a hierarchical "ladder" of actuation/readback-paired tests, with the test at each level depending on signals validated in the preceding level. At the base is the fast data system, which provides an independent reference against which the slow channels are tested. There are currently three scripted tests for the slow SUS channels, listed in order of execution:

  1. VMon test:  Validates the low-frequency sensing of SUS actuation (VMon channels). A DC offset is applied in the final filter module of the fast coil outputs, one coil at a time. The test confirms that the VMon of the actuated coil, and only that VMon, senses the displacement, and that the response has the correct polarity. The screen output is a matrix showing the change in VMon responses with actuation of each coil. A passing test, roughly, has diagonal values >> 0 and off-diagonal values << diagonal (see the sketch after this list).

  2. Coil Enable test:  Validates the slow watchdog control of the fast coil outputs (Coil-Enable channels). Analogously to (1), this test also applies a DC offset via the fast system to one coil at a time and analyzes the VMon responses. However, in this case the offset is enabled to all five coils simultaneously and only one coil output is enabled at a time. The screen output is again a \Delta VMon matrix, interpreted in the same way as above.


  3. PDMon/DC Bias test:  Validates slow alignment control and readback (BiasAdj and PDMon channels). A DC misalignment is introduced first in pitch, then in yaw, with the OSEM PDMon responses measured in both cases. Using the gains from the PIT/YAW---> COIL output coupling matrix, the script verifies that each coil moves in the correct direction and by a sufficiently large magnitude for the applied DC bias. The screen output shows the change in PDMon responses with a pure pitch actuation, and with a pure yaw actuation. The output filter matrix coefficients have already been divided out, so a passing test is a sufficiently large, positive change under both pitch and yaw actuations.

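The sketch referenced in test (1): a hypothetical pass criterion for the \Delta VMon matrix. The thresholds are placeholders, not the values PyIFOTest actually uses.

import numpy as np

def vmon_test_passes(dV, min_diag=0.01, max_offdiag_frac=0.1):
    """dV[i, j] = change in VMon j when coil i is actuated [V].
    Pass if every diagonal response is large and every off-diagonal
    response is small compared to its row's diagonal element."""
    diag = np.abs(np.diag(dV))
    off = np.abs(dV - np.diag(np.diag(dV)))
    return bool(np.all(diag > min_diag) and
                np.all(off.max(axis=1) < max_offdiag_frac * diag))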

  14591   Fri May 3 09:12:31 2019 gautamUpdateSUSAll vertex SUS watchdogs were tripped

I found the 8 vertex watchdogs tripped this morning. The ETMs were fine, suggesting this was not an actual earthquake. I suspect it was connected to this remote work? Was there a reason why they were left tripped?

On a side note - I don't think we log the watchdog state explicitly. We can infer whether the optic is damped by looking at the OSEM sensor time series, but do we want to record the watchdog state to frames?

  14592   Fri May 3 12:48:40 2019 gautamUpdateSUS1X4/1X5 cable admin

Chub and I crossed off some of these items this morning. The last bullet was addressed by Jon yesterday. I added a couple of new bullets.

The new power connectors will arrive next week, at which point we will install them. Note that there is no 24V Sorensen available, only 20V.

I am running a test on the 2W Mephisto, for which I wanted the diagnostics connector plugged in again and Acromag channels to record the signals. So we put together the highly non-ideal but temporary setup shown in Attachment #1. This will be cleaned up by Monday evening at the latest.

update 1630 Monday 5/6: the sketchy PSL acromag setup has been disassembled.

Quote:
 
  • Take photos of the new setup, cabling.
  • Remove the old c1susaux crate from the rack to free up space, possibly put the PSL monitoring acromag chassis there.
  • Test that the OSEM PD whitening switching is working for all 8 vertex optics. (verified as of 5/3/19 5pm)
  • New 15V and 24V power cables with standard LIGO connectors need to be run from the Sorensen supplies in 1X5. The chassis is currently powered by bench supplies sitting on a cart behind the rack.
  • All 24 new DB-37 signal cables need to be labeled.
  • New 96-pin DIN connectors need to be put on two ribbon cables (1Y5_80 B, 1Y5_81) in the 1X4 rack. We had to break these connectors to remove them from the back of the eurocrates.
  • General cleanup of any cables, etc. left around the rack. We cleaned up most things this evening.
  • Rename the host computer c1susaux2 --> c1susaux, and update the DNS lookup tables on chiara.
  14596   Mon May 6 11:05:23 2019 JonUpdateSUSAll vertex SUS watchdogs were tripped

Yes, this was a consequence of the systemd scripting I was setting up. Unlike the old susaux system, we decided for safety NOT to allow the modbus IOC to automatically enable the coil outputs. Thus when the modbus service starts/restarts, it automatically restores all state except the watchdog channels, which are left in their default disabled state. They then have to be manually enabled by an operator, as I should have done after finishing testing.
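A minimal sketch of the idea in systemd terms; the unit name, paths, and ExecStart line are assumptions, and only the restore-everything-but-watchdogs behavior is taken from the entry above.

# modbusIOC.service (hypothetical)
[Service]
ExecStart=/usr/local/bin/run_modbus_ioc.sh
# Restore everything EXCEPT the watchdogs on (re)start:
ExecStartPost=/usr/bin/burtwb -f /cvs/cds/caltech/burt/autoBurt.snap
# autoBurt_watchdogs.snap is deliberately never restored automatically;
# an operator re-enables the watchdogs by hand.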

Quote:

I found the 8 vertex watchdogs tripped this morning. The ETMs were fine, suggesting this was not an actual earthquake. I suspect it was connected to this remote work? Was there a reason why they were left tripped?
