I moved the network-enabled power strip from above the power supplies on rack 1y4 to below. Nothing was powered through the strip when I unplugged everything, and I reconnected everything to the same ports afterwards.
[Ian, Koji] - Activity on 25th (Fri)
We continued working on the ETMY electronics replacement.
- The units were mounted on the rack according to the rack plan.
- Unnecessary Eurocard modules were removed from the crate.
- Unnecessary IDC cables and the sat amp were removed from the wiring chain. The side cross-connects became obsolete and were also removed.
- An 18V DC power strip was attached to one of the side DIN rails.
- Right now the ETMY suspension is free and not damped. We are relying on the EQ stops.
Next things to do:
- Lay out the coil driving cables from the vacuum feedthru to the sat amp (2x D2100675-01, 30ft) [40m wiki]
- Lay out the DB cables between the units
- Lay out the DC power cables from the power strip to the units
- Reassign ADC/DAC channels in the iscey model.
- Recover the optic damping
- Measure the change of the PD gains and the actuator gains.
The replacement key switches and Ne Indicators came in. They were replaced and work fine now.
The power supply units were tested with the X end HeNe display. It turned out that one unit has a supply module rated 1350V 4.9mA, while the other two have 1700V 4.9mA modules.
In any case, these two ignited the HeNe Laser (1103P spec 1700V 4.9mA).
The 1350V one is left at the HeNe display and the others were stored in the cabinet together with spare key SWs and Ne lamps.
Now that the ETMY SUS is back up and running, I reran my measurements from the beginning of the process. The results below show a change in gain between the before and after measurements; values from the low-frequency section are given below.
Average gain difference from the TFs: 18.867 (excluding the side change)
I am also noting the new values for the OSEM DC output: average gain increase: 9.004
In addition, the oplev position was:
All data and settings have been included in the zip file
From the average gain increase of the TFs (which reflects the whole signal chain) and the gain increase of the OSEM sensors alone, we can calculate the gain increase of the actuators:
18.867 / 9.004 ≈ 2.10
thus the increase in gain of the actuators is about 2.1
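The arithmetic above can be written as a quick check (numbers copied from this entry; the overall TF gain is the product of sensor and actuator gains, so the actuator increase is the ratio of the two measured increases):

```python
# Overall TF gain increase = (actuator increase) x (sensor increase),
# so dividing out the OSEM sensor increase isolates the actuators.
tf_gain = 18.867   # average gain increase from the transfer functions
osem_gain = 9.004  # average gain increase from the OSEM DC outputs

actuator_gain = tf_gain / osem_gain
print(f"actuator gain increase: {actuator_gain:.2f}")  # prints 2.10
```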
EDIT: I updated the side TF with one with better SNR. I increased the excitation amplitude.
I decreased the gain of the cts2um filters on the ETMY OSEMs by a factor of 9, from 0.36 to 0.04.
I added a filter called "gain_offset" to all the coils except for side and added a gain of 0.48.
Together these should negate the added gain from the ETMY electronics replacement.
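As a sanity check that the two digital changes approximately cancel the two analog gain increases (numbers from the entries above; this is my own bookkeeping, not an official procedure):

```python
# Sensor chain: electronics gain went up ~9.0x; the cts2um filter gain
# was dropped from 0.36 to 0.04 (a factor of 1/9) to compensate.
sensor_increase = 9.004
cts2um_change = 0.04 / 0.36
print(sensor_increase * cts2um_change)     # ~1.00, i.e. cancelled

# Actuator chain: gain went up ~2.1x; a gain_offset of 0.48 (~1/2.1)
# was added to the face coils to compensate.
actuator_increase = 18.867 / 9.004
gain_offset = 0.48
print(actuator_increase * gain_offset)     # ~1.01, i.e. cancelled
```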
After Ian updated the cts2um filters for the OSEMs, shouldn't the damping gains be increased back to their previous values? Was the damping gain for SIDE ever changed? We found it at 250.
Can you explain why the gain_offset filter was required, and why this wasn't done for the side coil?
The point of changing the gains was to return the system to its original state, i.e. I wanted the overall gain of the physical components to be the same as when we started. From the CDS side of things nothing else should be changed; the damping filters should remain at their original values. The cts2um filter was changed to counteract a change in the electronics (replacing them), so these changes should cancel each other out. As for the side control: on 3/4/22 Koji reduced the output resistors for the 4 face OSEMs but did not change the SD one, therefore SD did not need the same adjustment as the others.
We ran one more free swing test on ETMY last night, after the last bit of tweaking on the SIDE OSEM. It now looks pretty good:
pit yaw pos side butt
UL -0.323 1.274 1.459 -0.019 0.932
UR 1.013 -0.726 1.410 -0.050 -1.099
LR -0.664 -1.353 0.541 -0.036 0.750
LL -2.000 0.647 0.590 -0.004 -1.219
SD 0.021 -0.035 1.174 1.000 0.137
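One quick sanity check on the free-swing result above (my own check, not a standard procedure): a usable input matrix should be comfortably invertible, i.e. have a modest condition number, so the DOF signals can be cleanly reconstructed from the 5 sensors.

```python
import numpy as np

# Input matrix from the free-swing test above
# (rows: UL, UR, LR, LL, SD; columns: pit, yaw, pos, side, butt).
M = np.array([
    [-0.323,  1.274, 1.459, -0.019,  0.932],
    [ 1.013, -0.726, 1.410, -0.050, -1.099],
    [-0.664, -1.353, 0.541, -0.036,  0.750],
    [-2.000,  0.647, 0.590, -0.004, -1.219],
    [ 0.021, -0.035, 1.174,  1.000,  0.137],
])

# A large condition number would mean two DOF columns are nearly
# degenerate and the sensing decomposition is ill-determined.
cond = np.linalg.cond(M)
print(f"condition number: {cond:.1f}")
```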
So I declare: WE'RE NOW READY TO CLOSE UP.
I opened up the ETMY satellite box to investigate the glitches seen in the UL sensor output.
Attachments #1 & 2: The connection to J4 from the satellite amplifier goes through a "satellite amplifier termination board", whose function, according to the schematic, is to prevent oscillations of the output amplifiers for the PD outputs. This board seems to have been attached to the inside cover of the satellite box by some sort of sponge/adhesive arrangement. The box itself gets rather hot, however, and the sponge/adhesive was a gooey mess. I believe it is possible that some pins on the termination board were getting shorted - so if the 100 ohm resistor for the UL channel that is meant to prevent the output amplifier from oscillating was getting shorted, this could explain the problem.
For now, I cleaned off the old sponge/adhesive as best as I could, and used 4 pads of thick double sided tape (with measured resistance > 60Mohm) to affix the termination board to the inside of the box lid. In the ~3 hours since I have plugged the satellite box back in, there has been no evidence of any glitching.
Of course, it could be that the problem has nothing to do with the termination board, and perhaps an OpAmp in the UL signal chain is damaged, but I stopped short of replacing these for now. I plan to push on with putting the IFO back together, and will keep an eye on this problem to see if more action is needed.
Also, if the inside of the ETMY satellite box had this problem of the sponge/adhesive giving way, it may be that something similar is going on in the other boxes as well. This remains to be investigated.
I did some work on the real and simulated ETMY.
It seems like there is still a problem with the input whitening filters. I believe the Xycom logic is set such that the analog whitening of the OSEM signals is turned ON only when the FM1 is turned OFF. Joe has got to fix this (and elog it) so that we can damp the suspension correctly. For now, the damping of the ETMY and the SETMY require different servo gains and signs, probably because of this.
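The logic described above can be summarized in a small truth-table sketch (my reading of the entry, in illustrative Python, not the actual Xycom code): the analog whitening and the digital FM1 de-whitening filter must switch together so the overall response stays flat, but the old logic had them inverted.

```python
# Sketch of the whitening control bit for one OSEM sensor channel.
def analog_whitening_old(fm1_on: bool) -> bool:
    # Broken behavior: analog whitening ON only when FM1 is OFF.
    return not fm1_on

def analog_whitening_fixed(fm1_on: bool) -> bool:
    # Intended behavior: analog whitening follows the FM1 state.
    return fm1_on

for fm1 in (True, False):
    print(fm1, analog_whitening_old(fm1), analog_whitening_fixed(fm1))
```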
4. The blue Output Filters section has been changed to agree with the new filter matrices' row/column labeling. My fault for not testing it and realizing it was broken. The change was made in /opt/rtcds/caltech/c1/medm/master/C1SUS_DEFAULTNAME.adl, and then ./generate_master_screens.py was run, updating all the screens.
5. I have swapped the logic for the sensor filter banks (ULSEN, URSEN, etc). It now sends a "1" to the Binary Output board controlling the OSEM analog whitening when the FM1 filter is ON. This has been done for all the suspensions (BS, ITMX, ITMY, SRM, PRM, MC1, MC2, MC3).
I am also updating the first sensor filter banks for the BS, ITMX, ITMY, SRM, PRM, MC1, MC2, MC3, called "3:30", to match the Y and X ends.
8. I can't find any documentation on how to get a momentary button press to toggle states. I could stick a filter bank in and use the on/off feature of that part, but that feels like a silly hack. I've decided for the moment to split the TM offset button into 2, one for ON, one for OFF. I'll put in on the list of things to have added to the RCG code (either a method, or documentation if it already exists).
EDIT: TM offset still doesn't work. Will worry about it next week.
9. Fixed a connection in the SPY/SPX models where the side sensor path was missing a constant input to a modulo block.
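For reference, the momentary-press toggle wanted in item 8 above needs an edge latch: detect the rising edge of the button and flip a state bit. A minimal sketch of that logic (illustrative Python, not RCG code; names are made up):

```python
# Latch the rising edge of a momentary button and toggle a state bit.
class Toggle:
    def __init__(self):
        self.state = False   # the toggled output (e.g. TM offset ON/OFF)
        self._prev = 0       # previous button sample, for edge detection

    def update(self, button: int) -> bool:
        if button and not self._prev:   # rising edge of the press
            self.state = not self.state
        self._prev = button
        return self.state

t = Toggle()
presses = [0, 1, 1, 0, 0, 1, 0]
# Each press flips the state once, regardless of how long it is held:
print([t.update(b) for b in presses])
# [False, True, True, True, True, False, False]
```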
About 2 weeks ago, I noticed some odd behaviour of the LSC TRY data stream. Its DC value seems to be drifting ~10x more than TRX. Both signals come from the transmission QPDs. At the time, we were dealing with various CDS FE issues but things have been stable on that end for the last two weeks, so I looked into this a bit more today. It seems like one particular channel is bad - Quadrant 4 of the ETMY TRANS QPD. Furthermore, there is a bump around 150Hz, and some features above 2kHz, that are only present for the ETMY channels and not the ETMX ones.
Since these spectra were taken with the PSL shutter closed and all the lab room lights off, it would suggest something is wrong in the electronics - to be investigated.
The drift in TRY can be as large as 0.3 (with 1.0 being the transmitted power in the single arm lock). This seems unusually large, indeed we trigger the arm LSC loops when TRY > 0.3. Attachment #2 shows the second trend of the TRX and TRY 16Hz EPICS channels for 1 day. In the last 12 hours or so, I had left the LSC master switch OFF, but the large drift of the DC value of TRY is clearly visible.
In the short term, we can use the high-gain THORLABS PD for TRY monitoring.
Indeed, the whole point of the high/low gain setup is to never use the QPDs for the single arm work. Only use the high gain Thorlabs PD and then the switchover code uses the QPD once the arm powers are >5.
I don't know how the operation procedure went so higgledy piggledy.
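The intended switchover described above can be sketched as follows (a minimal illustration; the threshold of 5 is from the comment above, the function name is made up):

```python
# Pick which transmission PD feeds the TRY signal: the high-gain
# Thorlabs PD for single-arm work, the QPD once the arm builds up power.
QPD_SWITCH_THRESHOLD = 5.0   # normalized arm transmission (from above)

def select_trans_pd(arm_power: float) -> str:
    """Return which transmission PD should be in use."""
    return "QPD" if arm_power > QPD_SWITCH_THRESHOLD else "Thorlabs"

print(select_trans_pd(1.0))   # single-arm lock -> Thorlabs
print(select_trans_pd(10.0))  # high power -> QPD
```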
It had been disconnected for about two weeks: I found a partially seated 4-pin LEMO cable coming from the OSEM PD interface board.
For the ITMY, I squished together the cables which are in the 'Cable Interface Board' which lives in the rack. This thing takes the 64 pin IDC from the satellite module and converts it into 2 D-sub connectors to go to the PD whitening board and the coil driver board. Let's see if the ITMY OSEM glitches change character overnight.
Last night from 8:30 pm to 8:30 am PDT, ETMY UL signal was glitchy again. As of now it seems to have quieted back down, but we pushed on the cables on the board at the Y end to hopefully prevent it from coming back. After doing so it still seems to be behaving well.
I'd recommend replacing the wire and grinding down the clamp to prevent cutting the wire. Since we have almost never replaced clamps, many of them probably have grooves from the wires and can make unpleasant cuts. Better safe than sorry in this case.
I've been noticing that the ETMY UL sensor output has been erratic over the last few days. It seems to be jumping around a lot, even though there is no discernible change in any of the other sensor signals. Damping is OFF, which means the sensor signals should just be a reflection of actual test mass motion. But the fact that only one sensor output is erratic leads me to believe that the problem is in the electronics. I've also double checked that we aren't touching any EQ stops. Also, we had centered all the sensor outputs to half their maximum value pretty carefully. But looking at the Striptool traces, I now find that the UL sensor output has settled at some other value. Simply removing the OSEM connector and plugging it in again leads to the sensor output going back to the carefully centered value. Could it be that the photodiode has gone bad? If so, do we have spare OSEMs to use? I will also re-squish the satellite box cables to see if that fixes the problem.
Attachment #1: Sensor output spectra around the bounce mode peak. Nothing was touched inside the chamber between the time this spectrum was taken and the spectrum I put up last night (in fact the chamber was closed)
Attachment #2: UL sensor output is erratic, while the others show no glitching. This supports the hypothesis that the problem is electronic. The glitch itself happened while the chamber was closed.
Attachment #3: The only difference between this trace and Attachment #2 is that the UL connector was removed and plugged in (OSEM wasn't touched)
This problem has existed well before the vent
We do indeed have a box of clean spare OSEMs, it should be out with all of the other boxes of clean stuff we had for the suspension building. You could also try swapping in a different satellite box, to see if the circuit powering the OSEM PD is to blame.
I wanted to observe the UL coil for any excursions over the weekend. Looking at the 2 day trend, something is definitely wrong. These glitches/excursions are much more pronounced than what is seen in the pre-vent plots Steve had put up.
In order to try and narrow down whether the problem is with the Satellite box or the LED/PD themselves, I switched the Satellite box at the Y end with the Satellite box for ITMY (at ~930pm tonight). Hopefully over a 12 hour observation period, we see something that will allow us to make some conclusion.
It looks like the problem is indeed in the Satellite box. Attachment #1 shows the second trend for the last 12 hours (~930pm 28 Aug 2016 - 930am 29 Aug 2016) for the ITMY and ETMY sensor signals. The satellite boxes for the two were switched during this time (the switch is seen at the leftmost edge of the plots). After the switch, ETMY UL has been well behaved, though ITMY UL shows evidence of excursions similar to what we have been seeing. All the ITMY coils are pulled out of the suspension cage currently, and are just sitting on the optical table, so they should just be reading out a constant value. I think this is conclusive evidence that the problem is with the Satellite box and not the OSEM itself. I will pull the Satellite box out and have a look at its innards to see if I can find the origin of the problem...
This afternoon I re-enabled the ETMY coils after I found that the watchdogs for the mirror had tripped last night at 2:06am.
Rana did a checkout of my story about oddness of the ETMY suspension. Today, we focused on the actuators - the goal was to find the correct coefficients on the 4 face coils that would result in diagonal actuation (i.e. if we actuate on PIT, it only truly moves the PIT DoF, as witnessed by the Oplev, and so on for the other DoFs). Here are the details:
We connected megatron to the IO chassis, which in turn was plugged into the rest of the ITMY setup. We had manually turned the watchdogs off before we touched anything, to ensure we didn't accidentally drive the optic. The connections seemed to go smoothly.
However, on reboot of megatron with the IO chassis powered up, we were unable to actually start the code. (The subsystem has been renamed from SAS to TST, short for test). While starttst claimed to start the IOC Server, we couldn't find the process running, nor did the medm screens associated with it work.
As a sanity test, we tried running mdp, Peter's plant model, but even that didn't actually run, and it gave an odd error we hadn't seen before:
"epicsThreadOnceOsd epicsMutexLock failed."
Running startmdp a second time didn't give the error message, but still no running code. The mdp medm screens remained white.
We turned the IO chassis off and rebooted megatron, but we're still having the same problem.
Things to try tomorrow:
1) Try disconnecting megatron completely from the IO chassis and get it to a state identical to that of last night, when the mdp and mdc did run.
2) Confirm the .mdl files are still valid, and try rebuilding them
[koji, rana, gautam]
This morning, we did the following;
The OSEMs remain in the EY vacuum chamber. The next set of steps are:
We will most likely work on this tomorrow. At ~1615, I briefly opened the PSL shutter and tweaked the IMC alignment. We will almost certainly change the pointing into the IMC when we remove the old OMC and rebalance that table, so care should be taken when working on that...
Summary: Today we moved the suspended ETMY optic back into the chamber from the cleanroom. Once in the chamber, we positioned the optic using the stops that marked its previous position. We then shortened the arm length by 19mm (in order to match the X and Y arm lengths). The F.C. coating on the HR face was removed prior to the final placement of the optic. We then adjusted the OSEM positions in their holders to get the sensor outputs to half their maximum value.
We did not get to check where the input beam hits the optic or see if the pitch balance of the optic is such that the reflected beam makes it back to the ITM. The plan for tomorrow is to do this.
Part 1: Cleanroom work
Part 2: Transportation of optic
Part 3: Chamber work
Plan for tomorrow:
Attachment #1: Wire is in groove in side without OSEM
Attachment #2: Wire is in groove in side with OSEM (picture taken with OSEM coil removed)
Attachment #3: UL magnet relative to OSEM coil
Attachment #4: LL magnet relative to OSEM coil
Attachment #5: LR magnet relative to OSEM coil
Attachment #6: UR magnet relative to OSEM coil
Attachment #7: Side magnet relative to OSEM coil
Attachment #8: ETMY HR face with F.C. film removed. Non-covered part isn't super clean, but the covered part itself does not have any large specks of dust visible.
Attachment #9: Scheme adopted to shorten Y arm length by 19mm.
Attachment #10: Current situation inside EY chamber. Counterweight that was moved to balance the table is indicated.
There was some confusion as to the order in which we should go about trying to recover the Y arm. But here are the steps we decided on in the end.
Yesterday, Eric, Johannes and I tried to do step 1, but after some hours of beam walking, we were unsuccessful. This morning, Koji suggested that the ITM wedge could be playing a part - essentially, over 40m, the wedge would shift the beam horizontally by ~30cm, which is kind of what we were seeing yesterday. That is, with 0 biases to the tip tilts, we could find the beam in the ETM chamber, towards the end of the table, ~30cm away from where it should be (since the input pointing is adjusted taking this effect into account, but we were doing all of our alignment attempts without the ITM in).
So, we shifted strategy today. The idea was to trust that the green beam was well aligned to the cavity axis (we had maximized the green transmission before the vent), and set the pitch bias voltage to ETMY by making the reflected beam overlap with itself. This was done successfully, and we needed to apply a pitch bias of ~-2.70 (value on the MEDM screen slider), which agrees well with what I was seeing in the cleanroom. We then adjusted the OSEMs to bring the sensor outputs to half their nominal maximum value. Next, we went into the ITMX chamber, and were able to find the green beam, at the right height, and approximately where we expect the center of the ITM to be (this supports the hypothesis that the green input pointing was pretty good). I am however concerned if this is truly the right value of the bias for making a cavity with the ITM, because the pre-vent value of the pitch bias slider for ETMY was at -3.7, which is a 30% difference from the current value (and I can't think of a reason why this should have changed, the standoffs weren't touched for ETMY). If we go ahead and fine tune the OSEMs rotationally assuming this is the right bias to have, we may end up with sub-optimal bounce mode coupling into the sensor signals if we have to apply a significantly larger/smaller offset to realise a cavity? The alternative is to put in the ITM, and set the pitch balance using the IR beam, and then go about rotating OSEMs. The obvious downside is that we have to peel the F.C. off, risking dirtying the ITMs.
For much of the rest of the day, we were playing with the rotation of the OSEM coils in order to minimize the bounce mode coupling into the sensor signals. We weren't able to come up with a good scheme to do this measurement, and I couldn't find any elog which details how this was done in the past. The problem is we have no target as to how good is good enough, and it is extremely difficult to gauge whether our rotation has improved the situation or not. For instance, with no rotation of the OSEMs, observing the bounce mode peak height over a period of 20-30 minutes, we saw the peak height change by a factor of at least 3. This is not really surprising I guess, because the impulses that are exciting the bounce mode are stochastic (or at least they should be), and so it is very hard to make an apples-to-apples comparison as to whether a rotation has improved the situation.
After some thought, the best I can come up with is the following. If anyone has better ideas or if my idea is flawed, or if this is a huge waste of time, please correct me!
Of course, this method assumes that the excitation into the bounce mode is a constant over time. I'm also attaching the spectrum of the OSEM sensor signals right now - the optic is in the chamber, free swinging (no damping) with the door on (so it is fairly quiet). The LR signal seems to be the best (indeed seems to match the levels in this plot), but it is not clear whether the others can be improved or not.
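Since the bounce-mode excitation is stochastic, one way to make the before/after comparison less noisy is to average many short periodograms before reading off the peak height. A hedged sketch (my own approach, with an assumed bounce frequency and a synthetic signal for illustration):

```python
import numpy as np

# Average nseg short periodograms so the stochastic excitation of the
# bounce mode averages out, then read off the peak ASD in a band
# around the (assumed) bounce frequency.
def avg_peak_height(x, fs, f_lo, f_hi, nseg=32):
    """Mean ASD-like peak in [f_lo, f_hi] Hz, averaged over nseg segments."""
    seg_len = len(x) // nseg
    freqs = np.fft.rfftfreq(seg_len, d=1.0 / fs)
    band = (freqs >= f_lo) & (freqs <= f_hi)
    psds = []
    for k in range(nseg):
        seg = x[k * seg_len:(k + 1) * seg_len]
        seg = seg * np.hanning(seg_len)        # window to reduce leakage
        psds.append(np.abs(np.fft.rfft(seg)) ** 2)
    mean_psd = np.mean(psds, axis=0)
    return np.sqrt(mean_psd[band].max())

# Synthetic demo: a 16.5 Hz "bounce mode" buried in white sensor noise.
fs = 256.0
t = np.arange(0, 64, 1 / fs)
x = np.sin(2 * np.pi * 16.5 * t) + 0.1 * np.random.randn(len(t))
print(avg_peak_height(x, fs, 15, 18))
```

The same averaged estimator applied before and after a coil rotation would give a more stable figure of merit than a single spectrum.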
There was also some concern as to whether we will be able to see the beam in the ETMX chamber once the ITM has been re-installed. Assuming we get 100mW out of the IMC, PRM transmission of 5.5%, and ITM transmission of 1.4%, we get ~35uW incident on the ETM, which, while not a lot, should be sufficient to see using an IR card.
I test drove ETMY biases.
PITCH worked well in slow and fast modes. Slow drive was from the IFO alignment screen C1:SUS-ETMY_PIT_COM and
the fast one from C1:SUS-ETMY_ASCPIT_OFFSET
YAW did not: it was always diagonal, and it was especially bad with the fast drive. I compared them with ETMX; ETMX yaw is a little bit diagonal too.
The OPLEV return spots on the ETMX and ETMY QPDs are big, 5-6 mm in diameter. The ETMY spot has a weird geometry on the QPD.
After finally figuring out what was messed up with ETMY I was able to get good measurements of the binary whitening switching on ETMY to determine that it is in fact working now:
ul : 3.2937569959 = 10.3538310999 db
ll : 3.28988426634 = 10.3436124066 db
sd : 3.34670033732 = 10.4923365497 db
lr : 3.08727050163 = 9.7914936665 db
ur : 3.27587751842 = 10.3065531117 db
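The dB values above are just 20·log10 of the measured linear switching ratios; for reference:

```python
import math

# Convert the measured linear whitening gain ratios to dB.
def to_db(ratio: float) -> float:
    return 20 * math.log10(ratio)

for name, ratio in [("ul", 3.2937569959), ("ll", 3.28988426634),
                    ("sd", 3.34670033732), ("lr", 3.08727050163),
                    ("ur", 3.27587751842)]:
    print(f"{name}: {to_db(ratio):.4f} dB")
```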
[gautam, johannes, lydia]
We decided to try some different approaches to minimizing the ETMY bounce coupling today, since the peak height in the previously attached spectrum was higher than the levels recorded in 2011 for all but the LR OSEM.
c1iscey was converted over to be a diskless Gentoo machine like the other front ends, following the instructions found here. Its front end model, c1scy, was copied and appropriately changed from the c1scx model, along with the filter banks. A new IOP, c1x05, was created and assigned to c1iscey.
The c1iscey IO chassis had the small 4 PCI slot board removed and a large 17 PCI slot board put in. It was repopulated with an ADC/DAC/BO and RFM card. The host interface board from Rolf was also put in.
On start up, the IOP process did not see or recognize any of the cards in the IO chassis.
Four reboots later, the IOP code had seen the ADC/DAC/BO/RFM card once. And on that reboot, there was a time out on the ADC which caused the IOP code to exit.
In addition to not seeing the PCI cards most of the time, several cables still need to be put together for plugging into the adapter boards, and a box needs to be made for the DAC adapter electronics.
Last Saturday I eventually succeeded in damping the ETMY suspension.
This means now ALL the suspensions are happily damped.
It looked like some combination of gains and control filters had made the conditions unstable.
I actually was playing with the on/off switches of the control filters and the gain values just for fun.
Then finally I found it worked when the Chebyshev filters were off. This is the same situation Yuta told me about two months ago.
Other things, like the input and output matrices, showed nothing wrong, except for the sign flips at ULSEN and SDSEN that I mentioned in the last entry (see here).
So we still should take a look at the analog filters in order to make sure why the signs are flipped.
ETMY sus damping was restored
ETMY's watch dogs were found tripped. They were restored.
ETMY sus damping restored.
ETMY sus damping restored
ETMY damping restored.
Cryo interlock closed VC1 ~2 days ago. P1 is 6.3 mTorr. Cryo temp 12K stable, reset photoswitch and opened VC1
I made some efforts in order to damp ETMY, however it still doesn't happily work.
It looks like something wrong is going on around the whitening filters and the AA filter board.
I will briefly check those analog parts tomorrow morning.
- - -(symptom)
The signs of the UL and SD readouts are flipped; I don't know why.
At the testpoints on the analog PD interface board, all the signs are the same. This is good.
But after the signals go through the whitening filters and AA filters, UL and SD become sign-flipped.
I tried compensating for the sign flips in software, but it didn't help the damping.
In fact the suspension went crazy when I activated the damping, so I have no idea whether we are looking at exactly the right readouts or some sort of different signals.
- - -(fixing DAC connector)
I fixed a connector of the DAC ribbon cable since the solderless connector was loosely locked to its cable.
Before fixing this connector I couldn't apply voltages on some of the coils but now it is working well.
I collected some free-swinging data from earlier this evening. There are still only 3 peaks visible in the ASDs; see Attachment #1.
TBH, I don't have any clear ideas as to what we are supposed to do to fix the problem (or even what the problem is). So here is my plan for now:
I anticipate that these will throw up some more clues
Jenne, Kiwamu, Alberto, Steve, Bob, Koji
We wiped ETMY after recovery of the computer system. We will take lunch and resume at 14:00 with ITMX.
Detailed reports will follow.
It will arrive around 10 am Monday morning.
Enclosure cover #1 transmission measured at 1064 nm, 156 mW, P polarization, beam size ~1 mm.
Condition as tested: fully assembled, protective layer removed, tinted adhesive activated, yellow acrylic layers on top of each other.
T = 1.2% in a 20-minute exposure test. This agrees with the test measurement of 6-18-2012.
There is a reflected 2-3 cm circular glare that is barely visible on a sensor card. It is well below the 1 mW level.
As we are installing the NPRO with ~350 mW of power, we have to address what additional shielding should be installed.
The June 2012 test with 1 W of power burned through the 3-layer IR-coated film in 3-4 hours.
We'll use aluminum shields in the high-power path till we come up with a better solution.