I set the MC back to its good alignment (June 21st) using this procedure. The trend of the OSEM values over the last 40 days and 40 nights is attached.
Then I aligned the periscope to that beam. This took some serious periscope knob action. Without WFS, the transmission went to 2.7 V and the reflection down to 0.6 V.
Then I re-aligned the MC_REFL path as usual. The beam was far enough off that I had to also re-align onto the MC LSC PD as well as the MC REFL camera (~2 beam radii).
Beams are now close to their historical positions on Faraday and MC2. I then restored the PZT sliders to their April snapshot and the X-arm locked.
Steve - please recenter the iris which is on the periscope. It has been way off for a long time.
So it looks OK now. The main point here is that we can trust the MC OSEMs.
Afterwards I rebooted c1susvme1 and c1susvme2 because they were skewed.
It is really surprising that we again have data from the MC OSEMs, since up to two days ago the record looked corrupted (see the attachments in my entry 1774).
The reason I ended up severely misaligning the MC is exactly that there was no longer a reference position I could go back to, and I had to use the camera looking at the Faraday.
I'm impressed by Rana's simple way to align the MC. IFO arms are locked or flashing. 20-day trend attached.
Interferometer alignment is restored
ASS has been run on each arm; the recycling mirrors were aligned by overlapping the beams on the AS camera.
ETMY was not getting its ASC pitch and yaw signals. C1SCY had a red RFM bit (although, it still does now...)
I took a look at the c1rfm simulink diagram and found that C1RFM had an RFM block called C1:RFM-TST_ETMY_[PIT/YAW] and C1SCY had one called C1:TST-SCY_ETMY_[PIT/YAW].
It seems that C1TST was illegally being used in a real signal chain, and Jenne's recent work with c1tst broke it. I renamed the channels in C1RFM and C1SCY to C1:RFM-SCY_ETMY_[PIT/YAW], saved, compiled, installed, restarted. All was well.
There are still some in SCY that have this TST stuff going on, however. They have to do with ALS, it seems, but are SHMEM blocks, not RFM. Namely:
We are restoring the IFO alignment back to the nominal operating state
What we did:
- Fixed the IMC and got WFS back to running state.
- Returned all TMs to their original positions using the OPLEVs
- Used BurtRestore to bring back C1:HPC.adl and C1:BAC.adl
- Aligned the OPLEVs.
To start off the morning, I began to work on the IMC, which became misaligned around 6:00 am. The IMC WFS seemed to be the issue, so I turned these off and worked to align the IMC manually (this took a ton of time). After aligning, Yuta came in and showed me how to fix the IMC WFS and we got it working again. The main issue was that "Clear History" was stuck for a bit and the offsets were stuck at insane values.
Next we moved forward with viewing the BHD PDs. Yuta knew there was an issue with the BurtRestore Yehonathan and I did yesterday because there were no values on the C1:HPC.adl page. To fix this, in the terminal we typed -->
~> cd /opt/rtcds/caltech/c1/burt/autoburt/snapshots/2023/Jun/9
Now you land in whichever date you chose; for us it was the 9th of June:
This will pull up BURT. From here go Restore -> Snapshot Files -> (Select your time) -> (Select which files you want to restore) -> OK -> Cancel -> Restore. (Keywords: How to burtrestore)
This will restore the files of which you have chosen back to the date you selected.
Yuta and I restored C1:HPC.adl and C1:BAC.adl.
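For future reference, the same restore can in principle be done from the command line instead of burtgooey. Below is a minimal sketch (not the procedure we actually used): it assumes the usual BURT .snap layout (a "--- Start/End BURT header" block followed by one "<channel> <count> <value>" line per channel, single-element channels only) and uses pyepics; the example path/filename is made up.

# Minimal sketch of restoring a BURT .snap file without the GUI.
# Assumptions: standard snapshot layout, single-element channels,
# pyepics available. Not the exact procedure used above (burtgooey).
import sys
from epics import caput

def restore_snapshot(path):
    in_header = False
    with open(path) as snap:
        for line in snap:
            line = line.strip()
            if line.startswith('--- Start BURT header'):
                in_header = True
                continue
            if line.startswith('--- End BURT header'):
                in_header = False
                continue
            if in_header or not line or line.startswith('RO '):
                continue                      # skip header and read-only entries (if flagged)
            fields = line.split()
            channel, value = fields[0], fields[-1]
            try:
                value = float(value)          # most of these are numeric PVs
            except ValueError:
                pass                          # leave strings/enums as-is
            caput(channel, value)
            print(channel, '<-', value)

if __name__ == '__main__':
    # e.g. restore_snapshot('/opt/rtcds/caltech/c1/burt/autoburt/snapshots/2023/Jun/9/<time>/<model>epics.snap')
    restore_snapshot(sys.argv[1])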
Next, with all the BurtRestoring done, we moved forward with aligning the single arms. We had a very trashy mode on BHD and decided to align TT1 and TT2.
I tried to align the Y arm to start off, but ETMY was acting very hectic. I was trying to align the OPLEVs first, but the ETMY oplev was VERY FAR OFF. It was very weird: I was moving YAW by a good amount and yet the red beam would not move one bit in the YAW direction! I couldn't even get it close to hitting the steering mirror that sends it to the ETMY_OPLEV_QPD. I asked Yuta for help, and it turns out that the gains for the alignment offsets were 0! After Yuta found this, he restored the original offset values by using:
restoreAlignment -o ETMY -t 'now 14days'
(Keywords: How to restoreAlignment)
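I haven't looked inside the restoreAlignment script itself, but conceptually it just pulls the alignment offsets from some time in the past and writes them back to the sliders. A rough sketch of that idea (the NDS server address and the C1:SUS-<OPTIC>_PIT/YAW_COMM channel names are assumptions, not necessarily what the script uses):

# Rough sketch of "restore alignment from N days ago"; NOT the actual
# restoreAlignment script. Server address and channel names are guesses.
import time
import nds2
from epics import caput

def restore_alignment(optic, days_ago=14, host='nds40.ligo.caltech.edu', port=31200):
    gps_now = int(time.time()) - 315964800 + 18   # rough UTC -> GPS (18 leap seconds)
    gps_then = gps_now - days_ago * 86400
    channels = ['C1:SUS-%s_PIT_COMM' % optic, 'C1:SUS-%s_YAW_COMM' % optic]
    conn = nds2.connection(host, port)
    for buf in conn.fetch(gps_then, gps_then + 10, channels):   # 10 s of old data
        old_value = float(buf.data.mean())
        caput(buf.channel.name, old_value)
        print('%s -> %.3f' % (buf.channel.name, old_value))

# restore_alignment('ETMY', days_ago=14)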
After this issue was fixed, I was able to align the OPLEV beam with ease and we started to get some flashing on the Y Arm. Yuta now has the Y Arm locked with flashing on the X Arm. We plan to align nicely and realign the OPLEVs before the end of the day.
The incompleteness of burtrestore is a critical issue. We need to track down what is the issue.
I think we (Jenne, Jamie) are going to leave things for the night to give ourselves more time to prep for the vent tomorrow.
We still need to put in the PSL output beam attenuator, and then redo the MC alignment.
The AS spot is also indicating that we're clipping somewhere (see below). We need to align things in the vertex and then check the centerings on the AP table.
So I think we're back on track and should be ready to vent by the end of the day tomorrow.
We had a big alignment party early this morning, and things are back to looking good. We have been very careful not to bump or touch tables any more than necessary. Also, we have removed the apertures from the BS and PRM, so there are no more apertures currently left in the chambers (this is good, since we won't forget).
We started over again from the PZTs, using the PRM aperture and the freestanding aperture in front of PR2, to get the height of the beam correct. We then moved PZTs to get the beam centered on BS, ITMY, ETMY. We had to do a little poking of PR2 (and PR3?) to get pitch correct everywhere.
We then went to ETMX to check beam pointing, and used BS to steer the beam to the center of ETMX. We checked that the beam was centered on ITMX.
We went through and ensured that ITMX, ITMY, PRM, SRM are all retroreflecting. We see nice MICH fringes, and we see some fringes (although still not so nice...) when we bring PRM and SRM into alignment.
We checked the AS path (with only MICH aligned), and made sure we are centered on all of the mirrors. This included steering a little bit on the mirrors on the OMC table, in yaw. Initially, AS was coming out of the vacuum, but hitting the side of the black beam tube. Now it gets nicely to the table.
For both AS and REFL, we made sure there is no clipping in the OMC chamber.
I recentered the beams for AS and REFL on their respective cameras.
IPPOS was centered on the QPD. This involved moving the first out-of-vac steering mirror sideways a small amount, since the beam was hitting the edge of the mirror. IPANG was aligned in-vac, and has been centered on the QPD.
Right now, Manasa, Jamie and Ayaka are doing some finishing-touch work: checking that POY isn't clipping on OM2 (the second steering mirror after the SRM), confirming that POX comes out of the chamber nicely, and that POP is also still coming out (by putting the green laser pointer back on that table and making sure the green beam is co-aligned with the beam from PR2-PR3). Also on the list is checking the vertex oplevs. Steve and Manasa did some stuff with the ETM oplevs yesterday, but haven't had a chance to write about it yet.
Today we found the green beam from the end was totally missing at the vertex.
- What we found was a very weak green beam at the end. Unhappy.
- We removed the PBS. We should obtain the beam for the fiber from the rejection of the (sort of) dichroic separator although the given space is not large.
- The temperature controller was off. We turned it on again.
- We found everything was still misaligned. Aligned the crystal, aligned the Faraday for the green.
- Aligned the last two steering mirrors such that we hit the approximate center of the ETMX and the center of the ITMX.
- Made the fine alignment to have the green beam at the PSL table.
The green beam emerging from the chamber does not look so round, as there is clipping at an in-vac steering mirror.
We will do a thorough realignment before closing the tank.
The goal of the night was to lock the Y arm. (Since that didn't happen, I moved on to fixing the WFS since they were hurting the MC)
I used the power supplies at 1Y4 to steer PZT2, and watched the face of the black glass baffle at ETMY. (elog 7569 has notes re: camera work earlier) When I am nearly at the end of the PZT range (+140V on the analog power supply, which I think is yaw), I can see the beam spot near the edge of the baffle's aperture. Unfortunately, lower voltages move the spot away from the aperture, so I can't find the spot on the other side of the aperture and center it. Since the max voltage for the PZTs is +150, I don't want to go too much farther. I can't take a capture since the only working CCD I found is the one which won't talk to the Sensoray. We need some more cameras....they're already on Steve's list.
When the spot is a little closer to the center of the aperture than the edge of the aperture (so the full +150V!!), I don't see any beam coming out of AS....no beam out of the chamber at all, not just no beam on the camera. Crapstick. This is not good. I'm not really sure how we (I?) screwed up this thoroughly. Sigh. Whatever ghost REFL beam that Kiwamu and Koji found last week is still coming out of REFL.
Previous PZT voltages, before tonight's steering: +32V on analog power supply, +14.7 on digital. This is the place that the PRMI has been aligned to the past week or so.
Next, just to see what happens, I think I might install a camera looking at the back (output) side of the Faraday so that I can steer PRM until the reflected beam is going back through the Faraday. Team K&K did this with viewers and mirrors, so it'll be more convenient to just have a camera.
VENT NOW and FIX ALIGNMENT!
[ Yuki, Koji, Gautam ]
The alignment of the AUX Y-end green beam was bad. With Koji and Gautam's advice, it was recovered on Friday. The maximum value of TRY was about 0.5.
Since we may want to close up tomorrow, I did the following prep work:
Rather than try and rush and close up tomorrow, I propose spending the day tomorrow cleaning the peripheral areas of the optic, suspension cage, and chamber. Then on Thursday morning, we can replace the Y-arm optics, try and recover the cavity alignment, and then aim for a Thursday afternoon pumpdown. The main motivation is to reduce the time the optics spend in air after F.C. peeling and going to vacuum.
[Koji / Kiwamu]
We have realigned the interferometer except the incident beam.
The REFL beam is not coming out from the chamber and is likely hitting the holder of a mirror in the OMC chamber.
So we need to open the chamber again before trying to lock the recycled interferometers at some point.
--- What we did
---- things to be fixed
- Align the steering mirrors in the faraday rejected beam path (requires vent)
- SRM oplev (this is out of the QPD range)
- ITMX oplev (out of the range too)
Today the Y arm was locking fine. The alignment had drifted somewhat so I ran the dither and TRY returned to ~0.8. However, the mode cleaner has been somewhat unstable. It locked many times but usually for only a few minutes. Maybe the alignment or autolocker needs to be adjusted, but I didn't change anything other than playing with the gain sliders (which didn't seem to make it either better or worse).
ITMX is still stuck.
All is not lost. I've stuck and unstuck optics around a half dozen times. Can you please post the zoomed-in time series (not trend) from around the time it got stuck? Sometimes the bias sliders have to be toggled to make the bias correct. From the OSEM trend it seems like it got a large Yaw bias. May also try to reseat the satellite box cables and the cable from the coil driver to the cable breakout board in the back of the rack.
Here are the timeseries plots. I've zoomed in to right after the problem; did you want before? We pretty much know what happened: c1susaux was restarted from the crate but the damping was on, so as soon as the machine came back online the damping loops sent a huge signal to the coils. (Also, it seems to be down again. Now we know what to do first before keying the crate.) It seems like both right side magnets are stuck, and this could probably be fixed by moving the yaw slider. Steve advised that we wait for an experienced hand to do so.
susaux is responsible for turning on/off the inputs to the coil driver, but not the actual damping loops. So rebooting susaux only does the same as turning the watchdogs on/off, so it shouldn't be a big issue.
Both before and after would be good. We want to see how much bias and how much voltage from the front ends were applied. c1susaux could have put in a huge bias, but NOT a huge force from the damping loops. But I've never seen it put in a huge bias and there's no way to prevent this anyway without disconnecting cables.
I think it's much more likely that it's a little stuck due to static charge on the rubber EQ stop tips and that we can shake it loose with the damping loops.
ITMX is free, OSEM signals all roughly centered.
This was accomplished by rocking the static alignment (i.e. slow controls) pitch and yaw offsets until the optic broke free. This took a few volts back and forth. At this point, I tried to find a point where the optic seemed to swing freely, and hopefully had signals in all 5 OSEMs. It seemed to be free sometimes, but mostly settled into two different stationary states. I realized that it was becoming torqued enough in pitch to be leaning on the top-front or top-back EQ stops. So, I slowly adjusted the pitch from one of these states until it seemed to be swinging a bit on the camera, and three OSEM signals were showing real motion. Then, I slowly adjusted the pitch and yaw alignments to get all OSEM signals roughly centered at half of their max voltage.
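The rocking was done by hand on the alignment sliders, but for the record the same back-and-forth could be scripted; here is a sketch of the idea (the channel names and step size are assumptions; keep an eye on the OSEMs and the camera, and don't be more aggressive than the few volts mentioned above):

# Sketch of the pitch/yaw "rocking" used to free the stuck optic.
# C1:SUS-ITMX_PIT_COMM / _YAW_COMM and the 2 V amplitude are assumptions.
import time
from epics import caget, caput

def rock(channel, amplitude=2.0, cycles=3, dwell=5):
    center = caget(channel)
    for _ in range(cycles):
        for offset in (+amplitude, -amplitude):
            caput(channel, center + offset)
            time.sleep(dwell)                 # give the optic time to respond
    caput(channel, center)                    # return to the starting value

# rock('C1:SUS-ITMX_PIT_COMM')
# rock('C1:SUS-ITMX_YAW_COMM')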
While looking at the PRM ASC servo model, I tried to use the current servo filters for the ASC, as Manasa aligned the POP PDs and QPD yesterday (BTW, I don't find any elog about it).
The POP PD was showing only ~200 counts, which was very low compared to what we recollect from earlier PRMI locks (~400 counts). The POP ASC QPD was also not well aligned.
While holding PRMI lock on REFL55, I aligned POP path to its PD (maximize POP DC counts) and QPD (centered in pitch and yaw).
X and Y green
The X green totally lost its pointing because of the misaligned PZTs from last week's power failure. This was recovered.
Y arm green alignment was also recovered.
[EricQ, Manasa, Koji]
We measured the spot positions on the MC mirrors and redid the MC alignment by only touching the MC mirror sliders. Now all the MC spots are <1mm away from the center.
We opened the ITMY and ETMY chambers to align the green to the arm. The green was already centered on the ITMY. We went back and forth to recenter the green on the ETMY and ITMY (This was done by moving the test masses in pitch and yaw only without touching the green pointing) until we saw green flashes in higher order modes. At this point we found the IR was also centered on the ETMY and a little low in pitch on ITMY. But we could see IR flashes on the ITMYF camera. We put back the light doors and did the rest of the alignment using the pitch and yaw sliders.
When the flashes were as high as 0.05, we started seeing small lock stretches. Playing around with the gain and tweaking the alignment, we could lock the Y arm in TEM00 for IR and also run the ASS. The green also locked to the arm in the 00 mode at this point. We aligned the BS to get a good AS view on the camera. ITMX was tweaked to get a good Michelson.
PRM and SRM OSEM LL read 1.5 V; are they misaligned?
We were trying to check POY alignment using the green laser in the reverse direction (outside vacuum to in-vac). The green laser was installed along with a steering mirror to steer it into the ITMY chamber pointing at POY.
We found that the green laser did follow the path back into the chamber, but it was clipping at the edge of POY. To align it to the center of POY (get a narrower angle of incidence at the ITMY), the green laser had to be steered in at a wider angle of incidence from the table. This is now being limited by the oplev steering optics on the table. We were not able to figure out the oplev path on the table perfectly, but we think we can find a way to move the oplev steering mirrors that are now restricting the POY alignment.
The oplev optics will be moved once we confirm with Jenne or Steve.
We aligned the ETM oplevs yesterday. We confirmed that the oplev beam hit the ETMs. We checked the centering of the returning beams on the oplev PDs, and the QPD sums matched the values they had before the vent.
Sadly, they have to be checked once again tomorrow because the alignment was messed up all over again yesterday.
Can we have a drawing of what you did, how you confirmed your green alignment is the same as the IR (I think you had a good idea about the beam going to the BS... can you please write it down in detail?), and where you think the beam is clipping? Cartoon-level, 20 to 30 minutes of work, no more. Enough to be informative, but we have other work that needs doing if we're going to put on doors Thursday morning (or tomorrow afternoon?).
The ETMs weren't moved today, just the beam going to the ETMs, so the oplevs there shouldn't need adjusting. Anyhow, the oplevs I'm more worried about are the ones which include in-vac optics at the corner, which are still on the to-do list.
So, tomorrow Steve + someone can check the vertex oplevs, while I + someone finish looking briefly at POX and POP, and at POY in
If at all possible, no clamping / unclamping of anything on the in-vac tables. Let's try to use things as they are if the beams are getting to where they need to go. Particularly for the oplevs, I'd rather have a little bit of movement of optics on the out-of-vac tables than any changes happening inside.
I made a script that averages together many photos taken with the capture script that Rana found, which takes 50 pictures, one after another. If I average the pictures, I don't see a spot. If I add the photos together, even after subtracting away a no-beam shot, the picture is saturated and is completely white. I'm trying to let ideas percolate in my head for how to get a useful spot.
The way to usually do image subtraction is to:
1) Turn off the room lights.
2) Take 500 images with no beam.
3) Use Mean averaging to get a reference image.
4) Same with the beam on.
5) Subtract the two averaged images.
If that doesn't work, I guess it's best to just take an image of the green beam on the mirrors using the new DSLR.
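A rough numpy sketch of steps 2-5 (the file names and paths are placeholders, and imageio is just one of several ways to read the captures):

# Mean-average a stack of dark (beam off) frames and a stack of bright
# (beam on) frames, then subtract. Paths/filenames are placeholders.
import glob
import numpy as np
import imageio.v2 as imageio

def mean_stack(pattern):
    frames = [imageio.imread(f).astype(np.float64) for f in sorted(glob.glob(pattern))]
    return np.mean(frames, axis=0)

dark   = mean_stack('captures/beam_off_*.png')    # ~500 frames, room lights off
bright = mean_stack('captures/beam_on_*.png')     # same, with the beam on

diff = np.clip(bright - dark, 0, None)            # negative residuals -> 0

# Scale for display without letting a few hot pixels saturate the image
scale = np.percentile(diff, 99.9)
if scale <= 0:
    scale = 1.0
imageio.imwrite('spot_subtracted.png', np.uint8(255 * np.clip(diff / scale, 0, 1)))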
blarg. Chrome ate my elog.
112607010 is the start of five minutes on all whitened 1F PDs. REFL55 has more low frequency noise than REFL165; I think we may need more CARM suppression (i.e. we need to think about the required gain). This is also supported by the difference in shape of these two histograms, taken at the same time in 3f full lock. The CARM fluctuations seem to spread REFL55 out much more.
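In case anyone wants to reproduce the comparison from the frames, here is a sketch of pulling the two signals for that stretch and histogramming them. The DQ channel names (C1:LSC-REFL55_I_ERR_DQ, C1:LSC-REFL165_I_ERR_DQ) and the NDS server address are assumptions; pass in the GPS start time quoted above.

# Sketch: histogram REFL55 vs REFL165 over a lock stretch (channel names
# and NDS server are assumptions, not verified against the frames).
import nds2
import numpy as np
import matplotlib.pyplot as plt

CHANNELS = ['C1:LSC-REFL55_I_ERR_DQ', 'C1:LSC-REFL165_I_ERR_DQ']

def compare_refl(gps_start, duration=300, host='nds40.ligo.caltech.edu', port=31200):
    conn = nds2.connection(host, port)
    bufs = conn.fetch(gps_start, gps_start + duration, CHANNELS)
    fig, axes = plt.subplots(1, len(bufs), figsize=(10, 4))
    for ax, buf in zip(axes, bufs):
        data = np.asarray(buf.data)
        ax.hist(data, bins=200, histtype='step')
        ax.set_title(buf.channel.name)
        ax.set_xlabel('counts')
        print('%s: std = %.3g counts' % (buf.channel.name, data.std()))
    fig.tight_layout()
    plt.show()

# compare_refl(<GPS start of the 5 min stretch above>)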
I made some filters and scripts to do DC coupling of the ITM oplevs. This makes maintaining stable alignment in full lock much easier.
I had a few 15+ minute locks on 3f, that only broke because I did something to break it.
Here's one of the few "quick" locklosses I had. I think it really is CARM/AO action, since the IMC sees it right away, but I don't see anything ringing up; just a spontaneous freakout.
I just tried to adjust the ETMY camera and it's not very user-friendly = NEEDS FIXING.
* Camera view is upside down.
* Camera lens is contacting the lexan viewport cover; this means the focus cannot be adjusted without misaligning the camera.
* There's no strain relief of the camera cables at the can. Needs a rubber cable grommet too.
* There's a BNC "T" in the cable line.
Probably similar issues with some of the other setups; they've had aluminum foil covers for too long. We'll have a camera committee meeting tomorrow to see how to proceed.
ITMY has been upgraded here. I have the new lenses on hand to do the others when it fits into the schedule.
I was looking into the status of IPC communications in our realtime network, as Chris suggested that there may be more phase missing than I thought. However, the recent continual red indicators on a few of the models made it hard to tell if the problems were real or not. Thus, I set out to fix what I could, and have achieved full green lights in the CDS screen.
The frontend models have been svn'd. The BLRMs block has not, since it's in a common cds space, and I am not sure what the status of its use at the sites is...
After getting the go ahead from Jamie, I recompiled all the FE models against the same version of RCG that we tested on the c1iscex models.
To do so:
IFO alignment needs to be redone, but at least we now have an (admittedly roundabout) way of getting testpoints. Did a quick check for "nan-s" on the ASC screen, saw none. So I am re-enabling watchdogs for all optics.
GV 23 August 9am: Last night, I re-aligned the TMs for single arm locks. Before the model restarts, I had saved the good alignment on the EPICS sliders, but the gain of x3 on the coil driver filter banks has to be manually turned on at the moment (i.e. the safe.snap file has them off). ALS noise looked good for both arms, so just for fun, I tried transitioning control of both arms to ALS (in the CARM/DARM basis as we do when we lock DRFPMI, using the Transition_IR_ALS.py script), and was successful.
Here is what was done (Jamie will correct me if I am mistaken).
So while we are in a better state now, the problem isn't fully solved.
Comment: seems like there is an in-built timeout for testpoints opened with DTT - if the measurement is inactive for some time (unsure how much exactly but something like 5mins), the testpoint is automatically closed.
Attachment #1: State of CDS overview screen as of 9.30AM today morning when I came in.
Looks like there may have been a power glitch, although judging by the wall StripTool traces, if there was one, it happened more than 8 hours ago. FB is down atm so I can't trend to find out when this happened.
All FEs and FB are unreachable from the control room workstations, but Megatron, Optimus and Chiara are all ssh-able. The latter reports an uptime of 704 days, so all seems okay with its UPS. Slow machines are all responding to ping as well as telnet.
Recovery process to begin now. Hopefully it isn't as complicated as the most recent effort [FAMOUS LAST WORDS]
I am unable to get FB to reboot to a working state. A hard reboot throws it into a loop of "Media Test Failure. Check Cable".
Jetstor RAID array is complaining about some power issues, the LCD display on the front reads "H/W Monitor", with the lower line cycling through "Power#1 Failed", "Power#2 Failed", and "UPS error". Going to 192.168.113.119 on a martian machine browser and looking at the "Hardware information" confirms that System Power #1 and #2 are "Failed", and that the UPS status is "AC power loss". So far I've been unable to find anything on the elog about how to handle this problem, I'll keep looking.
In fact, looks like this sort of problem has happened in the past. It seems one power supply failed back then, but now somehow two are down (but there is a third which is why the unit functions at all). The linked elog thread strongly advises against any sort of power cycling.
A bit more digging on the diagnostics page of the RAID array reveals that the two power supplies actually failed on Jun 2 2017 at 10:21:00. Not surprisingly, this was the date and approximate time of the last major power glitch we experienced. Apart from this, the only other error listed on the diagnostics page is "Reading Error" on "IDE CHANNEL 2", but these errors precede the power supply failure.
Perhaps the power supplies are not really damaged, and they're just in some funky state since the power glitch. After discussing with Jamie, I think it should be safe to power cycle the Jetstor RAID array once the FB machine has been powered down. Perhaps this will bring back one/both of the faulty power supplies. If not, we may have to get new ones.
The problem with FB may or may not be related to the state of the Jetstor RAID array. It is unclear to me at what point during the boot process we are getting stuck. It may be that because the RAID disk is in some funky state, the boot process is getting disrupted.
After a couple of minutes, the front LCD display seemed to indicate that it had finished running some internal checks. The messages indicating failure of power units, which were previously constantly displayed on the front LCD panel, were no longer seen. Going back to the control room and checking the web diagnostics page, everything seemed back to normal.
It's possible the fb bios got into a weird state. fb definitely has its own local boot disk (*not* diskless boot). Try to get to the BIOS during boot and make sure it's pointing to its local disk to boot from.
If that's not the problem, then it's also possible that fb's boot disk got fried in the power glitch. That would suck, since we'd have to rebuild the disk. If it does seem to be a problem with the boot disk then we can do some invasive poking to see if we can figure out what's up with the disk before rebuilding.
I think this is a boot disk failure. I put the spare 2.5 inch disk into slot #1. The OK indicator of the disk became solid green almost immediately, and it was recognized by the BIOS in the boot section as "Hard Disk". In contrast, the original disk in slot #0 has its "OK" indicator flashing and the BIOS can't find the hard disk.
Jamie suggested verifying that the problem is indeed with the disk and not with the controller, so I tried switching the original boot disk to Slot #1 (from Slot #0 where it normally resides), but the same problem persists - the green "OK" indicator light keeps flashing even in Slot #1, which was verified to be a working slot using the spare 2.5 inch disk. So I think it is reasonable to conclude that the problem is with the boot disk itself.
The disk is a Seagate Savvio 10K.2 146GB disk. The datasheet doesn't explicitly suggest any recovery options. But Table 24 on page 54 suggests that a blinking LED means that the disk is "spinning up or spinning down". Is this indicative of any particular failure mode? Any ideas on how to go about recovery? Is it even possible to access the data on the disk if it doesn't spin up to the nominal operating speed?
If we have a SATA/USB adapter, we can test whether the disk is still responding or not. If it is still responding, can we perhaps salvage the files?
Chiara used to have a 2.5" disk that is connected via USB3. As far as I know, we have remote and local backup scripts running (TBC); we can borrow the USB/SATA interface from Chiara.
If the disk is completely gone, we need to rebuild the disk according to Jamie, and I don't know how to do it. (Don't we have any spare copy?)
Seems like the connector on this particular disk is of the SAS variety (and not SATA). I'll ask Steve to order a SAS to USB cable. In the meantime I'm going to see if the people at Downs have something we can borrow.
I couldn't find an external docking setup for this SAS disk; it seems like we need an actual controller in order to interface with it. Mike Pedraza in Downs had such a unit, so I took the disk over to him, but he wasn't able to interface with it in any way that allows us to get the data out. He wants to try switching out the logic board, for which we need an identical disk. We have only one such spare at the 40m that I could locate, but it is not clear to me whether this has any important data on it or not. It has "hda RTLinux" written on its front panel with a sharpie. Mike thinks we can back this up to another disk before trying anything, but he is going to try locating a spare in Downs first. If he is unsuccessful, I will take the spare from the 40m to him tomorrow, first to be backed up, and then for swapping out the logic board.
Chatting with Jamie and Koji, it looks like the options we have are:
I just want to mention that the situation is actually much more dire than we originally thought. The diskless NFS root filesystem for all the front-ends was on that fb disk. If we can't recover it we'll have to rebuild the front end OS as well.
As of right now none of the front ends are accessible, since obviously their root filesystem has disappeared.
We will begin drag wiping and putting on doors at 9am tomorrow (Tuesday).
We need to get started on time so that we can finish at least the 4 test masses before lunch (if possible).
We will have a ~2 hour break for LIGOX + Valera's talk.
I propose the following teams:
(Team 1: 2 people, one clean, one dirty) Open light doors, clamp EQ stops, move optic close to door. ETMX, ITMX, ITMY, ETMY
(Team 2: K&J) Drag wipe optic, and put back against rails. Follow Team 1 around.
(Team 3 = Team 1, redux: 2 people, one clean, one dirty) Put earthquake stops at correct 2mm distance. Follow Team 2 around.
(Team 4: 3 people, Steve + 2) Close doors. Follow Team 3 around.
Later, we'll do BS door and Access Connector. BS, SRM, PRM already have the EQ stops at proper distances.
Suspension resonance frequencies (Hz):

        MC1    MC2    MC3    ETMX   ETMY   ITMX   ITMY   PRM    SRM    BS     mean   std
Pitch   0.671  0.747  0.762  0.909  0.859  0.513  0.601  0.610  0.566  0.747  0.698  0.129
Yaw     0.807  0.819  0.846  0.828  0.894  0.832  0.856  0.832  0.808  0.792  0.831  0.029
Pos     0.968  0.970  0.980  1.038  0.983  0.967  0.988  0.999  0.962  0.958  0.981  0.024
Side    0.995  0.993  0.971  0.951  1.016  0.986  1.004  0.993  0.973  0.995  0.988  0.019
There is a large amount of variation in the frequencies, even though the suspensions are nominally all the same. I leave it to the suspension makers to ponder and explain.
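As a quick sanity check of the mean/std columns (they reproduce if the std is computed as the sample standard deviation, ddof=1):

# Recompute the mean/std columns of the table above (values in Hz).
import numpy as np

freqs = {
    'Pitch': [0.671, 0.747, 0.762, 0.909, 0.859, 0.513, 0.601, 0.610, 0.566, 0.747],
    'Yaw':   [0.807, 0.819, 0.846, 0.828, 0.894, 0.832, 0.856, 0.832, 0.808, 0.792],
    'Pos':   [0.968, 0.970, 0.980, 1.038, 0.983, 0.967, 0.988, 0.999, 0.962, 0.958],
    'Side':  [0.995, 0.993, 0.971, 0.951, 1.016, 0.986, 1.004, 0.993, 0.973, 0.995],
}
for dof, f in freqs.items():
    f = np.array(f)
    print('%-5s mean = %.3f  std = %.3f' % (dof, f.mean(), f.std(ddof=1)))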
As the subject states, all screens are working (including the noise screens), so we can keep track of everything in our model! :D I figured out that I was just getting nonsense (i.e. white noise) out of the sim plant because the filter matrix (TM_RESP) that controlled the response of the optics to a force (i.e. outputted the position of the optic DOF given a force on that DOF and a force on the suspension point) was empty, so it was just passing on whatever values it got based on the coefficients of the matrix without DOING anything to them. In effect, all we had was a feedback loop without any mechanics.
I've been working on getting the mechanics of the suspensions into a filter/transfer function form; I added something resembling that into foton and turned the resulting filter on using the shiny new MEDM screens. However, the transfer functions are a tad wonky (particularly the one for pitch), so I shall continue working on them. It had a dramatic effect on the power spectrum (i.e. it looks a lot more like it should), but it still looks weird.
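For context, the mechanical response the TM_RESP filters are supposed to encode is, to first approximation, just a damped second-order resonance per DOF (force in, displacement out). A minimal sketch with scipy, using made-up numbers (1 Hz, Q = 5, 0.25 kg) rather than the real suspension parameters:

# Single-DOF pendulum response that a TM_RESP-style filter should mimic:
#   x/F = (1/m) / (s^2 + (w0/Q) s + w0^2)
# The 1 Hz / Q=5 / 0.25 kg values are illustrative only.
import numpy as np
from scipy import signal
import matplotlib.pyplot as plt

m, f0, Q = 0.25, 1.0, 5.0                     # kg, Hz, dimensionless (made up)
w0 = 2 * np.pi * f0
plant = signal.TransferFunction([1 / m], [1, w0 / Q, w0**2])

f = np.logspace(-1, 2, 1000)                  # 0.1 Hz to 100 Hz
w, mag, phase = signal.bode(plant, w=2 * np.pi * f)

plt.semilogx(f, mag)
plt.xlabel('Frequency [Hz]')
plt.ylabel('|x/F| [dB re 1 m/N]')
plt.title('Illustrative pendulum response (what TM_RESP should encode)')
plt.show()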
Still haven't found the e-log Jamie and Rana referred me to, concerning the injection of seismic noise into the simulation. I'm not terribly worried though, and will continue looking in the morning. Worst case scenario, I'll use the filters Masha made at the beginning of the summer.
Masha and I ate some of Jamie's popcorn. It was good.
Okay! Attached are two power spectra. The first is a power spectrum of reality, the second is a power spectrum of the simPlant. It's looking much better (as in, no longer obviously white noise!), but there seems to be a gain problem somewhere (and it doesn't have seismic noise). I'll see if I can fix the first problem then move on to trying to find the seismic noise filters.
Koji found some 68nF caps from Downs and I finished modifying the last remaining coil driver box and tested it.
With this, all coil drivers have been modified and tested and are ready to be used. This DCC tree has links to all the coil driver pages which have documentation of modifications and test data.
I popped by the 40m, and was dismayed to find that all of the front end computers are red (only framebuilder, DAQcontroler, PEMdcu, and c1susvme1 are green....all the rest are RED).
I keyed the crates, and did the telnet.....startup.cmd business on them, and on c1asc I also pushed the little reset button on the physical computer and tried the telnet....startup.cmd stuff again. Utter failure.
I have to pick someone up from the airport, but I'll be back in an hour or two to see what more I can do.
I think the problem was caused by a failure of the RFM network: the RFM MEDM screen showed frozen values even when I was power recycling any of the FE computers. So I tried the following things:
After Alberto's bootfest which was more successful than mine, I tried powercycling the AWG crate one more time. No success. Just as Alberto had gotten, I got the DAQ screen's AWG lights to flash green, then go back to red. At Alberto's suggestion, I also gave the physical reset button another try. Another round of flash-green-back-red ensued.
When I was in a few hours ago while everything was hosed, all the other computers' 'lights' on the DAQ screen were solid red, but the two AWG lights were flashing between green and red, even though I was power cycling the other computers, not touching the AWG at the time. Those are the lights which are now solid red, except for a quick flash of green right after a reboot.
I poked around in the history of the current and old elogs, and haven't found anything referring to this crazy blinking between good and bad-ness for the AWG computers. I don't know if this happens when the tpman goes funky (which is referred to a lot in the annals of the elog in the same entries as the AWG needing rebooting) and no one mentions it, or if this is a new problem. Alberto and I have decided to get Alex/someone involved in this, because we've exhausted our ideas.