ID | Date | Author | Type | Category | Subject
12284 | Sun Jul 10 15:54:23 2016 | ericq | Update | CDS | All models restarted
For some reason, all of the non-IOP models on the vertex frontends had crashed.
To get all of the bits green, I ended up restarting all models on all frontends. (This means Gautam's coil tests have been interrupted.) |
13885 | Thu May 24 10:16:29 2018 | gautam | Update | General | All models on c1lsc frontend crashed
All models on the c1lsc front end were dead. Looking at slow trend data, it looks like this happened ~6 hours ago. I rebooted c1lsc and now all models are back up and running in their "nominal state". |
14241 | Wed Oct 10 12:38:27 2018 | yuki | Configuration | LSC | All hardware was installed
I connected DAC - AIboard - PZTdriver - PZT mirrors and made sure the PZT mirrors were moving when changing the signal from the DAC. Tomorrow I will prepare the alignment servo with the green beam for the Y-arm. |
7920 | Sat Jan 19 15:05:37 2013 | Jenne | Update | Computers | All front ends but c1lsc are down
Message I get from dmesg of c1sus's IOP:
[ 44.372986] c1x02: Triggered the ADC
[ 68.200063] c1x02: Channel Hopping Detected on one or more ADC modules !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
[ 68.200064] c1x02: Check GDSTP screen ADC status bits to id affected ADC modules
[ 68.200065] c1x02: Code is exiting ..............
[ 68.200066] c1x02: exiting from fe_code()
Right now, c1x02's max cpu indicator reads 73,000 microseconds. c1x05 is 4,300 usec, and c1x01 seems totally fine, except that it has the 0x2bad.
c1x02 has 0xbad (not 0x2bad). All other models on c1sus, c1ioo, c1iscex and c1iscey have 0x2bad.
Also, no models on those computers have 'heartbeats'.
C1x02 has "NO SYNC", but all other IOPs are fine.
I've tried rebooting c1sus, restarting the daqd process on fb, all to no avail. I can ssh / ping all of the computers, but not get the models running. Restarting the models also doesn't help.

c1iscex's IOP dmesg:
[ 38.626001] c1x01: Triggered the ADC
[ 39.626001] c1x01: timeout 0 1000000
[ 39.626001] c1x01: exiting from fe_code()
c1ioo's IOP has the same ADC channel hopping error as c1sus'.
|
7922 | Sat Jan 19 18:23:31 2013 | rana | Update | Computers | All front ends but c1lsc are down
After sshing into several machines and doing 'sudo shutdown -r now', some of them came back and ran their processes.
After hitting the reset button on the RFM switch, their diagnostic lights came back. After restarting the Dolphin task on fb:
"sudo /etc/init.d/dis_networkmgr restart"
the Dolphin diagnostic lights came up green on the FE status screen.
iscex still wouldn't come up. The awgtpman tasks on there keep trying to start but then stop due to not finding ADCs.
Then power cycled the IO Chassis for EX and then the awgtpman log files changed, but still no green lights. Then tried a soft reboot on fb and now it's not booting correctly.
Hardware lights are on, but I can't telnet into it. Tried power cycling it once or twice, but no luck.
Probably Jamie will have to hook up a keyboard and monitor to it, to find out why it's not booting.

P.S. The snapshot scripts in the yellow button don't work and the MEDM screen itself is missing the time/date string on the top. |
2386 | Thu Dec 10 13:50:02 2009 | Jenne | Update | VAC | All doors on, ready to pump
[Everybody: Alberto, Kiwamu, Joe, Koji, Steve, Bob, Jenne]
The last heavy door was put on after lunch. We're now ready to pump. |
6997 | Fri Jul 20 17:11:50 2012 | Jamie | Update | CDS | All custom MEDM screens moved to cds_user_apps svn repo
Since there are various ongoing requests for this from the sites, I have moved all of our custom MEDM screens into the cds_user_apps SVN repository. This is what I did:
For each system in /opt/rtcds/caltech/c1/medm, I copied their "master" directory into the repo, and then linked it back in to the usual place, e.g.:
a=/opt/rtcds/caltech/c1/medm/${model}/master
b=/opt/rtcds/userapps/trunk/${system}/c1/medm/${model}
mv $a $b
ln -s $b $a
Before committing to the repo, I did a little bit of cleanup, to remove some binary files and other known superfluous stuff. But I left most things there, since I don't know what is relevant or not.
Then committed everything to the repo.
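For reference, a minimal sketch of how the move-and-symlink step above could be looped over several models; the model:system pairs below are illustrative only, not the actual list that was migrated:

# Hypothetical sketch - the model:system pairs are examples only
for pair in c1lsc:isc c1sus:sus c1ioo:ioo; do
    model=${pair%%:*}
    system=${pair##*:}
    a=/opt/rtcds/caltech/c1/medm/${model}/master
    b=/opt/rtcds/userapps/trunk/${system}/c1/medm/${model}
    mv "$a" "$b"       # move the master screen directory into the svn working copy
    ln -s "$b" "$a"    # link it back so the screens are still found in the usual place
done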
|
1733 | Sun Jul 12 20:06:44 2009 | Jenne | DAQ | Computers | All computers down
I popped by the 40m, and was dismayed to find that all of the front end computers are red (only framebuilder, DAQcontroler, PEMdcu, and c1susvmw1 are green....all the rest are RED).
I keyed the crates, and did the telnet.....startup.cmd business on them, and on c1asc I also pushed the little reset button on the physical computer and tried the telnet....startup.cmd stuff again. Utter failure.
I have to pick someone up from the airport, but I'll be back in an hour or two to see what more I can do. |
1735 | Mon Jul 13 00:34:37 2009 | Alberto | DAQ | Computers | All computers down
Quote: |
I popped by the 40m, and was dismayed to find that all of the front end computers are red (only framebuilder, DAQcontroler, PEMdcu, and c1susvmw1 are green....all the rest are RED).
I keyed the crates, and did the telnet.....startup.cmd business on them, and on c1asc I also pushed the little reset button on the physical computer and tried the telnet....startup.cmd stuff again. Utter failure.
I have to pick someone up from the airport, but I'll be back in an hour or two to see what more I can do.
|
I think the problem was caused by a failure of the RFM network: the RFM MEDM screen showed frozen values even when I was power recycling any of the FE computers. So I tried the following things:
- resetting the RFM switch
- power cycling the FE computers
- rebooting the framebuilder
but none of them worked. The FEs didn't come back. Then I reset C1DCU1 and power cycled C1DAQCTRL.
After that, I could restart the FEs by power recycling them again. They all came up again except for C1DAQADW. Neither the remote reboot nor the power cycling could bring it up.
After every attempt at restarting it, its lights on the DAQ MEDM screen turned green only for a fraction of a second and then became red again.
So far every attempt to reanimate it failed. |
1736 | Mon Jul 13 00:53:50 2009 | Alberto | DAQ | Computers | All computers down
Quote: |
Quote: |
I popped by the 40m, and was dismayed to find that all of the front end computers are red (only framebuilder, DAQcontroler, PEMdcu, and c1susvmw1 are green....all the rest are RED).
I keyed the crates, and did the telnet.....startup.cmd business on them, and on c1asc I also pushed the little reset button on the physical computer and tried the telnet....startup.cmd stuff again. Utter failure.
I have to pick someone up from the airport, but I'll be back in an hour or two to see what more I can do.
|
I think the problem was caused by a failure of the RFM network: the RFM MEDM screen showed frozen values even when I was power recycling any of the FE computers. So I tried the following things:
- resetting the RFM switch
- power cycling the FE computers
- rebooting the framebuilder
but none of them worked. The FEs didn't come back. Then I reset C1DCU1 and power cycled C1DAQCTRL.
After that, I could restart the FEs by power recycling them again. They all came up again except for C1DAQADW. Neither the remote reboot nor the power cycling could bring it up.
After every attempt at restarting it, its lights on the DAQ MEDM screen turned green only for a fraction of a second and then became red again.
So far every attempt to reanimate it failed.
|
After Alberto's bootfest, which was more successful than mine, I tried power cycling the AWG crate one more time. No success. Just as Alberto had gotten, I got the DAQ screen's AWG lights to flash green, then go back to red. At Alberto's suggestion, I also gave the physical reset button another try. Another round of flash-green-back-red ensued.
When I was in a few hours ago while everything was hosed, all the other computers' 'lights' on the DAQ screen were solid red, but the two AWG lights were flashing between green and red, even though I was power cycling the other computers, not touching the AWG at the time. Those are the lights which are now solid red, except for a quick flash of green right after a reboot.
I poked around in the history of the current and old elogs, and haven't found anything referring to this crazy blinking between good and bad-ness for the AWG computers. I don't know if this happens when the tpman goes funky (which is referred to a lot in the annals of the elog in the same entries as the AWG needing rebooting) and no one mentions it, or if this is a new problem. Alberto and I have decided to get Alex/someone involved in this, because we've exhausted our ideas. |
16527 | Mon Dec 20 14:10:56 2021 | Anchal | Update | BHD | All coil drivers ready to be used, modified and tested
Koji found some 68nF caps from Downs and I finished modifying the last remaining coil driver box and tested it.
SERIAL # | TEST result
S2100633 | PASS
With this, all coil drivers have been modified and tested and are ready to be used. This DCC tree has links to all the coil driver pages which have documentation of modifications and test data. |
7132 | Thu Aug 9 04:26:51 2012 | Sasha | Update | Simulations | All c1spx screens working
As the subject states, all screens are working (including the noise screens), so we can keep track of everything in our model! :D I figured out that I was just getting nonsense (i.e. white noise) out of the sim plant cause the filter matrix (TM_RESP) that controlled the response of the optics to a force (i.e. outputted the position of the optic DOF given a force on that DOF and a force on the suspension point) was empty, so it was just passing on whatever values it got based on the coefficients of the matrix without DOING anything to them. In effect, all we had was a feedback loop without any mechanics.
I've been working on getting the mechanics of the suspensions into a filter/transfer function form; I added something resembling that into foton and turned the resulting filter on using the shiny new MEDM screens. However, the transfer functions are a tad wonky (particularly the one for pitch), so I shall continue working on them. It had a dramatic effect on the power spectrum (i.e. it looks a lot more like it should), but it still looks weird.
Still haven't found the e-log Jamie and Rana referred me to, concerning the injection of seismic noise into the simulation. I'm not terribly worried though, and will continue looking in the morning. Worst case scenario, I'll use the filters Masha made at the beginning of the summer.
Masha and I ate some of Jamie's popcorn. It was good. |
7133 | Thu Aug 9 07:24:58 2012 | Sasha | Update | Simulations | All c1spx screens working
Quote: |
As the subject states, all screens are working (including the noise screens), so we can keep track of everything in our model! :D I figured out that I was just getting nonsense (i.e. white noise) out of the sim plant cause the filter matrix (TM_RESP) that controlled the response of the optics to a force (i.e. outputted the position of the optic DOF given a force on that DOF and a force on the suspension point) was empty, so it was just passing on whatever values it got based on the coefficients of the matrix without DOING anything to them. In effect, all we had was a feedback loop without any mechanics.
I've been working on getting the mechanics of the suspensions into a filter/transfer function form; I added something resembling that into foton and turned the resulting filter on using the shiny new MEDM screens. However, the transfer functions are a tad wonky (particularly the one for pitch), so I shall continue working on them. It had a dramatic effect on the power spectrum (i.e. it looks a lot more like it should), but it still looks weird.
Still haven't found the e-log Jamie and Rana referred me to, concerning the injection of seismic noise into the simulation. I'm not terribly worried though, and will continue looking in the morning. Worst case scenario, I'll use the filters Masha made at the beginning of the summer.
Masha and I ate some of Jamie's popcorn. It was good.
|
Okay! Attached are two power spectra. The first is a power spectrum of reality, the second is a power spectrum of the simPlant. It's looking much better (as in, no longer obviously white noise!), but there seems to be a gain problem somewhere (and it doesn't have seismic noise). I'll see if I can fix the first problem then move on to trying to find the seismic noise filters. |
4934 | Fri Jul 1 20:26:29 2011 | rana | Summary | SUS | All SUS Peaks have been fit
       MC1    MC2    MC3    ETMX   ETMY   ITMX   ITMY   PRM    SRM    BS     mean   std
Pitch  0.671  0.747  0.762  0.909  0.859  0.513  0.601  0.610  0.566  0.747  0.698  0.129
Yaw    0.807  0.819  0.846  0.828  0.894  0.832  0.856  0.832  0.808  0.792  0.831  0.029
Pos    0.968  0.970  0.980  1.038  0.983  0.967  0.988  0.999  0.962  0.958  0.981  0.024
Side   0.995  0.993  0.971  0.951  1.016  0.986  1.004  0.993  0.973  0.995  0.988  0.019
There is a large amount of variation in the frequencies, even though the suspensions are nominally all the same. I leave it to the suspension makers to ponder and explain. |
5281 | Tue Aug 23 01:05:40 2011 | Jenne | Update | Treasure | All Hands on Deck, 9am!
We will begin drag wiping and putting on doors at 9am tomorrow (Tuesday).
We need to get started on time so that we can finish at least the 4 test masses before lunch (if possible).
We will have a ~2 hour break for LIGOX + Valera's talk.
I propose the following teams:
(Team 1: 2 people, one clean, one dirty) Open light doors, clamp EQ stops, move optic close to door. ETMX, ITMX, ITMY, ETMY
(Team 2: K&J) Drag wipe optic, and put back against rails. Follow Team 1 around.
(Team 3 = Team 1, redux: 2 people, one clean, one dirty) Put earthquake stops at correct 2mm distance. Follow Team 2 around.
(Team 4: 3 people, Steve + 2) Close doors. Follow Team 3 around.
Later, we'll do BS door and Access Connector. BS, SRM, PRM already have the EQ stops at proper distances.
|
13103 | Mon Jul 10 09:49:02 2017 | gautam | Update | General | All FEs down
Attachment #1: State of the CDS overview screen as of 9:30 AM this morning when I came in.
Looks like there may have been a power glitch, although judging by the wall StripTool traces, if there was one, it happened more than 8 hours ago. FB is down atm so I can't trend to find out when this happened.
All FEs and FB are unreachable from the control room workstations, but Megatron, Optimus and Chiara are all ssh-able. The latter reports an uptime of 704 days, so all seems okay with its UPS. Slow machines are all responding to ping as well as telnet.
Recovery process to begin now. Hopefully it isn't as complicated as the most recent effort [FAMOUS LAST WORDS] |
13104 | Mon Jul 10 11:20:20 2017 | gautam | Update | General | All FEs down
I am unable to get FB to reboot to a working state. A hard reboot throws it into a loop of "Media Test Failure. Check Cable".
Jetstor RAID array is complaining about some power issues, the LCD display on the front reads "H/W Monitor", with the lower line cycling through "Power#1 Failed", "Power#2 Failed", and "UPS error". Going to 192.168.113.119 on a martian machine browser and looking at the "Hardware information" confirms that System Power #1 and #2 are "Failed", and that the UPS status is "AC power loss". So far I've been unable to find anything on the elog about how to handle this problem, I'll keep looking.
In fact, looks like this sort of problem has happened in the past. It seems one power supply failed back then, but now somehow two are down (but there is a third which is why the unit functions at all). The linked elog thread strongly advises against any sort of power cycling. |
13106 | Mon Jul 10 17:46:26 2017 | gautam | Update | General | All FEs down
A bit more digging on the diagnostics page of the RAID array reveals that the two power supplies actually failed on Jun 2 2017 at 10:21:00. Not surprisingly, this was the date and approximate time of the last major power glitch we experienced. Apart from this, the only other error listed on the diagnostics page is "Reading Error" on "IDE CHANNEL 2", but these errors precede the power supply failure.
Perhaps the power supplies are not really damaged, and it's just in some funky state since the power glitch. After discussing with Jamie, I think it should be safe to power cycle the Jetstor RAID array once the FB machine has been powered down. Perhaps this will bring back one/both of the faulty power supplies. If not, we may have to get new ones.
The problem with FB may or may not be related to the state of the Jetstor RAID array. It is unclear to me at what point during the boot process we are getting stuck. It may be that because the RAID disk is in some funky state, the boot process is getting disrupted.
Quote: |
I am unable to get FB to reboot to a working state. A hard reboot throws it into a loop of "Media Test Failure. Check Cable".
Jetstor RAID array is complaining about some power issues, the LCD display on the front reads "H/W Monitor", with the lower line cycling through "Power#1 Failed", "Power#2 Failed", and "UPS error". Going to 192.168.113.119 on a martian machine browser and looking at the "Hardware information" confirms that System Power #1 and #2 are "Failed", and that the UPS status is "AC power loss". So far I've been unable to find anything on the elog about how to handle this problem, I'll keep looking.
In fact, looks like this sort of problem has happened in the past. It seems one power supply failed back then, but now somehow two are down (but there is a third which is why the unit functions at all). The linked elog thread strongly advises against any sort of power cycling.
|
|
13107 | Mon Jul 10 19:15:21 2017 | gautam | Update | General | All FEs down
The Jetstor RAID array is back in its nominal state now, according to the web diagnostics page. I did the following:
- Powered down the FB machine - to avoid messing around with the RAID array while the disks are potentially mounted.
- Turned off all power switches on the back of the Jetstor unit - there were 4 of them, all of them were toggled to the "0" position.
- Disconnected all power cords from the back of the Jetstor unit - there were 3 of them.
- Reconnected the power cords, turned the power switches back on to their "1" position.
After a couple of minutes, the front LCD display seemed to indicate that it had finished running some internal checks. The messages indicating failure of power units, which were previously constantly displayed on the front LCD panel, were no longer seen. Going back to the control room and checking the web diagnostics page, everything seemed back to normal.
However, FB still will not boot up. The error is identical to that discussed in this thread by Intel. It seems FB is having trouble finding its boot disk. I was under the impression that only the FE machines were diskless, and that FB had its own local boot disk - in which case I don't know why this error is showing up. According to the linked thread, it could also be a problem with the network card/cable, but I saw both lights on the network switch port FB is connected to turn green when I powered the machine on, so this seems unlikely. I tried following the steps listed in the linked thread but got nowhere, and I don't know enough about how FB is supposed to boot up, so I am leaving things in this state now. |
13108 | Mon Jul 10 21:03:48 2017 | jamie | Update | General | All FEs down
Quote: |
However, FB still will not boot up. The error is identical to that discussed in this thread by Intel. It seems FB is having trouble finding its boot disk. I was under the impression that only the FE machines were diskless, and that FB had its own local boot disk - in which case I don't know why this error is showing up. According to the linked thread, it could also be a problem with the network card/cable, but I saw both lights on the network switch port FB is connected to turn green when I powered the machine on, so this seems unlikely. I tried following the steps listed in the linked thread but got nowhere, and I don't know enough about how FB is supposed to boot up, so I am leaving things in this state now.
|
It's possible the fb bios got into a weird state. fb definitely has its own local boot disk (*not* diskless boot). Try to get to the BIOS during boot and make sure it's pointing to its local disk to boot from.
If that's not the problem, then it's also possible that fb's boot disk got fried in the power glitch. That would suck, since we'd have to rebuild the disk. If it does seem to be a problem with the boot disk then we can do some invasive poking to see if we can figure out what's up with the disk before rebuilding. |
13110 | Mon Jul 10 22:07:35 2017 | Koji | Update | General | All FEs down
I think this is a boot disk failure. I put the spare 2.5 inch disk into slot #1. The OK indicator of the disk became solid green almost immediately, and it was recognized in the BIOS boot section as "Hard Disk". On the contrary, the "OK" indicator of the original disk in slot #0 keeps flashing and the BIOS can't find the hard disk.
|
13111 | Tue Jul 11 15:03:55 2017 | gautam | Update | General | All FEs down
Jamie suggested verifying that the problem is indeed with the disk and not with the controller, so I tried switching the original boot disk to Slot #1 (from Slot #0 where it normally resides), but the same problem persists - the green "OK" indicator light keeps flashing even in Slot #1, which was verified to be a working slot using the spare 2.5 inch disk. So I think it is reasonable to conclude that the problem is with the boot disk itself.
The disk is a Seagate Savvio 10K.2 146GB disk. The datasheet doesn't explicitly suggest any recovery options. But Table 24 on page 54 suggests that a blinking LED means that the disk is "spinning up or spinning down". Is this indicative of any particular failure mode? Any ideas on how to go about recovery? Is it even possible to access the data on the disk if it doesn't spin up to the nominal operating speed?
Quote: |
I think this is a boot disk failure. I put the spare 2.5 inch disk into slot #1. The OK indicator of the disk became solid green almost immediately, and it was recognized in the BIOS boot section as "Hard Disk". On the contrary, the "OK" indicator of the original disk in slot #0 keeps flashing and the BIOS can't find the hard disk.
|
|
13112 | Tue Jul 11 15:12:57 2017 | Koji | Update | General | All FEs down
If we have a SATA/USB adapter, we can test whether the disk is still responding or not. If it is still responding, we can probably salvage the files.
Chiara used to have a 2.5" disk that is connected via USB3. As far as I know, we have remote and local backup scripts running (TBC), so we can borrow the USB/SATA interface from Chiara.
If the disk is completely gone, we need to rebuild the disk according to Jamie, and I don't know how to do it. (Don't we have any spare copy?) |
13113 | Wed Jul 12 10:21:07 2017 | gautam | Update | General | All FEs down
Seems like the connector on this particular disk is of the SAS variety (and not SATA). I'll ask Steve to order a SAS to USB cable. In the meantime I'm going to see if the people at Downs have something we can borrow.
Quote: |
If we have a SATA/USB adapter, we can test whether the disk is still responding or not. If it is still responding, we can probably salvage the files.
Chiara used to have a 2.5" disk that is connected via USB3. As far as I know, we have remote and local backup scripts running (TBC), so we can borrow the USB/SATA interface from Chiara.
If the disk is completely gone, we need to rebuild the disk according to Jamie, and I don't know how to do it. (Don't we have any spare copy?)
|
|
13114 | Wed Jul 12 14:46:09 2017 | gautam | Update | General | All FEs down
I couldn't find an external docking setup for this SAS disk; it seems we need an actual controller in order to interface with it. Mike Pedraza in Downs had such a unit, so I took the disk over to him, but he wasn't able to interface with it in any way that allows us to get the data out. He wants to try switching out the logic board, for which we need an identical disk. We have only one such spare at the 40m that I could locate, but it is not clear to me whether this has any important data on it or not. It has "hda RTLinux" written on its front panel with a sharpie. Mike thinks we can back this up to another disk before trying anything, but he is going to try locating a spare in Downs first. If he is unsuccessful, I will take the spare from the 40m to him tomorrow, first to be backed up, and then for swapping out the logic board.
Chatting with Jamie and Koji, it looks like the options we have are:
- Get the data from the old disk, copy it to a working one, and try and revert the original FB machine to its last working state. This assumes we can somehow transfer all the data from the old disk to a working one.
- Prepare a fresh boot disk, load the old FB daqd code (which is backed up on Chiara) onto it, and try and get that working. But Jamie isn't very optimistic of this working, because of possible conflicts between the code and any current OS we would install.
- Get FB1 working. Jamie is looking into this right now.
Quote: |
Seems like the connector on this particular disk is of the SAS variety (and not SATA). I'll ask Steve to order a SAS to USB cable. In the meantime I'm going to see if the people at Downs have something we can borrow.
|
|
13115 | Wed Jul 12 14:52:32 2017 | jamie | Update | General | All FEs down
I just want to mention that the situation is actually much more dire than we originally thought. The diskless NFS root filesystem for all the front-ends was on that fb disk. If we can't recover it we'll have to rebuild the front end OS as well.
As of right now none of the front ends are accessible, since obviously their root filesystem has disappeared. |
13243 | Tue Aug 22 18:36:46 2017 | gautam | Update | Computers | All FE models compiled against RCG3.4
After getting the go ahead from Jamie, I recompiled all the FE models against the same version of RCG that we tested on the c1iscex models.
To do so:
- I did rtcds make and rtcds install for all the models.
- Then I ssh-ed into the FEs and did rtcds stop all, followed by rtcds start <model> in the order they are listed on the CDS overview MEDM screen (top to bottom) - see the sketch after this list.
- During the compilation process (i.e. rtcds make), for some of the models, I got some compilation warnings. I believe these are related to models that have custom C code blocks in them. Jamie tells me that it is okay to ignore these warnings and that they will be fixed at some point.
- c1lsc FE crashed when I ran rtcds stop all - had to go and do a manual reboot.
- Doing so took down the models on c1sus and c1ioo that were running - but these FEs themselves did not have to be rebooted.
- Once c1lsc came back up, I restarted all the models on the vertex FEs. They all came back online fine.
- Then I ssh-ed into FB1, and restarted the daqd processes - but c1lsc and c1ioo CDS indicators were still red.
- Looks like the mx_stream processes weren't started automatically on these two machines. Reasons unknown. Earlier today, the same was observed for c1iscey.
- I manually restarted the mx_stream processes, at which point all CDS indicator lights became green (see Attachment #1).
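A minimal sketch of the compile/install/restart sequence described in the list above; the model names are placeholders, not the actual 40m model list, and the ordering comment just restates the convention from the list:

# Hypothetical sketch - model names are examples only
for model in c1x04 c1lsc c1ass; do
    rtcds make $model        # compile the model against the checked-out RCG
    rtcds install $model     # install the compiled model
done
# then, on each front end (IOP first, top to bottom on the CDS overview screen):
rtcds stop all
for model in c1x04 c1lsc c1ass; do
    rtcds start $model
done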
IFO alignment needs to be redone, but at least we now have an (admittedly roundabout) way of getting testpoints. Did a quick check for "nan-s" on the ASC screen, saw none. So I am re-enabling watchdogs for all optics.
GV 23 August 9am: Last night, I re-aligned the TMs for single arm locks. Before the model restarts, I had saved the good alignment on the EPICs sliders, but the x3 gain on the coil driver filter banks has to be manually turned on at the moment (i.e. the safe.snap file has them off). ALS noise looked good for both arms, so just for fun, I tried transitioning control of both arms to ALS (in the CARM/DARM basis as we do when we lock DRFPMI, using the Transition_IR_ALS.py script), and was successful.
Quote: |
[jamie, gautam]
We tried to implement the fix that Rolf suggested in order to solve (perhaps among other things) the inability of some utilities like dataviewer to open testpoints. The problem isn't wholly solved yet - we can access actual testpoint data (not just zeros, as was the case) using DTT, and if DTT is used to open a testpoint first, then dataviewer, but DV itself can't seem to open testpoints.
Here is what was done (Jamie will correct me if I am mistaken).
- Jamie checked out branch 3.4 of the RCG from the SVN.
- Jamie recompiled all the models on c1iscex against this version of RCG.
- I shutdown ETMX watchdog, then ran rtcds stop all on c1iscex to stop all the models, and then restarted them using rtcds start <model> in the order c1x01, c1scx and c1asx.
- Models came back up cleanly. I then restarted the daqd_dc process on FB1. At this point all indicators on the CDS overview screen were green.
- Tried getting testpoint data with DTT and DV for ETMX Oplev Pitch and Yaw IN1 testpoints. Conclusion as above.
So while we are in a better state now, the problem isn't fully solved.
Comment: seems like there is an in-built timeout for testpoints opened with DTT - if the measurement is inactive for some time (unsure how much exactly but something like 5mins), the testpoint is automatically closed.
|
|
10892 | Tue Jan 13 04:57:26 2015 | ericq | Update | CDS | All FE diagnostics back to green
I was looking into the status of IPC communications in our realtime network, as Chris suggested that there may be more phase missing than I thought. However, the recent continual red indicators on a few of the models made it hard to tell if the problems were real or not. Thus, I set out to fix what I could, and have achieved full green lights in the CDS screen.

This required:
- Fixing the BLRMS block, as was found to be a problem in ELOG 9911 (There were just some hanging lines not doing anything)
- Cleaning up one-sided RFM and SHMEM communications in C1SCY, C1TST, C1RFM and C1OAF
The front end models have been svn'd. The BLRMS block has not, since it's in a common cds space, and I am not sure what the status of its use at the sites is... |
5350 | Tue Sep 6 22:51:53 2011 | rana | Summary | Cameras | All Camera setups need upgrading
I just tried to adjust the ETMY camera and it's not very user friendly = NEEDS FIXING.
* Camera view is upside down.
* Camera lens is contacting the lexan viewport cover; this means the focus cannot be adjusted without misaligning the camera.
* There's no strain relief of the camera cables at the can. Needs a rubber cable grommet too.
* There's a BNC "T" in the cable line.
Probably similar issues with some of the other setups; they've had aluminum foil covers for too long. We'll have a camera committee meeting tomorrow to see how to proceed. |
5358 | Wed Sep 7 13:28:25 2011 | steve | Summary | Cameras | All Camera setups need upgrading
Quote: |
I just tried to adjust the ETMY camera and it's not very user friendly = NEEDS FIXING.
* Camera view is upside down.
* Camera lens is contacting the lexan viewport cover; this means the focus cannot be adjusted without misaligning the camera.
* There's no strain relief of the camera cables at the can. Needs a rubber cable grommet too.
* There's a BNC "T" in the cable line.
Probably similar issues with some of the other setups; they've had aluminum foil covers for too long. We'll have a camera committee meeting tomorrow to see how to proceed.
|
ITMY has been upgraded here. I have the new lenses on hand to do the others when it fits into the schedule. |
11210 | Thu Apr 9 02:58:26 2015 | ericq | Update | LSC | All 1F, all whitened
blarg. Chrome ate my elog.
112607010 is the start of five minutes on all whitened 1F PDs. REFL55 has more low frequency noise than REFL165; I think we may need more CARM suppression (i.e. we need to think about the required gain). This is also supported by the difference in shape of these two histograms, taken at the same time in 3f full lock. The CARM fluctuations seem to spread REFL55 out much more.
 
I made some filters and scripts to do DC coupling of the ITM oplevs. This makes maintaining stable alignment in full lock much easier.
I had a few 15+ minute locks on 3f, that only broke because I did something to break it.
Here's one of the few "quick" locklosses I had. I think it really is CARM/AO action, since the IMC sees it right away, but I don't see anything ringing up; just a spontaneous freakout.

|
7677 | Wed Nov 7 00:10:38 2012 | Jenne | Update | Alignment | Alignment- POY and oplevs. photos.
Can we have a drawing of what you did, how you confirmed your green alignment as the same as the IR (I think you had a good idea about the beam going to the BS...can you please write it down in detail?), and where you think the beam is clipping? Cartoon-level, 20 to 30 minutes of work, no more. Enough to be informative, but we have other work that needs doing if we're going to put on doors Thursday morning (or tomorrow afternoon?).
The ETMs weren't moved today, just the beam going to the ETMs, so the oplevs there shouldn't need adjusting. Anyhow, the oplevs I'm more worried about are the ones which include in-vac optics at the corner, which are still on the to-do list.
So, tomorrow Steve + someone can check the vertex oplevs, while I + someone finish looking briefly at POX and POP, and at POY in more detail.
If at all possible, no clamping / unclamping of anything on the in-vac tables. Let's try to use things as they are if the beams are getting to where they need to go. Particularly for the oplevs, I'd rather have a little bit of movement of optics on the out-of-vac tables than any changes happening inside.
I made a script that averages together many photos taken with the capture script that Rana found, which takes 50 pictures, one after another. If I average the pictures, I don't see a spot. If I add the photos together even after subtracting away a no-beam shot, the picture is saturated and completely white. I'm trying to let ideas percolate in my head for how to get a useful spot. |
7678 | Wed Nov 7 07:11:10 2012 | rana | Update | Alignment | Alignment- POY and oplevs. photos.
The way to usually do image subtraction is to:
1) Turn off the room lights.
2) Take 500 images with no beam.
3) Use Mean averaging to get a reference image.
4) Same with the beam on.
5) Subtract the two averaged images (see the sketch below).
If that doesn't work, I guess it's best to just take an image of the green beam on the mirrors using the new DSLR. |
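A minimal command-line sketch of the recipe above, assuming ImageMagick is available and that the captured frames are saved as numbered PNGs; the file names are hypothetical:

# Hypothetical sketch using ImageMagick - file names are examples only
convert dark_*.png -evaluate-sequence Mean dark_mean.png    # mean-average the no-beam (lights off) frames
convert beam_*.png -evaluate-sequence Mean beam_mean.png    # mean-average the beam-on frames
composite -compose difference beam_mean.png dark_mean.png spot_diff.png   # subtract the two averages to leave the spot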
7675 | Tue Nov 6 17:22:51 2012 | Manasa, Jamie | Update | Alignment | Alignment- POY and oplevs
Right now, Manasa, Jamie and Ayaka are doing some finishing-touch work, checking that POY isn't clipping on OM2, the second steering mirror after the SRM. They'll confirm that POX comes out of the chamber nicely, and that POP is also still coming out (by putting the green laser pointer back on that table, and making sure the green beam is co-aligned with the beam from PR2-PR3). Also on the list is checking the vertex oplevs. Steve and Manasa did some stuff with the ETM oplevs yesterday, but haven't had a chance to write about it yet.
|
We were trying to check POY alignment using the green laser in the reverse direction (outside vacuum to in-vac). The green laser was installed along with a steering mirror to steer it into the ITMY chamber pointing at POY.
We found that the green laser did follow the path back into the chamber perfectly; it was clipping at the edge of POY. To align it to the center of POY (get a narrower angle of incidence at the ITMY), the green laser had to be steered in at a wider angle of incidence from the table. This is now being limited by the oplev steering optics on the table. We were not able to figure out the oplev path on the table perfectly; but we think we can find a way to move the oplev steering mirrors that are now restricting the POY alignment.
The oplev optics will be moved once we confirm with Jenne or Steve.
[Steve, Manasa]
We aligned the ETM oplevs yesterday. We confirmed that the oplev beam hit the ETMs. We checked for centering of the beam coming back at the oplev PDs and the QPDsums matched the values they followed before the vent.
Sadly, they have to be checked once again tomorrow because the alignment was messed up all over again yesterday. |
7679 | Wed Nov 7 09:09:02 2012 | Steve | Update | Alignment | Alignment-
PRM and SRM OSEM LL are at 1.5 V - are they misaligned? |
9593 | Mon Feb 3 23:31:33 2014 | Manasa | Update | General | Alignment update / Y arm locked
[EricQ, Manasa, Koji]
We measured the spot positions on the MC mirrors and redid the MC alignment by only touching the MC mirror sliders. Now all the MC spots are <1mm away from the center.
We opened the ITMY and ETMY chambers to align the green to the arm. The green was already centered on the ITMY. We went back and forth to recenter the green on the ETMY and ITMY (This was done by moving the test masses in pitch and yaw only without touching the green pointing) until we saw green flashes in higher order modes. At this point we found the IR was also centered on the ETMY and a little low in pitch on ITMY. But we could see IR flashes on the ITMYF camera. We put back the light doors and did the rest of the alignment using the pitch and yaw sliders.
When the flashes were as high as 0.05, we started seeing small lock stretches. Playing around with the gain and tweaking the alignment, we could lock the Y arm in TEM00 for IR and also run the ASS. The green also locked to the arm in the 00 mode at this point. We aligned the BS to get a good AS view on the camera. ITMX was tweaked to get a good Michelson. |
9765 | Mon Mar 31 13:15:55 2014 | manasa | Summary | LSC | Alignment update
Quote: |
While I'm looking at the PRM ASC servo model, I tried to use the current servo filters for the ASC
as Manasa aligned the POP PDs and QPD yesterday. (BTW, I don't find any elog about it)
|
Guilty!!
POP path
The POP PD was showing only ~200 counts, which was very low compared to what we recollect from earlier PRMI locks (~400 counts). The POP ASC QPD was also not well-aligned.
While holding PRMI lock on REFL55, I aligned POP path to its PD (maximize POP DC counts) and QPD (centered in pitch and yaw).
X and Y green
The X green totally lost its pointing because of the misaligned PZTs from last week's power failure. This was recovered.
Y arm green alignment was also recovered. |
12500 | Fri Sep 16 19:48:52 2016 | Lydia | Update | General | Alignment status
Today the Y arm was locking fine. The alignment had drifted somewhat so I ran the dither and TRY returned to ~0.8. However, the mode cleaner has been somewhat unstable. It locked many times but usually for only a few minutes. Maybe the alignment or autolocker needs to be adjusted, but I didn't change anything other than playing with the gain sliders (which didn't seem to make it either better or worse).
ITMX is still stuck. |
12501 | Sat Sep 17 02:00:23 2016 | rana | Update | SUS | Alignment status
All is not lost. I've stuck and unstuck optics around a half dozen times. Can you please post the zoomed in time series (not trend) from around the time it got stuck? Sometimes the bias sliders have to be toggled to make the bias correct. From the OSEM trend it seems like it got a large Yaw bias. May also try to reseat the satellite box cables and the cable from the coil driver to the cable breakout board in the back of the rack. |
12502 | Sat Sep 17 16:51:01 2016 | Lydia | Update | SUS | Alignment status
Here are the timeseries plots. I've zoomed in to right after the problem - did you want before? We pretty much know what happened: c1susaux was restarted from the crate but the damping was on, so as soon as the machine came back online the damping loops sent a huge signal to the coils. (Also, it seems to be down again. Now we know what to do first before keying the crate.) It seems like both right side magnets are stuck, and this could probably be fixed by moving the yaw slider. Steve advised that we wait for an experienced hand to do so.
Quote: |
All is not lost. I've stuck and unstuck optics around a half dozen times. Can you please post the zoomed in time series (not trend) from around the time it got stuck? Sometimes the bias sliders have to be toggled to make the bias correct. From the OSEM trend it seems like it got a large Yaw bias. May also try to reseat the satellite box cables and the cable from the coil driver to the cable breakout board in the back of the rack.
|
|
12503 | Sun Sep 18 16:18:05 2016 | rana | Update | SUS | Alignment status
susaux is responsible for turning on/off the inputs to the coil driver, but not the actual damping loops. So rebooting susaux only does the same as turning the watchdogs on/off so it shouldn't be a big issue.
Both before and after would be good. We want to see how much bias and how much voltage from the front ends were applied. c1susaux could have put in a huge bias, but NOT a huge force from the damping loops. But I've never seen it put in a huge bias and there's no way to prevent this anyway without disconnecting cables.
I think its much more likely that its a little stuck due to static charge on the rubber EQ stop tips and that we can shake it lose with the damping loops. |
12504 | Mon Sep 19 11:11:43 2016 | ericq | Update | SUS | Alignment status
[ericq, Steve]
ITMX is free, OSEM signals all roughly centered.
This was accomplished by rocking the static alignment (i.e. slow controls) pitch and yaw offsets until the optic broke free. This took a few volts back and forth. At this point, I tried to find a point where the optic seemed to freely swing, and hopefully have signals in all 5 OSEMS. It seemed to be free sometimes but mostly settling into two different stationary states. I realized that it was becoming torqued enough in pitch to be leaning on the top-front or top-back EQ stops. So, I slowly adjusted the pitch from one of these states until it seemed to be swinging a bit on the camera, and three OSEM signals were showing real motion. Then, I slowly adjusted the pitch and yaw alignments to get all OSEMS signals roughly centered at half of their max voltage. |
7534 | Fri Oct 12 01:56:26 2012 | kiwamu | Update | General | Alignment situation of interferometer
[Koji / Kiwamu]
We have realigned the interferometer except the incident beam.
The REFL beam is not coming out from the chamber and is likely hitting the holder of a mirror in the OMC chamber. 
So we need to open the chamber again before trying to lock the recycled interferometers at some point.
--- What we did
- Ran the MC decenter script to check the spot positions.
- MC3 YAW gave a -5 mm offset with an error of about the same level.
- We didn't believe in this dither measurement.
- Checked the IP-POS and IP-ANG trends.
- The trends looked stable over 10 days (with a 24 hours drift).
- So we decided not to touch the MC suspensions.
- Tried aligning PRM
- Found that the beam on the REFL path was a fake beam
- The position of this beam was not sensitive to the alignment of PRM or ITMs.
- So certainly this is not the REFL beam.
- The power of this unknown beam is about 7.8 mW
- Let the PRM reflection beam go through the Faraday
- This was done by looking at the hole of the Faraday through a view port of the IOO chamber with an IR viewer.
- Aligned the rest of the interferometer (not including ETMs)
- We used the aligned PRM as the alignment reference
- Aligned ITMY such that the ITMY reflection overlaps with the PRM beam at the AS port.
- Aligned the BS and SRM such that their associated beam overlap at the AS port
- Aligned ITMX in the same way.
- Note that the beam axis, defined by the BS, ITMX and SRM, was not determined by this process. So we need to align it using the y-arm as a reference at some point.
- After the alignment, the beam at the AS port still doesn't look clipped. Which is good.
---- things to be fixed
- Align the steering mirrors in the faraday rejected beam path (requires vent)
- SRM oplev (this is out of the QPD range)
- ITMX oplev (out of the range too) |
1210 | Thu Jan 1 00:55:39 2009 | Yoichi | Update | ASC | Alignment scripts for Linux
A Happy New Year.
The dither alignment scripts did not run on linux machines because tdscntr and ezcademod do not run on linux. Tobin wrote a perl version of tdscntr and I modified it for the 40m some time ago.
Today, I wrote a perl version of ezcademod. The script is called ditherServo.pl and resides in /cvs/cds/caltech/scripts/general/. It is not meant to be a drop-in replacement, so the command line syntax is different. Usage is explained in the comment of the script.
Using those two scripts, I wrote linux versions of the alignment scripts. Now when you call, for example, the alignX script, it calls alignX.linux or alignX.solaris depending on the OS of your machine. alignX.solaris is the original script using the compiled ezcademod.
In principle, ezcademod is faster than my ditherServo.pl because my script suffers from the overhead of calling tdsdmd on each iteration of the servo. But in practice ditherServo.pl is not that bad. At least, as far as the alignment is concerned, the performances of both commands are comparable in terms of the final arm power and the convergence.
Now the alignXXX commands from the IFO Configure MEDM screen work for the X-arm, Y-arm, PRM and DRM. I did not write a script for Michelson, since it is optional.
I confirmed that "Align Full IFO" works correctly. |
14422 | Tue Jan 29 22:12:40 2019 | gautam | Update | SUS | Alignment prep
Since we may want to close up tomorrow, I did the following prep work:
- Cleaned up the Y-end suspension electronics setup, connected the Sat Box back to the flange
- The OSEMs are just sitting on the table right now, so they are just seeing the fully open voltage
- Post filter insertion, the four face OSEMs report ~3-4% lower open-voltage values compared to before, which is compatible with the transmission spec for the filters (T>95%)
- The side OSEM is reporting ~10% lower - perhaps I just didn't put the filter on right, something to be looked at inside the chamber
- Suspension watchdog restoration
- I'd shutdown all the watchdogs during the Satellite box debacle
- However, I left ITMY, ETMY and SRM tripped as these optics are EQ-stopped / don't have the OSEMs inserted.
- Checked IMC alignment
- After some hand-alignment of the IMC, it was locked, transmission is ~1200 counts which is what I remember it being
- Checked X-arm alignment
- Strictly speaking, this has to be done after setting the Y-arm alignment as that dictates the input pointing of the IMC transmission to the IFO, but I decided to have a quick look nevertheless
- Surprisingly, ITMX damping isn't working very well it seems - the optic is clearly swinging around a lot, and the shadow sensor RMS voltage is ~10s of mV, whereas for all the other optics, it is ~1mV.
- I'll try the usual cable squishing voodoo
Rather than try and rush and close up tomorrow, I propose spending the day tomorrow cleaning the peripheral areas of the optic, suspension cage, and chamber. Then on Thursday morning, we can replace the Y-arm optics, try and recover the cavity alignment, and then aim for a Thursday afternoon pumpdown. The main motivation is to reduce the time the optics spend in air after F.C. peeling and going to vacuum. |
14211 | Sun Sep 23 17:38:48 2018 | yuki | Update | ASC | Alignment of AUX Y end green beam was recovered
[ Yuki, Koji, Gautam ]
The alignment of the AUX Y end green beam was bad. With Koji and Gautam's advice, it was recovered on Friday. The maximum value of TRY was about 0.5. |
726 | Wed Jul 23 18:42:18 2008 | Jenne | Update | PSL | Alignment of AOM
[Rana, Yoichi, Jenne]
Short Version: We are selecting the wrong diffracted beam on the 2nd pass through the AOM (we use the 2nd order rather than the first). This will be fixed tomorrow.
Long Version of AOM activities:
We checked the amount of power going to the AOM, through the AOM on the first pass, and then through the AOM on the second pass, and saw that we get about 50% through on the first pass, but only about 10% on the 2nd pass. Before the AOM=60mW, after the first pass=38mW, after the 2nd pass=4mW. Clearly the alignment through the AOM is really sketchy.
We translated the AOM so the beam goes through the center of the crystal while we align things. We see that we only get the first order beam, which is good. We twiddled the 4 adjust screws on the side of the AOM to maximize the power at the curved mirror for the 1st order of the first pass, which was 49.6mW. We then looked at the DC output of the Reference Cavity's Refl. PD, and saw 150mV on the 'scope. The power measured after the polarizing beam splitter and the next wave plate was still 4mW. Adjusting the curved mirror, we got up to 246mV on the 'scope for the Refl. PD, and 5.16mW after the PBS+Waveplate. We adjusted the 4 side screws of the AOM again, and the tip/tilt of the PBS, and got up to 288mV on the 'scope.
Then we looked at the beam that we keep after the 2nd pass through the AOM, and send to the reference cavity, and we find that we are keeping the SECOND order beam after the second pass. This is bad news. Yoichi and I will fix this in the morning. We checked that we were seeing a higher order beam by modulating the Offset of the MC servo board with a triangle wave, and watching the beam move on the camera. If we were choosing the correct beam, there would be no movement because of the symmetry of 2 passes through the AOM.
I took some sweet video of the beam spot moving, which I'll upload later, if I can figure out how to get the movies off my cell phone. |
7573 | Thu Oct 18 03:57:20 2012 | Jenne | Update | Locking | Alignment is really bad??
The goal of the night was to lock the Y arm. (Since that didn't happen, I moved on to fixing the WFS since they were hurting the MC)
I used the power supplies at 1Y4 to steer PZT2, and watched the face of the black glass baffle at ETMY. (elog 7569 has notes re: camera work earlier) When I am nearly at the end of the PZT range (+140V on the analog power supply, which I think is yaw), I can see the beam spot near the edge of the baffle's aperture. Unfortunately, lower voltages move the spot away from the aperture, so I can't find the spot on the other side of the aperture and center it. Since the max voltage for the PZTs is +150, I don't want to go too much farther. I can't take a capture since the only working CCD I found is the one which won't talk to the Sensoray. We need some more cameras....they're already on Steve's list.
When the spot is a little closer to the center of the aperture than the edge of the aperture (so the full +150V!!), I don't see any beam coming out of AS....no beam out of the chamber at all, not just no beam on the camera. Crapstick. This is not good. I'm not really sure how we (I?) screwed up this thoroughly. Sigh. Whatever ghost REFL beam that Kiwamu and Koji found last week is still coming out of REFL.
Previous PZT voltages, before tonight's steering: +32V on analog power supply, +14.7 on digital. This is the place that the PRMI has been aligned to the past week or so.
Next, just to see what happens, I think I might install a camera looking at the back (output) side of the Faraday so that I can steer PRM until the reflected beam is going back through the Faraday. Team K&K did this with viewers and mirrors, so it'll be more convenient to just have a camera.
Advice welcome. |
7581 | Fri Oct 19 16:24:39 2012 | rana | Update | Locking | Alignment is really bad??
VENT NOW and FIX ALIGNMENT! |
3847 | Tue Nov 2 16:24:07 2010 | Koji | Update | Auxiliary locking | Alignment for the green in the X trans table
[Kiwamu Koji]
Today we found the green beam from the end was totally missing at the vertex.
- What we found was very weak green beam at the end. Unhappy.
- We removed the PBS. We should obtain the beam for the fiber from the rejection of the (sort of) dichroic separator although the given space is not large.
- The temperature controller was off. We turned it on again.
- We found everything was still misaligned. Aligned the crystal, aligned the Faraday for the green.
- Aligned the last two steering mirrors such that we hit the approximate center of the ETMX and the center of the ITMX.
- Made the fine alignment to have the green beam at the PSL table.
The green beam emerging from the chamber does not look so round, as there is clipping at an in-vac steering mirror.
We will make the thorough realignment before closing the tank. |
|