40m Log, Page 26 of 341
ID | Date | Author | Type | Category | Subject
1791 | Sat Jul 25 16:04:32 2009 | rob | Update | PSL | Aligning the beam to the Faraday

Quote:

When I turned them on, the control signal in Pitch from WFS2 started going up without stopping. It was as if the integrator in the loop were fed with a DC bias. The effect was to misalign the MC cavity from the good state it was in with only the length control on (that is, transmission ~2.7, reflection ~0.4).

I don't know why that is happening. To exclude a computer problem, I first burt-restored C1IOO to July 18th, but since that did not help, I even restarted it. That didn't solve the problem either.

 

 

At least one problem is the mis-centering of the resonant spot on MC2, which can be viewed with the video monitors.  It's very far from the center of the optic, which causes length-to-angle coupling that makes the multiple servos which actuate on MC2 (MCL, WFS, local damping) fight each other and go unstable.
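To make the coupling concrete, here is a toy calculation (the numbers are hypothetical, not measured 40m values): a spot decentered by a distance d on the optic converts angular motion into apparent length motion, which is why the length and angle servos on MC2 end up sharing a signal path.

```python
# Toy model of the coupling (illustrative numbers, not measured values):
# a spot a distance d from the optic's rotation axis converts angular
# motion dtheta into path-length change dL = d * dtheta, so any length
# servo also senses angle, and drives it through its actuation, when d != 0.

def length_from_angle(d_spot_m, dtheta_rad):
    """Cavity length change produced by tilt dtheta with spot offset d."""
    return d_spot_m * dtheta_rad

# Hypothetical example: 5 mm decentering and 1 urad of tilt give ~5 nm of
# apparent length motion for the MCL loop to fight
print(length_from_angle(5e-3, 1e-6))

# With the spot centered, the coupling vanishes
assert length_from_angle(0.0, 1e-6) == 0.0
```

Centering the spot drives d toward zero, which decouples the loops; that is the motivation for the spot-walking in the following entry.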

1792 | Sat Jul 25 19:04:01 2009 | Koji | Update | PSL | Aligning the beam to the Faraday

Quote:

Quote:

When I turned them on, the control signal in Pitch from WFS2 started going up without stopping. It was as if the integrator in the loop were fed with a DC bias. The effect was to misalign the MC cavity from the good state it was in with only the length control on (that is, transmission ~2.7, reflection ~0.4).

I don't know why that is happening. To exclude a computer problem, I first burt-restored C1IOO to July 18th, but since that did not help, I even restarted it. That didn't solve the problem either.

 

 

At least one problem is the mis-centering of the resonant spot on MC2, which can be viewed with the video monitors.  It's very far from the center of the optic, which causes length-to-angle coupling that makes the multiple servos which actuate on MC2 (MCL, WFS, local damping) fight each other and go unstable.

I played with the MC alignment for the beam centering. After that, I restored the alignment values.



In principle, one can place the MC2 spot wherever one likes without changing the transmitted beam axis to the IFO,
as long as the cavity stays at the best alignment. This is almost trivial because at the best alignment
the cavity axis matches the input beam axis.
For a triangular cavity, the alignment solution is not unique unless the end-mirror spot position is fixed.

In practice, this walking of the MC2 spot is accomplished by the following procedure:
0) Assume that you are initially at the best alignment (= max transmission).
1) Slightly tilt MC2.
2) Adjust MC1/MC3 so that the best transmission is restored.
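The three-step recipe above can be sketched as a toy simulation (the Gaussian transmission model and the coupling coefficients below are invented for illustration; the real MC response is not this simple): tilt MC2, then hill-climb MC1/MC3 until the transmission peak is restored.

```python
import math

# Toy model: transmission peaks when MC1/MC3 track an optimum that shifts
# linearly with the MC2 tilt. K1/K3 are invented coupling coefficients.
K1, K3 = 3.0, -2.0

def transmission(mc1, mc2, mc3):
    """Gaussian stand-in for the MC transmission vs. mirror angles."""
    return math.exp(-((mc1 - K1 * mc2) ** 2 + (mc3 - K3 * mc2) ** 2))

def restore_alignment(mc1, mc2, mc3, step=0.01, iters=2000):
    """Step 2 of the recipe: tweak MC1/MC3 to re-maximize transmission."""
    for _ in range(iters):
        best = (transmission(mc1, mc2, mc3), mc1, mc3)
        for d1, d3 in ((step, 0.0), (-step, 0.0), (0.0, step), (0.0, -step)):
            t = transmission(mc1 + d1, mc2, mc3 + d3)
            if t > best[0]:
                best = (t, mc1 + d1, mc3 + d3)
        if (best[1], best[2]) == (mc1, mc3):
            break                      # no single step improves: at the peak
        _, mc1, mc3 = best
    return mc1, mc3

mc1 = mc2 = mc3 = 0.0                        # step 0: start at best alignment
mc2 += 0.1                                   # step 1: slightly tilt MC2
mc1, mc3 = restore_alignment(mc1, mc2, mc3)  # step 2: recover max transmission
print(transmission(mc1, mc2, mc3))           # back near 1.0 at a new MC2 spot
```

Each pass of steps 1-2 moves the spot a little while keeping the cavity at its transmission maximum; many such iterations produced the slider changes recorded below.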

I started from the following initial state of the alignment sliders:

BEFORE TRIAL

MC1 Pitch  +3.6242
MC1 Yaw  -0.8640
MC2 Pitch  3.6565
MC2 Yaw -1.1216
MC3 Pitch -0.6188
MC3 Yaw -3.1910
MC Trans 2.70

After many iterations, the spot was centered to some extent (see the attached picture).
RESULT

    (adjustment in parentheses)
MC1 Pitch  +3.363 (-0.26)
MC1 Yaw  -1.164 (-0.3)
MC2 Pitch  3.7565 (+0.1)
MC2 Yaw -1.2800 (~ -0.16)
MC3 Pitch -0.841 (~ -0.22)
MC3 Yaw -3.482 (~ -0.29)
MC Trans 2.75  

The instability looked somewhat cured.
Further adjustment caused a high-frequency (10 Hz, as seen on the camera) instability and the IMCR shift issue,
so I returned to the last stable setting.

Side effect:
Of course, if you move MC1, the reflected spot gets shifted.
The spot is now visibly off-center on the IMCR camera (up and to the right).
At this stage, I could not determine what the good state is,
so I restored the alignment of the MC as it was.
But now Alberto can see which mirror we have to move, in which direction, and by how much.

Attachment 1: MC2_Cam.jpg
1793 | Sun Jul 26 13:19:54 2009 | rana | Update | PSL | Aligning the mode cleaner

I set the MC back to its good alignment (June 21st) using this procedure. The trend of the OSEM values over the last 40 days and 40 nights is attached.

Then I aligned the periscope to that beam. This took some serious periscope knob action. Without WFS, the transmission went to 2.7 V and the reflection down to 0.6V.

Then I re-aligned the MC_REFL path as usual. The beam was far enough off that I had to also re-align onto the MC LSC PD as well as the MC REFL camera (~2 beam radii).

Beams are now close to their historical positions on Faraday and MC2. I then restored the PZT sliders to their April snapshot and the X-arm locked.

Steve - please recenter the iris which is on the periscope. It has been way off for a long time.

So it looks OK now. The main point here is that we can trust the MC OSEMs.

Afterwards I rebooted c1susvme1 and c1susvme2 because they were skewed.

 

Attachment 1: Untitled.png
1794 | Sun Jul 26 16:05:17 2009 | Alberto | Update | PSL | Aligning the mode cleaner

Quote:

I set the MC back to its good alignment (June 21st) using this procedure. The trend of the OSEM values over the last 40 days and 40 nights is attached.

Then I aligned the periscope to that beam. This took some serious periscope knob action. Without WFS, the transmission went to 2.7 V and the reflection down to 0.6V.

Then I re-aligned the MC_REFL path as usual. The beam was far enough off that I had to also re-align onto the MC LSC PD as well as the MC REFL camera (~2 beam radii).

Beams are now close to their historical positions on Faraday and MC2. I then restored the PZT sliders to their April snapshot and the X-arm locked.

Steve - please recenter the iris which is on the periscope. It has been way off for a long time.

So it looks OK now. The main point here is that we can trust the MC OSEMs.

Afterwards I rebooted c1susvme1 and c1susvme2 because they were skewed.

 

 It is really surprising that we have the MC OSEM data again, since up to two days ago the record looked corrupted (see the attachments in my entry 1774).

The reason I ended up severely misaligning the MC is exactly that there was no longer a reference position I could go back to, so I had to use the camera looking at the Faraday.

1795 | Mon Jul 27 09:34:07 2009 | steve | Summary | IOO | Aligning the mode cleaner

Quote:

I set the MC back to its good alignment (June 21st) using this procedure. The trend of the OSEM values over the last 40 days and 40 nights is attached.

Then I aligned the periscope to that beam. This took some serious periscope knob action. Without WFS, the transmission went to 2.7 V and the reflection down to 0.6V.

Then I re-aligned the MC_REFL path as usual. The beam was far enough off that I had to also re-align onto the MC LSC PD as well as the MC REFL camera (~2 beam radii).

Beams are now close to their historical positions on Faraday and MC2. I then restored the PZT sliders to their April snapshot and the X-arm locked.

Steve - please recenter the iris which is on the periscope. It has been way off for a long time.

So it looks OK now. The main point here is that we can trust the MC OSEMs.

Afterwards I rebooted c1susvme1 and c1susvme2 because they were skewed.

 

 I'm impressed by Rana's simple way to align the MC. The IFO arms are locked or flashing. A 20-day trend is attached.

 

Attachment 1: 20dtrend.jpg
10534 | Wed Sep 24 18:17:46 2014 | ericq | Update | General | Alignment Restored

Interferometer alignment is restored

ASS has been run on each arm; the recycling mirrors were aligned by overlapping beams on the AS camera.


Notes:

  • Mode cleaner alignment took some manual tweaking; it locked fine around 1k counts. Still no autolocker.
  • At this point, some light was visible on AS and REFL, which was a good sign regarding the TTs. 
  • Used green light to align the ETMs to support a green 00 mode. 
  • Ensured no recycling flashes were taking place on the AS camera and PRM face camera.
  • Arms were locked using AS55, with the other ITM misaligned, for better SNR than PO[XY]. ASS brought arm powers to ~0.06, which is about what we would expect from 1k MC2 trans instead of 16k.
    • ASS Yarm required debugging, see below.
    • ETMX was getting kicks again. The top D-sub connector on the flange near the ground, closer to the end table, was a little loose. We should fasten it more securely.
  • At this point, Michelson alignment was good. Brought in PRM to see PRC flashes; the REFL spot was happy. Brought in SRM to the AS spot. 
  • Saved all optic positions. 
  • Oplevs:
    • PRM's newly aligned state is falling off the QPD.
    • ETM and BS oplev centering are fine; the rest are less good, but still on their detectors.
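The ~0.06 arm power is consistent with simple linear scaling of the input light (a sanity check, assuming arm transmission is proportional to MC output power):

```python
# Sanity check on the quoted arm power: with the mode cleaner locked at
# ~1k counts of MC2 trans instead of the nominal ~16k, a normalized arm
# power of 1.0 should scale down linearly with the input light.
nominal_arm_power = 1.0                        # arm trans at full MC output
expected = nominal_arm_power * 1_000 / 16_000
print(expected)  # 0.0625, consistent with the observed ~0.06
```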

 


ASS-RFM issue:

ETMY was not getting its ASC pitch and yaw signals. C1SCY had a red RFM bit (although it still does now...)

I took a look at the c1rfm simulink diagram and found that C1RFM had an RFM block called C1:RFM-TST_ETMY_[PIT/YAW] and C1SCY had one called C1:TST-SCY_ETMY_[PIT/YAW]. 

It seems that C1TST was illegally being used in a real signal chain, and Jenne's recent work with c1tst broke it. I renamed the channels in C1RFM and C1SCY to C1:RFM-SCY_ETMY_[PIT/YAW], saved, compiled, installed, restarted. All was well.

There are still some blocks in SCY that have this TST naming going on, however. They have to do with ALS, it seems, but are SHMEM blocks, not RFM. Namely:

  • C1:TST-SCY_TRY
  • C1:TST-SCY_GLOBALPOS
  • C1:TST-SCY_AMP_CTRL

 

7291 | Tue Aug 28 00:16:19 2012 | jamie | Update | General | Alignment and vent prep

I think we (Jenne, Jamie) are going to leave things for the night to give ourselves more time to prep for the vent tomorrow.

We still need to put in the PSL output beam attenuator, and then redo the MC alignment.

The AS spot is also indicating that we're clipping somewhere (see below).  We need to align things in the vertex and then check the beam centering on the AP table.

So I think we're back on track and should be ready to vent by the end of the day tomorrow.

Attachment 1: as1.png
7673 | Tue Nov 6 16:38:37 2012 | jenne, jamie, ayaka, manasa | Update | Alignment | Alignment back under control again

We had a big alignment party early this morning, and things are back to looking good.  We have been very careful not to bump or touch tables any more than necessary.  Also, we have removed the apertures from the BS and PRM, so there are no more apertures currently left in the chambers (this is good, since we won't forget).

We started over again from the PZTs, using the PRM aperture and the freestanding aperture in front of PR2, to get the height of the beam correct.  We then moved PZTs to get the beam centered on BS, ITMY, ETMY.  We had to do a little poking of PR2 (and PR3?) to get pitch correct everywhere.

We then went to ETMX to check beam pointing, and used BS to steer the beam to the center of ETMX.  We checked that the beam was centered on ITMX.

We went through and ensured that ITMX, ITMY, PRM, SRM are all retroreflecting.  We see nice MICH fringes, and we see some fringes (although still not so nice...) when we bring PRM and SRM into alignment.

We checked the AS path (with only MICH aligned), and made sure we are centered on all of the mirrors.  This included steering a little bit on the mirrors on the OMC table, in yaw.  Initially, AS was coming out of the vacuum, but hitting the side of the black beam tube.  Now it gets nicely to the table.

For both AS and REFL, we made sure there is no clipping in the OMC chamber.

I recentered the beams for AS and REFL on their respective cameras.

IPPOS was centered on the QPD.  This involved moving the first out-of-vac steering mirror sideways a small amount, since the beam was hitting the edge of the mirror.  IPANG was aligned in-vac, and has been centered on the QPD.

Right now, Manasa, Jamie and Ayaka are doing some finishing-touches work: checking that POY isn't clipping on OM2 (the second steering mirror after the SRM), confirming that POX comes out of the chamber nicely, and confirming that POP is also still coming out (by putting the green laser pointer back on that table and making sure the green beam is co-aligned with the beam from PR2-PR3). Also on the list is checking the vertex oplevs. Steve and Manasa did some work with the ETM oplevs yesterday, but haven't had a chance to write about it yet.

3847 | Tue Nov 2 16:24:07 2010 | Koji | Update | Auxiliary locking | Alignment for the green in the X trans table

[Kiwamu Koji]

Today we found the green beam from the end was totally missing at the vertex.

- What we found was a very weak green beam at the end. Unhappy.

- We removed the PBS. We should obtain the beam for the fiber from the rejection of the (sort of) dichroic separator, although the available space is not large.

- The temperature controller was off. We turned it on again.

- We found everything was still misaligned. Aligned the crystal, aligned the Faraday for the green.

- Aligned the last two steering mirrors such that we hit the approximate center of the ETMX and the center of the ITMX.

- Made the fine alignment to have the green beam at the PSL table.

The green beam emerging from the chamber does not look round, as there is clipping at an in-vac steering mirror.
We will do a thorough realignment before closing the tank.

7573 | Thu Oct 18 03:57:20 2012 | Jenne | Update | Locking | Alignment is really bad??

The goal of the night was to lock the Y arm.  (Since that didn't happen, I moved on to fixing the WFS since they were hurting the MC)

I used the power supplies at 1Y4 to steer PZT2, and watched the face of the black glass baffle at ETMY.  (elog 7569 has notes re: camera work earlier)  When I am nearly at the end of the PZT range (+140V on the analog power supply, which I think is yaw), I can see the beam spot near the edge of the baffle's aperture.  Unfortunately, lower voltages move the spot away from the aperture, so I can't find the spot on the other side of the aperture and center it.  Since the max voltage for the PZTs is +150, I don't want to go too much farther.  I can't take a capture since the only working CCD I found is the one which won't talk to the Sensoray.  We need some more cameras....they're already on Steve's list.

When the spot is a little closer to the center of the aperture than the edge of the aperture (so the full +150V!!), I don't see any beam coming out of AS....no beam out of the chamber at all, not just no beam on the camera.  Crapstick.  This is not good.  I'm not really sure how we (I?) screwed up this thoroughly.  Sigh.  Whatever ghost REFL beam that Kiwamu and Koji found last week is still coming out of REFL.

Previous PZT voltages, before tonight's steering:  +32V on analog power supply, +14.7 on digital.  This is the place that the PRMI has been aligned to the past week or so.

Next, just to see what happens, I think I might install a camera looking at the back (output) side of the Faraday so that I can steer PRM until the reflected beam is going back through the Faraday.  Team K&K did this with viewers and mirrors, so it'll be more convenient to just have a camera.

Advice welcome.

7581 | Fri Oct 19 16:24:39 2012 | rana | Update | Locking | Alignment is really bad??

 

 VENT NOW and FIX ALIGNMENT!

726 | Wed Jul 23 18:42:18 2008 | Jenne | Update | PSL | Alignment of AOM
[Rana, Yoichi, Jenne]

Short Version: We are selecting the wrong diffracted beam on the 2nd pass through the AOM (we use the 2nd order rather than the first). This will be fixed tomorrow.

Long Version of AOM activities:

We checked the amount of power going to the AOM, through the AOM on the first pass, and then through the AOM on the second pass, and saw that we get about 60% through on the first pass, but only about 10% on the second pass. Before the AOM = 60 mW, after the first pass = 38 mW, after the second pass = 4 mW. Clearly the alignment through the AOM is really sketchy.

We translated the AOM so the beam goes through the center of the crystal while we align things. We see that we only get the first order beam, which is good. We twiddled the 4 adjust screws on the side of the AOM to maximize the power at the curved mirror for the 1st order of the first pass, which was 49.6mW. We then looked at the DC output of the Reference Cavity's Refl. PD, and saw 150mV on the 'scope. The power measured after the polarizing beam splitter and the next wave plate was still 4mW. Adjusting the curved mirror, we got up to 246mV on the 'scope for the Refl. PD, and 5.16mW after the PBS+Waveplate. We adjusted the 4 side screws of the AOM again, and the tip/tilt of the PBS, and got up to 288mV on the 'scope.

Then we looked at the beam that we keep after the 2nd pass through the AOM, and send to the reference cavity, and we find that we are keeping the SECOND order beam after the second pass. This is bad news. Yoichi and I will fix this in the morning. We checked that we were seeing a higher order beam by modulating the Offset of the MC servo board with a triangle wave, and watching the beam move on the camera. If we were choosing the correct beam, there would be no movement, because of the symmetry of the two passes through the AOM.
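The symmetry argument can be sketched with an idealized model (a hedged illustration, not a diffraction calculation; the acoustic velocity below is an assumed value): a single AOM pass into order m deflects the beam by an angle proportional to m and the drive frequency, and a proper double pass into the same order cancels the frequency-dependent pointing.

```python
# Idealized cat's-eye double-pass AOM model (an illustrative sketch; the
# acoustic velocity is an assumed value): a single pass into order m
# deflects the beam by m * lam * f / v. Returning through the SAME order
# cancels the frequency-dependent pointing, so the kept beam stays still
# as f is modulated; a different order on the second pass leaves a
# residual deflection, i.e. the spot motion seen on the camera when the
# MC board offset was modulated.

lam = 1064e-9   # Nd:YAG wavelength [m]
v = 4200.0      # assumed acoustic velocity in the AOM crystal [m/s]

def net_deflection(m1, m2, f):
    """Frequency-dependent part of the double-passed beam angle [rad]."""
    return (m2 - m1) * lam * f / v

f_lo, f_hi = 78e6, 82e6   # hypothetical drive-frequency sweep
# Correct choice (same order both passes): pointing does not move with f
assert net_deflection(1, 1, f_hi) == net_deflection(1, 1, f_lo) == 0.0
# Wrong choice (2nd order on the return pass): the spot walks with f
print(net_deflection(1, 2, f_hi) - net_deflection(1, 2, f_lo))
```

This is why modulating the drive makes the wrongly-selected beam visibly move while the correctly-selected beam would hold still.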

I took some sweet video of the beam spot moving, which I'll upload later, if I can figure out how to get the movies off my cell phone.
14211 | Sun Sep 23 17:38:48 2018 | yuki | Update | ASC | Alignment of AUX Y end green beam was recovered

[ Yuki, Koji, Gautam ]

The alignment of the AUX Y-end green beam was bad. With Koji and Gautam's advice, it was recovered on Friday. The maximum value of TRY was about 0.5.

14422 | Tue Jan 29 22:12:40 2019 | gautam | Update | SUS | Alignment prep

Since we may want to close up tomorrow, I did the following prep work:

  1. Cleaned up the Y-end suspension electronics setup, connected the Sat Box back to the flange
    • The OSEMs are just sitting on the table right now, so they are just seeing the fully open voltage
    • Post filter insertion, the four face OSEMs report ~3-4% lower open-voltage values compared to before, which is compatible with the transmission spec for the filters (T>95%)
    • The side OSEM is reporting ~10% lower - perhaps I just didn't put the filter on right, something to be looked at inside the chamber
  2. Suspension watchdog restoration
    • I'd shut down all the watchdogs during the Satellite box debacle
    • However, I left ITMY, ETMY and SRM tripped as these optics are EQ-stopped / don't have the OSEMs inserted.
  3. Checked IMC alignment
    • After some hand-alignment of the IMC, it locked; the transmission is ~1200 counts, which is what I remember it being
  4. Checked X-arm alignment
    • Strictly speaking, this has to be done after setting the Y-arm alignment as that dictates the input pointing of the IMC transmission to the IFO, but I decided to have a quick look nevertheless
    • Surprisingly, ITMX damping isn't working very well, it seems: the optic is clearly swinging around a lot, and the shadow sensor RMS voltage is ~10s of mV, whereas for all the other optics it is ~1mV.
    • I'll try the usual cable squishing voodoo

Rather than try and rush and close up tomorrow, I propose spending the day tomorrow cleaning the peripheral areas of the optic, suspension cage, and chamber. Then on Thursday morning, we can replace the Y-arm optics, try and recover the cavity alignment, and then aim for a Thursday afternoon pumpdown. The main motivation is to reduce the time the optics spend in air after F.C. peeling and going to vacuum.

1210 | Thu Jan 1 00:55:39 2009 | Yoichi | Update | ASC | Alignment scripts for Linux
A Happy New Year.

The dither alignment scripts did not run on Linux machines because tdscntr and ezcademod do not run
on Linux. Tobin wrote a Perl version of tdscntr, and I modified it for the 40m some time ago.
Today, I wrote a Perl version of ezcademod. The script is called ditherServo.pl and resides in /cvs/cds/caltech/scripts/general/.
It is not meant to be a drop-in replacement, so the command line syntax is different. Usage is explained in the comment of the script.

Using those two scripts, I wrote linux versions of the alignment scripts.
Now when you call, for example, alignX script, it calls alignX.linux or alignX.solaris depending on the OS of
your machine. alignX.solaris is the original script using the compiled ezcademod.
In principle, ezcademod is faster than my ditherServo.pl because my script suffers from the overhead of
calling tdsdmd on each iteration of the servo. But in practice ditherServo.pl is not that bad. At least as far as
alignment is concerned, the performance of the two commands is comparable in terms of the final arm power and the convergence.
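For reference, the dither-servo idea these scripts implement can be sketched in a few lines (a stand-alone toy simulation; the plant, gains, and dither amplitude are invented, and the real script demodulates with tdsdmd rather than in-line):

```python
import math

# Toy version of the dither alignment servo: wiggle the actuator
# sinusoidally, demodulate the transmitted power at the dither frequency
# (the job tdsdmd does in the real script), and integrate the error.

def power(x):
    """Toy transmitted power: peaks at the unknown optimum x = 0.7."""
    return math.exp(-(x - 0.7) ** 2)

def dither_servo(x0, amp=0.01, gain=5.0, cycles=300, n=64):
    x = x0
    for _ in range(cycles):
        # One dither cycle: sample the power over a small sinusoidal wiggle
        i_sum = 0.0
        for k in range(n):
            phase = 2.0 * math.pi * k / n
            i_sum += power(x + amp * math.sin(phase)) * math.sin(phase)
        err = 2.0 * i_sum / n      # lock-in estimate, ~ amp * dP/dx
        x += gain * err            # integrator step toward the peak
    return x

print(dither_servo(0.0))   # converges close to the optimum at 0.7
```

The demodulated error is proportional to the local slope of the power curve, so it crosses zero at the maximum; the per-cycle demodulation is exactly the overhead the entry mentions for calling tdsdmd on each iteration.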

Now the alignXXX commands from the IFO Configure MEDM screen work for the X-arm, Y-arm, PRM and DRM. I did not write a script for the Michelson, since
it is optional.
I confirmed that "Align Full IFO" works correctly.
7534 | Fri Oct 12 01:56:26 2012 | kiwamu | Update | General | Alignment situation of interferometer

[Koji / Kiwamu]

 We have realigned the interferometer except the incident beam.

 The REFL beam is not coming out from the chamber and is likely hitting the holder of a mirror in the OMC chamber.

So we need to open the chamber again before trying to lock the recycled interferometers at some point.

 

--- What we did

  •  Ran the MC decenter script to check the spot positions.
    • MC3 YAW gave a -5 mm offset, with an error of about the same size.
    • We didn't believe this dither measurement.
  •  Checked the IP-POS and IP-ANG trends.
    • The trends looked stable over 10 days (with a 24 hours drift).
    • So we decided not to touch the MC suspensions.
  • Tried aligning PRM
  • Found that the beam on the REFL path was a fake beam
    • The position of this beam was not sensitive to the alignment of PRM or ITMs.
    • So certainly this is not the REFL beam.
    • The power of this unknown beam is about 7.8 mW
  • Let the PRM reflection beam go through the Faraday
    • This was done by looking at the hole of the Faraday through a viewport of the IOO chamber with an IR viewer.
  • Aligned the rest of the interferometer (not including ETMs)
    • We used the aligned PRM as the alignment reference
    • Aligned ITMY such that the ITMY reflection overlaps with the PRM beam at the AS port.
    • Aligned the BS and SRM such that their associated beam overlap at the AS port
    • Aligned ITMX in the same way.
    • Note that the beam axis defined by the BS, ITMX and SRM was not determined by this process, so we need to align it using the Y-arm as a reference at some point.
    • After the alignment, the beam at the AS port still doesn't look clipped, which is good.

 

---- things to be fixed

   - Align the steering mirrors in the Faraday rejected-beam path (requires a vent)

   - SRM oplev (this is out of the QPD range)

   - ITMX oplev (out of the range too)

12500 | Fri Sep 16 19:48:52 2016 | Lydia | Update | General | Alignment status

Today the Y arm was locking fine. The alignment had drifted somewhat, so I ran the dither and TRY returned to ~0.8. However, the mode cleaner has been somewhat unstable. It locked many times, but usually for only a few minutes. Maybe the alignment or autolocker needs to be adjusted, but I didn't change anything other than playing with the gain sliders (which didn't seem to make it either better or worse).

ITMX is still stuck.

12501 | Sat Sep 17 02:00:23 2016 | rana | Update | SUS | Alignment status

All is not lost. I've stuck and unstuck optics around a half dozen times. Can you please post the zoomed-in time series (not trend) from around the time it got stuck? Sometimes the bias sliders have to be toggled to make the bias correct. From the OSEM trend it seems like it got a large Yaw bias. You may also try reseating the satellite box cables and the cable from the coil driver to the cable breakout board in the back of the rack.

12502 | Sat Sep 17 16:51:01 2016 | Lydia | Update | SUS | Alignment status

Here are the timeseries plots. I've zoomed in to right after the problem; did you want before as well? We pretty much know what happened: c1susaux was restarted from the crate while the damping was on, so as soon as the machine came back online the damping loops sent a huge signal to the coils. (Also, it seems to be down again. Now we know what to do first before keying the crate.) It seems like both right-side magnets are stuck, and this could probably be fixed by moving the yaw slider. Steve advised that we wait for an experienced hand to do so. 

Quote:

All is not lost. I've stuck and unstuck optics around a half dozen times. Can you please post the zoomed-in time series (not trend) from around the time it got stuck? Sometimes the bias sliders have to be toggled to make the bias correct. From the OSEM trend it seems like it got a large Yaw bias. You may also try reseating the satellite box cables and the cable from the coil driver to the cable breakout board in the back of the rack.

 

Attachment 1: Screenshot_from_2016-09-17_16-45-00.png
12503 | Sun Sep 18 16:18:05 2016 | rana | Update | SUS | Alignment status

susaux is responsible for turning on/off the inputs to the coil driver, but not the actual damping loops. So rebooting susaux only does the same as turning the watchdogs on/off, and it shouldn't be a big issue.

Both before and after would be good. We want to see how much bias and how much voltage from the front ends were applied. c1susaux could have put in a huge bias, but NOT a huge force from the damping loops. But I've never seen it put in a huge bias, and there's no way to prevent this anyway without disconnecting cables.

I think it's much more likely that it's a little stuck due to static charge on the rubber EQ stop tips, and that we can shake it loose with the damping loops.

12504 | Mon Sep 19 11:11:43 2016 | ericq | Update | SUS | Alignment status

[ericq, Steve]

ITMX is free; OSEM signals are all roughly centered. 


This was accomplished by rocking the static alignment (i.e. slow controls) pitch and yaw offsets until the optic broke free. This took a few volts back and forth. At that point, I tried to find a spot where the optic seemed to swing freely, and hopefully have signals in all 5 OSEMs. It seemed to be free sometimes, but mostly settled into two different stationary states. I realized that it was becoming torqued enough in pitch to be leaning on the top-front or top-back EQ stops. So, I slowly adjusted the pitch from one of these states until it seemed to be swinging a bit on the camera and three OSEM signals were showing real motion. Then, I slowly adjusted the pitch and yaw alignments to get all OSEM signals roughly centered at half of their max voltage.
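The rocking step can be caricatured with a stiction model (the 2.3 V threshold and 0.5 V step below are invented; a real stuck optic is not this simple): alternate the bias offset back and forth with a slowly growing swing until the applied push exceeds the sticking threshold.

```python
# Toy stiction model of the rocking recipe (threshold and step are
# invented): alternate the slow-controls offset back and forth with a
# slowly growing swing; the optic breaks free once the swing exceeds the
# sticking threshold.

STICTION_V = 2.3   # hypothetical offset swing [V] needed to break free

def rock_until_free(step=0.5, max_swing=10.0):
    """Return the swing amplitude [V] at which the optic broke free, or None."""
    swing = 0.0
    while swing < max_swing:
        swing += step                     # "a few volts back and forth"
        for offset in (+swing, -swing):   # rock the bias both ways
            if abs(offset) > STICTION_V:
                return swing
    return None

print(rock_until_free())   # 2.5: the first swing exceeding the threshold
```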

9765 | Mon Mar 31 13:15:55 2014 | manasa | Summary | LSC | Alignment update

Quote:

While I'm looking at the PRM ASC servo model, I tried to use the current servo filters for the ASC
as Manasa aligned the POP PDs and QPD yesterday. (BTW, I don't find any elog about it)

 Guilty!!

POP path

The POP PD was showing only ~200 counts, which is very low compared to what we recollect from earlier PRMI locks (~400 counts). The POP ASC QPD was also not well-aligned.
While holding PRMI lock on REFL55, I aligned the POP path to its PD (maximizing POP DC counts) and QPD (centering in pitch and yaw).

X and Y green

The X green totally lost its pointing because of the misaligned PZTs from last week's power failure. This was recovered.
Y arm green alignment was also recovered.

9593 | Mon Feb 3 23:31:33 2014 | Manasa | Update | General | Alignment update / Y arm locked

[EricQ, Manasa, Koji]

We measured the spot positions on the MC mirrors and redid the MC alignment by only touching the MC mirror sliders. Now all the MC spots are <1mm away from the center.

We opened the ITMY and ETMY chambers to align the green to the arm. The green was already centered on the ITMY. We went back and forth to recenter the green on the ETMY and ITMY (this was done by moving the test masses in pitch and yaw only, without touching the green pointing) until we saw green flashes in higher-order modes. At this point we found the IR was also centered on the ETMY and a little low in pitch on ITMY, but we could see IR flashes on the ITMYF camera. We put back the light doors and did the rest of the alignment using the pitch and yaw sliders.

When the flashes were as high as 0.05, we started seeing small lock stretches. Playing with the gain and tweaking the alignment, we could lock the Y arm on TEM00 for IR and also run the ASS. The green also locked to the arm in the 00 mode at this point. We aligned the BS to get a good AS view on the camera. ITMX was tweaked to get a good Michelson.

7679 | Wed Nov 7 09:09:02 2012 | Steve | Update | Alignment | Alignment-
PRM and SRM OSEM LL are at 1.5 V; are they misaligned?
Attachment 1: 9amNov7w.png
7675 | Tue Nov 6 17:22:51 2012 | Manasa, Jamie | Update | Alignment | Alignment- POY and oplevs

Right now, Manasa, Jamie and Ayaka are doing some finishing-touches work: checking that POY isn't clipping on OM2 (the second steering mirror after the SRM), confirming that POX comes out of the chamber nicely, and confirming that POP is also still coming out (by putting the green laser pointer back on that table and making sure the green beam is co-aligned with the beam from PR2-PR3). Also on the list is checking the vertex oplevs. Steve and Manasa did some work with the ETM oplevs yesterday, but haven't had a chance to write about it yet.

We were trying to check the POY alignment using the green laser in the reverse direction (outside vacuum to in-vac). The green laser was installed along with a steering mirror to steer it into the ITMY chamber, pointing at POY.

We found that the green laser did follow the path back into the chamber, but it was clipping at the edge of POY. To align it to the center of POY (i.e. to get a narrower angle of incidence at the ITMY), the green laser had to be steered in at a wider angle of incidence from the table. This is currently limited by the oplev steering optics on the table. We were not able to figure out the oplev path on the table perfectly, but we think we can find a way to move the oplev steering mirrors that are now restricting the POY alignment.

The oplev optics will be moved once we confirm with Jenne or Steve.

 

[Steve, Manasa]

We aligned the ETM oplevs yesterday. We confirmed that the oplev beams hit the ETMs. We checked the centering of the returning beams on the oplev PDs, and the QPD sums matched the values they had before the vent.

Sadly, they have to be checked once again tomorrow because the alignment was messed up all over again yesterday.

7677 | Wed Nov 7 00:10:38 2012 | Jenne | Update | Alignment | Alignment- POY and oplevs. photos.
Can we have a drawing of what you did, how you confirmed your green alignment is the same as the IR (I think you had a good idea about the beam going to the BS... can you please write it down in detail?), and where you think the beam is clipping? Cartoon-level, 20 to 30 minutes of work, no more. Enough to be informative, but we have other work that needs doing if we're going to put on doors Thursday morning (or tomorrow afternoon?).

The ETMs weren't moved today, just the beam going to the ETMs, so the oplevs there shouldn't need adjusting. Anyhow, the oplevs I'm more worried about are the ones which include in-vac optics at the corner, which are still on the to-do list.

So, tomorrow Steve + someone can check the vertex oplevs, while I + someone finish looking briefly at POX and POP, and at POY in more detail.

If at all possible, no clamping / unclamping of anything on the in-vac tables. Let's try to use things as they are if the beams are getting to where they need to go. Particularly for the oplevs, I'd rather have a little bit of movement of optics on the out-of-vac tables than any changes happening inside.

I made a script that averages together many photos taken with the capture script that Rana found, which takes 50 pictures, one after another. If I average the pictures, I don't see a spot. If I add the photos together, even after subtracting away a no-beam shot, the picture is saturated and completely white. I'm trying to let ideas percolate in my head for how to get a useful spot.
  7678   Wed Nov 7 07:11:10 2012 ranaUpdateAlignmentAlignment- POY and oplevs. photos.

The way to usually do image subtraction is to:

1) Turn off the room lights.

2) Take 500 images with no beam.

3) Use Mean averaging to get a reference image.

4) Same with the beam on.

5) Subtract the two averaged images. If that doesn't work, I guess it's best to just take an image of the green beam on the mirrors using the new DSLR.
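The averaging-and-subtraction recipe above can be sketched in Python (a minimal sketch with numpy and synthetic frames; the actual capture script and image format at the 40m may differ):

```python
import numpy as np

def mean_reference(frames):
    """Mean-average a stack of frames (list of H x W arrays) into one reference image."""
    return np.mean(np.asarray(frames, dtype=float), axis=0)

def beam_spot_image(beam_frames, dark_frames):
    """Subtract the averaged dark (no-beam) reference from the averaged beam image.

    Averaging before subtracting keeps the result from saturating,
    unlike summing the raw frames.
    """
    beam_avg = mean_reference(beam_frames)
    dark_avg = mean_reference(dark_frames)
    return np.clip(beam_avg - dark_avg, 0, None)

# toy example: a faint spot buried in camera noise becomes visible after averaging
rng = np.random.default_rng(0)
spot = np.zeros((64, 64))
spot[30:34, 30:34] = 5.0  # weak beam spot, well below the noise level of one frame
dark = [rng.normal(100, 10, (64, 64)) for _ in range(500)]
beam = [rng.normal(100, 10, (64, 64)) + spot for _ in range(500)]
diff = beam_spot_image(beam, dark)
print(diff[31, 31], diff[0, 0])  # spot pixel stands out over the background
```

With 500 frames the noise on each averaged pixel drops by a factor of sqrt(500) ≈ 22, which is why the 5-count spot survives the subtraction.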

  11210   Thu Apr 9 02:58:26 2015 ericqUpdateLSCAll 1F, all whitened

blarg. Chrome ate my elog. 

112607010 is the start of five minutes on all whitened 1F PDs. REFL55 has more low frequency noise than REFL165; I think we may need more CARM suppression (i.e. we need to think about the required gain). This is also supported by the difference in shape of these two histograms, taken at the same time in 3f full lock. The CARM fluctuations seem to spread REFL55 out much more.  

I made some filters and scripts to do DC coupling of the ITM oplevs. This makes maintaining stable alignment in full lock much easier. 

I had a few 15+ minute locks on 3f, that only broke because I did something to break it.  

Here's one of the few "quick" locklosses I had. I think it really is CARM/AO action, since the IMC sees it right away, but I don't see anything ringing up; just a spontaneous freakout. 

Attachment 1: quickLockLoss.png
quickLockLoss.png
Attachment 2: 55_1.png
55_1.png
Attachment 3: 55_2.png
55_2.png
  5350   Tue Sep 6 22:51:53 2011 ranaSummaryCamerasAll Camera setups need upgrading

I just tried to adjust the ETMY camera and it's not very user friendly = NEEDS FIXING.

* Camera view is upside down.

* Camera lens is contacting the lexan viewport cover; this means the focus cannot be adjusted without misaligning the camera.

* There's no strain relief of the camera cables at the can. Needs a rubber cable grommet too.

* There's a BNC "T" in the cable line.

Probably similar issues with some of the other setups; they've had aluminum foil covers for too long. We'll have a camera committee meeting tomorrow to see how to proceed.

  5358   Wed Sep 7 13:28:25 2011 steveSummaryCamerasAll Camera setups need upgrading

Quote:

I just tried to adjust the ETMY camera and it's not very user friendly = NEEDS FIXING.

* Camera view is upside down.

* Camera lens is contacting the lexan viewport cover; this means the focus cannot be adjusted without misaligning the camera.

* There's no strain relief of the camera cables at the can. Needs a rubber cable grommet too.

* There's a BNC "T" in the cable line.

Probably similar issues with some of the other setups; they've had aluminum foil covers for too long. We'll have a camera committee meeting tomorrow to see how to proceed.

 ITMY has been upgraded here. I have the new lenses on hand to do the others when it fits into the schedule.

  10892   Tue Jan 13 04:57:26 2015 ericqUpdateCDSAll FE diagnostics back to green

I was looking into the status of IPC communications in our realtime network, as Chris suggested that there may be more phase missing than I thought. However, the recent continual red indicators on a few of the models made it hard to tell whether the problems were real or not. Thus, I set out to fix what I could, and have achieved full green lights on the CDS screen. 

This required:

  • Fixing the BLRMS block, as was found to be a problem in ELOG 9911 (There were just some hanging lines not doing anything)
  • Cleaning up one-sided RFM and SHMEM communications in C1SCY, C1TST, C1RFM and C1OAF

The frontend models have been svn'd. The BLRMS block has not, since it's in a common CDS space, and I am not sure what the status of its use at the sites is...

  13243   Tue Aug 22 18:36:46 2017 gautamUpdateComputersAll FE models compiled against RCG3.4

After getting the go ahead from Jamie, I recompiled all the FE models against the same version of RCG that we tested on the c1iscex models.

To do so:

  • I did rtcds make and rtcds install for all the models.
  • Then I ssh-ed into the FEs and did rtcds stop all, followed by rtcds start <model> in the order they are listed on the CDS overview MEDM screen (top to bottom).
  • During the compilation process (i.e. rtcds make), for some of the models, I got some compilation warnings. I believe these are related to models that have custom C code blocks in them. Jamie tells me that it is okay to ignore these warnings and that they will be fixed at some point.
  • c1lsc FE crashed when I ran rtcds stop all - had to go and do a manual reboot.
  • Doing so took down the models on c1sus and c1ioo that were running - but these FEs themselves did not have to be rebooted.
  • Once c1lsc came back up, I restarted all the models on the vertex FEs. They all came back online fine.
  • Then I ssh-ed into FB1, and restarted the daqd processes - but c1lsc and c1ioo CDS indicators were still red.
  • Looks like the mx_stream processes weren't started automatically on these two machines. Reasons unknown. Earlier today, the same was observed for c1iscey.
  • I manually restarted the mx_stream processes, at which point all CDS indicator lights became green (see Attachment #1).

IFO alignment needs to be redone, but at least we now have an (admittedly roundabout) way of getting testpoints. Did a quick check for "nan-s" on the ASC screen, saw none. So I am re-enabling watchdogs for all optics.

GV 23 August 9am: Last night, I re-aligned the TMs for single arm locks. Before the model restarts, I had saved the good alignment on the EPICS sliders, but the gain of x3 on the coil driver filter banks has to be manually turned on at the moment (i.e. the safe.snap file has them off). ALS noise looked good for both arms, so just for fun, I tried transitioning control of both arms to ALS (in the CARM/DARM basis as we do when we lock DRFPMI, using the Transition_IR_ALS.py script), and was successful.

Quote:

[jamie, gautam]

We tried to implement the fix that Rolf suggested in order to solve (perhaps among other things) the inability of some utilities like dataviewer to open testpoints. The problem isn't wholly solved yet - we can access actual testpoint data (not just zeros, as was the case) using DTT, and dataviewer works if DTT is used to open a testpoint first, but DV on its own can't open testpoints.

Here is what was done (Jamie will correct me if I am mistaken).

  1. Jamie checked out branch 3.4 of the RCG from the SVN.
  2. Jamie recompiled all the models on c1iscex against this version of RCG.
  3. I shutdown ETMX watchdog, then ran rtcds stop all on c1iscex to stop all the models, and then restarted them using rtcds start <model> in the order c1x01, c1scx and c1asx. 
  4. Models came back up cleanly. I then restarted the daqd_dc process on FB1. At this point all indicators on the CDS overview screen were green.
  5. Tried getting testpoint data with DTT and DV for ETMX Oplev Pitch and Yaw IN1 testpoints. Conclusion as above.

So while we are in a better state now, the problem isn't fully solved. 

Comment: seems like there is an in-built timeout for testpoints opened with DTT - if the measurement is inactive for some time (unsure how much exactly but something like 5mins), the testpoint is automatically closed.

 

Attachment 1: CDS_Aug22.png
CDS_Aug22.png
  13103   Mon Jul 10 09:49:02 2017 gautamUpdateGeneralAll FEs down

Attachment #1: State of CDS overview screen as of 9.30AM today morning when I came in.

Looks like there may have been a power glitch, although judging by the wall StripTool traces, if there was one, it happened more than 8 hours ago. FB is down atm so can't trend to find out when this happened.

All FEs and FB are unreachable from the control room workstations, but Megatron, Optimus and Chiara are all ssh-able. The latter reports an uptime of 704 days, so all seems okay with its UPS. Slow machines are all responding to ping as well as telnet.

Recovery process to begin now. Hopefully it isn't as complicated as the most recent effort [FAMOUS LAST WORDS]

Attachment 1: CDS_down_10Jul2017.png
CDS_down_10Jul2017.png
  13104   Mon Jul 10 11:20:20 2017 gautamUpdateGeneralAll FEs down

I am unable to get FB to reboot to a working state. A hard reboot throws it into a loop of "Media Test Failure. Check Cable".

Jetstor RAID array is complaining about some power issues, the LCD display on the front reads "H/W Monitor", with the lower line cycling through "Power#1 Failed", "Power#2 Failed", and "UPS error". Going to 192.168.113.119 on a martian machine browser and looking at the "Hardware information" confirms that System Power #1 and #2 are "Failed", and that the UPS status is "AC power loss". So far I've been unable to find anything on the elog about how to handle this problem, I'll keep looking.


In fact, looks like this sort of problem has happened in the past. It seems one power supply failed back then, but now somehow two are down (but there is a third which is why the unit functions at all). The linked elog thread strongly advises against any sort of power cycling. 

  13106   Mon Jul 10 17:46:26 2017 gautamUpdateGeneralAll FEs down

A bit more digging on the diagnostics page of the RAID array reveals that the two power supplies actually failed on Jun 2 2017 at 10:21:00. Not surprisingly, this was the date and approximate time of the last major power glitch we experienced. Apart from this, the only other error listed on the diagnostics page is "Reading Error" on "IDE CHANNEL 2", but these errors precede the power supply failure.

Perhaps the power supplies are not really damaged, and the unit is just in some funky state since the power glitch. After discussing with Jamie, I think it should be safe to power cycle the Jetstor RAID array once the FB machine has been powered down. Perhaps this will bring back one/both of the faulty power supplies. If not, we may have to get new ones. 

The problem with FB may or may not be related to the state of the Jetstor RAID array. It is unclear to me at what point during the boot process we are getting stuck. It may be that because the RAID disk is in some funky state, the boot process is getting disrupted.

Quote:

I am unable to get FB to reboot to a working state. A hard reboot throws it into a loop of "Media Test Failure. Check Cable".

Jetstor RAID array is complaining about some power issues, the LCD display on the front reads "H/W Monitor", with the lower line cycling through "Power#1 Failed", "Power#2 Failed", and "UPS error". Going to 192.168.113.119 on a martian machine browser and looking at the "Hardware information" confirms that System Power #1 and #2 are "Failed", and that the UPS status is "AC power loss". So far I've been unable to find anything on the elog about how to handle this problem, I'll keep looking.


In fact, looks like this sort of problem has happened in the past. It seems one power supply failed back then, but now somehow two are down (but there is a third which is why the unit functions at all). The linked elog thread strongly advises against any sort of power cycling. 

 

  13107   Mon Jul 10 19:15:21 2017 gautamUpdateGeneralAll FEs down

The Jetstor RAID array is back in its nominal state now, according to the web diagnostics page. I did the following:

  1. Powered down the FB machine - to avoid messing around with the RAID array while the disks are potentially mounted.
  2. Turned off all power switches on the back of the Jetstor unit - there were 4 of them, all of them were toggled to the "0" position.
  3. Disconnected all power cords from the back of the Jetstor unit - there were 3 of them.
  4. Reconnected the power cords, turned the power switches back on to their "1" position.

After a couple of minutes, the front LCD display seemed to indicate that it had finished running some internal checks. The messages indicating failure of power units, which was previously constantly displayed on the front LCD panel, was no longer seen. Going back to the control room and checking the web diagnostics page, everything seemed back to normal.

However, FB still will not boot up. The error is identical to that discussed in this thread by Intel. It seems FB is having trouble finding its boot disk. I was under the impression that only the FE machines were diskless, and that FB had its own local boot disk - in which case I don't know why this error is showing up. According to the linked thread, it could also be a problem with the network card/cable, but I saw both lights on the network switch port FB is connected to turn green when I powered the machine on, so this seems unlikely. I tried following the steps listed in the linked thread but got nowhere, and I don't know enough about how FB is supposed to boot up, so I am leaving things in this state now. 

  13108   Mon Jul 10 21:03:48 2017 jamieUpdateGeneralAll FEs down

 

Quote:
 

However, FB still will not boot up. The error is identical to that discussed in this thread by Intel. It seems FB is having trouble finding its boot disk. I was under the impression that only the FE machines were diskless, and that FB had its own local boot disk - in which case I don't know why this error is showing up. According to the linked thread, it could also be a problem with the network card/cable, but I saw both lights on the network switch port FB is connected to turn green when I powered the machine on, so this seems unlikely. I tried following the steps listed in the linked thread but got nowhere, and I don't know enough about how FB is supposed to boot up, so I am leaving things in this state now. 

It's possible the fb bios got into a weird state.  fb definitely has its own local boot disk (*not* diskless boot).  Try to get to the BIOS during boot and make sure it's pointing to its local disk to boot from.

If that's not the problem, then it's also possible that fb's boot disk got fried in the power glitch.  That would suck, since we'd have to rebuild the disk.  If it does seem to be a problem with the boot disk then we can do some invasive poking to see if we can figure out what's up with the disk before rebuilding.

  13110   Mon Jul 10 22:07:35 2017 KojiUpdateGeneralAll FEs down

I think this is a boot disk failure. I put the spare 2.5 inch disk into slot #1. The OK indicator of the disk became solid green almost immediately, and it was recognized on the BIOS in the boot section as "Hard Disk". On the contrary, the original disk in slot #0 has its "OK" indicator flashing continuously, and the BIOS can't find the hard disk.

 

  13111   Tue Jul 11 15:03:55 2017 gautamUpdateGeneralAll FEs down

Jamie suggested verifying that the problem is indeed with the disk and not with the controller, so I tried switching the original boot disk to Slot #1 (from Slot #0 where it normally resides), but the same problem persists - the green "OK" indicator light keeps flashing even in Slot #1, which was verified to be a working slot using the spare 2.5 inch disk. So I think it is reasonable to conclude that the problem is with the boot disk itself.

The disk is a Seagate Savvio 10K.2 146GB disk. The datasheet doesn't explicitly suggest any recovery options. But Table 24 on page 54 suggests that a blinking LED means that the disk is "spinning up or spinning down". Is this indicative of any particular failure mode? Any ideas on how to go about recovery? Is it even possible to access the data on the disk if it doesn't spin up to the nominal operating speed?

Quote:

I think this is a boot disk failure. I put the spare 2.5 inch disk into slot #1. The OK indicator of the disk became solid green almost immediately, and it was recognized on the BIOS in the boot section as "Hard Disk". On the contrary, the original disk in slot #0 has its "OK" indicator flashing continuously, and the BIOS can't find the hard disk.

 

 

  13112   Tue Jul 11 15:12:57 2017 KojiUpdateGeneralAll FEs down

If we have a SATA/USB adapter, we can test whether the disk is still responding. If it is still responding, we can probably salvage the files.
Chiara used to have a 2.5" disk connected via USB3. As far as I know, we have remote and local backup scripts running (TBC), so we can borrow the USB/SATA interface from Chiara.

If the disk is completely gone, we need to rebuild the disk according to Jamie, and I don't know how to do it. (Don't we have any spare copy?)

  13113   Wed Jul 12 10:21:07 2017 gautamUpdateGeneralAll FEs down

Seems like the connector on this particular disk is of the SAS variety (and not SATA). I'll ask Steve to order a SAS to USB cable. In the meantime I'm going to see if the people at Downs have something we can borrow.

Quote:

If we have a SATA/USB adapter, we can test whether the disk is still responding. If it is still responding, we can probably salvage the files.
Chiara used to have a 2.5" disk connected via USB3. As far as I know, we have remote and local backup scripts running (TBC), so we can borrow the USB/SATA interface from Chiara.

If the disk is completely gone, we need to rebuild the disk according to Jamie, and I don't know how to do it. (Don't we have any spare copy?)

 

  13114   Wed Jul 12 14:46:09 2017 gautamUpdateGeneralAll FEs down

I couldn't find an external docking setup for this SAS disk, seems like we need an actual controller in order to interface with it. Mike Pedraza in Downs had such a unit, so I took the disk over to him, but he wasn't able to interface with it in any way that allows us to get the data out. He wants to try switching out the logic board, for which we need an identical disk. We have only one such spare at the 40m that I could locate, but it is not clear to me whether this has any important data on it or not. It has "hda RTLinux" written on its front panel with a sharpie. Mike thinks we can back this up to another disk before trying anything, but he is going to try locating a spare in Downs first. If he is unsuccessful, I will take the spare from the 40m to him tomorrow, first to be backed up, and then for swapping out the logic board.

Chatting with Jamie and Koji, it looks like the options we have are:

  1. Get the data from the old disk, copy it to a working one, and try and revert the original FB machine to its last working state. This assumes we can somehow transfer all the data from the old disk to a working one.
  2. Prepare a fresh boot disk, load the old FB daqd code (which is backed up on Chiara) onto it, and try and get that working. But Jamie isn't very optimistic of this working, because of possible conflicts between the code and any current OS we would install.
  3. Get FB1 working. Jamie is looking into this right now.
Quote:

Seems like the connector on this particular disk is of the SAS variety (and not SATA). I'll ask Steve to order a SAS to USB cable. In the meantime I'm going to see if the people at Downs have something we can borrow.

 

 

  13115   Wed Jul 12 14:52:32 2017 jamieUpdateGeneralAll FEs down

I just want to mention that the situation is actually much more dire than we originally thought.  The diskless NFS root filesystem for all the front-ends was on that fb disk.  If we can't recover it we'll have to rebuild the front end OS as well.

As of right now none of the front ends are accessible, since obviously their root filesystem has disappeared.

  5281   Tue Aug 23 01:05:40 2011 JenneUpdateTreasureAll Hands on Deck, 9am!

We will begin drag wiping and putting on doors at 9am tomorrow (Tuesday). 

We need to get started on time so that we can finish at least the 4 test masses before lunch (if possible). 

We will have a ~2 hour break for LIGOX + Valera's talk.

 

I propose the following teams:

(Team 1: 2 people, one clean, one dirty) Open light doors, clamp EQ stops, move optic close to door.  ETMX, ITMX, ITMY, ETMY

(Team 2: K&J) Drag wipe optic, and put back against rails. Follow Team 1 around.

(Team 3 = Team 1, redux: 2 people, one clean, one dirty) Put earthquake stops at correct 2mm distance. Follow Team 2 around.

(Team 4: 3 people, Steve + 2) Close doors.  Follow Team 3 around.

Later, we'll do BS door and Access Connector.  BS, SRM, PRM already have the EQ stops at proper distances.

 

  4934   Fri Jul 1 20:26:29 2011 ranaSummarySUSAll SUS Peaks have been fit

         MC1    MC2    MC3    ETMX   ETMY   ITMX   ITMY   PRM    SRM    BS     mean   std
Pitch   0.671  0.747  0.762  0.909  0.859  0.513  0.601  0.610  0.566  0.747  0.698  0.129
Yaw     0.807  0.819  0.846  0.828  0.894  0.832  0.856  0.832  0.808  0.792  0.831  0.029
Pos     0.968  0.970  0.980  1.038  0.983  0.967  0.988  0.999  0.962  0.958  0.981  0.024
Side    0.995  0.993  0.971  0.951  1.016  0.986  1.004  0.993  0.973  0.995  0.988  0.019

There is a large amount of variation in the frequencies, even though the suspensions are nominally all the same. I leave it to the suspension makers to ponder and explain.
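As a sanity check, the mean and std columns can be reproduced from the fitted frequencies, e.g. for the pitch row (a quick numpy sketch; the std column corresponds to the sample standard deviation, ddof=1):

```python
import numpy as np

# fitted pitch resonance frequencies [Hz] from the table above,
# in the order MC1, MC2, MC3, ETMX, ETMY, ITMX, ITMY, PRM, SRM, BS
pitch = np.array([0.671, 0.747, 0.762, 0.909, 0.859,
                  0.513, 0.601, 0.610, 0.566, 0.747])

mean = pitch.mean()
std = pitch.std(ddof=1)  # sample standard deviation, matching the table's std column
print(f"mean = {mean:.3f}, std = {std:.3f}")  # close to the table's 0.698 / 0.129
```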

Attachment 1: Screen_shot_2011-07-01_at_8.17.22_PM.png
Screen_shot_2011-07-01_at_8.17.22_PM.png
  7132   Thu Aug 9 04:26:51 2012 SashaUpdateSimulationsAll c1spx screens working

As the subject states, all screens are working (including the noise screens), so we can keep track of everything in our model! :D I figured out that I was just getting nonsense (i.e. white noise) out of the sim plant because the filter matrix (TM_RESP) that controls the response of the optics to a force (i.e. outputs the position of an optic DOF given a force on that DOF and a force on the suspension point) was empty, so it was just passing on whatever values it got, based on the coefficients of the matrix, without doing anything to them. In effect, all we had was a feedback loop without any mechanics.

I've been working on getting the mechanics of the suspensions into a filter/transfer function form; I added something resembling that into foton and turned the resulting filter on using the shiny new MEDM screens. However, the transfer functions are a tad wonky (particularly the one for pitch), so I shall continue working on them. It had a dramatic effect on the power spectrum (i.e. it looks a lot more like it should), but it still looks weird.
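The kind of suspension mechanics being loaded into TM_RESP can be sketched as a single damped-pendulum transfer function, x/F = 1/(m(s² + ω₀s/Q + ω₀²)) (a toy sketch with made-up parameters, not the actual 40m sim plant filters):

```python
import numpy as np

def pendulum_response(f, f0=0.75, Q=5.0, m=0.25):
    """Displacement-per-force response of a damped pendulum,
    x/F = 1 / (m * (s^2 + w0*s/Q + w0^2)), evaluated at frequencies f [Hz].
    f0, Q, and m are illustrative values, not measured 40m parameters."""
    w0 = 2 * np.pi * f0
    s = 2j * np.pi * np.asarray(f, dtype=float)
    return 1.0 / (m * (s**2 + w0 * s / Q + w0**2))

f = np.logspace(-2, 2, 401)
H = pendulum_response(f)

# sanity checks on the shape: flat response below resonance,
# a peak near f0, and 1/f^2 falloff well above resonance
dc_gain = float(abs(pendulum_response(0.0)))
peak_f = f[np.argmax(np.abs(H))]
print(f"DC gain = {dc_gain:.3f} m/N, peak near {peak_f:.2f} Hz")
```

A transfer function of this shape is what one would encode as a foton filter (a complex pole pair at f0 with quality factor Q) to replace the empty TM_RESP entries.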

Still haven't found the e-log Jamie and Rana referred me to, concerning the injection of seismic noise into the simulation. I'm not terribly worried though, and will continue looking in the morning. Worst case scenario, I'll use the filters Masha made at the beginning of the summer.

Masha and I ate some of Jamie's popcorn. It was good.

  7133   Thu Aug 9 07:24:58 2012 SashaUpdateSimulationsAll c1spx screens working

Quote:

As the subject states, all screens are working (including the noise screens), so we can keep track of everything in our model! :D I figured out that I was just getting nonsense (i.e. white noise) out of the sim plant because the filter matrix (TM_RESP) that controls the response of the optics to a force (i.e. outputs the position of an optic DOF given a force on that DOF and a force on the suspension point) was empty, so it was just passing on whatever values it got, based on the coefficients of the matrix, without doing anything to them. In effect, all we had was a feedback loop without any mechanics.

I've been working on getting the mechanics of the suspensions into a filter/transfer function form; I added something resembling that into foton and turned the resulting filter on using the shiny new MEDM screens. However, the transfer functions are a tad wonky (particularly the one for pitch), so I shall continue working on them. It had a dramatic effect on the power spectrum (i.e. it looks a lot more like it should), but it still looks weird.

Still haven't found the e-log Jamie and Rana referred me to, concerning the injection of seismic noise into the simulation. I'm not terribly worried though, and will continue looking in the morning. Worst case scenario, I'll use the filters Masha made at the beginning of the summer.

Masha and I ate some of Jamie's popcorn. It was good.

 Okay! Attached are two power spectra. The first is a power spectrum of reality, the second is a power spectrum of the simPlant. It's looking much better (as in, no longer obviously white noise!), but there seems to be a gain problem somewhere (and it doesn't have seismic noise). I'll see if I can fix the first problem and then move on to trying to find the seismic noise filters.

Attachment 1: Screenshot.png
Screenshot.png
Attachment 2: Screenshot-1.png
Screenshot-1.png
  16527   Mon Dec 20 14:10:56 2021 AnchalUpdateBHDAll coil drivers ready to be used, modified and tested

Koji found some 68nF caps from Downs and I finished modifying the last remaining coil driver box and tested it.

SERIAL # TEST result
S2100633 PASS

With this, all coil drivers have been modified and tested and are ready to be used. This DCC tree has links to all the coil driver pages which have documentation of modifications and test data.

  1733   Sun Jul 12 20:06:44 2009 JenneDAQComputersAll computers down

I popped by the 40m, and was dismayed to find that all of the front end computers are red (only framebuilder, DAQcontroler, PEMdcu, and c1susvmw1 are green....all the rest are RED).

 

I keyed the crates, and did the telnet.....startup.cmd business on them, and on c1asc I also pushed the little reset button on the physical computer and tried the telnet....startup.cmd stuff again.  Utter failure. 

 

I have to pick someone up from the airport, but I'll be back in an hour or two to see what more I can do.

  1735   Mon Jul 13 00:34:37 2009 AlbertoDAQComputersAll computers down

Quote:

I popped by the 40m, and was dismayed to find that all of the front end computers are red (only framebuilder, DAQcontroler, PEMdcu, and c1susvmw1 are green....all the rest are RED).

 

I keyed the crates, and did the telnet.....startup.cmd business on them, and on c1asc I also pushed the little reset button on the physical computer and tried the telnet....startup.cmd stuff again.  Utter failure. 

 

I have to pick someone up from the airport, but I'll be back in an hour or two to see what more I can do.

 I think the problem was caused by a failure of the RFM network: the RFM MEDM screen showed frozen values even when I was power cycling any of the FE computers. So I tried the following things:

- resetting the RFM switch
- power cycling the FE computers
- rebooting the framebuilder
 
but none of them worked.  The FEs didn't come back. Then I reset C1DCU1 and power cycled C1DAQCTRL.
 
After that, I could restart the FEs by power cycling them again. They all came up again except for C1DAQADW. Neither the remote reboot nor the power cycling could bring it up.
 
After every attempt at restarting it, its lights on the DAQ MEDM screen turned green only for a fraction of a second and then became red again.
 
So far every attempt to reanimate it failed.
ELOG V3.1.3-