  40m Log, Page 315 of 344
ID   Date   Author   Type   Category   Subject
  1930   Wed Aug 19 23:57:35 2009   rob   Update   Locking   report

 

locking work proceeding apace tonight.

diagonalized DRM with setDDphases & senseDRM

initial locks are fairly quick, aqstep script succeeds reliably.

first part of cm_step (handoff CARM-> MCL) usually works.

later parts of cm_step need tuning up (presumably due to optical gain changes resulting from the MOPA decline).

got to arm powers ~60.

  1960   Fri Aug 28 13:49:07 2009   rob   Update   Locking   RF CARM hand off problem

Quote:
Last night, the lock script proceeded to the RF CARM hand-off about half of the time.
However, the hand off was still unsuccessful.

It failed instantly when the REFL1 input of the CM board was turned on, even
when the REFL1 input gain was very low, like -28dB.

I went to the LSC rack and checked the cabling.
The output from the PD11_I (REFL_2) demodulation board is split
into two paths. One goes directly to the ADC and the other one goes
to an SR560. This SR560 is used just as an inverter. Then
the signal goes to the REFL1 input of the CM board.

I found that the SR560 was set to A-B mode, but the B input was open.
This made the signal very noisy, so I changed it to A-only mode.
There was also a 1/4 attenuator between the PD11_I output and the SR560.
I took it out and reduced the gain of SR560 from 10 to 2.
These changes allowed me to increase the REFL1 gain to -22dB or so.
But it is still not enough.

I wanted to check the CM open loop TF before the hand-off, but I could
not do that because the lock was lost instantly as soon as I enabled the
test input B of the CM board.
Is something wrong with the board?

Using the PD11_I signal going into the ADC, I measured the transfer functions
from the CM excitation (digital one) to the REFL_DC (DC CARM signal) and PD11_I.
The TF shapes matched. So the PD11_I signal itself should be fine.

We should try:
* See if flipping the sign of the PD11_I signal going to the REFL1 input solves the problem.
* Try to measure the CM analog TF again.
* If the noise from the servo analyzer is a problem, try to increase the input gains
of the CM board and reduce the output gain accordingly, so that the signal flowing
inside the CM board is larger.



I'd bet it's in a really twitchy state by the time the script gets to the RF CARM handoff, as the script is not really validated up to that point. It's just the old script with a few haphazard mods, so it needs to be adjusted to accommodate the 15% power drop we've experienced since the last time it was locked.

The CM servo gain needs to be tweaked earlier in the script--you should be able to measure the AO path TF with the arm powers at 30 or so. I was able to do this with the current SR785 setup earlier this week without any trouble.

The 1/4 attenuator is there to prevent saturations on the input to the SR560 when there's still a CARM offset.

Not sure if flipping the sign of PD11 is right, but it's possible we compensated the digital gains and forgot about it. This signal is used for SRCL in the initial acquisition, so we'd have noticed a sign flip.
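The TF-shape cross-check described in the quoted entry (driving the CM servo and comparing the responses in REFL_DC and PD11_I) can be reproduced offline from recorded time series. Here is a minimal sketch with synthetic stand-in signals and a standard CSD-based estimator; the sample rate, toy plant, and chain gains are all made up for illustration, not the real channels:

```python
import numpy as np
from scipy import signal

# Synthetic stand-ins for the real channels: both "readbacks" see the same
# single-pole plant, differing only by an overall gain/sign (made-up numbers).
fs = 2048                                   # Hz, assumed sample rate
t = np.arange(0, 64, 1 / fs)
rng = np.random.default_rng(0)
exc = rng.standard_normal(t.size)           # broadband CM-style excitation

b, a = signal.butter(1, 30 / (fs / 2))      # 30 Hz low-pass as a toy plant
plant_out = signal.lfilter(b, a, exc)
refl_dc = 5.0 * plant_out                   # hypothetical REFL_DC chain gain
pd11_i = -0.8 * plant_out                   # hypothetical PD11_I chain gain

def tf_estimate(x, y, fs, nperseg=4096):
    """H(f) = Pxy / Pxx: standard CSD-based transfer function estimate."""
    f, pxx = signal.welch(x, fs, nperseg=nperseg)
    f, pxy = signal.csd(x, y, fs, nperseg=nperseg)
    return f, pxy / pxx

f, h1 = tf_estimate(exc, refl_dc, fs)
f, h2 = tf_estimate(exc, pd11_i, fs)

# Normalize out the overall gains and compare shapes.
shape1 = np.abs(h1) / np.abs(h1[1])
shape2 = np.abs(h2) / np.abs(h2[1])
print("max shape mismatch:", np.max(np.abs(shape1 - shape2)))
```

If the two channels really see the same plant, the gain-normalized magnitudes should overlay; a residual shape difference would point at the sensing chain rather than the plant or excitation.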
  1989   Thu Sep 17 14:17:04 2009   rob   Update   Computers   awgtpman on c1omc failing to start

[root@c1omc controls]# /opt/gds/awgtpman -2 &
[1] 16618
[root@c1omc controls]# mmapped address is 0x55577000
32 kHz system
Spawn testpoint manager
no test point service registered
Test point manager startup failed; -1

[1]+  Exit 1                  /opt/gds/awgtpman -2

  1990   Thu Sep 17 15:05:47 2009   rob   Update   Computers   awgtpman on c1omc failing to start

Quote:

[root@c1omc controls]# /opt/gds/awgtpman -2 &
[1] 16618
[root@c1omc controls]# mmapped address is 0x55577000
32 kHz system
Spawn testpoint manager
no test point service registered
Test point manager startup failed; -1

[1]+  Exit 1                  /opt/gds/awgtpman -2

This turned out to be fallout from the /cvs/cds transition.  Remounting and restarting fixed it.

  1991   Fri Sep 18 14:25:00 2009   rob   Omnistructure   PSL   water under the laser chiller

rob, koji, steve

We noticed some water (about a cup) on the floor under the NESLAB chiller today.  We put the chiller up on blocks and took off the side panel for a cursory inspection, but found no obvious leaks.  We'll keep an eye on it.

  1994   Wed Sep 23 17:32:37 2009   rob   AoG   Computers   Gremlins in the RFM

A cosmic ray struck the RFM in the framebuilder this afternoon, causing hours of consternation.  The whole FE system is just now coming back up, and it appears the mode cleaner is not coming back to the same place (alignment).

 

rob, jenne

  1997   Thu Sep 24 15:45:27 2009   rob   Update   IOO   MC OLG

I measured the mode cleaner open loop gain with the HP3563A

The UGF is 64kHz, phase margin is 28 deg.
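For reference, the UGF and phase margin can be read off numerically from a measured (or modeled) open-loop gain. Below is a sketch on a toy 1/f loop with a pure delay picked so the numbers land near the ones quoted; the model is illustrative only, not the actual MC servo:

```python
import numpy as np

# Toy open-loop gain: an integrator (1/f) with a pure time delay, the delay
# chosen so the numbers come out near 64 kHz / ~28 deg.  Illustrative model
# only -- not the actual MC servo electronics.
tau = 2.7e-6                               # loop delay in seconds (assumed)
f = np.logspace(3, 6, 4000)                # 1 kHz .. 1 MHz
olg = 64e3 / (1j * f) * np.exp(-2j * np.pi * f * tau)

mag = np.abs(olg)
ph = np.degrees(np.unwrap(np.angle(olg)))

# UGF: where |G| crosses unity (log-log interpolation between grid points).
i = np.argmax(mag < 1.0)
ugf = 10 ** np.interp(0.0,
                      [np.log10(mag[i]), np.log10(mag[i - 1])],
                      [np.log10(f[i]), np.log10(f[i - 1])])

# Phase margin: 180 deg plus the loop phase at the UGF.
pm = 180.0 + np.interp(np.log10(ugf), np.log10(f), ph)
print(f"UGF = {ugf / 1e3:.1f} kHz, phase margin = {pm:.1f} deg")
```

The same two-line UGF/phase-margin extraction works directly on exported magnitude/phase columns from the HP3563A.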

  2013   Mon Sep 28 17:39:34 2009   rob   Update   PSL   problems

The PSL/IOO combo has not been behaving responsibly recently. 

The first attachment is a 15 day trend of the MZ REFL, ISS INMON, and MC REFL power.  These show two separate problems--recurring MZ flakiness, which may actually be a loose cable somewhere which makes the servo disengage.  Such disengagement is not as obvious with the MZ as it is with other systems, because the MZ is relatively stable on its own.  The second problem is more recent, just starting in the last few days.  The MC is drifting off the fringe, either in alignment, length, or both.  This is unacceptable.

The second attachment is a two-day trend of the MC REFL power.  Last night I carefully put the beam on the center of the MC-WFS quads.  This appears to have lessened the problem, but it has not eliminated it. 

It's probably worth trying to re-measure the MCWFS system to make sure the control matrix is not degenerate. 

  2016   Tue Sep 29 01:50:10 2009   rob   Configuration   LSC   new modulation frequencies

Mode cleaner length measured tonight.

 

33196198

132784792

165980990

199177188

[Tag by KA: modulation frequency, MC length]
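A quick consistency check on numbers like these: they should be consecutive harmonics of the fundamental, and if the fundamental is 3x the mode cleaner FSR (my assumption here, not stated in the entry), the MC length follows directly:

```python
# Consistency check on the measured modulation frequencies.  Assumes the
# fundamental is 3x the mode cleaner FSR, with FSR = c / (2 L) where L is
# half the ring round trip -- that factor of 3 is an assumption here.
c = 299_792_458.0                          # speed of light, m/s
freqs = [33_196_198, 132_784_792, 165_980_990, 199_177_188]

f1 = freqs[0]
ratios = [f / f1 for f in freqs]
print("harmonic ratios:", ratios)          # 1, 4, 5, 6 if consistent

fsr = f1 / 3                               # assumed: f1 = 3 * FSR
L = c / (2 * fsr)
print(f"implied MC length: {L:.3f} m")
```

The listed frequencies are exact 1x/4x/5x/6x multiples of 33196198 Hz, so a single length number determines the whole set.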

  2019   Tue Sep 29 16:14:44 2009   rob   Configuration   Electronics   Rob is breaking stuff....

Quote:

Koji and I were looking for an extender card to aid with MZ board testing.  Rob went off on a quest to find one.  He found 2 (in addition to the one in the drawer near the electronics bench which says "15V shorted"), and put them in some empty slots in 1X1 to test them out.  Somehow, this burned a few pins on each board (1 pin on one of them, and 3 pins on the other). We now have 0 functioning extender cards: unfortunately, both extender cards now need fixing.  The 2 slots that were used in 1X1 now have yellow electrical tape covering the connectors so that they do not get used, because the ends of the burnt-off pins may still be in there. 

In other, not-Rob's-fault news, the Martian network is down...we're going to try to reset it so that we have use of the laptops again.

 

This happened when I plugged the cards into a crate with computers, which apparently is a no-no.  The extender cards only go in VME crates full of in-house, LIGO-designed electronics.

  2024   Tue Sep 29 23:43:46 2009   rob   Update   SUS   ITMY UL OSEM

We had a redo of elog entry 975 tonight.  The noisy OSEM was fixed by jiggling the rack end of the long cable.  Don't know exactly where--I also poked around the OSEM PD interface board.

In the attached PDF the reference trace is the noisy one.

  2026   Wed Sep 30 01:04:56 2009   rob   Update   Computers   grief

much grief.  somehow a burt restore of c1iscepics failed to work, and so the LSC XYCOM settings were not correct.  This meant that the LSC whitening filter states were not being correctly set and reported, making it difficult to lock for at least the last week or so.

  2027   Wed Sep 30 02:01:28 2009   rob   Update   Locking   week

It's been a miserable week for lock acquisition, with each night worse than the last.  The nadir was around Sunday night, when I couldn't even get a PRM to lock stably, which meant that the auto-alignment scripts could not finish successfully.  It now appears that was due to some XYCOM mis-settings.

We've also been having problems with timing for c1susvme2.  Attached is a one-hour plot of timing data for this cpu, known as SRM.  Each spike is an instance of lateness, and a potential cause of lock loss.  This has been going on for quite a while.

Tonight we also encountered a large peak in the frequency noise around 485 Hz.  Changing the MZ lock point (the spot in the PZT range) solved this.

 

  2030   Thu Oct 1 03:12:56 2009   rob   Update   Locking   some progress

Good progress in IFO locking tonight, with the arm powers reaching about half the full resonant maximum. 

Still to do is check out some weirdness with the OMC DAC, fix the wireless network, and look at c1susvme2 timing.

  2036   Thu Oct 1 14:22:28 2009   rob   Update   SUS   all suspensions undamped

Quote:

Quote:

 The EQ did not change the input beam pointing. All back to normal, except MC2 watchdogs tripped again.

 Round 3 for the day of MC2 watchdogs tripping.

 I've watchdogged all the suspensions while I mess around with computers.  If no one else is using the IFO, we can leave them undamped for a couple of hours to check the resonant frequencies, as long as I don't interrupt data streams with my computer hatcheting.
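The free-swinging check amounts to taking spectra of the undamped OSEM signals and reading off the peak frequencies. A sketch with synthetic data standing in for a real OSEM channel; the mode frequencies below are invented (roughly pendulum-like), not the measured ones:

```python
import numpy as np
from scipy import signal

# Synthetic stand-in for a free-swinging OSEM signal: two lightly damped,
# pendulum-like modes plus sensor noise (made-up frequencies, not measured).
fs = 64.0
t = np.arange(0, 600, 1 / fs)
rng = np.random.default_rng(1)
x = (np.sin(2 * np.pi * 0.76 * t)
     + 0.5 * np.sin(2 * np.pi * 0.95 * t)
     + 0.1 * rng.standard_normal(t.size))

# Welch PSD, then pick the prominent peaks as candidate eigenfrequencies.
f, psd = signal.welch(x, fs, nperseg=16384)
peaks, _ = signal.find_peaks(psd, prominence=np.max(psd) / 100)
print("resonances (Hz):", np.round(f[peaks], 3))
```

Comparing the recovered peak list against the nominal suspension eigenmodes is the point of leaving the optics undamped for a few hours.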

  2037   Thu Oct 1 15:42:55 2009   rob   Update   Locking   c1susvme2 timing problems update

Quote:

We've also been having problems with timing for c1susvme2.  Attached is a one-hour plot of timing data for this cpu, known as SRM.  Each spike is an instance of lateness, and a potential cause of lock loss.  This has been going on for quite a while.

Attached is a 3 day trend of SRM CPU timing info.  It clearly gets better (though still problematic) at some point, but I don't know why as it doesn't correspond with any work done.  I've labeled a reboot, which was done to try to clear out the timing issues.  It can also be seen that it gets worse during locking work, but maybe that's a coincidence.

  2040   Fri Oct 2 02:55:07 2009   rob   Update   Locking   more progress

More progress with locking tonight, with initial acquisition and power ramps working.  The final handoff to RF CARM still needs work.

I found the wireless router was unplugged from the network--just plugging in the cable solved the problem.  For some reason that RJ45 connector doesn't actually latch, so the cable is prone to slipping out of the jack.

 

  2042   Fri Oct 2 15:11:44 2009   rob   Update   Computers   c1susvme2 timing problems update update

It got worse again, starting with locking last night, but it has not recovered.  Attached is a 3-day trend of SRM cpu load showing the good spell.

  2045   Fri Oct 2 18:04:45 2009   rob   Update   CDS   DTT no good for OMC channels

I took the output of the OMC DAC and plugged it directly into an OMC ADC channel to see if I could isolate the OMC DAC weirdness I'd been seeing.  It looks like it may have something to do with DTT specifically.

Attachment 1 is a DTT transfer function of a BNC cable and some connectors (plus of course the AI and AA filters in the OMC system).  It looks like this on both linux and solaris.

Attachment 2 is a transfer function using sweepTDS (in mDV), which uses TDS tools as the driver for interfacing with testpoints and DAQ channels. 

Attachment 3 is a triggered time series, taken with DTT, of the same channels as used in the transfer functions, during a transfer function.  I think this shows that the problem lies not with awg or tpman, but with how DTT is computing transfer functions. 

 

I've tried soft reboots of the c1omc, which didn't work.   Since the TDS version appears to work, I suspect the problem may actually be with DTT.
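One way to arbitrate between two tools that disagree is to compute a transfer-function point directly from the raw time series, e.g. by complex demodulation at the drive frequency (what a swept-sine analyzer does internally). A sketch on synthetic data; the plant gain and phase are invented for illustration, and this is not the DTT or TDS code path:

```python
import numpy as np

# One swept-sine transfer-function point by complex demodulation at the
# drive frequency.  The "plant" (gain 0.5, 30 deg lag) is invented.
fs = 16384.0
f0 = 100.0                                  # drive frequency, Hz
t = np.arange(0, 2, 1 / fs)                 # integer number of cycles
drive = np.sin(2 * np.pi * f0 * t)

rng = np.random.default_rng(2)
resp = 0.5 * np.sin(2 * np.pi * f0 * t - np.radians(30))
resp = resp + 0.01 * rng.standard_normal(t.size)

# Project both channels onto exp(-i 2 pi f0 t) and take the ratio.
lo = np.exp(-2j * np.pi * f0 * t)
H = np.mean(resp * lo) / np.mean(drive * lo)
print(f"|H| = {abs(H):.3f}, phase = {np.degrees(np.angle(H)):.1f} deg")
```

Repeating this at each sweep frequency gives a tool-independent TF estimate to compare against whatever DTT reports.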

  2046   Fri Oct 2 18:26:32 2009   rob   AoG   Environment   earthquake

quake coming through.  I've re-enabled optic damping (except ETMY), and left off the oplevs for now.  We can do a resonant-frequency check over the weekend.

Looks like it was a magnitude 5 near Olancha, where they sell really good fresh jerky.

Earthquake Details

Magnitude 5.2
Date-Time
  • Saturday, October 03, 2009 at 01:15:59 UTC
  • Friday, October 02, 2009 at 06:15:59 PM at epicenter
Location 36.393°N, 117.877°W
Depth 0 km (~0 mile) (poorly constrained)
Region CENTRAL CALIFORNIA
Distances
  • 11 km (7 miles) S (182°) from Keeler, CA
  • 16 km (10 miles) ENE (59°) from Cartago, CA
  • 18 km (11 miles) NE (37°) from Olancha, CA
  • 28 km (17 miles) SE (141°) from Lone Pine, CA
  • 239 km (148 miles) W (276°) from Las Vegas, NV
Location Uncertainty horizontal +/- 0.6 km (0.4 miles); depth +/- 2.2 km (1.4 miles)
Parameters Nph=030, Dmin=19 km, Rmss=0.28 sec, Gp= 79°,
M-type=local magnitude (ML), Version=C
Source
Event ID ci14519780
  • This event has been reviewed by a seismologist.

latest news: there's actually been about a dozen earthquakes in Keeler in the last couple hours:   http://earthquake.usgs.gov/eqcenter/recenteqsus/Maps/special/California_Nevada_eqs.php

-Rana

  2047   Sat Oct 3 14:53:24 2009   rob   Update   Locking   more progress

Late last night after the ETMY settled down from the quake I made some more progress in locking, with the handoff to RF CARM succeeding once.  The final reduction of the CARM offset to zero didn't work, however.

  2048   Mon Oct 5 02:51:08 2009   rob   Update   Locking   almost there

Working well tonight: the handoff of CARM to RF (REFL2I), successful reduction of CARM offset to zero, and transition control of MCL path to the OUT1 from the common mode board.  All that's left in lock acquisition is to try and get the common mode bandwidth up and the boost on.

  2056   Tue Oct 6 01:41:20 2009   rob   Update   Locking   DC Readout

Lock acquisition working well tonight.  Was able to engage CM boost (not superboost) with bandwidth of ~10kHz.  Also succeeded once in handing off DARM to DC readout.

  2080   Mon Oct 12 14:51:41 2009   rob   Update   Computers   c1susvme2 timing problems update update update

Quote:

It got worse again, starting with locking last night, but it has not recovered.  Attached is a 3-day trend of SRM cpu load showing the good spell.

 Last week, Alex recompiled the c1susvme2 code without the decimation filters for the OUT16 channels, so these channels are now as aliased as the rest of them.  This appears to have helped with the timing issues: although it's not completely cured, it is much better.  Attached is a five day trend.
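The aliasing being referred to is easy to demonstrate: subsampling without a decimation (anti-alias) filter folds out-of-band lines into the band, while filtering first suppresses them. A toy sketch, not the actual front-end code:

```python
import numpy as np
from scipy import signal

# A 60 Hz line in a 2048 Hz channel, brought down to 16 Hz two ways:
# naive subsampling (no anti-alias filter) vs. filtered decimation.
fs, fs_out = 2048, 16
t = np.arange(0, 8, 1 / fs)
x = np.sin(2 * np.pi * 60 * t)

step = fs // fs_out                          # 128
naive = x[::step]                            # aliases: 60 Hz folds to 4 Hz
filt = signal.decimate(x, step, ftype='fir') # low-passed before subsampling

def peak_freq(y, fs_y=fs_out):
    f, p = signal.periodogram(y, fs_y)
    return f[np.argmax(p)]

print("naive subsample peak at", peak_freq(naive), "Hz")  # aliased line
print("decimated residual peak power:",
      np.max(signal.periodogram(filt, fs_out)[1]))
```

This is the tradeoff in the recompile: dropping the decimation filters saves CPU time but leaves the OUT16 channels full of folded-down high-frequency content.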

  2081   Mon Oct 12 17:14:39 2009   rob   Update   Locking   stability

Last night, 2+ hour lock, probably broken by me driving too hard (DARM_EXC).

  2092   Wed Oct 14 16:59:37 2009   rob   Update   Locking   daytime locking

The IFO can now be locked during the daytime.  Well, it's locked now.

  2105   Fri Oct 16 16:08:00 2009   rob   Configuration   ASC   loop opened on PZT2 YAW at 3:40 pm

I pushed the "closed loop" button on PZT2 YAW around 3:40 pm today, then roughly recentered it using the DC Offset knob on the PiezoJena controller and the IP ANG QPD readbacks.  There was a large DC shift.    We'll watch and see how much it drifts in this state.

  2120   Mon Oct 19 18:14:28 2009   rob   Update   Cameras   video switch broken

The Chameleon HB (by Knox) video switch that we use for routing video signals into the control room monitors is broken.  Well, either it's broken, or something is wrong with the mv162 EPICS IOC which communicates with it via RS-232.  Multiple reboots/resets of both machines have not yet worked.  The CHHB has two RS-232 inputs--I switched to the second one, and there is now one signal coming through to a monitor but no switching yet. I've been unable to further debug it because we don't have anything in the lab (other than the omega iserver formerly used for the RGA logger) which can communicate with RS-232 ports.  I've been trying to get this thing (the iserver) working again, but can't communicate with it yet.  For now I'm just going to bypass the video switch entirely and use up all the BNC barrel connectors in the lab, so we can at least have the useful video displays back.

  2123   Tue Oct 20 02:14:29 2009   rob   Update   IOO   MC2 alignment bias changed

the mode cleaner was having trouble locking in a 00 mode, needing several tries.  I changed the MC2 coil biases, and it seems better for now.

  2126   Tue Oct 20 16:35:24 2009   rob   Configuration   LSC   33MHz Mod depth

The 33MHz mod depth is now controlled by the OMC (C1:OMC-SPARE_DAC_CH_15).  The setting to give us the same modulation depth as before is 14000 (in the offset field).

  2138   Fri Oct 23 15:02:00 2009   rob   Update   lore   marconi phase

So, it appears that one doesn't even have to change the Marconi set frequency to alter the phase of the output signal.  It appears that other front panel actions (turning external modulations on/off, changing the modulation type) can do it as well.  At least that's what I conclude from earlier this morning, when after setting up the f2 Marconi (166MHz) for external AM, the double-demod handoff in the DRMI no longer worked.  Luckily this isn't a real problem now that we have the setDDphases and senseDRM scripts. 

  2141   Mon Oct 26 03:57:06 2009   rob   Update   Locking   bad

Lock acquisition has gone bad tonight. 

The initial stage works fine, up through handing off control of CARM to MCL.  However, when increasing the AO path (analog gain), there are large DC shifts in the C1:IOO-MC_F signal.  Eventually this causes the Pockels cell in the FSS loop to saturate, and lock is lost. 

  2144   Mon Oct 26 18:15:57 2009   rob   Update   IOO   MC OLG

I measured the mode cleaner open loop gain.  It's around 60kHz with 29 degs of phase margin.

  2148   Tue Oct 27 01:45:02 2009   rob   Update   Locking   MZ

Quote:
Tonight we also encountered a large peak in the frequency noise around 485 Hz. Changing the MZ lock point (the spot in the PZT range) solved this.


This again tonight.

It hindered the initial acquisition, and made the DD signal handoff fail repeatedly.
  2151   Tue Oct 27 18:01:49 2009   rob   Update   PSL   hmmm

A 30-day trend of the PCDRIVE from the FSS.

  2152   Tue Oct 27 18:19:14 2009   rob   Update   Locking   bad

Quote:

Lock acquisition has gone bad tonight. 

The initial stage works fine, up through handing off control of CARM to MCL.  However, when increasing the AO path (analog gain), there are large DC shifts in the C1:IOO-MC_F signal.  Eventually this causes the pockels cell in the FSS loop to saturate, and lock is lost. 

 This problem has disappeared.  I don't know what it was. 

The first plot shows one of the symptoms.  The second plot is a similar section taken from a more normal acquisition sequence the night before.

All is not perfect, however, as now the handoff to RF CARM is not working.

  2154   Wed Oct 28 05:02:28 2009   rob   Update   Locking   back

LockAcq is back on track, with the full script working well.  Measurements in progress.

  2162   Thu Oct 29 21:51:07 2009   rob   Update   Locking   bad

Quote:

Quote:

Lock acquisition has gone bad tonight. 

The initial stage works fine, up through handing off control of CARM to MCL.  However, when increasing the AO path (analog gain), there are large DC shifts in the C1:IOO-MC_F signal.  Eventually this causes the pockels cell in the FSS loop to saturate, and lock is lost. 

 This problem has disappeared.  I don't know what it was. 

The first plot shows one of the symptoms.  The second plot is a similar section taken from a more normal acquisition sequence the night before.

All is not perfect, however, as now the handoff to RF CARM is not working.

 

The problem has returned.  I still don't know what it is, but it's making me angry. 

  2163   Fri Oct 30 04:41:37 2009   rob   Update   Locking   working again

I never actually figured out exactly what was wrong in entry 2162, but I managed to circumvent it by changing the time sequence of events in the up script, moving the big gain increases in the common mode servo to the end of the script.  So the IFO can be locked again.

  2221   Mon Nov 9 18:32:38 2009   rob   Update   Computers   OMC FE hosed

It won't start--it just sits at Waiting for EPICS BURT, even though the EPICS is running and BURTed.

 

[controls@c1omc c1omc]$ sudo ./omcfe.rtl
cpu clock 2388127
Initializing PCI Modules
3 PCI cards found
***************************************************************************
1 ADC cards found
        ADC 0 is a GSC_16AI64SSA module
                Channels = 64
                Firmware Rev = 3

***************************************************************************
1 DAC cards found
        DAC 0 is a GSC_16AO16 module
                Channels = 16
                Filters = None
                Output Type = Differential
                Firmware Rev = 1

***************************************************************************
0 DIO cards found
***************************************************************************
1 RFM cards found
        RFM 160 is a VMIC_5565 module with Node ID 130
***************************************************************************
Initializing space for daqLib buffers
Initializing Network
Waiting for EPICS BURT


  2222   Mon Nov 9 19:04:23 2009   rob   Update   Computers   OMC FE hosed

Quote:

It won't start--it just sits at Waiting for EPICS BURT, even though the EPICS is running and BURTed.

 

[controls@c1omc c1omc]$ sudo ./omcfe.rtl
cpu clock 2388127
Initializing PCI Modules
3 PCI cards found
***************************************************************************
1 ADC cards found
        ADC 0 is a GSC_16AI64SSA module
                Channels = 64
                Firmware Rev = 3

***************************************************************************
1 DAC cards found
        DAC 0 is a GSC_16AO16 module
                Channels = 16
                Filters = None
                Output Type = Differential
                Firmware Rev = 1

***************************************************************************
0 DIO cards found
***************************************************************************
1 RFM cards found
        RFM 160 is a VMIC_5565 module with Node ID 130
***************************************************************************
Initializing space for daqLib buffers
Initializing Network
Waiting for EPICS BURT


 

From looking at the recorded data, it looks like the c1omc started going funny on the afternoon of Nov 5th, perhaps as a side-effect of the Megatron hijinks last week.

 

It works when megatron is shut down.

  2287   Tue Nov 17 21:21:30 2009   rob   Update   SUS   ETMY UL OSEM

Had been disconnected for about two weeks.  I found a partially seated 4-pin LEMO cable coming from the OSEM PD interface board. 

  2309   Fri Nov 20 16:18:56 2009   rob   Configuration   SUS   watchdog rampdown

I've changed the watchdog rampdown script so it brings the SUS watchdogs to 220, instead of the 150 it previously targeted.  This is to make tripping less likely with the jackhammering going on next door.  I've also turned off all the oplev damping.

  2325   Wed Nov 25 03:05:15 2009   rob   Update   Locking   Measured MC length

Quote:

What I meant was the VCO driver, not the FSS box.

As for the frequency, all written numbers were the Marconi displays.
The number on the frequency counter was also recorded, and so will be added to the previous entry shortly... 

Quote:

I propose that from now on, we indicate in the elog what frequencies we're referring to. In this case, I guess its the front panel readback and not the frequency counter -- what is the frequency counter readback? And is everything still locked to the 10 MHz from the GPS locked Rubidium clock?

Plus, what FSS Box? The TTFSS servo box? Or the VCO driver? As far as I know, the RC trans PD doesn't go through the FSS boxes, and so its a real change. I guess that a bad contact in the FSS could have made a huge locking offset.

 

 

Locking has gone sour.  The CARM to MCL handoff, which is fairly early in the full procedure and usually robust, is failing reliably. 

As soon as the SUS-MC2_MCL gain is reduced, lock is broken.  There appears to be an instability around 10Hz.  Not sure if it's related.

  2332   Wed Nov 25 14:29:08 2009   rob   Update   Locking   Measured MC length--FSS trend

Quote:

Quote:

What I meant was the VCO driver, not the FSS box.

As for the frequency, all written numbers were the Marconi displays.
The number on the frequency counter was also recorded, and so will be added to the previous entry shortly... 

Quote:

I propose that from now on, we indicate in the elog what frequencies we're referring to. In this case, I guess its the front panel readback and not the frequency counter -- what is the frequency counter readback? And is everything still locked to the 10 MHz from the GPS locked Rubidium clock?

Plus, what FSS Box? The TTFSS servo box? Or the VCO driver? As far as I know, the RC trans PD doesn't go through the FSS boxes, and so its a real change. I guess that a bad contact in the FSS could have made a huge locking offset.

 

 

Locking has gone sour.  The CARM to MCL handoff, which is fairly early in the full procedure and usually robust, is failing reliably. 

As soon as the SUS-MC2_MCL gain is reduced, lock is broken.  There appears to be an instability around 10Hz.  Not sure if it's related.

 Five day minute trend.  FAST_F doesn't appear to have gone crazy.

  2333   Wed Nov 25 15:38:08 2009   rob   Update   Locking   Measured MC length

Quote:

Quote:

What I meant was the VCO driver, not the FSS box.

As for the frequency, all written numbers were the Marconi displays.
The number on the frequency counter was also recorded, and so will be added to the previous entry shortly... 

Quote:

I propose that from now on, we indicate in the elog what frequencies we're referring to. In this case, I guess its the front panel readback and not the frequency counter -- what is the frequency counter readback? And is everything still locked to the 10 MHz from the GPS locked Rubidium clock?

Plus, what FSS Box? The TTFSS servo box? Or the VCO driver? As far as I know, the RC trans PD doesn't go through the FSS boxes, and so its a real change. I guess that a bad contact in the FSS could have made a huge locking offset.

 

 

Locking has gone sour.  The CARM to MCL handoff, which is fairly early in the full procedure and usually robust, is failing reliably. 

As soon as the SUS-MC2_MCL gain is reduced, lock is broken.  There appears to be an instability around 10Hz.  Not sure if it's related.

 Whatever the locking problem was, the power of magical thinking has forced it to retreat for now.  The IFO is currently locked, having completed the full up script.  One more thing for which to be thankful.

  2344   Sun Nov 29 16:56:56 2009   rob   AoG   all down cond.   sea of red

Came in, found all front-ends down.

 

Keyed a bunch of crates, no luck:

Requesting coeff update at 0x40f220 w/size of 0x1e44
No response from EPICS 

Powered off/restarted c1dcuepics.  Still no luck.

Powered off megatron.  Success!  Ok, maybe it wasn't megatron.  I also did c1susvme1 and c1susvme2 at this time.

 

BURT restored to Nov 26, 8:00am

 

But everything is still red on the C0_DAQ_RFMNETWORK.adl screen, even though the front-ends are running and synced with the LSC.  I think this means the framebuilder or the DAQ controller is the one in trouble--I keyed the crates with DAQCTRL and DAQAWG a couple of times, with no luck, so it's probably fb40m.    I'm leaving it this way--we can deal with it tomorrow.

  2353   Fri Dec 4 23:17:55 2009   rob   Update   oplevs   Oplevs centered, IP_POS and IP_ANG centered

Quote:

[Jenne Koji]

 We aligned the full IFO, and centered all of the oplevs and the IP_POS and IP_ANG QPDs.  During alignment of the oplevs, the oplev servos were disabled.

Koji updated all of the screenshots of 10 suspension screens.  I took a screenshot (attached) of the oplev screen and the QPD screen, since they don't have snapshot buttons.

We ran into some trouble while aligning the IFO.  We tried running the regular alignment scripts from the IFO_CONFIGURE screen, but the scripts kept failing, and reporting "Data Receiving Error".  We ended up aligning everything by hand, and then did some investigating of the c1lsc problem.  With our hand alignment we got TRX to a little above 1, and TRY to almost 0.9.  SPOB got to ~1200 in PRM mode, and REFL166Q got high while in DRM (I don't remember the number). We also saw a momentary lock of the full interferometer:  on the camera view we saw that Yarm locked by itself momentarily, and at that same time TRX was above 0.5 - so both arms were locked simultaneously.  We accepted this alignment as "good", and aligned all of the oplevs and QPDs.

It seems that C1LSC's front end code runs fine, and that it sees the RFM network, and the RFM sees it, but when we start running the front end code, the ethernet connection goes away.  That is, we can ping or ssh c1lsc, but once the front end code starts, those functions no longer work.  During these investigations, We once pushed the physical reset button on c1lsc, and once keyed the whole crate.  We also did a couple rounds of hitting the reset button on the DAQ_RFMnetwork screen.

 A "Data Receiving Error" usually indicates a problem with the framebuilder/testpoint manager, rather than the front-end in question.  I'd bet there's a DTT somewhere that's gone rogue.

  2355   Sat Dec 5 14:41:07 2009   rob   AoG   all down cond.   sea of red, again

Taking a cue from entry 2346, I immediately went for the nuclear option and powered off fb40m.  Someone will probably need to restart the backup script.

  2357   Sat Dec 5 17:34:30 2009   rob   Update   IOO   frequency noise problem

There's a large broadband increase in the MC_F spectrum.  I'm not totally sure it's real--it could be some weird bit-swapping thing.  I've tried soft reboots of c1susvme2 and c1iovme, which haven't helped.  In any case, it seems like this is preventing any locking success today.  Last night it was fine.
