  40m Log, Page 308 of 339
ID   Date   Author   Type   Category   Subject
  587   Sat Jun 28 03:10:25 2008   rob   Update   Computers   c1iovme

Quote:
C1susvme2 and C1iovme crashed which sent the optics swinging and tripped the watchdogs.

Koji and I were able to restore c1susvme2 without any trouble.

We have been unable to revive c1iovme. We have tried telneting in and running startup.cmd,
the process runs for a while then hangs with "DAQ init failed -- exiting".

Resetting the board doesn't help. I didn't try keying the whole crate.

All optics are back to normal with damping restored.


I tried keying the crate, then keying the DAQ controller & AWG, then powering down & restarting the framebuilder.
On coming up, the framebuilder doesn't start a daqd process, and I can't get one to start by hand (it just prints "652", and then stops).
No error messages, and daqd doesn't appear in prstat.

I then tried keying the DAQ controller again (after the fb0 reboot), which blew the watchdogs on all the suspensions. So then I went around and keyed all the crates.

Now, the suspension controllers are back online. Still no c1iovme, and now the framebuilder/DAQ/AWG are also hosed. We can try keying all the crates again, in the order that Yoichi did last week.

After some more poking around, I found the daqd log file. It's now complaining about

Jun 28 03:00:39 fb daqd[546]: [ID 355684 user.info] Fatal error: channel `C1:PSL-FSS_MIXERM_F' is duplicated 126

This is the second error message like this. It first complained about C1:PSL-FSS_FAST_F, so I commented that out of C1IOOF.ini and rebooted the framebuilder (note this is an actual reboot of the full Solaris machine). Eventually I discovered that C1IOOF.ini and C1IOO.ini are essentially identical. We'll presumably keep getting these duplicate channel errors until one of them is completely removed.

C1IOO.ini has a modification time of seven PM on Friday night. Who did this and didn't elog it? I've now modified C1IOOF.ini, and I don't remember when it was last modified.
  592   Sun Jun 29 14:53:02 2008   rob   Update   Computers   Rebooting

Quote:
All of the computers are now showing green lights.

Remaining problems:

Alignment scripts are failing with "ERROR: LDS - NDS server error #13"
I think this is a server transmission error.

Dataviewer shows all channels as zero.


Fixed. Just started the testpoint manager on fb40m.


su
/usr/controls/tpman &
  614   Tue Jul 1 13:34:29 2008   rob   Update   Computers   RFM network back

Quote:

For some reason, the computers requiring startup.cmd (like c1lsc) halt after running this command. Actually the computer is running ok, but the command freezes. Basically, all the command does is load a kernel module. I don't know what is wrong.
Anyway, I just closed the terminal after running startup.cmd and it seems fine for now.


This is normal. On the linux RTFEs (Real-Time Front Ends), the real-time code totally hijacks the kernel, disallowing any interrupts. The system thus becomes totally unresponsive while the code is running, and communicates only through the RFM and the VME backplane.
  615   Tue Jul 1 14:24:58 2008   rob   HowTo   Computer Scripts / Programs   conlog time machine

I've written a perl script (now in the $SCRIPTS/general directory) which implements a "conlog restore" command, restoring channels matching a regexp to a given time using the conlog records and the EpicsTools.pm perl module. The script is called time_machine_conlog:


Quote:


op440m:~>time_machine_conlog

time_machine_conlog restores EPICS control settings using a conlog time
usage: time_machine_conlog [<--dryrun>] <date=yyyy/mm/dd,hh:mm:ss> <timezone> <regexp>

Can also accept a gps time, in which case timezone=gps.
Use the option <--dryrun> to see conlog output without restoring any settings.

EXAMPLE: time_machine_conlog 2008/05/30,12:00:00 PDT "C1:SUS-MC.*_(PIT|YAW)_COMM"



It sometimes returns an error message even when the command is successful--this is because conlog stores EPICS settings to an absurd level of precision, but ezcawrite will not write EPICS values to this level (or at least won't indicate if it did). I consider this a bug in ezcawrite so I'm not touching it.

The script is untested with regards to switch settings (such as ENABLE/DISABLE). It's mainly intended for numerical values.
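
For the record, a gps-time call looks like this (the gps time and the dry-run flag here are only an illustration of the calling convention, not a restore I actually ran):

op440m:~>time_machine_conlog --dryrun 896644814 gps "C1:SUS-MC.*_(PIT|YAW)_COMM"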
  617   Tue Jul 1 21:27:27 2008   rob   HowTo   Computer Scripts / Programs   slider twiddling after reboot

Sometimes after we reboot the front-end machines, some of the hardware gets stuck in an unknown state. We generally fix this by twiddling EPICS settings, which refresh the hardware somehow and put it into a known state. I've started a script (slider_twiddle) which we can just run after reboots to do this for us. Right now it just has the QPD whitening gain settings. As we find more stuff, we can add to it. It's in $SCRIPTS/Admin/.
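
For illustration, the twiddle itself is nothing more than re-writing a slider with ezcawrite so the electronics re-latch; something like the following (the channel name and values here are made up, not necessarily what slider_twiddle actually touches):

#!/bin/csh
# bump the setting and put it back so the hardware refreshes into a known state
ezcawrite C1:EXAMPLE-QPD_WHITE_GAIN 0
ezcawrite C1:EXAMPLE-QPD_WHITE_GAIN 30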
  631   Thu Jul 3 13:54:26 2008   rob   Configuration   Computers   mDV on rosalba

Does mDV work on rosalba? It can't find NDS_GetChannels. Looking on mafalda, I see that NDS_GetChannels is a mexglx (i.e. a 32-bit Linux MEX binary). I think this means someone may need to compile it for 64-bit matlab before we can have mDV on rosalba. When that's done, we should get mDV running on megatron.
  632   Thu Jul 3 16:18:51 2008   rob   Summary   Locking   specgrams
I used ligoDV to make some spectrograms of DARM_ERR (1), QPDX (2), and QPDY (3). These show the massive instability from 30-40Hz growing in the XARM in the last two minutes of a reasonably high power lock (arm powers up to 30). It's strange that it only shows up in one arm.

CARM is on PO-DC, for both the MCL and the AO path.
DARM is on AS166Q.
Attachment 1: darm_specg.png
Attachment 2: qpdx_specg.png
Attachment 3: qpdy_specg.png
  655   Thu Jul 10 14:59:01 2008   rob   Update   Locking   RF common mode at zero offset
rob, john, yoichi

Last night we succeeded in reducing the CARM offset to zero.

We handed off control of the common mode servo from PO-DC to POX-I.

We pushed the common mode servo bandwidth to ~19kHz. Without the boosts, it had ~80 degs of phase margin. Didn't measure it after engaging the boosts (Boost + 1 superboost). Trying to engage the second superboost stage broke the lock.

The process is fully scripted, and the script worked all the way through several times.

The DARM ugf was ~200Hz. The RSE peak could clearly be seen. No optical spring, as expected (we're locking in anti-spring mode).

Engaging test mass de-whitening filters did not work (broke the lock).

I'm attaching a lock control sequence diagram and a trend of the arm power during a scripted up-sequence. I think the script can be sped up significantly (especially the long ramp period).

Up next:

Calibrated DARM spectrum
Noise hunting (start with dewhites)
DC - Readout
Lock to the springy side.
Attachment 1: lock_control_sequence_worked.png
Attachment 2: trendpowerbuild.png
  658   Fri Jul 11 00:30:24 2008   rob   Metaphysics   Computers   strange SUS controllers

rob, johnnieM

We were hampered early tonight by the fact that someone sneakily turned off the HP RF Amplifier on the AS table.

After that, we were hampered further by mode cleaner strangeness. It would occasionally spontaneously unlock & blow its watchdogs. It never made it through the ontoMCL script (putting DC-CARM onto the MCL). After some investigation, we found that c1susvme1 and c1susvme2 were running stochastically late (SYNC_FE != 0), even though their computation times never got above 61 usec. Also, the end SUS controllers were never late.

Weird.

After rebooting the vertex SUS controllers and the c1lsc, things appear to be working again.
  701   Fri Jul 18 23:24:24 2008   rob   Update   PSL   PMC PZT investigation

Quote:
I measured the HV coming to the PMC PZT by plugging it off from the PZT and hooking it up to a DVM.
The reading of DVM is pretty much consistent with the reading on EPICS. I got 287V on the DVM when the EPICS says 290V.

Then I used a T to monitor the same voltage while it is connected to the PZT. I attached a plot of the actual voltage measured by the DVM vs the EPICS reading.
It shows a hysteresis.
Also the actual voltage drops by more than a half when the PZT is connected. The output impedance of the HV amp is 64k (according to the schematic). If I believe this number, the impedance of the PZT should also be 64k. The current flowing through the PZT is 1.6mA at a 200V EPICS reading.
The impedance of the PZT directly measured by the DVM is 1.5M ohm, which is significantly different from the value expected above. I will check the actual output impedance of the HV amp later.
The capacitance of the PZT measured by the DVM is 300nF. I don't know if I can believe the DVM's ability to measure C.

I noticed that when a high voltage is applied, the actual voltage across the PZT shows a decay.
The second plot shows the step response of the actual voltage.
The voltage coming to the PZT was T-ed and reduced by a factor of 30 using a high impedance voltage divider to be recorded by an ADC.
The PMCTRANSPD channel is temporarily used to monitor this signal.
After the voltage applied to the PZT was increased abruptly (to ~230V), the actual voltage starts to exponentially decrease.
When the HV was reduced to ~30V, the actual voltage goes up. This behavior explains the weird exponential motion of the PZT feedback signal when the PMC is locked.
The cause of the actual voltage drop is not understood yet.
From the above measurements, we can almost certainly conclude that the problem of the PMC is in the PZT, not in the HV amp nor the read back.


I'd believe the Fluke's measurement of capacitance. Here's some info from PK about the PZT:


Quote:

But the PMC ones were something like 0.750 in. thick x 0.287 in. thick, 2 microns per 200 V displacement, resonant frequency greater than 65 kHz. Typical capacitance is around 0.66 uF.


If the PZT capacitance has dropped by a factor of two, that seems like a bad sign. I don't know what to expect for a resistance value of the PZT, but I wouldn't be surprised if it's non-Ohmic. The 64k is the series resistor after the PA85, not the modeled resistance of the PZT itself.
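
For what it's worth, the divider numbers in the quote hang together (rough arithmetic, mine, not from the entry): if the PZT sees about half of the ~200V open-circuit output, then ~100V is dropped across the 64k series resistor, and the current should be about

echo "scale=2; 100/64" | bc     # ~1.56  (mA, for 100 V across 64 kOhm)

which is consistent with the 1.6mA reading.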
  702   Sat Jul 19 19:39:44 2008   rob   Update   PSL   PMC PZT investigation

Quote:

Quote:
The 64k is the series resistor after the PA85, not the modeled resistance of the PZT itself.

Yes. What I meant was that because the measured voltage across the PZT was half of the open voltage of the HV amp, the DC impedance of the PZT is expected to be similar to the output impedance of the HV amp. Of course, I don't think the DC impedance of a normal PZT should be so low.
I'm puzzled by the discrepancy between this expected DC impedance and the directly measured impedance by the Fluke DVM (1.5M Ohm).
One possibility is that the PZT leaks current only when a high voltage is applied.
  714   Tue Jul 22 13:15:14 2008   rob   Update   PSL   Note from R. Abbott re: the PMC

Quote:
an email from Rich:
Your PZT is broken.

R


Quelle surprise

:(
  727   Wed Jul 23 21:48:30 2008   rob   Configuration   General   restore IFO when you're done with it

when you are done with the IFO, please click "Restore last auto-alignment" on the yellow IFO portion of the C1IFO_CONFIGURE.adl screen. Failure to comply will be interpreted as antagonism toward the lock acquisition effort and will be met with excoriation.
  729   Thu Jul 24 01:04:01 2008   rob   Configuration   LSC   IFR2023A (aka MARCONI) settings

Quote:


P.S.: We made a test by changing the frequency of the local oscillator by a little bit and then coming back to the original value. We observed that the phase of the signal can change, so every time this frequency is moved the 3f demod phase needs to be retuned.



We discovered this little tidbit in March, and remembered it tonight. Basically we found that whenever you change the frequency on one of these signal generators (and maybe any other setting as well), the phase of the signal can change (it's probably just the sign, but still...), meaning that when you return settings to their initial value, not everything is exactly as it once was. For most applications, this doesn't matter. For us, where we use one Marconi to demodulate the product of two other Marconis, it means we can easily cause a great deal of grief for ourselves, as the demod phase for the double demod signals can appear to change.

Practically speaking, what this means is that every time you touch a Marconi you must elog it. Especially if you change a setting and then put it back.
  731   Thu Jul 24 02:57:26 2008   rob   Update   LSC   Arm cavity g-factor measurement

Quote:

So, now I feel that the method for the TEM01 quest should be reconsidered.

If we have any unbalanced resonance for the phase modulation sidebands, the offset of the error signal is to be observed even with the carrier exactly at the resonance. We don't need to shake or move the cavity mirrors.

Presence of the MC makes things more complicated. Changing the frequency of the modulation that should go through the MC is a bit tricky as the detuning produces FM-AM conversion, i.e. the beam incident on the arm cavity may be not only phase modulated but also amplitude modulated. This makes the measurement of the offset described above difficult.

The setup of the abs length measurement (FSR measurement) will be easily used for the measurement of the transverse mode spacings. But it needs some more time to be realized.


We should be able to see 166MHz sideband resonances using the double demodulated photodetectors. With these, the 33MHz sidebands will be acting as LO when the 166MHz sideband (or mode) resonates. Some modeling may be necessary to determine if the SNR will be good enough to make this worthwhile, however.
  732   Thu Jul 24 03:08:20 2008   rob   Update   Locking   +f2 DRMI+2ARMS

rob, john, yoichi

Tonight we tried to move the 166MHz (f2) sideband frequency by changing the settings on the Marconi. Reducing the frequency by 4kHz reduced the amplitude of the 166MHz sidebands, but we were still able to lock the DRMI with the +-f2 sidebands by electronically compensating for the gain decrease, and also to lock the DRMI+2ARMs while resonating the -f2 sideband. No luck with the +f2.

Then, on a lark, we tried increasing the frequency by 4kHz, which ~doubled the f2 sideband transmission through the MC. This means our frequencies/MC length have been mismatched for months. Apparently I had explained away the level of the f2 sidebands by just imagining that I (or someone) had set the modulation depth at that level some time in the past.

It's a miracle any locking worked at all in this state. Once this was done and we worked out a few kinks in the script, adjusting some gains to compensate, we managed to get the DRMI+2ARMS to lock a couple of times while resonating the +f2 sideband. It takes a while, but at least it happens. Tomorrow we'll measure the length of the mode cleaner properly and then try again. No need to vent just yet.
  751   Mon Jul 28 23:41:07 2008   rob   Configuration   PSL   FSS/MC gains twiddled

I found the FSS and MC gain settings in a weird state. The FSS was showing excess PC drive and the MC wouldn't lock--even when it did, the boost stage would pull it off resonance. I adjusted the nominal FSS gains and edited the mcup and mcdown scripts. The FSS common gain goes to 30dB, Fast gain to 22dB, and MCL gain goes to 1 (which puts the crossover back around ~85 degrees where phase rises above 40 degrees).
  752   Tue Jul 29 01:03:17 2008   rob   Configuration   IOO   MC length measurement
rob, yoichi

We measured the length of the mode cleaner tonight, using a variant of the Sigg-Frolov method. We used c1omc DAC outputs to inject a signal (at 2023Hz) into the AO path of the mode cleaner and another at DC into the EXT MOD input of the 166MHz IFR2023A. We then moved an offset slider to change the 166MHz modulation frequency until we could not see the 2023Hz excitation in a single-bounce REFL166.

This technique could actually be taken a step further if we were really cool--we could actually demodulate the signal at 2023Hz and look for a zero crossing rather than just a powerspec minimum.

In any case, we set the frequency on the Marconi by looking at the frequency counter when the Marconi setting+EXT MOD input were correct, then changed the Marconi frequency to be within a couple of Hz of that reading after removing the EXT MOD input. We then did some arithmetic to set the other Marconis.

The new f2 frequency is:

New (Hz)         Old (Hz)
--------------------------
165983145        165977195
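
The arithmetic for the other Marconis is just harmonics of f2/5. Assuming they stay at the same 1x/4x/6x multiples as before (that ratio isn't spelled out in this entry, so treat these as illustrative numbers rather than the settings we actually dialed in):

echo "165983145/5"   | bc     # 33196629
echo "165983145/5*4" | bc     # 132786516
echo "165983145/5*6" | bc     # 199179774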

  756   Tue Jul 29 14:38:02 2008   rob   Update   SUS   ETMY and PRM have EQ related problems

Quote:
The attached trend shows that ETMY and PRM both had large steps in their sensors
around the time of the EQ and didn't return afterwards. The calibration of the
OSEM sensors is ~0.5 mm/V. The PRM sensors respond when we give it huge biases
but there is very little change in the ETMY. Almost certainly true that the
optics have shifted in their wire slings and that we will have to vent to
examine and repair at least ETMY.

Jenne is looking at the spectra of the other suspensions to see if there are other, more subtle issues.


Some additional notes/update:

ETMY, PRM, & MC2 had OSEM signals at a rail (indicating stuck optics). Driving the optics with full scale DAC output freed ETMY and MC2, so while these may have shifted in their slings it may be possible to avoid a repair vent. PRM is still stuck. One OSEM appears to respond with full range to large drives, but the other three face OSEMS remain disturbingly near the rail (HIGH, which is what would happen if a magnet fell off).
  757   Tue Jul 29 18:15:36 2008   rob   Update   IOO   MC locked

I used the SUS DRIFT MON screen to return the MC suspensions to near their pre-quake values. This required fairly large steps in the angle biases. Once I returned to the printed values on the DRIFT screen (from 3/08), I could see HOM flashes in the MC. It was then pretty easy to get back to a good alignment and get the MC locked.
  771   Wed Jul 30 15:28:08 2008   rob   Update   LSC   Y arm locked

By using a combination of the SUS-DRIFT mon screen and the optical levers (which turned out pretty well) I steered the BS, ITMY, and ETMY back to their previous positions, and was able to lock the Y arm. The "Restore Y Arm" script on the IFO_CONFIGURE screen works. I couldn't test the alignment script, as a dump truck/construction vehicle showed up and started unlocking the MC.
  848   Mon Aug 18 17:37:14 2008   rob   Update   Locking   recovery progress

I removed the beam block after the PSL periscope and opened the PSL shutter.

There was no MC Refl beam on the camera, so I decided to trust the PSL launch
and aligned the MC to the PSL beam. Here are the old and new values for
the MC angle biases:
 __Epics_Channel_Name______   __OLD_____    ___New___
 C1:SUS-MC1_PIT_COMM          4.490900        3.246900 
 C1:SUS-MC1_YAW_COMM          0.105500	      -0.912500
 C1:SUS-MC2_PIT_COMM          3.809700	      3.658600 
 C1:SUS-MC2_YAW_COMM          -1.837100	      -1.217100
 C1:SUS-MC3_PIT_COMM          -0.614200	      -0.812200
 C1:SUS-MC3_YAW_COMM          -3.696800	      -3.303800

After this, the beam looks a *little low* going into the Faraday Isolator.
Nonetheless, after turning on the IFO input steering PZTs, I was able to
quickly steer the PRM to get a beam on the REFL camera and into the REFL OSA.
The PRM optical lever beam is also striking the quad.

I then used the ETMX optical lever as a reference for realigning. After
steering around the input PZTs and ITMX, I saw some flashes in Xarm trans, then got
it locked and ran the alignment script ~5 times. The arm power went
up to 0.9, so I tweaked the MC1 to put the MC refl beam back on MCWFS.
The XARM power then went up to .96. Good enough for now.

Then I started to try and re-align the YARM. Since the oplevs for both ITMY
and the BS are untrustworthy, I first tried to get the beam bouncing off ITMX
and the BS back into the AS OSA, to try and recover some BS alignment. This
didn't work, as the AS OSA may not be a good reference anyways. After
wandering around in the dark for a little while, I decided to try an automated
scan of the alignment space. I used the trianglewave script to scan
the angle biases of BS, ITMY, & ETMY, then looked at the trend of the transmitted
power to find the gps time when there were flashes. I then used
time_machine_conlog to restore the biases to that time. This was close
enough to easily recover the alignment. After several rounds of aligning &
centering oplevs, things look good.
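
The restore step is just a time_machine_conlog call of the sort described in entry 615 -- schematically, with a made-up gps time and an illustrative channel regexp:

time_machine_conlog 903020000 gps "C1:SUS-(BS|ITMY|ETMY)_(PIT|YAW)_COMM"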

Also locked a PRM. Will work on the DRM tomorrow.

I'm leaving the optics in their "aligned" states over night, so they can
start their "training."

Note: The MC is not staying locked. Needs investigation.

For tomorrow:

lock up the DRM
fix the mode cleaner
re-align mode cleaner to optimize beam through Faraday
re-align all optics again (will be much easier than today)
re-align beam onto all PDs after good alignment of suspended optics is established.
Attachment 1: flatlissa.png
  862   Wed Aug 20 13:23:32 2008   rob   Update   Locking   DRMI locked

I was able to lock the DRMI this afternoon. All the optical levers have been centered.
  952   Wed Sep 17 12:55:28 2008   rob   Configuration   IOO   MC length
I measured the mode cleaner length last night:

SR620                Marconi
                     199178070
165981524            165981725
                     132785380
                      33196345


I did the division in Marconi-land, rather than SR620-land.
If someone wants to do this in SR620-land, feel free to do it and post the numbers.
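
Since the Marconi column is an exact 6:5:4:1 set of harmonics of 33196345 Hz, the corresponding straight division on the SR620 number would give the following (just the raw arithmetic, not a claim about which column is right):

echo "scale=1; 165981524/5"   | bc     # 33196304.8
echo "scale=1; 165981524*4/5" | bc     # 132785219.2
echo "scale=1; 165981524*6/5" | bc     # 199177828.8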
  953   Wed Sep 17 12:58:12 2008   rob   Update   Locking   bad

Locking was pretty unsuccessful last night. All the subparts were locked (ARMs, PRM, DRM) and
aligned, but no DRMI+2ARMs locks. The alignment may have drifted significantly by the time I
got around to working the full shebang, however.

We should get back into the habit of clicking the
yellow "Restore last auto-alignment" button when we finish using the interferometer.
  961   Thu Sep 18 01:14:23 2008   rob   Summary   Computers   EPICS BAD

Somehow the EPICS system got hosed tonight. We're pretty much dead in the water till we can get it sorted.

The alignment scripts were not working: the SUS_[opt]_[dof]_COMM CA clients were having consistent network failures.
I figured it might be related to the network work going on recently--I tried rebooting the c1susaux (the EPICS VME
processor in 1Y5 which controls all the vertex angle biases and watchdogs). This machine didn't come back after
multiple attempts at keying the crate and pressing the reset button. All the other cards in the crate are displaying
red FAIL lights. The MEDM screens which show channels from this processor are white. It appears that the default
watchdog switch position is OFF, so the suspensions are not receiving any control signals. I've left the damping loops
off for now. I'm not sure what's going on, as there's no way to plug in a monitor and see why the processor is not coming up.

A bit later, the c1psl also stopped communicating with MEDM, so all the screens with PSL controls are also white. I didn't try
rebooting that one, so all the switches are still in their nominal state.
  975   Mon Sep 22 12:06:58 2008   rob   Update   SUS   ITMY UL OSEM


Last week I found the ITMY UL OSEM dead. I went around and checked the connections on the various flat ribbon cables
in the suspension control chain; pushing hard on the rack end of the long cable that goes from the sus electronics rack to the
ITMY sat amplifier fixed the problem. It's been fine since then.

NB: A visual inspection of the cable connection would not have revealed a problem. You just can't trust those flat
ribbon connectors with the hook latches.
  985   Tue Sep 23 13:25:07 2008   rob   Update   Locking   a bit better
I've been spending time working on the short DOF loops (PRC,MICH,SRC) in an attempt to make the
initial stage of lock acquisition (the DRMI+2ARMs, no spring) better. This seems to have been
largely successful, as last night there were several locks of the DRMI+2ARMs with pretty short
wait times.

The output matrix for the short DOFs is a bit strange, though. The MICH->PRM element is about
3 times too small, which seems to indicate something broken in hardware. The MICH->SRM element
seems normal, though, which suggests the BS isn't broken--either the PRM has had a sudden
actuation increase or it's a problem with the sensing.
  998   Fri Sep 26 16:08:39 2008   rob   Update   Locking   some progress
There's been good progress in locking the last couple of nights. A lot of time was wasted before I found that
all the SUS{POS,PIT,YAW} damping gains on the SRM were set to 0.1 for some reason, which let it get rung up
just a bit during bang locking. After setting these gains to 0.5 (similar to PRM and BS), the initial lock
acquisition of DRMI+2ARMs (nospring) got much quicker. Then more time was wasted by sticky sliders on the
transmon QPD whitening gain, causing the Schmitt triggered HI/LO gain PD switch not to happen. This meant
that the arm power was not reported properly when the CARM offset was reduced, and so loop gain normalizations
were not working properly. After all this, by the end of the night last night, we reduced the CARM offset such
that stored power in the arms was about half of the max. Should be able to get to full power with another
good night, and then back to springy locking.
  1005   Mon Sep 29 13:23:40 2008   rob   Summary   PSL   Laser chiller running a little hot

Quote:
I looked at it some last night and my suspicion was the ISS. Whenever the ISS switch came on the FAST got a kick.

We should try to disable the MC locking and ISS and see if the FSS/PMC/MZ are stable this way. If so this may be
a problem with the ISS / Current Shunt.


My entry about the laser chiller got deleted. The PSL appears to be running with the ISS gain at -5dB, so that's good, but the
chiller is still showing 21+ degrees. It should be at twenty, so there's something causing it to run out of
headroom. We'll know more once Yoichi has inspected the ISS.

In the deleted entry I noted that the VCO (AOM driver), which is quite warm, has been moved much closer to the MOPA.
This may be putting some additional load on the chiller (doubtful given the amount of airflow with the HEPAs on,
but it's something to consider).
  1009   Tue Sep 30 13:43:43 2008   rob   Update   Locking   last night
Steady progress again in locking again last night. Initial acquisition of DRMI+2ARMs was working well.
Short DOF handoff, CARM->MCL, AO on PO_DC, and power ramping all worked repeatedly, in the cm_step script.
This takes us to the point where the common mode servo is handed off to an RF signal and the CARM offset
is reduced to zero. This last step didn't work, but it should just require some tweaking of the gains
during the handoff.
  1014   Wed Oct 1 02:54:03 2008   rob   Update   Locking   bad

Tried the spring-y side tonight with a discouraging lack of progress. There were several locks of DRMI+2ARMs with
the +f2 (springy) sideband resonating in the DRM, but they weren't very stable. Moving to just the DRMI and resonating
the +f2, in order to tune up the acquisition and the handoff to the double demod signals, revealed the problem that the
DRM just won't stay locked on the +f2 sideband. It locks quickly, but only for a few seconds. This is different from the
behaviour with the -f2 sideband, which locks quickly and stably. In theory, the two sidebands should behave similarly.
It could be problems with HOMs in the recycling cavities, and so we may try changing the modulation frequency slightly.
  1019   Thu Oct 2 02:45:50 2008   rob   Update   Locking   marginally better
Locking the DRMI with the +f2 sideband was marginally better tonight. I was able to get it to lock stably enough to take transfer
functions and handoff MICH & PRC to double demod signals. After re-alignment, however, behaviour was similar to last night
(locks quickly but only for a few seconds), so that lends some credence to HOM-as-bad-guy theories.
  1023   Fri Oct 3 15:09:58 2008   rob   Update   PSL   FAST/SLOW

Last night during locking, for no apparent reason (no common mode), the PSL FAST/SLOW loop started going just a little
nutz. Attached is a two day plot. The noisy period started around 11-ish last night.
Attachment 1: FASTSLOW.png
  1024   Fri Oct 3 15:57:05 2008   rob   Update   Locking   last night, again
Last night was basically a repeat of the night before--marginally better locking with the DRMI resonating the +f2
sideband. Several stable locks were achieved, and several control handoffs to DDM signals worked, but never from
lock to lock--that is, a given DD handoff strategy would only work once. This really needs to work smoothly before
more progress can be made.

Also, a 24Hz mode got rung up in one/several of the suspensions--this can also impede the stability of locks.
  1025   Fri Oct 3 19:38:02 2008   rob   Metaphysics   Environment   The Gatekeeper

Found this lady outside the door of the 40m lab a few nights ago.
Attachment 1: DSC_0409.JPG
  1038   Fri Oct 10 00:34:52 2008   rob   Omnistructure   Computers   FEs are down

The front-end machines are all down. Another cosmic-ray in the RFM, I suppose. Whoever comes in first in the morning should do the all-boot described in the wiki.
  1125   Mon Nov 10 11:06:09 2008   rob   HowTo   IOO   mode cleaner locked

I found the mode cleaner unlocked, with (at least) MC1 badly mis-aligned. After checking the coil alignment biases and finding everything there looking copasetic, I checked the trends of SUS{PIT,YAW,POS} and found that both MC1 and MC3 took a step this morning. The problem turned out to be loose/jiggled cables at the satellite amplifiers for these suspensions. Giving them a good hard push to seat them restored the alignment and the mode cleaner locked right up.
  1126   Mon Nov 10 11:32:49 2008   rob   Update   Computers   c1iscex rebooted

it was running a few cycles late
  1165   Mon Dec 1 15:09:27 2008   rob   Update   PEM   half-micron particle count is alarming
  1198   Sat Dec 20 23:37:43 2008   rob   Omnistructure   General   Saturday Night Fever after presumed power failure

Just came by to pick something up...

... alarm handlers screeching...

... TP1 failure--closing V1... call Steve... Steve says ok till tomorrow...

... all front ends down (red)...

... all suspensions watchdogged...

... all (I think) servos off...

... PSL shutter closed ...

... chiller at 15C ... I turned it off to prevent condensation in PA...

... MOPA shutter closed... turned off key on Lightwave power supply

... good luck all, and happy holidays!
  1218   Thu Jan 8 20:26:17 2009   rob   Omnistructure   General   Earthquake in San Bernardino
Magnitude 4.5
Date-Time

* Friday, January 09, 2009 at 03:49:46 UTC
* Thursday, January 08, 2009 at 07:49:46 PM at epicenter

Location 34.113N, 117.294W
Depth 13.8 km (8.6 miles)
Region GREATER LOS ANGELES AREA, CALIFORNIA
Distances

* 2 km (1 miles) S (183) from San Bernardino, CA
* 6 km (4 miles) NNE (25) from Colton, CA
* 8 km (5 miles) E (89) from Rialto, CA
* 88 km (55 miles) E (86) from Los Angeles Civic Center, CA

Location Uncertainty horizontal +/- 0.3 km (0.2 miles); depth +/- 0.8 km (0.5 miles)
Parameters Nph=142, Dmin=1 km, Rmss=0.38 sec, Gp= 14,
M-type=moment magnitude (Mw), Version=Q

I felt it from home.

All the watchdogs are tripped, vacuum normal. It looks like all the OSEM sensor values are swinging, so presumably no broken magnets. I'm leaving the suspensions off so we can take fine-res spectra overnight.

Watch out for crappy cables coming loose.
  1222   Mon Jan 12 10:57:38 2009   rob   Update   General   some stuff

The AS beam was not hitting the AS166 diode, so I aligned the last little steering mirror and adjusted the phase for MICH locking.

I turned on the HV supplies for the OMC.

Then I realigned the beam onto the AS166 diode, since the steering mirrors came on when I turned on the HV supplies.

It took a while to find the alignment of the beam into the OMC. Once that was done, the output beam alignment was set, so I aligned onto the AS166 diode a third time.

The bottom two Sorensens in the OMC voltage supply don't look right. They have stickers that say +-24V, but each is sitting at 17.5V and showing no current draw. What's going on here?
  1224   Tue Jan 13 11:10:42 2009   rob   Configuration   Computers   conlogger restarted
unknown how long it's been down.
  1232   Fri Jan 16 11:33:59 2009   rob   Configuration   DMF   DMF start script

Quote:
I tried to restart the DMF using the start_all script: http://dziban.ligo.caltech.edu:40/40m/280

it didn't work :(


It should work soon. The PATH on mafalda does not include ".", so I added a line to the start_DMF subscript, which sets up the DMF ENV, to prepend this to the path before starting the tools. I didn't put it in the primary login path (such as in the .cshrc file) because Steve objects on philosophical grounds.

Also, the epics tools in general (such as tdsread) on mafalda were not working, due to PATH shenanigans and missing caRepeaters. Yoichi is harmonizing it.
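
The added line is presumably something along these lines (csh syntax assumed, since the entry mentions .cshrc; the exact wording in start_DMF isn't reproduced here):

setenv PATH .:${PATH}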
  1303   Sat Feb 14 16:15:19 2009   rob   Configuration   Computers   c1susvme1

c1susvme1 is behaving weirdly.  I've restarted it several times but its computation time is hanging out around 260 usec, making it useless for suspension control and locking.  I also found a PS/2 keyboard plugged in, which doesn't work, so I unplugged it.  It needs to be plugged into a PS/2 keyboard/mouse Y-splitter cable. 

  1304   Sat Feb 14 16:53:26 2009   rob   Update   LSC   Locking status

Quote:
Yoichi, Jenne, Alberto, Rob

Last night, the locking proceeded until the CARM -> MC_L hand-off.
However, the MC_F gets saturated (as expected) and the IFO loses lock soon after the hand-off.
So we need to offload MC_F.
We ran the offloadMCF script, but it did not work, i.e. it just sat waiting for the CARM mode.
Looks like an EPICS flag is not set right.


I found a '$<' in the offloadMCF script. I don't know precisely what that construct means, but I think it caused the script to wait for input when it shouldn't. It probably got in there accidentally. We need to be careful when we're opening scripts just to look at how they work that we don't accidentally change them. I like to use the command 'less' for this purpose.
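
In csh, $< does in fact expand to a line read from standard input, so it would make a script sit and wait for a keypress; a minimal illustration:

echo "press enter to continue:"
set dummy = "$<"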

With this gone, the script worked properly, although the lock didn't last long. I don't know if the next stage in the process is failing or if it's just a bit too noisy in the afternoon. I didn't get a chance to do much testing since the sus controller (susvme1) went nuts. In retrospect, this could be due to something in the script, so maybe we should try a burt restore to Friday afternoon next time someone wants to look at it.
  1311   Mon Feb 16 16:26:29 2009   rob   Update   Computers   medm directory wiped on nodus

Quote:

Quote:
I accidentally did an 'rm -rf' on the medm directory in nodus, instead of on my laptop as was intended.

I then did an svn checkout. So everything should be current as of the last update, but I am sure that
we have not done a checkin on all of the latest screen enhancements. So...we may have to revert to the
Sunday morning tar to get the latest changes back.


Indeed, some changes to the medm directory I made were lost.
It was my fault not to check-in those changes.
I asked Alan to restore the directory from the daily rsync backup.
However, the backup job executed this morning has already overwritten the previous (good) backup with the current (bad) medm directory, which Rana restored from the svn. Alan will ask Stuart and Phil if there is still an older backup remaining somewhere.

Anyway, I realized that we should stop the backup cron job whenever you think you made a mistake in the /cvs/cds/ directory, to prevent unwanted overwriting.
The procedure is:
(1) Login to fb40m
(2) Type 'crontab -e'. Emacs will open up in the terminal.
(3) Comment out the backup job (insert # at the beginning of the line containing /cvs/cds/caltech/scripts/backup/rsync.backup ).
(4) Save the file (Ctrl-x Ctrl-s) and exit (Ctrl-x Ctrl-c).

I will post this information on the wiki.


We should change the rsync script so that it does not delete stuff. Maybe it can keep deleted stuff for 6 months or something.
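
For reference, the crontab line to comment out on fb40m looks something like this (the schedule fields here are placeholders; only the script path comes from the procedure quoted above):

# 30 4 * * * /cvs/cds/caltech/scripts/backup/rsync.backup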
  1343   Fri Feb 27 13:49:19 2009   rob   Update   Locking   thurs night

Could not get past arm power of ~11 or so.  I was suspicious of the transmon high-gain/low-gain PD handover, so I ran the matchTransMon scripts, but that did not help.  I also removed the line in the cm_step script that increased the CM gain by 18dB at an arm power of 4.  The gain of the CM servo will increase naturally as the power in the IFO builds up, so it may not be good to crank it right away.  I tried several other CM gains, and watched the DARM loop, but still could not get past an arm power of ~10-11.  I'm not sure what's wrong, but it may be that mysterious CM-servo/McWFS conspiracy, so we can try turning down the McWFS gain next time.

  1465   Thu Apr 9 23:11:27 2009   rob   Summary   Locking   Laser PM to PO-DC transfer functions at multiple CARM offsets

I've plotted some transfer functions showing the response at POB-DC to laser frequency (phase) noise.  There are transfer functions for multiple CARM offsets.  Basically, the transfer function looks like the DARM transfer function when the CARM is at zero offset, and is super-wonky elsewhere.  POB-DC is not a good CARM signal for intermediate stages of lock acquisition in a dual-recycled interferometer.  We should look into switching back to REFL-DC.

 

Attachment 1: CARMoffs1.png
Attachment 2: CARMoffs2.png
Attachment 3: CARMcarpet.png