ID   Date   Author   Type   Category   Subject
  2144   Mon Oct 26 18:15:57 2009   rob   Update   IOO   MC OLG

I measured the mode cleaner open loop gain.  The unity gain frequency is around 60 kHz, with 29 degrees of phase margin.

  2148   Tue Oct 27 01:45:02 2009   rob   Update   Locking   MZ

Quote:
Tonight we also encountered a large peak in the frequency noise around 485 Hz. Changing the MZ lock point (the spot in the PZT range) solved this.


This again tonight.

It hindered the initial acquisition, and made the DD signal handoff fail repeatedly.
  2151   Tue Oct 27 18:01:49 2009   rob   Update   PSL   hmmm

A 30-day trend of the PCDRIVE from the FSS.

Attachment 1: pcdrive_trend.png
  2152   Tue Oct 27 18:19:14 2009   rob   Update   Locking   bad

Quote:

Lock acquisition has gone bad tonight. 

The initial stage works fine, up through handing off control of CARM to MCL.  However, when increasing the AO path (analog gain), there are large DC shifts in the C1:IOO-MC_F signal.  Eventually this causes the Pockels cell in the FSS loop to saturate, and lock is lost. 

 This problem has disappeared.  I don't know what it was. 

The first plot shows one of the symptoms.  The second plot is a similar section taken from a more normal acquisition sequence the night before.

All is not perfect, however, as now the handoff to RF CARM is not working.

Attachment 1: MCF_issue.png
Attachment 2: no_MCF_issue.png
  2154   Wed Oct 28 05:02:28 2009   rob   Update   Locking   back

LockAcq is back on track, with the full script working well.  Measurements in progress.

  2162   Thu Oct 29 21:51:07 2009   rob   Update   Locking   bad

Quote:

Quote:

Lock acquisition has gone bad tonight. 

The initial stage works fine, up through handing off control of CARM to MCL.  However, when increasing the AO path (analog gain), there are large DC shifts in the C1:IOO-MC_F signal.  Eventually this causes the Pockels cell in the FSS loop to saturate, and lock is lost. 

 This problem has disappeared.  I don't know what it was. 

The first plot shows one of the symptoms.  The second plot is a similar section taken from a more normal acquisition sequence the night before.

All is not perfect, however, as now the handoff to RF CARM is not working.

 

The problem has returned.  I still don't know what it is, but it's making me angry. 

Attachment 1: itsback.png
  2163   Fri Oct 30 04:41:37 2009   rob   Update   Locking   working again

I never actually figured out exactly what was wrong in entry 2162, but I managed to circumvent it by changing the time sequence of events in the up script, moving the big gain increases in the common mode servo to the end of the script.  So the IFO can be locked again.

  2221   Mon Nov 9 18:32:38 2009   rob   Update   Computers   OMC FE hosed

It won't start--it just sits at Waiting for EPICS BURT, even though the EPICS is running and BURTed.

 

[controls@c1omc c1omc]$ sudo ./omcfe.rtl
cpu clock 2388127
Initializing PCI Modules
3 PCI cards found
***************************************************************************
1 ADC cards found
        ADC 0 is a GSC_16AI64SSA module
                Channels = 64
                Firmware Rev = 3

***************************************************************************
1 DAC cards found
        DAC 0 is a GSC_16AO16 module
                Channels = 16
                Filters = None
                Output Type = Differential
                Firmware Rev = 1

***************************************************************************
0 DIO cards found
***************************************************************************
1 RFM cards found
        RFM 160 is a VMIC_5565 module with Node ID 130
***************************************************************************
Initializing space for daqLib buffers
Initializing Network
Waiting for EPICS BURT


  2222   Mon Nov 9 19:04:23 2009   rob   Update   Computers   OMC FE hosed

Quote:

It won't start--it just sits at Waiting for EPICS BURT, even though the EPICS is running and BURTed.

 

[controls@c1omc c1omc]$ sudo ./omcfe.rtl
cpu clock 2388127
Initializing PCI Modules
3 PCI cards found
***************************************************************************
1 ADC cards found
        ADC 0 is a GSC_16AI64SSA module
                Channels = 64
                Firmware Rev = 3

***************************************************************************
1 DAC cards found
        DAC 0 is a GSC_16AO16 module
                Channels = 16
                Filters = None
                Output Type = Differential
                Firmware Rev = 1

***************************************************************************
0 DIO cards found
***************************************************************************
1 RFM cards found
        RFM 160 is a VMIC_5565 module with Node ID 130
***************************************************************************
Initializing space for daqLib buffers
Initializing Network
Waiting for EPICS BURT


 

From looking at the recorded data, it looks like the c1omc started going funny on the afternoon of Nov 5th, perhaps as a side-effect of the Megatron hijinks last week.

 

It works when megatron is shut down.

  2287   Tue Nov 17 21:21:30 2009   rob   Update   SUS   ETMY UL OSEM

Had been disconnected for about two weeks.  I found a partially seated 4-pin LEMO cable coming from the OSEM PD interface board. 

  2309   Fri Nov 20 16:18:56 2009   rob   Configuration   SUS   watchdog rampdown

I've changed the watchdog rampdown script so it brings the SUS watchdogs to 220, instead of the 150 it previously targeted.  This is to make tripping less likely with the jackhammering going on next door.  I've also turned off all the oplev damping.
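
For reference, a minimal sketch of what such a rampdown could look like with pyepics is below; the optic list, channel-name pattern, and step size are assumptions for illustration, not the contents of the actual script.

# Hypothetical watchdog rampdown: step each suspension's trip threshold
# down to the new target of 220. The channel pattern C1:SUS-<OPTIC>_WD_MAX
# is an assumption.
import time
from epics import caget, caput

OPTICS = ["MC1", "MC2", "MC3", "BS", "ITMX", "ITMY", "PRM", "SRM", "ETMX", "ETMY"]
TARGET = 220
STEP = 10        # counts removed per step
DT = 1.0         # seconds between steps

for optic in OPTICS:
    channel = "C1:SUS-%s_WD_MAX" % optic
    level = caget(channel)
    while level is not None and level > TARGET:
        level = max(TARGET, level - STEP)
        caput(channel, level)
        time.sleep(DT)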

  2325   Wed Nov 25 03:05:15 2009   rob   Update   Locking   Measured MC length

Quote:

What I meant was the VCO driver, not the FSS box.

As for the frequency, all written numbers were the Marconi displays.
The number on the frequency counter was also recorded, and so will be added to the previous entry shortly... 

Quote:

I propose that from now on, we indicate in the elog what frequencies we're referring to. In this case, I guess its the front panel readback and not the frequency counter -- what is the frequency counter readback? And is everything still locked to the 10 MHz from the GPS locked Rubidium clock?

Plus, what FSS Box? The TTFSS servo box? Or the VCO driver? As far as I know, the RC trans PD doesn't go through the FSS boxes, and so its a real change. I guess that a bad contact in the FSS could have made a huge locking offset.

 

 

Locking has gone sour.  The CARM to MCL handoff, which is fairly early in the full procedure and usually robust, is failing reliably. 

As soon as the SUS-MC2_MCL gain is reduced, lock is broken.  There appears to be an instability around 10Hz.  Not sure if it's related.

  2332   Wed Nov 25 14:29:08 2009   rob   Update   Locking   Measured MC length--FSS trend

Quote:

Quote:

What I meant was the VCO driver, not the FSS box.

As for the frequency, all written numbers were the Marconi displays.
The number on the frequency counter was also recorded, and so will be added to the previous entry shortly... 

Quote:

I propose that from now on, we indicate in the elog what frequencies we're referring to. In this case, I guess its the front panel readback and not the frequency counter -- what is the frequency counter readback? And is everything still locked to the 10 MHz from the GPS locked Rubidium clock?

Plus, what FSS Box? The TTFSS servo box? Or the VCO driver? As far as I know, the RC trans PD doesn't go through the FSS boxes, and so its a real change. I guess that a bad contact in the FSS could have made a huge locking offset.

 

 

Locking has gone sour.  The CARM to MCL handoff, which is fairly early in the full procedure and usually robust, is failing reliably. 

As soon as the SUS-MC2_MCL gain is reduced, lock is broken.  There appears to be an instability around 10Hz.  Not sure if it's related.

 Five day minute trend.  FAST_F doesn't appear to have gone crazy.

Attachment 1: FSStrendpowerjump.png
  2333   Wed Nov 25 15:38:08 2009   rob   Update   Locking   Measured MC length

Quote:

Quote:

What I meant was the VCO driver, not the FSS box.

As for the frequency, all written numbers were the Marconi displays.
The number on the frequency counter was also recorded, and so will be added to the previous entry shortly... 

Quote:

I propose that from now on, we indicate in the elog what frequencies we're referring to. In this case, I guess its the front panel readback and not the frequency counter -- what is the frequency counter readback? And is everything still locked to the 10 MHz from the GPS locked Rubidium clock?

Plus, what FSS Box? The TTFSS servo box? Or the VCO driver? As far as I know, the RC trans PD doesn't go through the FSS boxes, and so its a real change. I guess that a bad contact in the FSS could have made a huge locking offset.

 

 

Locking has gone sour.  The CARM to MCL handoff, which is fairly early in the full procedure and usually robust, is failing reliably. 

As soon as the SUS-MC2_MCL gain is reduced, lock is broken.  There appears to be an instability around 10Hz.  Not sure if it's related.

 Whatever the locking problem was, the power of magical thinking has forced it to retreat for now.  The IFO is currently locked, having completed the full up script.  One more thing for which to be thankful.

  2344   Sun Nov 29 16:56:56 2009   rob   AoG   all down cond.   sea of red

Came in, found all front-ends down.

 

Keyed a bunch of crates, no luck:

Requesting coeff update at 0x40f220 w/size of 0x1e44
No response from EPICS 

Powered off/restarted c1dcuepics.  Still no luck.

Powered off megatron.  Success!  Ok, maybe it wasn't megatron.  I also did c1susvme1 and c1susvme2 at this time.

 

BURT restored to Nov 26, 8:00am

 

But everything is still red on the C0_DAQ_RFMNETWORK.adl screen, even though the front-ends are running and synced with the LSC.  I think this means the framebuilder or the DAQ controller is the one in trouble--I keyed the crates with DAQCTRL and DAQAWG a couple of times, with no luck, so it's probably fb40m.    I'm leaving it this way--we can deal with it tomorrow.

  2353   Fri Dec 4 23:17:55 2009   rob   Update   oplevs   Oplevs centered, IP_POS and IP_ANG centered

Quote:

[Jenne Koji]

 We aligned the full IFO, and centered all of the oplevs and the IP_POS and IP_ANG QPDs.  During alignment of the oplevs, the oplev servos were disabled.

Koji updated all of the screenshots of 10 suspension screens.  I took a screenshot (attached) of the oplev screen and the QPD screen, since they don't have snapshot buttons.

We ran into some trouble while aligning the IFO.  We tried running the regular alignment scripts from the IFO_CONFIGURE screen, but the scripts kept failing and reporting "Data Receiving Error".  We ended up aligning everything by hand, and then did some investigating of the c1lsc problem.  With our hand alignment we got TRX to a little above 1 and TRY to almost 0.9.  SPOB got to ~1200 in PRM mode, and REFL166Q got high while in DRM (I don't remember the number).  We also saw a momentary lock of the full interferometer: on the camera view we saw the Y arm lock by itself momentarily, and at that same time TRX was above 0.5, so both arms were locked simultaneously.  We accepted this alignment as "good", and aligned all of the oplevs and QPDs.

It seems that C1LSC's front end code runs fine, and that it sees the RFM network, and the RFM sees it, but when we start running the front end code, the Ethernet connection goes away.  That is, we can ping or ssh c1lsc, but once the front end code starts, those functions no longer work.  During these investigations, we once pushed the physical reset button on c1lsc, and once keyed the whole crate.  We also did a couple rounds of hitting the reset button on the DAQ_RFMnetwork screen.

 A "Data Receiving Error" usually indicates a problem with the framebuilder/testpoint manager, rather than the front-end in question.  I'd bet there's a DTT somewhere that's gone rogue.

  2355   Sat Dec 5 14:41:07 2009   rob   AoG   all down cond.   sea of red, again

Taking  a cue from entry 2346, I immediately went for the nuclear option and powered off fb40m.  Someone will probably need to restart the backup script.

  2357   Sat Dec 5 17:34:30 2009   rob   Update   IOO   frequency noise problem

There's a large broadband increase in the MC_F spectrum.  I'm not totally sure it's real--it could be some weird bit-swapping thing.  I've tried soft reboots of c1susvme2 and c1iovme, which haven't helped.  In any case, it seems like this is preventing any locking success today.  Last night it was fine.

Attachment 1: mcf.png
  2359   Sat Dec 5 22:31:52 2009   rob   Update   IOO   frequency noise problem

Quote:

There's a large broadband increase in the MC_F spectrum.  I'm not totally sure it's real--it could be some weird bit-swapping thing.  I've tried soft reboots of c1susvme2 and c1iovme, which haven't helped.  In any case, it seems like this is preventing any locking success today.  Last night it was fine.

 Rebooting c1iovme (by keying off the crate, waiting 30 seconds, and then keying it back on and restarting) has resolved this.  The frequency noise is back to the 'usual' trace.

  2379   Thu Dec 10 09:51:06 2009   rob   Update   PSL   RCPID settings not saved

Koji, Jenne, Rob

 

We found that the RCPID servo "setpoint" was not in the relevant saverestore.req file, and so when c1psl got rebooted earlier this week, this setting was left at zero.  Thus, the RC got a bit chilly over the last few days.  This channel has been added. 

 

Also, RCPID channels have been added (manually) to conlog_channels. 

  2412   Mon Dec 14 13:17:33 2009   rob   Update   Treasure   We are *ROCKSTARS*! IFO is back up

 

 

Attachment 1: two-thumbs-up.jpeg
  2566   Wed Feb 3 09:01:42 2010   rob   Update   lore   IFO isn't playing nice tonight

Quote:

I checked the situation from my home and the problem was solved.

The main problem was the undefined state of the autolocker and the strange undefined switch states associated with the bootfest and burtrestore.

- MC UP/DOWN status shows it was up and down. So I ran scripts/MC/mcup and scripts/MC/mcdown. These cleared the MC autolocker status.

- I had a problem handling the FSS. After running mcup/mcdown above, I randomly pushed the "enable/disable" buttons and others, and for some reason that recovered it. Actually, it acquired lock autonomously. Kiwamu may have also been working on it at the same time???

- Then, I checked the PSL loop. I disconnected the loop by pushing the "test" button. The DC slider changed the PZT voltage over only 0 to +24 V. This was totally strange, and I started pushing the buttons randomly. As soon as I pushed the "BLANK"/"NORMAL" button, the PZT output came back under control.

- Then I locked the PMC, MZ, and MC as usual.

Alberto: You must be careful as the modulations were restored.

Quote:

[Jenne, Kiwamu]

It's been an iffy last few hours here at the 40m.  Kiwamu, Koji and I were all sitting at our desks, and the computers / RFM network decided to crash.  We brought all of the computers back, but now the RefCav and PMC don't want to lock.  I'm a wee bit confused by this.  Both Kiwamu and I have given it a shot, and we can each get the ref cav to sit and flash, but we can't catch it.  Also, when I bring the PMC slider rail to rail, we see no change in the PMC refl camera.  Since c1psl had been finicky coming back the first time, I tried soft rebooting, and then keying the crate again, but the symptoms remained the same.  Also, I tried burt restoring to several different times in the last few days, to see if that helped.  It didn't.  I did notice that MC2 was unhappy, which was a result of the burtrestores setting the MCL filters as if the cavity were locked, so I manually ran mcdown.  Also, the MC autolocker script had died, so Kiwamu brought it back to life.

Since we've spent an hour on trying to relock the PSL cavities (the descriptive word I'm going to suggest for us is persistent, not losers), we're giving up in favor of waiting for expert advice in the morning.  I suppose there's something obvious that we're missing, but we haven't found it yet......

 

 

This is a (sort of) known problem with the EPICS computers: it's generally called the 'sticky slider' problem, but of course it applies to buttons as well.  It happens after a reboot, when the MEDM control/readback values don't match the actual applied voltages.  The solution (so far) is just to `twiddle' the problematic sliders/buttons.  There's a script somewhere called slider_twiddle that does this, but I don't remember if it has PSL stuff in it.  A better solution is probably to have an individual slider twiddle script for each target machine, and add running that script to the reboot ritual in the wiki.
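
A minimal sketch of what such a per-target twiddle script could look like with pyepics; the channel names below are placeholders, not the actual c1psl list.

# Hypothetical "slider twiddle": nudge each setpoint away from its current
# value and back, so the MEDM value and the applied hardware voltage are
# forced back into agreement after a reboot.
import time
from epics import caget, caput

CHANNELS = [
    "C1:PSL-FSS_SLOWDC",    # placeholder channels for one target (c1psl)
    "C1:PSL-PMC_GAIN",
]

def twiddle(channel, delta=0.01, settle=0.5):
    # Step the setpoint by +delta, wait, then restore the original value.
    original = caget(channel)
    if original is None:
        print("%s: no response, skipping" % channel)
        return
    caput(channel, original + delta)
    time.sleep(settle)
    caput(channel, original)

for ch in CHANNELS:
    twiddle(ch)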

  2578   Mon Feb 8 15:01:46 2010   rob   Update   ABSL   Suddenly a much better alignment of PRC

Quote:

I just aligned PRM and locked PRC, and I noticed that SPOB is much higher than it used to be. It's now about 1800, vs. the 1200 it was at last week.

Was anyone involved in that? If so, may I please know how he/she did it?

 oops, my bad.  I cranked the 33MHz modulation depth and forgot to put it back.  The slider should go back to around 3. 

  2885   Thu May 6 11:34:35 2010   rob   Update   CDS   lsc.mdl and ifo.mdl to build (with caveats)

Quote:

I got around to actually trying to build the LSC and IFO models on megatron.  It turns out "ifo" can't be used as a model name and breaks the build.  I have a feeling it has something to do with the find-and-replace routines (ifo is used for the C1, H1, etc. type replacements throughout the code).  If you change the model name to something like ifa, it builds fine, though.  This does mean we need a new name for the ifo model.

Also learned that the model likes to have the cdsIPCx memory locations terminated on the inputs if the part is being used in an input role (i.e. it brings the channel into the model).  However, when the same part is being used in an output role (i.e. it transmits from the model to some other model), terminating the output side gives errors when you try to make.

It's using the C1.ipc file (in /cvs/cds/caltech/chans/ipc/) just fine.  If memory locations are missing from the C1.ipc file (i.e. you forgot to define something), it gives a readable error message at compile time, which is good.  The file seems to be parsed properly, so the era of writing "0x20fc" for block names is officially over.

 I suggest "ITF" for the model name.

  2945   Tue May 18 12:04:13 2010   rob   Update   IOO   First steps toward MC mode measuring

Quote:

Another note: Don't trust the PSL shutter and the switch on the MEDM screens! Always use a manual block in addition!!! We discovered upon closeup that hitting the "Closed" button, while it reads back as if the shutter is closed (with the red box around the buttons), does not in fact close the shutter.  The shutter is still wide open.  This must be fixed.

 Has anyone tried pushing the "reset" button on the Uniblitz driver?

  1659   Sat Jun 6 01:44:53 2009   rob   Update   Locking   ?

Lock acquisition is proceeding smoothly for the most part, but there is a very consistent failure point near the end of the cm_step script.

Near the end of the procedure, while in RF common mode, the sensing for the MCL path of the common mode servo is transitioned from a REFL 166I signal which comes into the LSC whitening board from the demodulator, to another copy of the signal which has passed through the common mode board, and is coming out of the Length output of the common mode board.  We do this because the signal which comes through the CM board sees the switchable low-frequency boost filter, and so both paths of the CM servo (AO and MCL) can get that filter switched on at the same time.

The problem is occurring after this transition, which works reliably.  However, when the script tries to remove the final CARM offset, and bring the offset to zero, lock is abruptly lost.  DARM, CM, and the crossover all look stable, and no excess noise appears while looking at the DARM, CARM, MCF spectra.  But lock is always lost right about the same offset. 

Saturation somewhere?

  2076   Fri Oct 9 16:36:13 2009   rob   Update   IOO   frequency noise problem

I used the XARM as a reference to measure the frequency noise after the MC.  It's huge around 4kHz--hundreds of times larger than the frequency noise the MC servo is actually squashing.  This presents a real problem for our noise performance.

An elog search reveals that this noise has been present (although not calibrated till now) for years.  We're not sure what's causing it, but suspicion falls on the piezojena input PZTs. 

I didn't bother too much about it before because we previously had enough common mode servo oomph to squash it below other DARM noises, and I didn't worry too much about stuff at 4 kHz.  Now that we have a weaker FSS and thus a much weaker CM servo, we can't squash it, and the most interesting feature of our IFO is at 4 kHz. 

I'll measure the actual voltage noise going to the PZTs.  I remember doing this before and concluding it was ok, but can't find an elog entry.  So this time maybe I'll  do it right.

Attachment 1: freqnoiseaftermc.png
  2172   Tue Nov 3 03:45:04 2009   rob   Update   IOO   frequency noise problem

Quote:

I used the XARM as a reference to measure the frequency noise after the MC.  It's huge around 4kHz--hundreds of times larger than the frequency noise the MC servo is actually squashing.  This presents a real problem for our noise performance.

An elog search reveals that this noise has been present (although not calibrated till now) for years.  We're not sure what's causing it, but suspicion falls on the piezojena input PZTs. 

I didn't bother too much about it before because we previously had enough common mode servo oomph to squash it below other DARM noises, and I didn't worry too much about stuff at 4 kHz.  Now that we have a weaker FSS and thus a much weaker CM servo, we can't squash it, and the most interesting feature of our IFO is at 4 kHz. 

I'll measure the actual voltage noise going to the PZTs.  I remember doing this before and concluding it was ok, but can't find an elog entry.  So this time maybe I'll  do it right.

 

This level of frequency noise has not changed, but we now have increased common mode servo gain and so it's not as huge of a deal, although we should still probably do something about it. 

 

Attached is a plot of the piezojena noise measurement, estimated into Hz, along with another measurement of frequency noise as described above. 

To get the piezojena voltage noise into Hz, I estimated the PZTs within have a flat 2 micron/V response (based on a rough knowledge of their geometry and assuming a 10 milliradian / 150V steering range).  This is the voltage noise with the PZTs operating in closed loop mode, which is how we normally run them.  This plot also ignores the transfer function of the Pomona box, as we are mainly looking at noise in the kHz band.  I think this plot shows that these PZTs are a good candidate for creating this frequency noise, especially near their mechanical resonances (the manual says the unloaded resonances are in the 3-4kHz range).   

I've been operating one DOF of the piezojenas in open loop mode for a couple of weeks now, and the feared drift has not been a problem at all.  If we plan to keep using these after the upgrade, we should definitely put some big resistors in series at the outputs and operate them in open loop mode.

Also attached is a plot of RF DARM noise, with a frequency noise spectrum.  That spectrum is a REFL 2I spectrum put into DARM units using a measured TF (driving MC_L and measuring REFL 2I and DARM_ERR), and then put into meters using the same DARM calibration as used for the DARM curve.
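
A minimal sketch of that calibration chain, with placeholder arrays standing in for the measured quantities; only the order of operations reflects the entry.

# Sketch of the REFL 2I -> DARM-meters calibration described above.
# All arrays/values are placeholders on a common frequency vector.
import numpy as np

freq = np.logspace(1, 4, 1000)              # Hz
refl2i_asd = np.ones_like(freq)             # REFL 2I spectrum [cts/rtHz]
tf_refl2i_per_mcl = np.ones_like(freq)      # |REFL 2I / MC_L drive|
tf_darmerr_per_mcl = np.ones_like(freq)     # |DARM_ERR / MC_L drive|
darm_cal_m_per_count = 1e-12                # placeholder DARM calibration

# Driving MC_L and measuring both signals gives DARM_ERR counts per
# REFL 2I count at each frequency.
darm_per_refl2i = tf_darmerr_per_mcl / tf_refl2i_per_mcl

# REFL 2I spectrum in DARM_ERR counts, then in meters via the DARM cal.
refl2i_in_darm_counts = refl2i_asd * darm_per_refl2i
refl2i_in_meters = refl2i_in_darm_counts * darm_cal_m_per_count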

Attachment 1: noise.png
Attachment 2: spectra.pdf
  1740   Mon Jul 13 23:03:14 2009   rob, alberto   Omnistructure   Environment   Removal of the cold air deflection device for the MOPA chiller

Quote:
Around 2 PM today, I removed the blue flap which has been deflecting the cold air from the AC down into the laser chiller.
Let's watch the laser trends for a few days to see if there's any effect.


Alberto has moved us to stage 2 of this experiment: turning off the AC.

The situation at the control room computers with the AC on minus the blue flap is untenable--it's too cold and the air flow has an unpleasant eye-drying effect.
  1741   Tue Jul 14 00:32:46 2009   rob, alberto   Omnistructure   Environment   Removal of the cold air deflection device for the MOPA chiller

Quote:

Quote:
Around 2 PM today, I removed the blue flap which has been deflecting the cold air from the AC down into the laser chiller.
Let's watch the laser trends for a few days to see if there's any effect.


Alberto has moved us to stage 2 of this experiment: turning off the AC.

The situation at the control room computers with the AC on minus the blue flap is untenable--it's too cold and the air flow has an unpleasant eye-drying effect.


I turned the AC back on because the temperature of the room was going up, and so was that of the laser chiller.
  1836   Wed Aug 5 15:33:05 2009   rob, alberto   DAQ   General   can't get trends

We can't read minute trends from either Dataviewer or loadLIGOData from before 11am this morning. 

 

fb:/frames>du -skh minute-trend-frames/
 106G   minute-trend-frames

So the frames are still on the disk.  We just can't get them with our usual tools (NDS).

 

 Trying to read 60 days of minute trends from C1:PSL-PMC_TRANSPD yields:

Connecting to NDS Server fb40m (TCP port 8088)
Connecting.... done
258.0 minutes of trend displayed
read(); errno=9
read(); errno=9
T0=09-06-06-22-34-02; Length=5184000 (s)
No data output.

 

Trying to read 3 seconds of full data works.

Second trends are readable after about 4am UTC this morning, which is about 9 pm last night.

 


  616   Tue Jul 1 16:48:42 2008   rob, john   Configuration   PSL   MZ servo switch problem resolved forever

Quote:
The C1:PSL-MZ_BLANK switch (to turn on/off the servo) is not working again. The switch is always off regardless of the EPICS state.
I pushed the cables into the Xycom card, but it did not fix the problem.


We have fixed this problem forever, by totally disabling this switch. Looking at the schematic for the MZ servo and the datasheet of the AD602, we found that a HI TTL on pin 4 disables the output of the AD602. Since the MZ servo was stuck in the off position, this seemed to indicate that it may be the XYCOM220 itself which is broken, constantly putting out a +5V signal regardless of the EPICS controls. We thought we might be able to get around this by disconnecting this signal at the cross-connect, but ultimately we couldn't find it because there is no wiring diagram for the Mach-Zehnder (!). So, we pulled the board and wired pin 9A of P1 to ground, permanently NORMALizing the MZ_BLANK switch. John has marked up the schematic, and someone should modify the MEDM screen and check the new screen into svn.

We can still turn the MZ servo on and off by using the test input 1 switch.

Someone also will need to modify the MZ autolocker to use the test input 1 (MZ_SW1) instead of the old MZ_BLANK.
  1611   Wed May 20 01:53:48 2009   rob, pete   Update   Locking   violin mode filters in drstep_bang

Recently the watch script was having difficulty grabbing a lock for more than a few seconds.  Rob discovered that the violin notch filters which were activated in the script were causing the instability.  We're not sure why yet.  The script seems significantly more stable with that step commented out.

  1622   Fri May 22 17:05:24 2009   rob, pete   Update   Computers   hard reboot of vertex suspension controllers

we did a hard reboot of c1susvme1, c1susvme2, c1sosvme, and c1susaux.  We are hoping this will fix some of the weird suspension issues we've been having (MC3 side coil, ITMX alignment).

  1654   Fri Jun 5 01:10:13 2009   rob, pete   Update   Locking   undermined

We were stymied early in the evening by a surreptitiously placed, verbo-visually obfuscated command in the drstep script. 

  1657   Fri Jun 5 16:45:28 2009   rob, pete   HowTo   Computers   tdsavg failure in cm_step script

Quote:

Quote:

the command

tdsavg 5 C1:LSC-PD4_DC_IN1

was causing grievous woe in the cm_step script.  It turned out to fail intermittently at the command line, as did other LSC channels.  (But non-LSC channels seem to be OK.)  So we power cycled c1lsc (we couldn't ssh).

Then we noticed that computers were out of sync again (several timing fields said 16383 in the C0DAQ_RFMNETWORK screen).  We restarted c1iscey, c1iscex, c1lsc, c1susvme1, and c1susvme2.  The timing fields went back to 0.  But the tdsavg command still  intermittently said "ERROR: LDAQ - SendRequest - bad NDS status: 13".

The channel C1:LSC-SRM_OUT16 seems to work with tdsavg every time.

Let us know if you know how to fix this. 

 

 Did you try restarting the framebuilder?

 

What you type is in bold:

op440m> telnet fb40m 8087

daqd> shutdown

 

Restarting the framebuilder didn't work, but the problem now appears to be fixed.

Upon reflection, we also decided to try killing all open DTT and Dataviewer windows.  This also involved liberal use of ps -ef to seek out and destroy all diag's, dc3's, framer4's, etc.

 

That may have worked, but it happened simultaneously to killing the tpman process on fb40m, so we can't be sure which is the actual solution.

 

To restart the testpoint manager:

what you type is in bold:

rosalba> ssh fb40m

fb40m~> pkill tpman

The tpman is actually immortal, like Voldemort or the Kurgan or the Cylons in the new BG.  Truly slaying it requires special magic, so the pkill tpman command has the effect of restarting it.

 

In the future, we should make it a matter of policy to close DTTs and Dataviewers when we're done using them, and to kill any unattended ones that we encounter.

 

  2224   Mon Nov 9 19:44:38 2009   rob, rana   Update   Computers   OMC FE hosed

 

We found that someone had set the name of megatron to scipe11. This is the same name as the existing c1aux in the op440m /etc/hosts file.

We did a /sbin/shutdown on megatron and the OMC now boots.

Please check to see that things are working right after playing with megatron, or else this will sabotage the DR locking and diagnostics.

  1621   Fri May 22 17:03:14 2009   rob, steve   Update   PSL   MOPA takes a holiday

The MOPA is taking the long weekend off.

Steve went out to wipe off the condensation inside the MOPA and found beads of water inside the NPRO box, perilously close to the PCB.  He then measured the water temperature at the chiller head, which is 6 C.  We decided to "reboot" the MOPA/chiller combo, on the off chance that would get things synced up.  Upon turning off the MOPA, the NESLAB chiller display immediately started displaying the correct temperature--about 6 C.  The 22 C number must come from the MOPA controller.  We thus tentatively narrowed down the possible space of problems to: a broken MOPA controller and/or a clog in the cooling line going to the power amplifier.  We decided to leave the MOPA off for the weekend, and start plumbing on Tuesday.  It is of course possible that the controller is the problem, but we think leaving the laser off over the weekend is the best course of action.

 

 

  9830   Fri Apr 18 14:00:48 2014   rolf   Update   CDS   mx_stream not starting on c1ioo

 

 To fix open-mx connection to c1ioo, had to restart the mx mapper on fb machine. Command is /opt/mx/sbin/mx_start_mapper, to be run as root. Once this was done, omx_info on c1ioo computer showed fb:0 in the table and mx_stream started back up on its own. 

  4697   Thu May 12 00:59:45 2011   ryan, rana   Update   PSL   Return of the PSL temperature box

The PSL temperature box has returned to service, with some circuit modifications. The 1k resistors on all the temp. sensor inputs (R3, R4, R7, R8, R12, R12) were changed to 0 Ohm. Also, the 10k resistors R26, R28, R29, and R30 were changed to 10.2k metal film. The DCC document will be updated shortly. There is now an offset in the MINCOMEAS channel compared to the others, which will be corrected in the morning after looking at the overnight trend.

  5071   Sat Jul 30 19:06:25 2011   ryan, rana   Update   PSL   Return of the PSL temperature box

Quote:

The PSL temperature box has returned to service, with some circuit modifications. The 1k resistors on all the temp. sensor inputs (R3, R4, R7, R8, R12, R12) were changed to 0 Ohm. Also, the 10k resistors R26, R28, R29, and R30 were changed to 10.2k metal film. The DCC document will be updated shortly. There is now an offset in the MINCOMEAS channel compared to the others, which will be corrected in the morning after looking at the overnight trend.

 Forgot to do this in May. Have just changed the values in the psl.db file now as well as updating them live via Probe.

To make the appropriate change, I took the measured offset (5.31 deg) and added 2x this to the EGUF and EGUL field for the MINCO_MEAS channel. (see instructions here)
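
As worked arithmetic (with made-up starting field values; only the 2 x 5.31 shift comes from this entry):

# Illustrative only: shift both calibration endpoints (EGUL, EGUF) of the
# MINCO_MEAS record by twice the measured offset.
measured_offset = 5.31            # deg C, from comparing the trends
shift = 2 * measured_offset       # = 10.62 deg C

old_EGUL, old_EGUF = 0.0, 100.0   # hypothetical original .db values
new_EGUL = old_EGUL + shift       # -> 10.62
new_EGUF = old_EGUF + shift       # -> 110.62
print(new_EGUL, new_EGUF)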

Committed the .db file to the SVN.

The attached plot shows 8 days of trend, with 5.31 degC added to the black trace using the XMGRACE Data Set Transformations.

Attachment 1: rctempbox.png
  3539   Tue Sep 7 23:17:45 2010   sanjit   Configuration   Computers   rossa notes

Quote:

* rossa needs to be able to move windows between monitors: Xinerama?

 Xinerama support has been enabled on rossa using nvidia-settings.

  3541   Tue Sep 7 23:49:08 2010   sanjit   Configuration   Computers   aldabella network configuration

 

added name server 192.169.113.20 as the first entry in /etc/resolv.conf

changed the host IPs in /etc/hosts to 192.168.xxx.yyy

made:

127.0.0.1 localhost.localdomain localhost

::1 localhost6.localdomain6 localhost6

as the first two lines of /etc/hosts

 

/cvs/cds mounts

On Ethernet, DNS look-up works without the explicit host definitions in /etc/hosts, but those entries are needed for a wifi-only connection.

 

  2043   Fri Oct 2 15:24:29 2009   sanjit, rana   Summary   IOO   mcwfs centered

We set the offsets for the MCWFS DC and demod outputs, then turned off the lights, put the MZ at half fringe, and centered the spots on the MCWFS heads.

The MCREFL beam looks symmetric again and the MC REFL power is low. 

Attachment 1: Untitled.png
  3071   Sat Jun 12 18:03:00 2010   sharmila   Update   elog   Temperature Controller

Kiwamu and I set up a serial-port terminal for receiving data from the TC200 via an RS-232 USB interface, using a Python script. Some command definitions still need to be added to read output from the TC200.
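
A minimal sketch of the kind of pyserial query involved; the port name, baud rate, and the "tact?" command string are assumptions to be checked against the TC200 manual.

# Hypothetical TC200 temperature query over the RS-232/USB adapter.
# Port, baud rate, and command syntax are assumptions, not verified values.
import serial

def read_tc200_temperature(port="/dev/ttyUSB0"):
    with serial.Serial(port, baudrate=115200, timeout=1) as tc:
        tc.write(b"tact?\r")      # assumed "actual temperature" query
        reply = tc.read(100)      # read whatever the controller sends back
        return reply.decode(errors="replace").strip()

print(read_tc200_temperature())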

  14001   Thu Jun 21 23:59:12 2018   shruti   Update   PEM   Seismometer temp control

We (Rana and I) are re-assembling the temperature controls on the seismometer to attempt PID control and then improve it using reinforcement learning.

We tried to re-assemble the connections for the heater and in-loop temperature sensor on the can that covers the seismometer.

We fixed (soldered) two of the connections from the heater circuit to the heater, but did not manage to get the PID working as one of the wires attached to the MOSFET had come off. Re-soldering the wire would be attempted tomorrow.

Equipment for undertaking all this is still left at the X-end of the interferometer and will be cleared soon.
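
A minimal sketch of the discrete PID loop being attempted; the EPICS channel names, gains, setpoint, and loop period below are placeholders, and the real setup may not go through EPICS at all.

# Hypothetical PID loop for the seismometer-can heater.
import time
from epics import caget, caput

KP, KI, KD = 1.0, 0.01, 0.0          # placeholder gains to be tuned
SETPOINT_C = 40.0                    # placeholder target can temperature
SENSOR = "C1:PEM-SEIS_CAN_TEMP"      # placeholder readback channel
HEATER = "C1:PEM-SEIS_HEATER_DRIVE"  # placeholder actuator channel
DT = 10.0                            # loop period [s]

integral = 0.0
last_err = 0.0
while True:
    err = SETPOINT_C - caget(SENSOR)
    integral += err * DT
    derivative = (err - last_err) / DT
    drive = KP * err + KI * integral + KD * derivative
    caput(HEATER, max(0.0, drive))   # heater can only add heat
    last_err = err
    time.sleep(DT)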

  14002   Fri Jun 22 00:06:13 2018   shruti   Update   General   over-head fluorescent lights down

Two out of the four overhead fluorescent lights in the X end of the interferometer were flickering today.

  14016   Mon Jun 25 22:27:57 2018   shruti   Update   PEM   Seismometer temp control - heater circuit

After removing all the clamping screws from the heater circuit board, I soldered the wire connecting IRF630 to the output of OP27, which had come off earlier. This can only be a temporary fix as the wire was not long enough to be able to make a proper solder joint. I also tried fixing two other connections which were also almost breaking.

After re-assembling everything, I found that one of the LEDs was not working. The most likely cause seems to be an issue with the LM791, the LM781, or the LED itself. Due to the positioning of the wires, I was unable to test them today, but will try again, possibly tomorrow.

Equipment used for this is still lying at the X end.

Quote:

We (Rana and I) are re-assembling the temperature controls on the seismometer to attempt PID control and then improve it using reinforcement learning.

We tried to re-assemble the connections for the heater and in-loop temperature sensor on the can that covers the seismometer.

We fixed (soldered) two of the connections from the heater circuit to the heater, but did not manage to get the PID working as one of the wires attached to the MOSFET had come off. Re-soldering the wire would be attempted tomorrow.

Equipment for undertaking all this is still left at the X-end of the interferometer and will be cleared soon.

  14030   Thu Jun 28 11:05:48 2018   shruti   Update   PEM   Seismometer temp control equipment

Earlier today I cleared up most of the equipment at the X end near the seismometer to make the area walkable. 

In the process, I removed the connections to the temperature sensor and placed the wires on top of the can.

  14979   Fri Oct 18 20:21:33 2019   shruti   Update   ALS   AM measurement attempt at X end

[Shruti, Rana]

- At the X end, we set up the network analyzer to begin measurement of the AM transfer function by actuation of the laser PZT.

- The lid of the PDH optics setup was removed to make some checks and then replaced.

- From the PDH servo electronics setup the 'GREEN_REFL' and 'TO AUX-X LASER PZT' cables were removed for the measurement and then re-attached after.

- The signal today was too low to make a real measurement of the AM transfer function, but the GPIB scripts and interfacing were tested. 
