  40m Log, Page 326 of 357
ID Date Author Type Category Subject
  1593   Sun May 17 14:35:52 2009 YoichiUpdateVACVC1 opened
I found the VC1 was closed and the pressure was 4.5e-3 torr.
I tweaked the optical sensor (cryopump temperature), and opened VC1.
  1592   Sat May 16 16:20:33 2009 robUpdateLSCarms, coils, locks, #2

Quote:

This is the two arms locked, for an hour.  No integrator in either loop, but from this it looks like ETMY may have a bigger length2angle problem than ETMX.  I'll put some true integrators in the loops and do this again.

 There appear to be at least two independent problems: the coil balancing for ETMY is bad, and something about ITMX is broken (maybe a coil driver). 

The Y-arm becomes significantly misaligned during long locks, causing the arm power to drop.  This misalignment tracks directly with the DC drive on ETMY.  Power returns to the maximum after breaking and re-establishing lock.

ITMX alignment wanders around sporadically, as indicated by the oplevs and the X-arm transmitted power.  Power returns to previous value (not max) after breaking and re-establishing lock.

Both loops have integrators.

Attachment 1: twoproblems.png
twoproblems.png
Attachment 2: coil_imbalanceETMY.png
coil_imbalanceETMY.png
Attachment 3: ITMXalignment.png
ITMXalignment.png
  1591   Fri May 15 17:30:00 2009 robUpdateLSCarms, coils, locks

This is the two arms locked, for an hour.  No integrator in either loop, but from this it looks like ETMY may have a bigger length2angle problem than ETMX.  I'll put some true integrators in the loops and do this again.

Attachment 1: armslock_no_int.png
armslock_no_int.png
  1590   Fri May 15 16:47:44 2009 josephbUpdateCamerasImproved camera code

At Rob's request I've added the following features to the camera code.

The camera server, which can be started on Ottavia by just typing pserv1 (for camera 1) or pserv2 (for camera 2), now has the ability to save individual JPEG snapshots, as well as to take a JPEG image every X seconds, as defined by the user.

The first text box is for the file name (i.e. ./default.jpg will save the file to the local directory and call it default.jpg).  If the camera is running (i.e. you've pressed start), pressing "Take Snapshot to" will take an image immediately and save it.  If the camera is not running, it will take an image as soon as you do start it.

If you press "Start image capture every X seconds", it will do exactly that.  The file name is the same as for the first button, but it appends a timestamp to the end of the file.

There is also a video recording client now.  It is accessed by typing "pcam1-mov" or "pcam2-mov".  The text box is for setting the file name.  It currently uses the open-source Theora encoder and the Ogg format (.ogm).  Totem is capable of reading this format (and I believe vlc is as well).  This can be run on any of the Linux machines.

The viewing client is still accessed by "pcam1" or "pcam2".

I'll try rolling out these updates to the sites on Monday.

The configuration files for camera 1 and camera 2 can be found by typing camera (which is aliased to cd /cvs/cds/caltech/apps/linux64/python/pcamera) and are called pcam1.ini, pcam2.ini, etc.
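
The periodic-capture behavior described above can be sketched in Python roughly as follows. This is an illustrative sketch, not the actual pcamera code; the injected take_snapshot callable and the timestamp-suffix naming convention are assumptions.

```python
import time

def timestamped_name(base, t):
    """Append a timestamp to the base file name (assumed convention),
    e.g. ./default.jpg -> ./default.jpg.1242000000"""
    return "%s.%d" % (base, int(t))

def capture_every(take_snapshot, base_name, interval, n_frames,
                  clock=time.time, sleep=time.sleep):
    """Save a snapshot every `interval` seconds, n_frames times.

    take_snapshot(filename) is whatever grabs a JPEG from the running
    camera server; it is injected so the loop itself stays testable.
    """
    names = []
    for _ in range(n_frames):
        name = timestamped_name(base_name, clock())
        take_snapshot(name)
        names.append(name)
        sleep(interval)
    return names
```

This mirrors the "Start image capture every X seconds" button: the same base file name as the single-snapshot button, with a timestamp appended to each saved file.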

 

  1589   Fri May 15 14:05:14 2009 DmassHowToComputersHow To: Crash the Elog

The Elog started crashing last night. It turns out I was the culprit: whenever I tried to upload a certain ~500 kB .png picture, it would die. It happened both when choosing "upload" of a picture and when choosing "submit" after successfully uploading one. Both culprits were ~500 kB .png files.

  1588   Fri May 15 00:02:34 2009 peteUpdateSUSETMX coils look OK

I checked the four rear coils on ETMX by exciting the XXCOIL_EXC channel in DTT with amplitude 1000 @ 500 Hz and observing the oplev PERROR and YERROR channels.  Each coil showed a clear signal in PERROR, about 2e-6 cts.  Anyway, the coils passed this test.

 

  1587   Thu May 14 16:07:20 2009 peteSummarySUSChannel Hopping: That ancient enemy (MC problems)

Quote:

Quote:
The MC side problem could also be the side tramp unit problem. Set the tramp to 0 and see if that helps.


This started around April 23, around the time that TP1 failed and we switched to the cryopump, and also when there was a mag 4 earthquake in LA. My money's on the EQ. But I don't know how.


I wonder if this is still a problem. It has been quiet for a day now. I've attached a day-long trend. Let's see what happens.
Attachment 1: mc3_5days.jpg
mc3_5days.jpg
  1586   Thu May 14 15:28:28 2009 steveSummarySUSApril 24 earthquake effect on MC2

Quote:

Quote:
The MC side problem could also be the side tramp unit problem. Set the tramp to 0 and see if that helps.


This started around April 23, around the time that TP1 failed and we switched to the cryopump, and also when there was a mag 4 earthquake in LA. My money's on the EQ. But I don't know how.



Only MC2 moved in this earthquake. Was the MC alignment touched up since then?
Have you guys swapped the satellite amp of MC3 yet?
Attachment 1: eq042409.jpg
eq042409.jpg
  1585   Thu May 14 02:36:05 2009 peteUpdateLockingunstable IFO

It seems that the MC3 problem is intermittent (one-day trend attached).  I tried to take advantage of a "clean MC3" night, but the watch script would usually fail at the transition to DC CARM and DARM.  It got past this twice and then failed later, during powering up.   I need to check the handoff.

 

Attachment 1: mc3.jpg
mc3.jpg
  1584   Thu May 14 00:15:39 2009 robSummarySUSChannel Hopping: That ancient enemy (MC problems)

Quote:
The MC side problem could also be the side tramp unit problem. Set the tramp to 0 and see if that helps.


This started around April 23, around the time that TP1 failed and we switched to the cryopump, and also when there was a mag 4 earthquake in LA. My money's on the EQ. But I don't know how.
Attachment 1: sidemon.png
sidemon.png
  1583   Wed May 13 21:15:04 2009 ranaSummarySUSChannel Hopping: That ancient enemy (MC problems)
The MC side problem could also be the side tramp unit problem. Set the tramp to 0 and see if that helps.
  1582   Wed May 13 14:43:29 2009 robSummaryloreChannel Hopping: That ancient enemy (MC problems)

Quote:

We were stymied tonight by a problem which began late this afternoon.  The MC would periodically go angularly unstable, breaking lock and tripping the MC2 watchdogs.  Suspicion fell naturally upon McWFS.

Eventually I traced the problem to the MC3 SIDE damping, which appeared to not work--it wouldn't actually damp, and the Vmon values did not correspond to the SDSEN outputs.  Suspicion fell on the coil driver.

Looking at the LEMO monitors on the MC3 coil driver, with the damping engaged, showed clear bit resolution at the 100mV level, indicating a digital/DAC problem.  Rebooting c1sosvme, which acquires all the OSEM sensor signals and actually does the side damping, resolved the issue. 

 Lies!  The problem was not resolved. The plot shows a two-day trend, with the onset of the problem yesterday clearly visible, as well as the ineffectiveness of the soft reboot done yesterday.  So we'll try a hard reboot.

Attachment 1: MC3sidemon.png
MC3sidemon.png
  1581   Wed May 13 12:41:14 2009 josephbUpdateCamerasTiming and stability tests of GigE Camera code

At the request of people down at LLO, I've been trying to improve the reliability and speed of the GigE camera code.  In my testing, after several hours the code would tend to lock up on the camera end.  It was also reported at LLO that after several minutes the camera display would slow down, but I haven't been able to replicate that problem.

I've recently added some additional error checking and have updated to a more recent SDK, which seems to help.  Attached are two plots of the frames per second of the code.  In this case, the frames per second are measured from the time between calls to the C camera code for a new frame for gstreamer to encode and transmit.  The data points in the first graph are actually the averaged times for sets of 1000 frames.  The camera was sending 640x480 pixel frames, with an exposure time of 0.01 seconds.  Since the FPS was mostly between 45 and 55, the code is taking roughly 0.01 seconds to process, encode, and transmit a frame.

During the test, the memory usage of the server code was roughly 1% (or 40 megabytes out of 4 gigabytes), along with about 50% of a single CPU.
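
The per-1000-frame averaging described above amounts to the following arithmetic (an illustrative sketch; the real measurement happens inside the C camera code):

```python
def fps_averages(frame_times, chunk=1000):
    """Average frames-per-second over consecutive sets of `chunk` frames.

    frame_times: increasing timestamps (s) of each call into the camera
    code for a new frame. Each returned point is chunk / elapsed, mirroring
    how each plotted data point averages a set of 1000 frames.
    """
    points = []
    for i in range(0, len(frame_times) - chunk, chunk):
        elapsed = frame_times[i + chunk] - frame_times[i]
        points.append(chunk / elapsed)
    return points
```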

Attachment 1: newCodeFPS.png
newCodeFPS.png
Attachment 2: newCodeFPS_hist.png
newCodeFPS_hist.png
  1580   Wed May 13 03:05:13 2009 peteUpdateoplevsetmy oplev quad was bad

Quote:

Pete, Rob

After looking at some oplev noise spectra in DTT, we discovered that the ETMY quad (serial number 115) was noisy; particularly quadrants 2 (by a bit more than an order of magnitude over the ETMX reference) and 4 (by a bit less than an order of magnitude), in the XX_OUT and XX_IN1 channels.  We went out and looked at the signals coming out of the oplev interface board; again, channels 2 and 4 were noisy compared to 1 and 3 by about these same amounts.  I popped in the ETMX quad and everything looked fine.  I put the ETMX quad back at ETMX, and popped in Steve's scatterometer quad (serial number 121, or possibly 151; it's not terribly legible), and it looks fine.  We zeroed via the offsets in the control room, and I went out and centered both the ETMX and ETMY quads. 

Attached is a plot.  The reference curves are with the faulty quad (115).  The others are with the 121.

 

 I adjusted the ETMY quad gains up by a factor of 10 so that the SUM is similar to what it was before.

  1579   Wed May 13 02:53:12 2009 robSummaryloreChannel Hopping: That ancient enemy (MC problems)

We were stymied tonight by a problem which began late this afternoon.  The MC would periodically go angularly unstable, breaking lock and tripping the MC2 watchdogs.  Suspicion fell naturally upon McWFS.

Eventually I traced the problem to the MC3 SIDE damping, which appeared to not work--it wouldn't actually damp, and the Vmon values did not correspond to the SDSEN outputs.  Suspicion fell on the coil driver.

Looking at the LEMO monitors on the MC3 coil driver, with the damping engaged, showed clear bit resolution at the 100mV level, indicating a digital/DAC problem.  Rebooting c1sosvme, which acquires all the OSEM sensor signals and actually does the side damping, resolved the issue. 

  1578   Tue May 12 17:26:56 2009 peteUpdateoplevsetmy oplev quad was bad

Pete, Rob

After looking at some oplev noise spectra in DTT, we discovered that the ETMY quad (serial number 115) was noisy; particularly quadrants 2 (by a bit more than an order of magnitude over the ETMX reference) and 4 (by a bit less than an order of magnitude), in the XX_OUT and XX_IN1 channels.  We went out and looked at the signals coming out of the oplev interface board; again, channels 2 and 4 were noisy compared to 1 and 3 by about these same amounts.  I popped in the ETMX quad and everything looked fine.  I put the ETMX quad back at ETMX, and popped in Steve's scatterometer quad (serial number 121, or possibly 151; it's not terribly legible), and it looks fine.  We zeroed via the offsets in the control room, and I went out and centered both the ETMX and ETMY quads. 

Attached is a plot.  The reference curves are with the faulty quad (115).  The others are with the 121.

 

Attachment 1: bad_oplev_quad.pdf
bad_oplev_quad.pdf
  1577   Tue May 12 15:22:09 2009 YoichiUpdateLSCArm Finesse

Quote:

It looks as if the measured DARM response is skewed by an extra low pass filter at high frequencies. I don't know why this is so.


One large uncertainty in the above estimate is the cavity pole of the X-arm, because I simply assumed the ITMX reflectivity to be the designed value.
I think we can directly measure the X-arm finesse from Alberto's absolute length measurements (i.e. from the width of the resonant peaks in his scans).
By looking at Alberto and Koji's posts (elog:1244, elog:838), it looks like the FWHM of the peaks is around 3 kHz. With the FSR ~ 3.8 MHz, this gives a finesse of about 1300, which is reasonable.
Alberto, can you check your data and measure the FWHM more precisely?
Note that we want to measure the FWHM of the peak in the *power* of the beat signal. The beat amplitude is proportional to the electric field *amplitude* of the transmitted auxiliary laser. What we need to get the finesse is the FWHM of the transmitted laser *power*. Thus we need to take the square of the beat signal.
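
The arithmetic behind this estimate, as a sketch (numbers taken from the entry; the squaring step reflects the amplitude-vs-power caveat above):

```python
def finesse_from_scan(fsr_hz, fwhm_power_hz):
    """Finesse = FSR / FWHM, where the FWHM is that of the transmitted *power* peak."""
    return fsr_hz / fwhm_power_hz

def beat_to_power(beat_amplitude):
    """The beat signal is ~ field amplitude, so square it to get a power-like peak."""
    return [a * a for a in beat_amplitude]

# Numbers quoted above: FSR ~ 3.8 MHz, FWHM ~ 3 kHz
F = finesse_from_scan(3.8e6, 3.0e3)   # ~ 1270, i.e. "about 1300"
```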
  1576   Tue May 12 01:22:51 2009 YoichiUpdateLSCArm loss
Using the armLoss script (/cvs/cds/caltech/scripts/LSC/armLoss), I measured the round trip loss (RTL) of the arms.

The results are:
XARM: RTL = 171 (+/-2) ppm
YARM: RTL = 181 (+/-2) ppm

To get the results above, I assumed that the transmissivity of the ITMs is the same as the designed value (0.005).
This may not be true, though.
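
For illustration, the kind of inversion armLoss performs can be sketched with the textbook relation for an overcoupled two-mirror cavity: on resonance, P_refl/P_in = ((T1 - l)/(T1 + l))^2, where T1 is the ITM transmissivity and l is the remaining round-trip loss. The bisection inversion below is an assumption about the method, not a copy of the actual script.

```python
def reflection_dip(T1, loss):
    """Fractional drop in reflected power on resonance, overcoupled cavity
    (loss = round-trip loss excluding the input-coupler transmission T1)."""
    r = (T1 - loss) / (T1 + loss)
    return 1.0 - r * r

def loss_from_dip(dip, T1):
    """Invert reflection_dip for the loss by bisection on the overcoupled
    branch (0 <= loss < T1, where the dip grows monotonically with loss)."""
    lo, hi = 0.0, T1
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if reflection_dip(T1, mid) < dip:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

With T1 = 0.005 and an RTL of 171 ppm, this relation predicts a reflected-power dip of roughly 13% between the unlocked and locked states.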
  1575   Tue May 12 01:11:55 2009 YoichiUpdateLSCDARM response (DC Readout)
I measured the DARM response with DC readout.

This time, I first measured the open loop transfer function of the X single arm lock.
The open loop gain (Gx) can be represented as a product of the optical gain (Cx), the filter (Fx), and the suspension response (S), i.e. Gx = Cx*Fx*S.
We know Fx because this is the transfer function of the digital filters. Cx can be modeled as a simple cavity pole, but we need to know the finesse to calculate it.
In order to estimate the current finesse of the XARM cavity, I ran the armLoss script, which measures the ratio of the reflected light power between the locked and the unlocked states. Using this ratio and the designed transmissivity of ITMX (0.005), I estimated the round-trip loss in the XARM, which was 170 ppm. From this number, the cavity pole was estimated to be 1608 Hz.
Using the measured Gx, the knowledge of Fx and the estimated Cx, I estimated the ETMX suspension response S, which is shown in the first attachment.
Note that this is not a pure suspension response. It includes the effects of the digital system time delay, the anti-imaging and anti-aliasing filters and so on.
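
The finesse and cavity-pole arithmetic from the paragraph above can be sketched as follows; the arm length of 38.55 m is an assumption (it is not stated in the entry), and the ETM transmission is lumped into the measured loss.

```python
from math import pi

c = 299792458.0     # speed of light, m/s
L_arm = 38.55       # assumed 40m arm length, m (not given in the entry)
T_itm = 0.005       # designed ITMX transmissivity
loss = 170e-6       # measured round-trip loss

finesse = 2 * pi / (T_itm + loss)       # ~ 1215
f_pole = c / (4 * L_arm * finesse)      # ~ 1600 Hz, consistent with the quoted 1608 Hz

def cavity_pole_mag(f):
    """|Cx(f)| for a single-pole cavity response, normalized to 1 at DC."""
    return 1.0 / (1.0 + (f / f_pole) ** 2) ** 0.5
```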

Now the DARM open loop gain (Gd) can also be represented as a product of the optical gain (Cd), the filter (Fd) and the suspension response (S).
Since the actuations are applied again to the ETMs and we think ETMX and ETMY are quite similar, we should be able to use the same suspension response as XARM for DARM. Therefore, using the knowledge of the digital filter shape and the measured open loop gain, we can compute the DARM optical gain Cd.
The second attachment shows the estimated DARM response along with an Optickle prediction.
The DARM loop gain was measured with darm_offset_dc = 350. Since we haven't calibrated the DARM signal, I don't know how many meters of offset this number corresponds to. The Optickle prediction was calculated using a 20 pm DARM offset. I chose this to make the prediction look similar to the measured one, though they look quite different around the RSE peak. The input power was set to 1.7 W in the Optickle model (again, this is just my guess).

It looks as if the measured DARM response is skewed by an extra low pass filter at high frequencies. I don't know why this is so.
Attachment 1: SUS_Resp.png
SUS_Resp.png
Attachment 2: DARM_Resp.png
DARM_Resp.png
  1574   Mon May 11 12:25:03 2009 josephb,AlexUpdateComputersfb40m down for patching

The 40m frame builder is currently being patched to be able to utilize the full 14 TB of the new RAID array (as opposed to being limited to 2 TB).  This process is expected to take several hours, during which the frame builder will be unavailable.

  1573   Mon May 11 11:49:20 2009 steveUpdatePSLMOPA cooling water lines are backwards

Quote:
This is 8 days of 10-minute trend.

DTEC is just the feedback control signal required to keep the NPRO's pump diode at a constant temperature.
It's not the amplifier's or the actual NPRO crystal's temperature readout.

There is no TEC for the amplifier. It looks to me like by opening up the flow to the NPRO some more
we have reduced the flow to the amplifier (which is the one that needs it) and created these temperature
fluctuations.

What we need to do is choke down the needle valve and ream out the NPRO block.




I measured the "input" line temperature at the MOPA box to be 10 C and the "out" line to be 8 C.

This must be corrected.

However, look at the 80-day plot of operation: the head temperature variation is nothing new.
Attachment 1: htempvar80d.jpg
htempvar80d.jpg
  1572   Sun May 10 13:41:17 2009 steveUpdateVACETMY damping restored, VC1 opened

ETMY damping restored.

The cryo interlock closed VC1 ~2 days ago. P1 is 6.3 mTorr. The cryo temperature is stable at 12 K. I reset the photoswitch and opened VC1.

  1571   Sun May 10 13:34:32 2009 carynUpdatePEMUnplugged Guralp channels

I unplugged Guralp EW1b and Guralp Vert1b and plugged in temp sensors temporarily. Guralp NS1b is still plugged in.

  1570   Sat May 9 15:19:10 2009 ranaUpdatePSLLaser head temperature oscillation
This is 8 days of 10-minute trend.

DTEC is just the feedback control signal required to keep the NPRO's pump diode at a constant temperature.
It's not the amplifier's or the actual NPRO crystal's temperature readout.

There is no TEC for the amplifier. It looks to me like by opening up the flow to the NPRO some more
we have reduced the flow to the amplifier (which is the one that needs it) and created these temperature
fluctuations.

What we need to do is choke down the needle valve and ream out the NPRO block.
Attachment 1: Picture_2.png
Picture_2.png
  1569   Sat May 9 02:20:11 2009 JenneUpdatePSLLaser head temperature oscillation

Quote:
After the laser cooling pipe was unclogged, the laser head temperature has been oscillating with a 24-hour period.
The laser power shows the same oscillation.
Moreover, there is a trend that the temperature is slowly creeping up.
We have to do something to stop this.
Or Rob has to finish his measurements before the laser dies.


How's DTEC doing? I thought DTEC was kind of in charge of dealing with these kinds of things, but after our laser-cooling-"fixing", DTEC has been railed at 0, aka no range.

After glancing at DTEC with Dataviewer along with HTEMP and AMPMON (my internet is too slow to want to post the pic while ssh-ed into nodus), it looks like DTEC is oscillating along with HTEMP in terms of frequency, but perhaps DTEC is running out of range because it is so close to zero? Maybe?
  1568   Sat May 9 00:15:21 2009 YoichiUpdatePSLLaser head temperature oscillation
After the laser cooling pipe was unclogged, the laser head temperature has been oscillating with a 24-hour period.
The laser power shows the same oscillation.
Moreover, there is a trend that the temperature is slowly creeping up.
We have to do something to stop this.
Or Rob has to finish his measurements before the laser dies.
Attachment 1: laser.png
laser.png
  1567   Fri May 8 16:29:53 2009 ranaUpdateComputer Scripts / Programselog and NDS
Looks like the new NDS client worked. Attached is 12 hours of BLRMS.
Attachment 1: Untitled.png
Untitled.png
  1566   Fri May 8 16:03:31 2009 JenneUpdatePEMUpdate on Jenne's Filtering Stuff

To include the plots that I've been working on in some form other than on my computer, here they are:

First is the big surface plot of all the amplitude spectra, taken in 10min intervals on one month of S5 data. The times when the IFO is unlocked are represented by vertical black stripes (white was way too distracting).  For the paper, I need to recreate this plot, with traces only at selected times (once or twice a week) so that it's not so overwhelmingly large.  But it's pretty cool to look at as-is.

Second is the same information, encoded in a pseudo-BLRMS.  (Pseudo on the RMS part - I don't ever actually take the RMS of the spectra, although perhaps I should).  I've split the data from the surface plot into bands (The same set of bands that we use for the DMF stuff, since those seem like reasonable seismic bands), and integrated under the spectra for each band, at each time.  i.e. one power spectra gives me 5 data points for the BLRMS - one in each band.  This lets us see how good the filter is doing at different times.

At the lower frequencies, after ~25 days, the floor starts to pick up.  So perhaps that's about the end of how long we can use a given Wiener filter for.  Maybe we have to recalculate them about every 3 weeks.  That wouldn't be tragic. 

I don't really know what the crazy big peak in the 0.1-0.3 Hz plot is (it's the big yellow blob in the surface plot).  It is there for ~2 days, and it seems awfully symmetric about its local peak.  I have not yet correlated my peaks to high-seismic times in the H1 elog.  Clearly that's on the immediate todo list. 

Also perhaps on the todo list is to indicate in some way (analogous to the black stripes in the surface plot) times when the data in the band-limited plot is just extrapolated, connecting the dots between two valid data points.

 

A few other thoughts:  The time chosen for the training of the filter for these plots is 6:40pm-7:40pm PDT on Sept 9, 2007 (which was a Sunday night).  I need to try training the filter on a more seismically-active time, to see if that helps reduce the diurnal oscillations at high frequency.  If that doesn't do it, then perhaps having a "weekday filter" and an "offpeak" filter would be a good idea.  I'll have to investigate.
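
The integrate-under-the-spectrum step described above can be sketched as below. This is an illustration (trapezoidal rule, hypothetical band edges); the actual analysis was done on the S5 spectra with the DMF-style seismic bands.

```python
def band_power(freqs, asd, f_lo, f_hi):
    """Integrate an amplitude spectrum (squared -> power) over one band.

    Each spectrum contributes one data point per band, as in the
    pseudo-BLRMS plots. Trapezoidal integration over the frequency
    samples that fall inside [f_lo, f_hi].
    """
    total = 0.0
    for i in range(len(freqs) - 1):
        if freqs[i] >= f_lo and freqs[i + 1] <= f_hi:
            df = freqs[i + 1] - freqs[i]
            total += 0.5 * (asd[i] ** 2 + asd[i + 1] ** 2) * df
    return total

# e.g. one data point of a 0.1-0.3 Hz band trace:
# p = band_power(freqs, asd, 0.1, 0.3)
```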

Attachment 1: H1S5OneMonthWienerCompBLACK.png
H1S5OneMonthWienerCompBLACK.png
Attachment 2: H1S5BandLimitedTimePlot.png
H1S5BandLimitedTimePlot.png
  1565   Fri May 8 15:40:44 2009 peteUpdateLockingprogressively weaker locks

The align script was run after the third lock here.  It would have been interesting to see the arm powers in a fourth lock. 

Attachment 1: powers_3lock.pdf
powers_3lock.pdf
  1564   Fri May 8 10:05:40 2009 AlanOmnistructureComputersRestarted backup since fb40m was rebooted

Restarted backup since fb40m was rebooted.

  1563   Fri May 8 04:46:01 2009 rana, yoichiSummaryoplevsBS/PRM/SRM table bad!
We went to center the oplevs because they were far off and found that (as usual) the numbers changed
a little after we carefully centered the oplevs and came back to the control room.

To see if the table was on something soft, we tried pushing the table: no significant effect with ~10 pounds of static force.

With ~10 pounds of vertical force, however, we saw a large change: ~0.25 Oplev units. This corresponds to
~20-30 microradians of apparent optic pitch.

In the time series below you can see the effects:

2.5 s: lid replaced on table after centering.

2.5 - 11 s: various force tests on table

11 s: pre-bias by aligning beams to +0.25 in pitch and then add lid.


So there's some kind of gooey behavior in the table. It takes ~1 s to
settle after we put the lid on. Putting the laptops on the table also
has a similar effect. Please do not put anything on this table lid.
Attachment 1: a.png
a.png
  1562   Fri May 8 04:31:35 2009 ranaUpdateComputer Scripts / Programselog and NDS
In the middle of searching through the elog, it stopped responding. So I followed the Wiki instructions
and restarted it (BTW, don't use the start-elog-nodus script that's in that directory). Seems OK now,
but I am suspicious of how it sometimes does the PDF preview correctly and sometimes not. I found a
'gs' process running there and taking up > 85% of the CPU.

I also got an email from Chris Wipf at MIT to try out this trick from LASTI to maybe fix the
problems I've been having with the DMF processes failing after a couple hours. I had compiled but
not tested the stuff a couple weeks ago.

Today after it failed, I tried running other stuff in matlab and got some "too many files open" error messages.
So I have now copied the 32-bit linux NDS mex files into the mDV/nds_mexs/ directory. Restarted the
seisBLRMS.m about an hour ago.
  1561   Fri May 8 02:39:02 2009 pete, ranaUpdateLockingcrossover

Attached plot shows MC_IN1/MC_IN2.  Needs work.

This is supposed to be a measurement of the relative gain of the MCL and AO paths in the CM servo. We expect there to be a steeper slope (ideally 1/f). Somehow the magnitude is very shallow, and so the crossover is not stable. Possible causes? Saturations in the measurement, broken whitening filters, extremely bad delay in the digital system? Needs work.

 

Attachment 1: crossover.pdf
crossover.pdf
Attachment 2: photo.jpg
photo.jpg
  1560   Fri May 8 02:08:59 2009 peteUpdateLockinglock stretches

Locks last for about an hour.  This was true last night as well (see "arm power curve" entries).  The second lock shown here evolves differently for unknown reasons.  The jumps in the arm powers of the first lock are due to turning on DC readout.  Length-to-angle needs tuning.

Attachment 1: powers_oplev.pdf
powers_oplev.pdf
  1559   Thu May 7 23:34:59 2009 robUpdateSEIseisBLRMS already lost

Can't find hostname 'fb40m'

 

It only lasted a few hours.

  1558   Thu May 7 23:21:04 2009 peteUpdateLockingarm power curve

Quote:

I've plotted TRX, TRY, PD12I and PD11Q.  Arm powers after locking increase for a few tens of minutes, peak out, and then decrease before lock is lost.

 I should have mentioned that the AS port camera image seems to get progressively uglier over the course of these locks.  Maybe we can use the JoeCam to make a movie of it. 

  1557   Thu May 7 18:12:12 2009 peteUpdateLockingarm power curve

I've plotted TRX, TRY, PD12I and PD11Q.  Arm powers after locking increase for a few tens of minutes, peak out, and then decrease before lock is lost.

Attachment 1: 2009_may_7_powers.jpg
2009_may_7_powers.jpg
  1556   Thu May 7 17:59:23 2009 AlbertoConfiguration MC WFS
This afternoon the MC could not get locked.
I first checked the OSEM values at the MC mirrors and compared them to the trend of the last few hours. That showed that the alignment of the mirrors had slightly changed. I then brought each mirror back to its old alignment state.

That let the LSC loop lock the MC, although the reflected power was still high (1.5 V) and the WFS control wouldn't engage.

Since earlier during the day I was working on the AS table, it is possible that I inadvertently hit the MC REFL beam splitter, misaligning the beam to the MC WFS.
To exclude a problem in the suspensions, before touching the WFS I checked that the cables at the MC's ends and those going to the ADC in the rack were well pushed in.

Then I proceeded to center the beam on both WFS, balancing the power over the QPDs.

In the end the MC could lock again properly.

  1555   Thu May 7 15:22:19 2009 josephb, albertoConfigurationComputersfb40m

Quote:

Having determined that Rana (the computer) was having too many issues with testing the new RAID array due to the age of the system, we proceeded to test on fb40m.

 

We brought it down and up several times between 11 and noon.  We eventually were able to daisy chain the old raid and the new raid so that fb40m sees both.  At this time, the RAID arrays are still daisy chained, but the computer is set up to run on just the original raid, while the full 14 TB array is initialized (16 drives, 1 hot spare; RAID level 5 means 14 TB out of the 16 TB are actually available).  We expect this to take a few hours, at which point we will copy the data from the old RAID to the new RAID (which I also expect to take several hours).  In the meantime, operations should not be affected.  If it is, contact one of us.

This afternoon the alignment script crashed after returning syntax errors. We found that the tpman wasn't running on the framebuilder because it had probably failed to get restarted in one of the several reboots executed in the morning by Alex and Jo.

Restarting the tpman was then sufficient for the alignment scripts to get back to work.

  1554   Thu May 7 12:21:36 2009 josephb, alexConfigurationComputersfb40m

Having determined that Rana (the computer) was having too many issues with testing the new RAID array due to the age of the system, we proceeded to test on fb40m.

 

We brought it down and up several times between 11 and noon.  We eventually were able to daisy chain the old raid and the new raid so that fb40m sees both.  At this time, the RAID arrays are still daisy chained, but the computer is set up to run on just the original raid, while the full 14 TB array is initialized (16 drives, 1 hot spare; RAID level 5 means 14 TB out of the 16 TB are actually available).  We expect this to take a few hours, at which point we will copy the data from the old RAID to the new RAID (which I also expect to take several hours).  In the meantime, operations should not be affected.  If it is, contact one of us.

  1553   Thu May 7 10:28:20 2009 steveUpdateVACretrofitted maglev's needs

 

Our spare Osaka maglev, purchased in Oct 2005, turned out to have a Viton o-ring seal connection on the intake.

It was shipped back to San Jose to be retrofitted with a 6" conflat flange (CF). The CF uses a copper gasket, so there will be no He permeation when you leak-check the IFO.

 

The digital controller and cable are here. The controller needs to be interfaced with the interlocks and computer system; those have been in a neglected condition lately.

See elog #1505. Historically, after every reboot of c1vac2 the readbacks work for only 3-4 days. Fixing this was postponed many times in the past as low priority, or for lack of a knowledgeable enthusiast.

 

The maglev TG390MCAB will be back on Tuesday, May 4, 2009.  The mourning of our fateful 360 will only end at the first levitation of the 390.

 

  1552   Wed May 6 19:04:11 2009 ranaSummaryVACvac images
Since there's no documentation on this besides Steve's paper notebooks...

and BTW, since when did the elog start giving us PNG previews of PDFs?
Attachment 1: vacrack.pdf
vacrack.pdf
  1551   Wed May 6 16:56:35 2009 rana, alex, joeConfigurationComputersdaqd log, cront, etc.
While Alex came over, we investigated the log file problems with DAQD and NDS on FB0. There was a lot of
the standard puzzling and mumbling, but eventually we saw that it doesn't create its log file and so it
doesn't write to it. The log file is /usr/controls/main_daqd.log. The other files called daqd.log.DATE
in the logs/ directory are actually not written to. It's awesome.

We also put in a fix for the overflowing jobs/ directory. It gets a file written to it every time
you make an NDS request, and our seisBLRMS has been overloading it. There's now a cron job for it in the fb0
crontab which cleans out week-old files at 6:30 AM every day.

We also changed the time of the daily backup from 3:30 AM (when people are still working) to 5:50 AM
(by which time the seismic has ramped up and interferometerists should be asleep). I didn't like the
idea of a bandwidth hog nailing the framebuilder during the peak of interferometer work.

#
# Script to backup via rsync the most recent 40m minute trends and
# any changes to the /cvs/cds filesystem.
#
50 05 * * * /cvs/cds/caltech/scripts/backup/rsync.backup < /dev/null > /cvs/cds/caltech/scripts/\
backup/rsync.backup.log 2>&1

30 06 * * * find /usr/controls/jobs -mtime +7 -exec /bin/rm -f {} \;

seisBLRMS.m restarted on mafalda.
  1550   Wed May 6 02:39:20 2009 YoichiHowToLockingHow to go to DC readout
I wrote a script called DC_readout, which you can find in /cvs/cds/caltech/scripts/DRFPMI/bang/nospring/.

Currently, the locking script succeeds 1/3 of the time. The freaky parts are the MC_F hand off and REFL_DC hand off.
MC_F hand off succeeds 70% of the time. REFL_DC goes well about half of the time. Combined, the success rate is about 1/3.
We need some work on those hand offs.
Once you pass those freaky parts, the cm_step script usually goes smoothly and you will reach the full RF lock with the boost and the super boost1 engaged on the CM board.

To go to DC readout from there, run the DC_readout script.
First, this script will put some offset to the DARM loop so that some carrier light will leak to the AS port.
You are prompted to lock the OMC. Move the OMC length offset slider to find the carrier resonance and lock the OMC.
You have to make sure that it is the carrier, not the 166 MHz sideband. Usually the carrier light pulsates at around 10 Hz or so, whereas the 166 MHz SB is stable.
Once you locked the OMC to the carrier, hit enter on the terminal running the DC_readout script.
The script will do the rest of the hand off.
Once the script has finished, you may want to check darm_offset_dc in the C1LSC_LA_SET screen. This value sets the DC offset (a.k.a. the homodyne phase); adjust it as needed.
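The offset-then-handoff step can be pictured as a slow linear ramp of a control setpoint, so the loop tracks it without losing lock. This is only a sketch of the idea, not the actual DC_readout script: the channel name and the `write` callback below are hypothetical stand-ins for an EPICS put.

```python
import time

def ramp_values(start, stop, n_steps):
    """Return the sequence of setpoints for a linear ramp from start to stop."""
    step = (stop - start) / n_steps
    return [start + step * i for i in range(n_steps + 1)]

def ramp_offset(write, channel, start, stop, n_steps=50, dwell=0.1):
    """Slowly walk a control offset so the loop can follow it.

    write(channel, value) is a hypothetical stand-in for an EPICS put."""
    for value in ramp_values(start, stop, n_steps):
        write(channel, value)
        time.sleep(dwell)

# Illustrative use: log the writes instead of talking to a real front end.
log = []
ramp_offset(lambda ch, v: log.append(v), "C1:LSC-DARM_OFFSET",  # made-up name
            0.0, 1.0, n_steps=10, dwell=0.0)
```

The dwell time sets how gently the offset moves relative to the loop bandwidth.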
  1549   Tue May 5 14:02:16 2009 robUpdateLSCDARM DC response varies with DARM offset

Note the effect of quadrature rotation for small offsets.

Attachment 1: DARM_DARM_AS_DC_2.png
DARM_DARM_AS_DC_2.png
Attachment 2: DARM_DARM_AS_DC_3.png
DARM_DARM_AS_DC_3.png
Attachment 3: DARM_DARM_AS_DC_2.pdf
DARM_DARM_AS_DC_2.pdf
Attachment 4: DARM_DARM_AS_DC_3.pdf
DARM_DARM_AS_DC_3.pdf
  1548   Tue May 5 11:44:33 2009 robUpdateLocking DARM response

Here's the RF DARM optical response, on the anti-spring side, from optickle. Note that for the f1 sideband, changing the demod phase mostly adjusts the overall gain, while for the f2 sideband a change in demod phase alters the shape of the response. This is the quadrature-selecting power of using a single RF sideband as a local oscillator.
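The quadrature-selecting effect can be seen in a toy demodulation: rotating the demod phase re-mixes the I and Q components of the recovered signal. This is a generic two-quadrature rotation for illustration, not an Optickle calculation:

```python
import math

def demod(i_sig, q_sig, phase_deg):
    """Rotate the demodulated (I, Q) pair by the demod phase."""
    phi = math.radians(phase_deg)
    i_out = i_sig * math.cos(phi) + q_sig * math.sin(phi)
    q_out = -i_sig * math.sin(phi) + q_sig * math.cos(phi)
    return i_out, q_out

# A signal sitting entirely in one quadrature:
i0, q0 = demod(1.0, 0.0, 0.0)     # phase 0: all in I
i90, q90 = demod(1.0, 0.0, 90.0)  # phase 90 deg: moved entirely into Q
```

With a single RF sideband as local oscillator the two quadratures carry genuinely different responses, which is why the f2 demod phase reshapes the transfer function rather than just scaling it.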
Attachment 1: DARMtf_nospring.png
DARMtf_nospring.png
Attachment 2: DARMtf_demodphases.png
DARMtf_demodphases.png
  1547   Tue May 5 10:42:18 2009 steveUpdateMOPAlaser power is back

Quote:

As PSL-126MOPA_DTEC went up, the power output went down yesterday

The NPRO cooling water was clogged at the needle valve. The heat sink temp was around 37 C.

The flow-regulator needle valve position is locked with a nut and is frozen, so it is not adjustable. However, Jeenne's tapping and pushing down on the plastic hardware cleared the way for the water flow.

We have to remember to replace this needle valve when the new NPRO is swapped in. I checked the heat sink temp this morning; it is ~18 C.

There is condensation on the south end of the NPRO body. I wish the DTEC value were just a little higher, like 0.5 V.

The wavelength of the diode is temperature dependent: 0.3 nm/C. The fine tuning of this diode is done by a thermo-electric cooler (TEC).

To keep the diode precisely tuned to the absorption of the laser gain material, the diode temperature is held constant using electronic feedback control.
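The stated 0.3 nm/C coefficient makes the required temperature stability easy to estimate. A back-of-the-envelope sketch, using only the coefficient quoted above:

```python
# Pump-diode wavelength shift from a temperature excursion, at 0.3 nm/C.
COEFF_NM_PER_C = 0.3

def wavelength_shift_nm(delta_t_c):
    """Wavelength shift (nm) for a diode temperature change (C)."""
    return COEFF_NM_PER_C * delta_t_c

# Even a 1 C drift detunes the pump by 0.3 nm, which matters for the
# narrow absorption band of the gain material, hence the TEC servo.
shift = wavelength_shift_nm(1.0)
```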

This value is zero now.

 

Attachment 1: uncloged.jpg
uncloged.jpg
  1546   Tue May 5 09:22:46 2009 carynUpdatePEMzeros

For several of the channels on the PEM ADCU, zeros are occurring at the same time. Does anyone know why that might happen or how to fix it?
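One way to characterize the symptom is to check, sample by sample, which channels read exactly zero at the same instant. A minimal stdlib-only sketch; the channel names and data below are made up for illustration:

```python
def simultaneous_zeros(channels):
    """Return sample indices where every channel is exactly zero at once.

    channels maps channel name -> list of samples, all the same length."""
    n = len(next(iter(channels.values())))
    return [i for i in range(n)
            if all(samples[i] == 0 for samples in channels.values())]

# Made-up data mimicking the symptom: dropouts hitting all channels together.
data = {
    "C1:PEM-CHAN_1": [0.2, 0.0, 0.3, 0.0, 0.1],  # hypothetical names
    "C1:PEM-CHAN_2": [1.1, 0.0, 0.9, 0.0, 1.0],
}
bad_samples = simultaneous_zeros(data)  # both channels zero at indices 1 and 3
```

If the zeros line up across channels like this, the dropout is more likely in the ADCU or framebuilder path than in any single sensor.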

Attachment 1: zerotest2.png
zerotest2.png
Attachment 2: zerotest.png
zerotest.png
  1545   Tue May 5 08:26:56 2009 robUpdateLockingDC Readout and DARM response

Quote:
Tonight, I was able to switch the DARM to DC readout a couple of times.
But the lock was not as stable as the RF DARM. It lost lock when I tried to measure the DARM loop gain.

I also measured DARM response when DARM is on RF.
The attached plot shows the DARM optical gain (from the mirror displacement to the PD output).
The magnitude is in an arbitrary unit.

I measured a transfer function from DARM excitation to the DARM error signal. Then I corrected it for the DARM open loop gain and the pendulum response to get the plot below.

There is an RSE peak at 4 kHz, as expected. The origins of the small bump and dip around 2.5 kHz and 1.5 kHz are unknown.
I will consult the Optickle model.
I don't know why the optical gain decreases below 50 Hz (I don't think it actually decreases).
It seems like the DARM loop gain measured at those frequencies is too low.
I will retry the measurement.


The optical gain does decrease below ~50 Hz; that's the optical spring in action. The squiggles are funny. Last time we did this, we measured the single-arm TFs to compensate for any tough-to-model squiggles in the transfer functions, which might arise from electronics or the suspensions.
  1544   Tue May 5 05:16:12 2009 YoichiUpdateLockingDC Readout and DARM response
Tonight, I was able to switch the DARM to DC readout a couple of times.
But the lock was not as stable as the RF DARM. It lost lock when I tried to measure the DARM loop gain.

I also measured DARM response when DARM is on RF.
The attached plot shows the DARM optical gain (from the mirror displacement to the PD output).
The magnitude is in an arbitrary unit.

I measured a transfer function from DARM excitation to the DARM error signal. Then I corrected it for the DARM open loop gain and the pendulum response to get the plot below.

There is an RSE peak at 4 kHz, as expected. The origins of the small bump and dip around 2.5 kHz and 1.5 kHz are unknown.
I will consult the Optickle model.
I don't know why the optical gain decreases below 50 Hz (I don't think it actually decreases).
It seems like the DARM loop gain measured at those frequencies is too low.
I will retry the measurement.
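The correction described above, dividing out the in-loop suppression and the pendulum response, can be sketched at a single frequency. This assumes one common injection convention (the exact 1+G factor depends on where the excitation enters the loop), and every number below is made up, not a measured value:

```python
# Recover the DARM optical gain from an in-loop measurement at one frequency.
# measured : (excitation -> error signal) transfer function, taken in lock
# G        : open-loop gain at that frequency
# P        : pendulum (displacement per drive) response
# Correcting for the loop suppression 1/(1+G) and the pendulum gives the
# optical (displacement -> PD output) response: optical = measured*(1+G)/P

def optical_gain(measured, open_loop_g, pendulum):
    """In-loop TF corrected for loop gain and pendulum response (complex)."""
    return measured * (1 + open_loop_g) / pendulum

measured = 0.02 + 0.005j   # made-up in-loop TF
G = 10 - 2j                # made-up open-loop gain
P = 2.5e-3                 # made-up pendulum response
plant = optical_gain(measured, G, P)
```

If G is underestimated at low frequency, the recovered optical gain comes out too small there, which is consistent with the apparent roll-off below 50 Hz.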
Attachment 1: DARM-TF.png
DARM-TF.png