  40m Log, Page 312 of 346
ID   Date   Author   Type   Category   Subject
  10172   Thu Jul 10 01:02:13 2014   rana   Update   PSL   more PMC science

Increased gain and SNR in PMC LO monitor circuit.

  1. R20: 499 -> 50k Ohms (increases gain by 100)
  2. Used Marconi to drive the LO input and readout C1:PSL-PMC_LODET
  3. Fit this function and loaded it into the psl.db file. The old Kalmus way used LOGE, but I wanted to use log10, so I did. The sensor is only useful in a narrow band. Since the signal is so low at low levels, I just fit to the highest 4 points because I was too lazy to do proper weighting. Do as I say, not as I do.

Plot with data and fit attached.

** N.B.: in order to update the calibration without rebooting, I used the following command: z write C1:PSL-PMC_LOCALC.CALC "2.235*LOG(B)+12.265". This allows us to update EPICS CALC records without rebooting the IOC.
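The log10 fit described above can be sketched in a few lines; the data here are synthetic (generated from the quoted CALC string, since the real sweep lives in the attached plot):

```python
import numpy as np

# Synthetic stand-in for the Marconi sweep: LO detector counts (B) vs. drive level.
# Built from the quoted CALC string, so the fit should recover those coefficients.
lodet = np.array([0.02, 0.05, 0.2, 0.9, 2.5, 6.0])
drive = 2.235 * np.log10(lodet) + 12.265

# As in the entry: the sensor is only useful at high levels, so fit a*log10(B)+b
# to the 4 highest points instead of doing a properly weighted fit.
top4 = np.argsort(lodet)[-4:]
a, b = np.polyfit(np.log10(lodet[top4]), drive[top4], 1)
print(f'CALC string: "{a:.3f}*LOG(B)+{b:.3f}"')   # LOG() is log10 in EPICS CALC; LOGE is ln
```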

Attachment 1: PMCloCal.pdf
  15849   Sun Feb 28 16:59:39 2021   rana, gautam   Update   LSC   more PRMI checks here: what it is ain't exactly clear

On Friday evening we checked out a few more things, somewhat overlapping with previous tests. All tests done with PRMI on carrier lock (REFL11_I -> PRC, AS55_Q-> MICH):

  • check that PRC drive appropriately minimizes in REFL55_Q. I:Q ratio is ~100:1; good enough.
  • put sine waves around 311 and 333 Hz into PRCL and MICH at the LSC output matrix using awggui and the LSC osc. We were not able to adjust the LSC/OSC output matrix to minimize the MICH drive in REFL_I.
  • measured the TF from BS & PRM LSC drive to the REFL55_I/Q outputs. Very nearly the same audio frequency phase, so the problem is NOT in the electronics or the mechanical transfer functions of the suspensions.


Further questions:

  1. is this something pathological in the PRMI carrier lock? we should check by locking on sidebands to REFL55 and REFL165 and repeat tests.
  2. Can it be a severe mode mismatch from IMC output to PRMI mode? the cavity should be stable with the flipped folding mirrors, but maybe something strange happening. How do we measure the mode-matching to the PRC quantitatively?
  3. huge RAM is ruled out by Gautam's test of looking at REFL demod signals: dark offset vs. offset with a single bounce off of PRM (with ITMs mis-aligned)
  4. if there is a large (optical) offset in the AS55_Q lock point, how big would it have to be to mess up the REFL phase so much?
  5. what is going on with the REFL55 whitening/AA electronics?

unrelated note: Donatella the Workstation was ~3 minutes ahead of the FE machines (you can look at the C0:TIM-PACIFIC_STRING on many of the MEDM screens for a rough simulacrum). When the workstation time is that far off, DTT doesn't work right (it throws errors like "test timed out" and other blah blah). I installed NTP on donatella and started the service per SL7 rules. Since we want to migrate all the workstations to Debian (following the party line), let's not futz with this too much.

gautam, 1 Mar 1600: In case I'm being dumb, I attach the screen grab comparing dark offset to the single bounce off PRM, to estimate the RAM contribution. The other signals are there just to show that the ITMs are sufficiently misaligned. The PRCL PDH fringe is usually ~12000 cts in REFL11, ~5000cts in REFL55, and so the RAM offset is <0.1% of the horn-to-horn PDH fringe.

P.S. I know generally PNGs in the elog are frowned upon. But with so many points, the vector PDF export by NDS (i) is several megabytes in size and (ii) excruciatingly slow. I'm proposing a decimation filter for the export function of ndscope - but until then, I claim plotting with "rasterized=True" and saving to PDF and exporting to PNG are equivalent, since both yield a rasterized graphic.
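For what it's worth, the rasterized-trace trick is a one-liner in matplotlib (assuming that's the plotting backend; ndscope's internals may differ, so this is just a sketch of the idea):

```python
import matplotlib
matplotlib.use("Agg")          # no display needed
import matplotlib.pyplot as plt
import numpy as np

# A many-point trace: with rasterized=True, the line is embedded as a bitmap
# at the savefig dpi, while axes, ticks and text stay vector. The PDF stays
# small and fast to render even with ~1e6 points.
t = np.linspace(0, 60, 1_000_000)
fig, ax = plt.subplots()
ax.plot(t, np.sin(2 * np.pi * t), rasterized=True, lw=0.5)
ax.set_xlabel("time [s]")
fig.savefig("trace.pdf", dpi=150)
```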

Attachment 1: RAMestimate.png
  15850   Sun Feb 28 22:53:22 2021   gautam   Update   LSC   more PRMI checks here: what it is ain't exactly clear

I looked into this a bit more and crossed off some of the points Rana listed. In order to use REFL55 as a sensor, I had to fix the frequent saturations seen in the MICH signals at the nominal (flat) whitening gain of +18 dB. The light level on the REFL55 photodiode (13 mW), its transimpedance (400 ohm), and this +18 dB (~x8) gain cannot explain signal saturation (0.7 A/W * 400 V/A * 8 ~ 2.2 kV/W, and the PRCL PDH fringe should be ~1 MW/m, so the PDH fringe across the ~4 nm linewidth of the PRC should only be a couple of volts). Could be some weird effect of the quad LT1125. Anyway, the fix that has worked in the past, and also this time, is detailed here. Note that the anomalously high noise of the REFL55_Q channel in particular remains a problem. After taking care of that, I did the following:

  1. PRMI (ETMs misaligned) locking with sidebands resonant in the PRC was restored - REFL55_I was used for PRCL sensing and REFL55_Q was used for MICH sensing. The locks are acquired nearly instantaneously if the alignment is good, and they are pretty robust, see Attachment #1 (the lock losses were IMC related and not really any PRC/MICH problem).
  2. Measured the loop OLTFs using the usual IN1/IN2 technique. The PRCL loop looks just fine, but the MICH loop UGF is very low apparently. I can't just raise the loop gain because of the feature at ~600 Hz. Not sure what the origin of this is, it isn't present in the analogous TF measurement when the PRMI is locked with carrier resonant (REFL11_I for PRCL sensing, AS55_Q for MICH sensing). I will post the loop breakdown later. 
  3. Re-confirmed that the MICH-->PRCL coupling couldn't be nulled completely in this config either.
    • The effect is a geometric one - a 1 unit change in MICH causes a 1/sqrt(2) change in PRCL. 
    • The actual matrix element that best nulls a MICH drive in the PRCL error point is -0.34 (this has not changed from the PRMI resonant on carrier locking). Why should it be that we can't null this element, if the mechanical transfer functions (see next point) are okay?
  4. Looked at the mechanical actuator TFs again (since we forgot to save plots on Friday), by driving the BS and PRM with sine waves (311.1 Hz), one at a time, and looking at the response in REFL55_I and REFL55_Q. Some evidence of funkiness here already. I can't find any configuration of digital demod phase that gives me a PRCL/MICH sensing ratio of ~100 in REFL55_I and, simultaneously, a MICH/PRCL sensing ratio of ~100 in REFL55_Q. The results are in Attachment #5.
  5. Drove single frequency lines in MICH and PRCL at 311.1 and 313.35 Hz respectively, for 5 minutes, and made the radar plots in Attachments #2 and #3. Long story short - even in the "nominal" configuration where the sidebands are resonant in the PRC and the carrier is rejected, there is poor separation in sensing. 
    • Attachment #2 is with the digital REFL55 demod phase set to 35 degrees - I thought this gave the best PRCL sensing in REFL55_I (eyeballed roughly by looking at free-swinging PDH fringes on ndscope).
    • But the test detailed in bullet #4, and Attachment #2 itself, suggested that PRCL was actually being sensed almost entirely in the Q phase signal.
    • So I changed the digital demod phase to -30 degrees (this time with a more quantitative estimate from free-swinging PDH fringes on ndscope, horn-to-horn voltages etc.).
    • The same sine-wave-driving procedure now yields Attachment #3. Indeed, PRCL is now sensed almost perfectly in REFL55_I, but the MICH signal is also nearly entirely in REFL55_I. How can the lock be so robust if this is really true? 
  6. Attachment #4 shows some relevant time domain signals in the PRMI lock with the sidebands resonant. 
    • REFL11_I hovers around 0 when REFL55_I is used to sense and lock PRCL - good. The m/ct calibrations for REFL11_I and REFL55_I are different, so this plot doesn't directly tell us how good the PRCL loop is based on the out-of-loop REFL11_I sensor.
    • ASDC is nearly 0, good.
    • POP22_I is ~200cts (and POP22_Q is nearly 0) - I didn't see any peak at the drive frequency when driving PRCL with a sine wave, so no linear coupling of PRCL to the f1 sideband buildup, which would suggest there is no PRCL offset.
    • Couldn't do the analogous test for AS110 as I removed that photodiode for the AS WFS - it is pretty simple to re-install it, but the ASDC level already doesn't suggest anything crazy here.

Rana also suggested checking whether the digital demod phase that senses MICH in REFL55_Q changes from the free-swinging Michelson (PRM misaligned) to the PRMI aligned - we can quantify any macroscopic length mismatch in the PRC using this measurement. I couldn't see any MICH signal in REFL55_Q with the PRM misaligned and the Michelson fringing. It could be that +18 dB is insufficient whitening gain, but I ran out of time this afternoon, so I'll check later. I'm also not sure whether the double attenuation by the PRM makes this impossible.
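As a sanity check on what the digital demod phase does: it is just a rotation of the (I, Q) pair, so a signal appearing at some angle in the I-Q plane can always be rotated entirely into I. A toy sketch (the signal model and the 65 degree angle are made up for illustration):

```python
import numpy as np

def rotate_iq(i_sig, q_sig, phase_deg):
    """Digital demod phase: rotate the (I, Q) signal pair by phase_deg."""
    phi = np.deg2rad(phase_deg)
    i_rot = i_sig * np.cos(phi) + q_sig * np.sin(phi)
    q_rot = -i_sig * np.sin(phi) + q_sig * np.cos(phi)
    return i_rot, q_rot

# Toy model: a PRCL drive line appearing at 65 degrees in the I-Q plane.
t = np.linspace(0, 1, 1000)
prcl = np.sin(2 * np.pi * 313.35 * t)
i_sig = prcl * np.cos(np.deg2rad(65))
q_sig = prcl * np.sin(np.deg2rad(65))

# Rotating the demod phase by 65 degrees puts PRCL entirely in I and nulls it in Q:
i_rot, q_rot = rotate_iq(i_sig, q_sig, 65)
print(np.max(np.abs(q_rot)))   # ~0
```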

Attachment 1: PRMI_SBres_REFL55.png
Attachment 2: PRMI1f_noArmssensMat.pdf
Attachment 3: PRMI1f_noArmssensMat.pdf
Attachment 4: PRMI_locked.png
Attachment 5: actTFs.pdf
  7671   Mon Nov 5 19:38:52 2012   jamie, jenne, ayaka, den   Update   Alignment   more alignment woes

Earlier this morning we thought things were looking pretty good.  IPPOS, IPANG, and the AS and REFL spots looked like they hadn't moved too much over the weekend.  Our plan was to do a quick check of things, check clearances, etc., tweak up the oplevs, and then close up.  This is when I made the ill-fated decision to check the table levelling.

The BS table was slightly off, so I moved one of the thick disk weights off of the other disk weight that it was sitting on, and onto the table next to it.  This seemed to improve things enough, so I left it there.  ITMY didn't need any adjustment, and I moved a couple of smaller weights around on ITMX.  Meanwhile Jenne was adjusting the PSL output power back into its nominal range (<100 mW), and re-tweaking the mode cleaner.

When we then looked at the vertex situation again it was far off in yaw.  This was clearly evident on PZT2, where the beam was no longer centered on the PZT2 mirror and was near the edge.  This was causing us to clip at the BS aperture.

We took some deep breaths and tried to figure out what we did that could have messed things up.

Jenne noticed that we had moved slightly on the PSL QPDs, so she adjusted the PSL output pointing to re-acquire the previous pointing, and realigned the MC.  This had a very small positive effect, but not nearly enough to compensate for whatever happened.

We spent some more time trying to track down what might have changed, but were unable to come up with anything conclusive.  We then decided to see if we could recover things by just adjusting the PZT input steering mirrors.  We couldn't; recentering at PRM, BS, ITMY, and ETMY was moving us off of PR3.

Jenne suggested we look at the spot positions on the MMT mirrors.  I had checked MMT1 and it looked ok, but we hadn't looked at MMT2.  When we checked MMT2 we noticed that we were also off in yaw.  This made us consider the possibility that the BS table had twisted, most likely when I was securing the moved mass.  Sure enough, when I manually twisted the BS table by grabbing it with my hand, very little force was needed to walk the input beam much of the way across PZT2, more than accounting for the offset.  The effect was also very clearly hysteretic; I could twist the table a little and it would stay in the new position.

At this point we had fucked things up enough that we realized that we're basically going to have to walk through the whole alignment procedure again, for the third time this vent.  We were able to recover the PRM retro-reflection a bit, but the tip-tilts have drifted in pitch (likely again because of the table levelling).  So we're going to have to walk through the whole procedure systematically again.

Lessons learned:  Many things are MUCH more sensitive than I had been assuming.  The tip-tilts are of course ridiculous, in that lightly touching the top or bottom of the mirror mount will move it by quite a lot in pitch.  The tables are also much more sensitive than I had realized.  In particular, tightening screws can twist the table hysteretically by milliradians, which can of course completely lose the pointing.  We need to be a lot more careful.

Assuming the table hasn't moved too much we should be able to recover the alignment by just adjusting the PZTs and tweaking the pitch of the tip-tilts.  At least that's the hope.    No more touching the table.  No more leveling.  Hopefully we can get this mostly done tomorrow morning.

  8059   Mon Feb 11 17:17:30 2013   Jamie   Summary   General   more analysis of half PRC with flipped PR2


We need the expected finesse and g-factor to compare with the mode-scan measurement. Can you give us the g-factor of the half-PRC, and what losses did you assume to calculate the finesse?

This is exactly why I added the higher order mode spacing, so you could calculate the g parameter.  For TEM order N = n + m with spacing f_N, the overall cavity g parameter should be:

g = (cos( (f_N/f_FSR) * (\pi/N) ))^2

The label on the previous plot should really be f_N/f_FSR, not \omega_{10,01}
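A minimal numerical sketch of the formula above (the numbers are illustrative, not a new measurement):

```python
import numpy as np

def cavity_g(f_N, f_fsr, N):
    """Round-trip g parameter from the spacing f_N of TEM order N = n + m,
    per g = cos((f_N/f_FSR) * (pi/N))^2."""
    return np.cos((f_N / f_fsr) * (np.pi / N)) ** 2

# Self-consistency check: for g = 0.9754, the first-order mode spacing is
# f_1/f_FSR = arccos(sqrt(g))/pi, and any higher order N at f_N = N*f_1
# maps back to the same g.
g = 0.9754
f1 = np.arccos(np.sqrt(g)) / np.pi          # in units of the FSR
print(cavity_g(2 * f1, 1.0, 2))             # recovers 0.9754 from the N = 2 spacing
```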

BUT, arbcav does not currently handle arbitrary ABCD matrices for the mirrors, so it's going to be slightly less accurate for our more complex flipped mirrors.  The effect would be bigger for a flipped PR3 than for a flipped PR2, because of the larger incidence angle, so arbcav will be a little more correct for our flipped-PR2-only case (see below).


Also, flipped PR2 should have RoC of - R_HR * n_sub (minus measured RoC of HR surface multiplied by the substrate refractive index) because of the flipping.

This is not correct.  Multiplying the RoC by -n_sub would be a very large change.  For an arbitrary ABCD matrix:

R_eff = -2 / C

When the incident angle is non-zero:

tangential: R_eff = R_eff * cos(\theta)
sagittal:   R_eff = R_eff / cos(\theta)

For flipped PR2, with small 1.5 degree incident angle and RoC of -706 at HR:

M_t = M_s = [1.0000, 0.0131; -0.0028, 1.0000]
R_eff = 705.9

For flipped PR3, with large 41 degree incident angle and RoC of -700 at HR:

M_t = [1.0000, 0; 0.0038, 1.0000]
M_s = [1.0000, 0; 0.0022, 1.0000]
R_eff = 592.4

The effect of the substrate is negligible for flipped PR2 but significant for flipped PR3.
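The R_eff = -2/C rule above is easy to apply directly to the quoted matrices (the small difference from the quoted 705.9 m comes from the rounding of the matrix entries):

```python
def effective_roc(abcd):
    """Effective radius of curvature from an arbitrary mirror ABCD matrix: R_eff = -2/C."""
    C = abcd[1][0]
    return -2.0 / C

# Flipped PR2 matrix as quoted above (entries rounded to 4 decimal places):
M_pr2 = [[1.0000, 0.0131], [-0.0028, 1.0000]]
print(effective_roc(M_pr2))   # ~714 m with these rounded entries; the entry quotes 705.9 m
```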

The current half-PRC setup

OK, I have now completely reconciled my alamode and arbcav calculations.  I found a small bug in how I was calculating the ABCD matrix for non-flipped TTs that made a small difference.  I now get exactly the same g parameter values from both, given identical input parameters.


According to the Jenne dictionary, the HR curvature measured from the HR side is:

PRM: -122.1 m
PR2: -706 m
PR3: -700 m
TM in front of BS: -581 m

Sooooo, I have redone my alamode and arbcav calculations with these updated values.  Here are the resulting g parameters:

              arbcav   a la mode   measurement
g tangential  0.9754   0.9753      0.986 +/- 0.001
g sagittal    0.9686   0.9685      0.968 +/- 0.001

So the sagittal values all agree pretty well, but the tangential measurement does not.  Maybe there is an actual astigmatism in one of the optics, not due to angle of incidence?

arbcav HOM plot:


  9452   Tue Dec 10 10:07:01 2013   Steve   Update   IOO   more beam traps

 New razor beam dump installed to trap the beam reflected off the input vacuum window.


Attachment 1: InputWindowRefDump.jpg
Attachment 2: InpWindowRefDupm.jpg
  2997   Thu May 27 02:22:24 2010   kiwamu   Update   Green Locking   more details

 Here are some more plots and pictures about the end PDH locking with the green beam. 

-- DC reflection

 I expected the fluctuation of the DC reflection to be about 1% from the resonant state to the anti-resonant state, due to the very low finesse.

These values were calculated from the reflectivity of the ETM measured by Mott before (see the wiki).

In my measurement I obtained a DC reflection of V_max = 1.42, V_min = 1.30 just after the PD.

These numbers correspond to a 7.1% fluctuation. That's bigger than the expectation.

I am not sure about the reason, but it might be caused by the angular motion of the test masses (?)


--- time series

Here is a time series plot. It starts in the open-loop state (i.e. feedback disconnected).

At t=0 sec I connected a cable which goes to the laser pzt, so now the loop is closed.

You can see the DC reflection slightly decreased and stayed lower after the connection.

The bottom plot shows the feedback signal, measured before the summing amplifier which directly drives the PZT.




-- length fluctuation  

One of the important quantities in the green locking scheme is the length fluctuation of the cavity.

It tells us how much the frequency of the green beam can be stabilized by the cavity, and it will ultimately determine the difficulty of the PLL with the PSL.

I measured a spectrum of the PZT driving voltage [V/√Hz] and then converted it to a frequency spectrum [Hz/√Hz].

I used an actuation efficiency of 1 MHz/V for the calibration; this number is based on a past measurement.


The RMS integrated down to 1 Hz is 1.6 MHz.

This is almost what I expected, assuming the cavity swings with a displacement of x ~< 1 um.
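The calibration chain described above, sketched with a made-up spectrum (the real ASD is in the plot; only the 1 MHz/V number is from the measurement):

```python
import numpy as np

# Fake PZT-voltage ASD with a 1/f shape, standing in for the measured spectrum.
f = np.logspace(0, 4, 1000)        # 1 Hz .. 10 kHz
asd_volts = 1e-2 / f               # [V/rtHz], made up
asd_hz = asd_volts * 1e6           # apply the 1 MHz/V actuation efficiency -> [Hz/rtHz]

# RMS integrated down to 1 Hz: sqrt of the integral of the ASD squared.
psd = asd_hz ** 2
rms = np.sqrt(np.sum(0.5 * (psd[1:] + psd[:-1]) * np.diff(f)))   # trapezoid rule
print(f"RMS frequency noise down to 1 Hz: {rms:.3g} Hz")
```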


-- flashing

The picture below is the ETMx CCD monitor.

The spot circled in red in the picture blinks when the cavity is unlocked, and once we acquire lock the spot stays bright.



  2999   Thu May 27 09:43:50 2010   rana   Update   Green Locking   more details


 RMS which is integrated down to 1Hz  is 1.6MHz.

This number is almost what I expected assuming the cavity swings with displacement of x ~< 1um.

 It's OK, but the real number comes from measuring the time series of this in the daytime (not the spectrum). What we care about is the peak-to-peak value of the PZT feedback signal measured on a scope for ~30 seconds. You can save the scope trace as a PNG.

  308   Mon Feb 11 14:24:19 2008   steve   Update   PEM   more earthquakes
ITMX and ITMY sus damping restored after the 5.1 mag Baja earthquake at 10:29 this morning.

The ground preparation for the ITS building is almost finished.
Activity is winding down; however, the Baja California, Mexico earthquake zone
"Guadala Victoria" started acting up on Friday.

Attachment 1: eqfeb11.jpg
  11655   Thu Oct 1 19:49:52 2015   jamie   Update   DAQ   more failed attempts at getting new fb working


I've not really been able to make additional progress with the new 'fb1' DAQ.  It's still flaky as hell.  Therefore we're still using old 'fb'.



The mx_stream processes on the front ends initially run fine, connecting to the daqd and transferring data, with both DAQ-..._STATUS and FE-..._FB_NET_STATUS indicators green.  Then after about two minutes all the mx_stream processes on all the front ends die.  Monit eventually restarts them all, at which point they come up green for a while until they crash again ~2 minutes later.  This is essentially the same situation as reported previously.

In the daqd logs when the mx_streams die:

Aborted 2 send requests due to remote peer 00:30:48:be:11:5d (c1iscex:0) disconnected
Aborted 2 send requests due to remote peer 00:14:4f:40:64:25 (c1ioo:0) disconnected
Aborted 2 send requests due to remote peer 00:30:48:d6:11:17 (c1iscey:0) disconnected
Aborted 2 send requests due to remote peer 00:25:90:0d:75:bb (c1sus:0) disconnected
Aborted 1 send requests due to remote peer 00:30:48:bf:69:4f (c1lsc:0) disconnected
mx_wait failed in rcvr eid=000, reqn=176; wait did not complete; status code is Remote endpoint is closed
disconnected from the sender on endpoint 000
mx_wait failed in rcvr eid=000, reqn=177; wait did not complete; status code is Connectivity is broken between the source and the destination
disconnected from the sender on endpoint 000
mx_wait failed in rcvr eid=000, reqn=178; wait did not complete; status code is Connectivity is broken between the source and the destination
disconnected from the sender on endpoint 000
mx_wait failed in rcvr eid=000, reqn=179; wait did not complete; status code is Connectivity is broken between the source and the destination
disconnected from the sender on endpoint 000
mx_wait failed in rcvr eid=000, reqn=180; wait did not complete; status code is Connectivity is broken between the source and the destination
disconnected from the sender on endpoint 000
[Thu Oct  1 19:00:09 2015] GPS MISS dcu 39 (PEM); dcu_gps=1127786407 gps=1127786425

[Thu Oct  1 19:00:09 2015] GPS MISS dcu 39 (PEM); dcu_gps=1127786408 gps=1127786426

[Thu Oct  1 19:00:09 2015] GPS MISS dcu 39 (PEM); dcu_gps=1127786408 gps=1127786426

In the mx_stream logs:

controls@c1iscey ~ 0$ /opt/rtcds/caltech/c1/target/fb/mx_stream -r 0 -W 0 -w 0 -s 'c1x05 c1scy c1tst' -d fb1:0
mmapped address is 0x7f0df23a6000
mmapped address is 0x7f0dee3a6000
mmapped address is 0x7f0dea3a6000
send len = 263596
Connection Made
isendxxx failed with status Remote Endpoint Unreachable
disconnected from the sender


While the mx_stream processes are running daqd seems to write out data just fine.  At least for the full frames.  I manually verified that there is indeed data in the frames that are written.

Eventually, though, daqd itself crashes with the same error that we've been seeing:

main profiler warning: 0 empty blocks in the buffer

I'm not exactly sure what the crashes are coincident with, but it looks like they are also coincident with the writing out of the minute and/or second trend files.  It's unclear how it's related to the mx_stream crashes, if at all.  The mx_stream crashes happen every couple of minutes, whereas the daqd itself crashes much less frequently.

The new daqd can't handle EDCU files.  If an EDCU file is specified (e.g. C0EDCU.ini in our case), the daqd will segfault very soon after startup.  This was an issue with the current daqd on fb, but was "fixed" by moving where the EDCU file was specified in the master file.


There are a number of differences between the fb1 and fb configurations:

  • newer OS (Debian 7 vs. ancient gentoo)
  • newer advLigoRTS (trunk vs. 2.9.4)
  • newer framecpp library installed from LSCSoft Debian repo (2.4.1-1+deb7u0 vs. 1.19.32-p1)

It's possible those differences could account for the problems (/opt/rtapps/epics incompatible with this Debian install, for instance).  Somehow I doubt it.  I wonder if all the weird network issues we've been seeing are somehow involved.  If the NFS mount of chiara is problematic for some reason that would affect everything that mounts it, which includes all the front ends and fb/fb1.

There are two things to try:

  • Fix the weird network problem.  Try removing EVERYTHING from the network except for chiara, fb/fb1, and the front ends, and see if that helps.
  • Rebuild fb1 with Ubuntu and daqd as prescribed by Keith Thorne.
  11656   Thu Oct 1 20:24:02 2015   jamie   Update   DAQ   more failed attempts at getting new fb working

I just realized that when running fb1, if a single mx_stream dies they all die.

  11664   Sun Oct 4 14:28:03 2015   jamie   Update   DAQ   more failed attempts at getting new fb working

I tried to look at fb1 again today, but still haven't made any progress.

The one thing I did notice, though, is that every hour on the hour the fb1 daqd process dies in an identical manner to how the fb daqd dies, with these:

[Sun Oct  4 12:02:56 2015] main profiler warning: 0 empty blocks in the buffer

errors right as/after it tries to write out the minute trend frames.

This makes me think that this new hardware isn't actually going to fix the problem we've been seeing with the fb daqd, even if we do get daqd "working" on fb1 as well as it's currently working on fb.

  5279   Mon Aug 22 21:32:10 2011   kiwamu   Update   General   more in-vac work : AS clipping fixed and OSEM/oplev adjustment

[Keiko / Jenne / Jamie / Kiwamu]

 We did the following things today :

  + fixed the AS clipping issue

  + realigned all the oplevs

  + checked and adjusted all the OSEM DC values, including PRM, SRM, BS, ITMs, ETMs, MC1 and MC3


Since we touched the OSEMs the alignment has changed somewhat.

Right now Jenne, Suresh and I are working on the "confirmation alignment".

Once we find the alignment is still good (steerable by the PZTs and the DC coil bias), tomorrow we will do the drag&wipe and door closing.

Quote from #5275

We need to check/fix the AS beam clipping and once it's done we will readjust the OSEM mid range and the oplevs.


  5284   Tue Aug 23 06:49:24 2011   Anamaria   Update   General   more in-vac work : AS clipping fixed and OSEM/oplev adjustment

Where was the AS clipping?! Ah, the suspense...


  + fixed the AS clipping issue


Quote from #5275

We need to check/fix the AS beam clipping and once it's done we will readjust the OSEM mid range and the oplevs.




  4389   Wed Mar 9 04:46:13 2011   kiwamu   Update   Green Locking   more intensity noise measurement


Here is a diagram for our intensity noise coupling measurement.



Below is a plot of the intensity noise on the DCPD. (I forgot to take a spectrum of the PD dark noise.)

For some reason, the RIN spectrum is sometimes noisier and sometimes quieter. Note that after 10 pm it's been in the quiet state most of the time.

An interesting thing is that the structure below 3 Hz looks like it is excited by motion of the MC when it's in the louder state.


Quote: from #4383

A photo diode and an AOM driver have been newly setup on the PSL table to measure the intensity noise coupling to the beat note signal.

We tried taking a transfer function from the PD to the beat, but the SNR wasn't sufficient on the PD. So we didn't get reasonable data.

  1082   Fri Oct 24 11:09:08 2008   steve   Update   SAFETY   more lexan plates under cameras
The MC2, MC3&1 and BSC-SUS cameras were repositioned somewhat in the
process of placing lexan disks underneath them.
MC1&3 will have to be readjusted.

Now all horizontal viewports are protected.
  1728   Thu Jul 9 19:05:32 2009   Clara   Update   PEM   more mic position changes; mics not picking up high frequencies

Bonnie has been strung up on bungees in the PSL so that her position/orientation can be adjusted however we like. She is now hanging pretty low over the table, rather than being attached to the hanging equipment shelf thing. Butch Cassidy has been hung over the AS table.

Moving Bonnie increased the coherence for the PMC_ERR_F signal, but not the MC_L. Butch Cassidy doesn't have much coherence with either.

I noticed that the coherence would drop off very sharply just after 10 kHz - there would be no further spikes or anything of the sort. I used my computer to play a swept sine wave (sweeping from 20 Hz to 10 kHz) next to Butch Cassidy to see if the same drop-off occurred in the microphone signal itself. Sure enough, the power spectrum showed a sharp drop around 10 kHz. Thinking that the issue was that the voltage dividers had too high an impedance, I remade one of them with two 280 Ohm resistors and one 10 Ohm resistor, but that didn't make any difference. So, I'm not sure what's happening exactly. I didn't redo the other voltage divider, so Bonnie is currently not operating.


Attachment 1: DSC_0569.JPG
Attachment 2: DSC_0570.JPG
Attachment 3: bonnie_psl_hi_mcl12.pdf
Attachment 4: bonnie_psl_hi_errf12.pdf
Attachment 5: bc_as_table.pdf
Attachment 6: powerspec.pdf
  1729   Thu Jul 9 19:24:50 2009   rana   Update   PEM   more mic position changes; mics not picking up high frequencies
Might be the insidious 850 Hz AA filters in the black AA box which precedes the ADC.

Dan Busby fixed up the PSL/IOO chassis. We might need to do the same for the PEM stuff.
  2040   Fri Oct 2 02:55:07 2009   rob   Update   Locking   more progress

More progress with locking tonight, with initial acquisition and power ramps working.  The final handoff to RF CARM still needs work.

I found the wireless router was unplugged from the network--just plugging in the cable solved the problem.  For some reason that RJ45 connector doesn't actually latch, so the cable is prone to slipping out of the jack.


  2047   Sat Oct 3 14:53:24 2009   rob   Update   Locking   more progress

Late last night after the ETMY settled down from the quake I made some more progress in locking, with the handoff to RF CARM succeeding once.  The final reduction of the CARM offset to zero didn't work, however.

  12095   Thu Apr 28 00:41:08 2016   gautam   Update   endtable upgrade   more progress - Transmon PD installed

The IR Transmon system is almost completely laid out, only the QPD remains to be installed. Some notes:

  1. The "problem" with excessive green power reflected from the harmonic separator has been resolved. It is just very sensitive to the angle of incidence. In the present configuration, there is ~10uW of green power reflected from either side, which shouldn't be too worrisome. But this light needs to be dumped. Given the tiny amount, I think a black glass + sticky tape solution is best suited, given the space constraints. This does not reach the Transmon PDs because there is a filter in the path that is transmissive to IR only. 
  2. I aligned the transmitted beam onto the Thorlabs PD, and reconnected the signal BNC cable (the existing cable wasn't long enough so I had to use a barrel connector and a short extension cable). I then reverted the LSC trigger for the X arm back to TRX DC and also recompiled c1ass to revert to TRX for the dither alignment. At the moment, both arms are stably locked, although the X arm transmission is saturated at ~0.7 after running the dither alignment. I'm not sure if this is just a normalization issue given the new beam path or if there is something else going on. Further investigations tomorrow.
  3. It remains to dump some of the unwanted green light from the addition of the harmonic separator...
  4. We may want to redesign some (or all) of the Transmon path - the lens currently in use seems to have been chosen arbitrarily. Moreover, it is quite stubbornly dirty, there are some markings which persist after repeated first contact cleaning...

I feel like once the above are resolved, the next step would be to PDH lock the green to the arm and see what sort of transmission we get on the PSL table. It may be the polarization or just alignment, but for some reason, the transmitted green light from the X arm is showing up at GTRY now (up to 0.5, which is the level we are used to when the Y arm has green locked!). So a rough plan of action:

  1. Install transmon QPD
  2. PDH lock green to X arm
  3. Fix the window situation - as Steve mentioned in an earlier elog, the F.C. cleaning seems to have worked well, but a little remains stuck on the window (though away from where any laser radiation is incident). This is resolved easily enough if we apply one more layer of F.C., but the bottle-neck right now is we are out of PEEK which is what we use to remove the F.C. once dried. Steve thinks a fresh stock should be here in the next couple of days...
  4. Once 3 is resolved, we can go ahead and install the Oplev.
  5. Which leaves the last subsystem: coupling to the fiber and a power monitor for the NPRO. I have resolved to do both of these using the 1% transmitted beam after the beamsplitter immediately after the NPRO, rather than picking off at the harmonic separator after the doubling oven. I need to do the mode-matching calculation for coupling into the fiber and also adjust the collimating lens...
  6. Clean-up: make sure cables are tied down, strain-relieved and hooked up to whatever they are supposed to be hooked up to...
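The fiber mode-matching calculation mentioned in item 5 boils down to a Gaussian overlap integral; in the simplest case (waists co-located, no tilt or offset) the power coupling is a one-liner. All numbers here are placeholders, not the actual beam parameters:

```python
def waist_coupling(w1, w2):
    """Power coupling of two Gaussian beams with coincident waists of radii
    w1, w2 (standard overlap-integral result; 1.0 for a perfect match)."""
    return (2.0 * w1 * w2 / (w1 ** 2 + w2 ** 2)) ** 2

# e.g. focusing to a 6.0 um waist onto a fiber with a 5.2 um mode-field radius
# (both numbers made up for illustration):
print(waist_coupling(6.0, 5.2))   # ~0.98
```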
  2673   Mon Mar 15 09:43:47 2010   steve   Update   PEM   more sandblasting today

Do not open the IFO vacuum envelope today! They are sandblasting again at CES.

  6959   Wed Jul 11 11:18:21 2012   steve   Update   PEM   more seismic noise next week
   The fabricators of the big flume in the CES lab have begun testing the sediment feed system which is the noisiest component and plan to test off and on during the day for the next week.
   Please let me know if you detect the noise or have any issues.
Brian Fuller

phone: 626-395-2465

  6961   Wed Jul 11 13:45:01 2012   Jenne   Update   PEM   more seismic noise next week

   The fabricators of the big flume in the CES lab have begun testing the sediment feed system which is the noisiest component and plan to test off and on during the day for the next week.
   Please let me know if you detect the noise or have any issues.
Brian Fuller

phone: 626-395-2465

 Masha and Yaakov - this is an excellent opportunity for you guys to test out your triangulation stuff!  Also, it might give a lot of good data times for the learning algorithms.

Maybe you should also put out the 3 accelerometers that Yaakov isn't using (take them off their cube, so they can be placed separately), then you'll have 6 sensors for vertical motion.  Or you can leave the accelerometers as a cube, and have 4 3-axis sensors (3 seismometers + accelerometer set).

  12794   Fri Feb 3 11:03:06 2017 jamieUpdateCDSmore testing fb1; DAQ DOWN DURING TEST

More testing of fb1 today.  DAQ DOWN UNTIL FURTHER NOTICE.

Testing Wednesday did not resolve anything, but Jonathan Hanks is helping.

  10714   Fri Nov 14 08:25:29 2014 SteveUpdateSUSmorning issues

PRM, SRM and the ENDs are kicking up.  Computers are down.  PMC slider is stuck at low voltage.

Attachment 1: morning.png
  9726   Fri Mar 14 09:44:34 2014 SteveUpdateLSCmorning lock
Attachment 1: 2hrsMorningLock.png
  13504   Fri Jan 5 17:50:47 2018 ranaConfigurationComputersmotif on nodus

I had to do 'sudo yum install motif' on nodus so that we could get libXm.so.4 so that we could run MEDM. Works now.

  12582   Thu Oct 27 09:38:32 2016 SteveUpdatePEMmouse

We may have a mouse in the lab.  Do not leave any food scrap in trash ! Traps will be set.


Attachment 1: mouse.jpg
  12605   Tue Nov 8 08:51:59 2016 SteveUpdatePEMmouse hole sealed

This is where the mammal came through. It is reachable from room 108 CES


We may have a mouse in the lab.  Do not leave any food scrap in trash ! Traps will be set.



Attachment 1: CES108rs.jpg
  3646   Tue Oct 5 09:26:04 2010 steveConfigurationPEMmoved accelerometers

Accelerometers xyz were moved from IOO-south/west  corner to under PSL table. They were turned off for ~20 minutes.

Guralp was also moved eastward about 2 ft. It is not leveled.

This is part of the preparation to remove access connector.

  698   Fri Jul 18 19:30:20 2008 MashaUpdateAuxiliary lockingmoving from 40m
I will be working in the basement of Bridge probably starting next week; I moved the NPRO laser and some of the optics from my mach zehnder setup on the SP table to Bridge. Thanks for your help!
  2182   Thu Nov 5 16:30:56 2009 peteUpdateComputersmoving megatron

 Joe and I moved megatron and its associated IO chassis from 1Y3 to 1Y9, in preparations for RCG tests at ETMY.

  12521   Wed Sep 28 04:27:33 2016 ericqUpdateGeneralmucking about

PMC was terribly misaligned. The PMCR camera seems to have drifted somewhat off target too, but I didn't touch it.

Realigned ITMX for the nth time today.

Finding the ALSY beatnote was easy; ALSX eludes me. I did a rough one-point realignment on the X beat PD, which is usually enough, but it's probably been long enough that near/far field alignment is necessary.

ALSY noise is mostly nominal, but there is a large 3Hz peak that is visible in the spot motion, and also modulates the beat amplitude by multiple dBs.

It looked to me like the ETMY oplev spot was moving too much, which led me to measure the oplev OLGs. There is some weird inter-loop interference going on between OLPIT and OLYAW. With both on (whether OSEM damping is on or off, so input matrix shenanigans can't be to blame) there is a very shallow "notch" at around 4.5Hz, which leads to very little phase at 3Hz, and thus tons of control noise. Turning off the OL loop not being measured makes this dip go away, but the overall phase is still significantly less than we should have. I'm not sure why. I'll just show the PIT plot, but things look pretty much the same for YAW.

I did some more ETMX tests. Locked arm, raised the servo output limit to 15k, then increased the gain to make the loop unstable. I saw the SUS LSC signals go up to tens of thousands of counts when the unlock happened. I did this a dozen times or so, and every time the ETM settled in the same angular position according to the oplev.

Right now, another hysteresis script is running, misaligning in pitch and yaw. Amplitude 1V in each direction. So far, everything is stable after three on/off cycles.

Attachment 1: alscheck.pdf
Attachment 2: weird_olpit.pdf
  12522   Thu Sep 29 09:49:53 2016 ranaUpdateGeneralmucking about

With the WFS and OL, we have never figured out a good way to separate pitch and yaw. Need to figure out a reference for up/down and then align everything to it: quad matrix + SUS output matrix

  12527   Sat Oct 1 10:03:28 2016 ericqUpdateGeneralmucking about

Some things I did last night:

I measured the X PDH OLG, and turned the gain down by ~6dB to bring the UGF back to 10kHz, with ~50deg phase margin and 10dB gain margin. However, the error signal on the oscilloscope remained pretty ratty. Zooming in, it was dominated by glitches occurring at 120Hz. I went to hook up the SR785 to the control signal monitor to see what the spectrum of these glitches looked like, but weirdly enough, connecting the SR785's input made the glitches go away. In fact, with one end of a BNC cable plugged into a floating SR785 input, touching the other end's shield to any of the BNC shields on the uPDH chassis made the glitches go away.

This suggested some ground loop shenanigans to me; everything in the little green PDH shelves is plugged into a power strip which is itself plugged into a power strip at the X end electronics rack, behind all of the sorensens. I tried plugging the power strip into some different places (including over by the chamber where the laser and green refl PD are powered), but nothing made the glitches go away. In fact, it often resulted in being unable to lock the PDH loop for unknown reasons. This remains unsolved.

As Gautam and Johannes observed, the X green beat was puny. By hooking up a fast scope directly to the beat PD output, I was able to fine-tune the alignment to get an 80mVpp beat, which I think is substantially bigger than what we used to have. (Is this, plus the PDH gain change, really attributable to arm loss reduction? Hm)

However, the DFD I and Q outputs have intermittent glitches that are big enough to saturate the ADC when the whitening filters are on, even with 0dB whitening gain, which makes it hard to see any real ALS noise above a few tens of Hz or so. Turning off the whitening and cranking up the whitening gain still shows a reasonably elevated spectrum from the glitches. (I left a DTT instance with a spectrum open on the desktop, but forgot to export...) The glitches are not uniformly spaced at 120Hz as in the PDH error signal. However, the transmitted green power also showed intermittent quick drops. This also remains unsolved for the time being.

  12529   Tue Oct 4 02:59:48 2016 ericqUpdateGeneralmucking about

[ericq, gautam]

We poked around trying to figure out the X PDH situation. In brief, the glitchiness comes and goes; not sure what causes it. Tried temp servo on/off and flow bench fan on/off. Gautam placed a PD to pick off the pre-doubler AUX X IR light to see if there is some intermittent intensity fluctuation overnight. During non-glitchy times, the ALSX noise profile doesn't look too crazy, but there is a new peak around 80Hz and somewhat elevated noise compared to historical levels above 100Hz. It's all coherent with the PDH control up there though, and still looks like smooth frequency noise...

NB: The IR intensity monitoring PD is temporarily using the high gain Transmon PD ADC channel, and is thus the source of the signal at C1:LSC-TRY_OUT_DQ. If you want to IR lock the X arm, you must change the transmon PD triggering to use the QPD.

Attachment 1: 2016-10-04_ALSXspectra.pdf
  2294   Wed Nov 18 16:58:36 2009 kiwamuUpdateElectronicsmulti-resonant EOM --- EOM characterization ---

In designing the whole circuit, it is better to know the characteristics of the EOM.

I made an impedance measurement of the EOM (New Focus model 4064) and found that it has a capacitance of 10pF.

This is in good agreement with the data sheet, which says "5-10pF".

The measured plot is attached below. For comparison, "open" and "10pF mica" are also plotted.

In the band of interest (from 1MHz to 100MHz), the EOM looks like just a capacitor.

But it does have a lead inductance of 12nH, a resistance of 0.74[Ohm], and a parasitic capacitance of 5.5pF.

In some cases we will have to take those parasitics into account in the design.
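A quick way to sanity-check these fitted values is to compute |Z| of the equivalent circuit and confirm that it stays capacitive across 1-100MHz. The sketch below assumes a topology (series R-L with the 10pF crystal capacitance, shunted by the 5.5pF parasitic); the actual fit model is in attachment 2 and may differ:

```python
import numpy as np

# Fitted values from this entry; the series/parallel topology is my assumption
R, L = 0.74, 12e-9               # loss resistance [Ohm], lead inductance [H]
C_eom, C_par = 10e-12, 5.5e-12   # crystal and parasitic capacitance [F]

def Z_eom(f):
    """Impedance of the assumed equivalent circuit at frequency f [Hz]."""
    w = 2*np.pi*f
    Z_series = R + 1j*w*L + 1/(1j*w*C_eom)   # R-L-C branch
    Z_shunt = 1/(1j*w*C_par)                 # parasitic shunt capacitance
    return Z_series*Z_shunt/(Z_series + Z_shunt)

# In 1-100 MHz the device should look like a plain 15.5 pF capacitor:
f = 10e6
print(abs(Z_eom(f)), 1/(2*np.pi*f*(C_eom + C_par)))
```

With these numbers the series branch only self-resonates at a few hundred MHz, well above the band of interest, which is consistent with the "looks like just a capacitor" observation.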



  2295   Wed Nov 18 22:38:17 2009 KojiUpdateElectronicsmulti-resonant EOM --- EOM characterization ---

How can I get those values from the figure?


But indeed it has lead inductance of 12nH, resistance of 0.74[Ohm], and parasitic capacitance of 5.5pF. 


  2292   Wed Nov 18 14:55:59 2009 kiwamuUpdateElectronicsmulti-resonant EOM --- circuit design ----

The circuit design of the multi-resonant EOM has progressed.

Using a numerical method, I found some of the best choices of the parameters (capacitors and inductors).

In fact there are 6 parameters (Lp, L1, L2, Cp, C1, C2) in the circuit to be determined.


In general, fewer parameters means less calculation time for the numerical analysis, and 6 parameters is a bit too many.

In order to reduce the number of free parameters, I imposed 4 boundary conditions.

Each boundary condition fixes a resonant peak or valley: first peak = 11MHz, third peak = 55MHz, first valley = 19MHz, second valley = 44MHz.


So the remaining free parameters are successfully reduced to 2. All we have to do now is tune the second peak to be at 29.5MHz.

So I take C1 and C2 as the free parameters and see how well the second peak agrees with 29.5MHz as their values are changed.


In the figure, red represents good agreement with 29.5MHz; in contrast, the blue contours represent poor agreement.

 You can see some of the best choices along the yellow belt. Now what we should do is examine some of these and select one.
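The parameter reduction described above can be sketched numerically. This assumes the tank is Lp and Cp in parallel with two series-LC branches (my guess at the topology; the real circuit may differ): the two valley conditions then fix L1 and L2 once C1 and C2 are chosen, the two outer-peak conditions become a linear 2x2 system for Cp and 1/Lp, and the second peak is the remaining zero of Im[Y] between the valleys:

```python
import numpy as np

MHz = 1e6
f_peak1, f_peak3 = 11*MHz, 55*MHz   # first and third |Z| peaks (fixed)
f_val1, f_val2 = 19*MHz, 44*MHz     # the two valleys (fixed)

def g(om, L1, C1, L2, C2):
    # Im[Y] of the two series-LC branches: each is Y = -j/(w*L - 1/(w*C))
    return -1.0/(om*L1 - 1.0/(om*C1)) - 1.0/(om*L2 - 1.0/(om*C2))

def second_peak(C1, C2):
    # Valley conditions: each series branch shorts |Z| at its own resonance
    L1 = 1.0/((2*np.pi*f_val1)**2 * C1)
    L2 = 1.0/((2*np.pi*f_val2)**2 * C2)
    # Outer-peak conditions: w*Cp - 1/(w*Lp) + g(w) = 0 at f_peak1, f_peak3
    wa, wb = 2*np.pi*f_peak1, 2*np.pi*f_peak3
    A = np.array([[wa, -1.0/wa], [wb, -1.0/wb]])
    b = -np.array([g(wa, L1, C1, L2, C2), g(wb, L1, C1, L2, C2)])
    Cp, invLp = np.linalg.solve(A, b)
    # Middle peak = zero of Im[Y], bracketed by the two valleys
    f = np.linspace(f_val1*1.001, f_val2*0.999, 200001)
    om = 2*np.pi*f
    Yim = om*Cp - invLp/om + g(om, L1, C1, L2, C2)
    i = np.nonzero(np.diff(np.sign(Yim)))[0]
    return f[i[0]] if i.size else None

# Scan the two free parameters, printing combos with the second peak on target
for C1 in np.linspace(50e-12, 500e-12, 10):
    for C2 in np.linspace(50e-12, 500e-12, 10):
        f2 = second_peak(C1, C2)
        if f2 and abs(f2 - 29.5*MHz) < 0.1*MHz:
            print(C1, C2, f2)
```

The "yellow belt" in the contour plot corresponds to the set of (C1, C2) pairs that this scan would flag.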

  2263   Fri Nov 13 05:03:09 2009 kiwamuUpdateElectronicsmulti-resonant EOM --- input impedance of LC tank ----

I measured the input impedance of the LC tank circuit with the transformer. The result is attached.

It looks interesting because the input impedance is almost entirely dominated by the primary coil of the transformer, with an inductance of 75nH (see attachment 1).

The impedance at the resonance is ~100 [Ohm]; I think this number is quite reasonable, because I expected 93 [Ohm].


Note that the input impedance can be derived as follows:

(input impedance) = jwL1 + Z/n^2.

Where L1 is the inductance of the primary coil, Z is the load in the secondary loop, and n is the turns ratio.
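As a quick numeric check of this formula, using the 75nH primary inductance from the fit and the ~1500 [Ohm] tank impedance (taken as purely real at resonance, which is an approximation), the predicted magnitude at the ~50MHz resonance comes out near the ~100 [Ohm] observed here:

```python
import numpy as np

n = 4         # transformer turns ratio
L1 = 75e-9    # primary-coil inductance from the fit [H]
Z = 1500.0    # tank impedance at its ~50 MHz resonance [Ohm], taken as real

f = 50e6
Z_in = 1j*2*np.pi*f*L1 + Z/n**2   # (input impedance) = jwL1 + Z/n^2
print(abs(Z_in))                  # ~97 Ohm, consistent with the measured ~100 Ohm
```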


I think now I am getting ready to enter the next phase \(^o^)/

Attachment 1: input_impedance.png
Attachment 2: input_impedance2.png
  2262   Fri Nov 13 03:38:47 2009 kiwamuUpdateElectronicsmulti-resonant EOM --- impedance of LC tank circuit ----

I have measured the impedance of the LC tank circuit that I described in my last entry.

The configuration of the circuit is exactly the same as at that time.

To observe the impedance, I used Koji's technique of injecting an RF signal into the output of the resonant circuit.

In addition, I left the input open; therefore the measured impedance does not include the effect of the transformer.


- - - - - - - - - - - - results

The measured impedance is attached below; "LCtank_impedance.png"

The peak around 50MHz is the main resonance; it has an impedance of ~1500 [Ohm], which would go to infinity in the ideal case (no losses).

In fact, the impedance seen from the input of the circuit is reduced by 1/n^2, where "n" is the turns ratio of the transformer.

Putting n=4, the input impedance of the circuit should be ~93 [Ohm]. This is a moderate value for which we can easily perform impedance matching.

I also fitted the data with a standard equivalent-circuit model (see attachment 2).

In attachment 2, the red component and red lettering represent the design. All the other black components are parasitics.

But right now I have no idea whether the fitted values are reasonable or not.

Next, I should check the input impedance again in the direct way: putting the signal into the input.




Attachment 1: LCtank_impedance.png
Attachment 2: LCtank_model.png
  346   Thu Feb 28 19:37:41 2008 robConfigurationComputersmultiple cameras running and seisBLRMS

1) Mafalda is now connected via an orange Cat5E ethernet cord to the gigabit ethernet switch in rack in the office space. It has been labeled at both ends with "mafalda".

2) Both the GC650M camera (from MIT) and the GC750M are working. I can run the sampleviewer code and get images simultaneously. Unfortunately, the fps on both cameras seems to drop roughly in half (not an exact measurement) when displaying both simultaneously at full resolution.

3) Discovered that the gigabit ethernet card in Mafalda doesn't support jumbo packets (packets of up to 9k bytes), which is what they recommend for optimum speed.

4) However, connecting the cameras to Mafalda through only gigabit switches did seem to increase the data rate anyway, roughly by a factor of 2 (it used to take about 80 seconds to save 1000 frames; now it takes roughly 40 seconds).

5) Need to determine the bottleneck on the cameras. It may be the ethernet card, although it's possible to connect multiple gigabit cards to a single computer (depending on the number of PCI slots it has). Given that the ethernet cards are cheap ($300 for 20) compared to even a single camera (~$800-1500), it might be worthwhile to outfit a computer with several.

I found the SampleViewer running and displaying images from the two cameras. This kept mafalda's network so busy that the seisBLRMS program fell behind by a half-hour from its nominal delay (so 45 minutes instead of 12), and was probably getting steadily further behind. I killed the SampleViewer display on linux2, and seisBLRMS is catching up.
  6152   Tue Dec 27 22:17:56 2011 kiwamuUpdateLSCmultiple-LOCKIN new screens

Some new screens have been made for the new multiple-LOCKIN system running on the LSC realtime controller.

The MEDM screens are not so pretty because I didn't spend much time on them, but they are fine for doing some actual measurements.

So the basic work for installing the multiple-LOCKIN is done.


 The attached figure is a screen shot of the LOCKIN overview window.

As usual most of the components shown in the screen are clickable and one can go to deeper levels by clicking them. 


Quote from #6150
The multiple LOCKIN module has been newly added on the LSC realtime model.
I will make some MEDM screens for this multiple-LOCKIN system.

  6150   Mon Dec 26 14:01:45 2011 kiwamuUpdateLSCmultiple-LOCKIN newly added
The multiple LOCKIN module has been newly added on the LSC realtime model.
The purpose is to demodulate ALL the LSC sensors at once while a particular DOF is excited by an oscillator.
So far the model has been successfully compiled and running okay.
I will make some MEDM screens for this multiple-LOCKIN system.

(Some details)

The picture below is a screen shot of the LSC real time model, zoomed in the new LOCKIN part.


The LOCKIN module consists of three big components:

  1. A Master oscillator
    • This shakes a desired DOF through the LSC output matrix and provides each demodulator with sine and cosine local oscillator signals.
    • This part is shown in the upper side of the screen shot.
    • The sine and cosine local oscillator signals appear as red and blue tags respectively in the screen shot.
  2. An input matrix
    • To allow us to select the signals that we want to demodulate.
    • This is shown in the left hand side of the screen shot.
  3. Demodulators
    • These demodulators demodulate the LSC sensor signals by the sine and cosine signals provided from the master oscillator.
    • With the input matrix fully diagonalized, one can demodulate all the LSC signals at once.
    • The number of demodulators is 27, which corresponds to that of available LSC error signals (e.g. AS55_I, AS55_Q, and etc.).
    • This part is shown in the middle of the screen shot.
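The demodulation each of those blocks performs can be sketched offline. This is a generic digital lock-in, not the actual RCG code; the sample rate, line frequency, and signal values below are made up for illustration, and a boxcar mean stands in for the module's low-pass filter:

```python
import numpy as np

fs = 16384.0     # assumed sample rate [Hz]
f_osc = 311.0    # master-oscillator line frequency [Hz]
t = np.arange(0, 8.0, 1/fs)

# Simulated "LSC sensor" signal: the driven line buried in noise
rng = np.random.default_rng(0)
amp, phase = 3e-3, 0.4
sig = amp*np.sin(2*np.pi*f_osc*t + phase) + 1e-3*rng.standard_normal(t.size)

# Multiply by the sine and cosine local oscillators, then average
I = np.mean(2*sig*np.sin(2*np.pi*f_osc*t))
Q = np.mean(2*sig*np.cos(2*np.pi*f_osc*t))

mag, ph = np.hypot(I, Q), np.arctan2(Q, I)   # recovered amplitude and phase
```

In the real system this runs once per oscillator against each of the 27 selected error signals, with the input matrix picking which signals feed which demodulators.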
  7457   Mon Oct 1 16:05:01 2012 jamieUpdateCDSmx stream restart required on all front ends

For some reason the frame builder and mx stream processes on ALL front ends were down.  I restarted the frame builder and all the mx_stream processes and everything seems to be back to normal.  Unclear what caused this.  The CDS guys are aware of the issue with the mx_stream stability and are working on it.

  11103   Thu Mar 5 19:13:17 2015 ranaUpdateCDSmxStream all restart at ~7:10 pm
  6609   Sun May 6 00:11:00 2012 DenUpdateCDSmx_stream

The c1sus and c1iscex computers could not connect to the framebuilder. Restarting the framebuilder did not help; restarting the mx_stream daemon on each of the computers fixed the problem:

sudo /etc/init.d/mx_stream restart

  9825   Thu Apr 17 17:15:54 2014 jamieUpdateCDSmx_stream not starting on c1ioo

While trying to get dolphin working on c1ioo, the c1ioo mx_stream processes mysteriously stopped working.  The mx_stream process itself just won't start now.  I have no idea why, or what could have happened to cause this change.  I was working on PCIe dolphin stuff, but have since backed out everything that I had done, and still the c1ioo mx_stream process will not start.

mx_stream relies on the open-mx kernel module, but that appears to be fine:

controls@c1ioo ~ 0$ /opt/open-mx/bin/omx_info  
Open-MX version 1.3.901
 build: root@fb:/root/open-mx-1.3.901 Wed Feb 23 11:13:17 PST 2011

Found 1 boards (32 max) supporting 32 endpoints each:
 c1ioo:0 (board #0 name eth1 addr 00:14:4f:40:64:25)
   managed by driver 'e1000'
   attached to numa node 0

Peer table is ready, mapper is 00:30:48:d6:11:17
  0) 00:14:4f:40:64:25 c1ioo:0
  1) 00:30:48:d6:11:17 c1iscey:0
  2) 00:25:90:0d:75:bb c1sus:0
  3) 00:30:48:be:11:5d c1iscex:0
  4) 00:30:48:bf:69:4f c1lsc:0
controls@c1ioo ~ 0$ 

However, if trying to start mx_stream now fails:

controls@c1ioo ~ 0$ /opt/rtcds/caltech/c1/target/fb/mx_stream -s c1x03 c1ioo c1als -d fb:0
mmapped address is 0x7f885f576000
mapped at 0x7f885f576000
send len = 263596
OMX: Failed to find peer index of board 00:00:00:00:00:00 (Peer Not Found in the Table)
mx_connect failed
controls@c1ioo ~ 1$ 

I'm not quite sure how to interpret this error message.  The "00:00:00:00:00:00" has the form of a 48-bit MAC address that would be used for a hardware identifier, a la the second column of the OMX "peer table" above, although of course all zeros is not an actual address.  So there's some disconnect between mx_stream and the actual omx configuration stuff that's running underneath.

Again, I have no idea what happened.  I spoke to Rolf and he's going to try to help sort this out tomorrow.

Attachment 1: c1ioo_no_mx_stream.png
  9830   Fri Apr 18 14:00:48 2014 rolfUpdateCDSmx_stream not starting on c1ioo


 To fix open-mx connection to c1ioo, had to restart the mx mapper on fb machine. Command is /opt/mx/sbin/mx_start_mapper, to be run as root. Once this was done, omx_info on c1ioo computer showed fb:0 in the table and mx_stream started back up on its own. 

ELOG V3.1.3-