  9478   Mon Dec 16 02:20:49 2013   Den | Update | LSC | MICH rms is improved

When the PRMI is locked on the REFL165 I & Q signals, the MICH rms is dominated by the 60 Hz line and its harmonics, which come from the demodulation board.

To increase the SNR, a ZFL-100LN amplifier (+23.5 dB) was installed in the LSC analog rack. The MICH 60 Hz line and harmonics are improved, as shown in the plot "mich_err".

I have also added a few resonant gains at low frequencies. The MICH rms is now 3×10^-10. In Optickle I simulated the dependence of the power in the PRC and the arms on MICH motion. The plot is attached.

I think we need to stabilize MICH even more, down to ~3×10^-11. We can think about increasing the RF amplifier gain, the modulation index, and the power on the BB PD.

CARM offset reduction was a little better today due to the improved MICH rms. The power in the arms increases up to 15, then starts to oscillate, reaching up to 70, and then the PRMI loses lock.

Tomorrow we need to discuss where to put the RF amplifier. The current design has several drawbacks:

  • DC power for the amplifier is wired from a custom (not rack-based) +15V power supply that was already inside the LSC rack and used for other ZFL-100LN amplifiers
  • BNC cables are used because I could not find any long SMA cables
  • we would like a gain of ~40 dB instead of 23.5 dB (see the rough arithmetic below)
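A minimal sketch of the gain arithmetic behind that last point (my reading of the numbers in this entry, not a statement from it): if the residual MICH noise is dominated by lines injected at the demodulation board, RF gain ahead of the board buys SNR directly, and a factor of 10 in rms corresponds to 20 dB.

import numpy as np

rms_now     = 3e-10   # current MICH rms (from this entry)
rms_target  = 3e-11   # desired MICH rms
gain_now_db = 23.5    # installed ZFL-100LN gain, dB

extra_db = 20 * np.log10(rms_now / rms_target)   # 20 dB per factor of 10 in rms
print(f"extra gain ~ {extra_db:.1f} dB -> total ~ {gain_now_db + extra_db:.1f} dB")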
Attachment 1: MICH_ERR.pdf
Attachment 2: DC_power.pdf
Attachment 3: ARM_OFFSET.pdf
  9767   Mon Mar 31 17:47:57 2014   ericq | Summary | LSC | MICH sensing oddities in REFL 3F

Last week, while I had the PRMI locked on REFL33, I did some poking around with mirror-excitation-to-RFPD-quadrature transfer functions. I got some indication of weird things when sensing MICH with the 3F REFL signals, but it should be explored more before being taken as a real thing. I just figured I would show what I saw.

With that disclaimer out of the way, here's what I did:

  • Locked PRMI on PRCL:REFL33_I and MICH:REFL33_Q, as detailed in my earlier ELOG
  • Created PRCL and MICH excitations at two different frequencies, notched said frequencies out of the control filters
  • Took transfer functions from mirror LSC output signals to 33 I, 33 Q, 165 I, 165 Q in DTT
  • For each DOF, looked at the measured transfer functions only at the excitation frequency (assuming good coherence, which was there)

The basic idea: some PRCL motion (for instance) has a transfer function to both the I and Q quadratures at a given PD. As the PRCL excitation sine wave goes through one cycle, the REFL signals at the excitation frequency go through some coherent cycle. Thus, the excitation traces out some trajectory in the I vs. Q plane. I believe this is analogous to the typical "radar plot" that we make for sensing matrix elements.

However, the straight line that we normally draw in the radar plots assumes a particular phase relationship between the DOF->I and DOF->Q transfer functions. Here are the trajectories I actually measured, normalized by the excitation amplitudes.

[Trajectory plots: REFL_33_traj.pdf, REFL_165_traj.pdf]

The plotted traces are (x,y) = (H_prcl->I * prcl, H_prcl->Q * prcl) and  (x,y) = (H_mich->I * mich, H_mich->Q * mich) where H_prcl->I is the measured complex transfer function from prcl to REFL I, for instance, and prcl and mich are the excitation signals, normalized to unit amplitude.
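For concreteness, here is a minimal Python sketch of that trajectory construction (my own illustration; the variable names and TF values are made up, not the DTT export):

import numpy as np
import matplotlib.pyplot as plt

def iq_trajectory(H_I, H_Q, n=200):
    # (I, Q) path traced over one cycle of a unit-amplitude excitation
    phase = np.exp(1j * np.linspace(0, 2 * np.pi, n))
    return np.real(H_I * phase), np.real(H_Q * phase)

# hypothetical complex TFs measured at the MICH excitation frequency
I, Q = iq_trajectory(H_I=1.0 * np.exp(1j * 0.1), H_Q=0.4 * np.exp(1j * 0.9))
plt.plot(I, Q); plt.xlabel('REFL33 I'); plt.ylabel('REFL33 Q'); plt.axis('equal')
plt.show()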

PRCL looks like a nice straight line in both of these, and is pretty well phased. MICH, however, is not very orthogonal to PRCL, and there is quite a bit of ellipticity present, which means we can't fully decouple the two DOFs even if they were nominally orthogonal.

I'm not sure what may cause this. To back up this measurement/interpretation, I tried to take measurements of these transfer functions across different excitation frequencies via swept sine DTT, but seismic activity kept me from staying locked long enough...

  9768   Mon Mar 31 21:23:30 2014   Gabriele | Summary | LSC | MICH sensing oddities in REFL 3F

Quote:

I'm not sure what may cause this. To back up this measurement/interpretation, I tried to take measurements of these transfer functions across different excitation frequencies via swept sine DTT, but seismic activity kept me from staying locked long enough...

I guess that you get an ellipse when the transfer functions to I and Q have different phases. One mechanism could be that when driving MICH we produce some residual PRCL motion, which couples to both I and Q with a different transfer function. However, I would expect no phase lag in the PRMI configuration, since there is not enough optical delay in the system to give significant dephasing at a few hundred Hz. This effect might come from mechanical resonances.
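In formulas (standard Lissajous reasoning, added here for clarity rather than taken from the measurement): at the excitation frequency the two demodulated outputs are x(t) = |H_I| cos(w t + phi_I) and y(t) = |H_Q| cos(w t + phi_Q). For phi_I = phi_Q this collapses to the straight line y = (|H_Q| / |H_I|) x; any phase difference dphi = phi_Q - phi_I opens the line into an ellipse, up to a circle for dphi = 90 deg and equal magnitudes.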

It is worth measuring the optical transfer functions from both PRCL and MICH to REFL signals at all frequencies, to see if we have strange features in the TFs.

  13395   Thu Oct 19 15:42:03 2017   jamie | Summary | LSC | MICH/PRCL reconstruction neural network running on c1lsc

Gabriele's PRCL/MICH reconstruction neural network is now running on c1lsc.  Summary:

  • the front-end model is called c1dnn, and is running as an experimental user-space process
  • c1dnn is getting most of its needed inputs from existing SHMEM IPC outputs from c1lsc
  • none of the output signals from the network are being sent anywhere yet (grounded)
  • c1dnn has not been integrated in any way into the DAQ etc.; it is being run manually by hand and will be completely shut down after this test

Simple MEDM screen I made to monitor the input/output signals:

The RTS process seems to run fine, but there is quite a bit of jitter in the CPU_METER, at the 50% level:

It's not running over the limit, but it is jumping around more than I think it should be.  Will look into that...

cpuset for cpu isolation for user-space model

The c1dnn model is running on CPU6 on c1lsc.  CPU6 was isolated from the rest of the system using cpuset.  The "cset" utility was used to create a "system" CPU set that was assigned to CPU0, and the kernel was instructed to move all running processes to that set:

controls@c1lsc:~ 2$ sudo cset set
cset:
         Name       CPUs-X    MEMs-X Tasks Subs Path
 ------------ ---------- - ------- - ----- ---- ----------
         root        0,6 y       0 y   343    0 /
controls@c1lsc:~ 0$ sudo cset set -c 0 -s system --cpu_exclusive
cset: --> created cpuset "system"
controls@c1lsc:~ 0$ sudo cset set
cset:
         Name       CPUs-X    MEMs-X Tasks Subs Path
 ------------ ---------- - ------- - ----- ---- ----------
         root        0,6 y       0 y   342    1 /
       system          0 y       0 n     0    0 /system
controls@c1lsc:~ 0$ sudo cset proc --move -f root -t system -k
cset: moving all tasks from root to /system
cset: moving 292 userspace tasks to /system
cset: moving 0 kernel threads to: /system
cset: --> not moving 50 threads (not unbound, use --force)
[==================================================]%
cset: done
controls@c1lsc:~ 0$ sudo cset set
cset:
         Name       CPUs-X    MEMs-X Tasks Subs Path
 ------------ ---------- - ------- - ----- ---- ----------
         root        0,6 y       0 y    50    1 /
       system          0 y       0 n   292    0 /system
controls@c1lsc:~ 0$ sudo cset proc --move -f root -t system -k --force
cset: moving all tasks from root to /system
cset: moving 50 kernel threads to: /system
[==================================================]%
cset: **> 29 tasks are not movable, impossible to move
cset: done
controls@c1lsc:~ 0$ sudo cset set
cset:
         Name       CPUs-X    MEMs-X Tasks Subs Path
 ------------ ---------- - ------- - ----- ---- ----------
         root        0,6 y       0 y    29    1 /
       system          0 y       0 n   313    0 /system
controls@c1lsc:~ 0$

I then created a set for the RTS process ("rts-c1dnn") on CPU6, and executed the c1dnn model in that set:

controls@c1lsc:~ 0$ sudo cset set -c 6 -s rts-c1dnn --cpu_exclusive
cset: --> created cpuset "rts-c1dnn"
controls@c1lsc:~ 0$ sudo cset set
cset:
         Name       CPUs-X    MEMs-X Tasks Subs Path
 ------------ ---------- - ------- - ----- ---- ----------
         root        0,6 y       0 y    24    2 /
    rts-c1dnn          6 y       0 n     0    0 /rts-c1dnn
       system          0 y       0 n   340    0 /system
controls@c1lsc:~ 0$ sudo cset proc -s rts-c1dnn --exec /opt/rtcds/caltech/c1/target/c1dnn/bin/c1dnn -- -m c1dnn
cset: --> last message, executed args into cpuset "/rts-c1dnn", new pid is: 27572
sysname = c1dnn
....

When done I just hit Ctrl-C.

I left the cpusets as they are, with all system processes in the "system" set. This should not pose any problems, since it's the same configuration as if a normal kernel-level model were running on CPU6.

The c1dnn process and its EPICS sequencer were shut down after this test.

  8816   Tue Jul 9 23:27:17 2013   Koji | Summary | LSC | MICH: ITMX/Y <=> PRM/BS

The MICH actuation with PRM/BS was investigated again.

(ITMX -1 / ITMY +1) is equivalent to (PRM -0.267 and BS +0.50).


- PRMIsb was locked with REFL33I&AS55Q.

- Using the lock-in module in the LSC model, actuate ITMX (-1) and ITMY (+1) at 580.1 Hz. Note that the notch filters in the MICH/PRCL servos were on.

- Look at the peak in the AS55Q spectrum. Tune the BS element in the output matrix of the lock-in to minimize the peak height.
=> The peak was minimized at BS = -0.50.

- Look at the peak in the REFL33I spectrum. Tune the PRM element in the output matrix of the lock-in to minimize the peak height.
=> The peak was minimized at PRM = +0.267

- These measurements lead to the conclusion mentioned above: the cancellation values found with the lock-in (BS = -0.50, PRM = +0.267) are the negatives of the equivalent actuation, i.e. (ITMX -1 / ITMY +1) <=> (PRM -0.267, BS +0.50).

  8490   Thu Apr 25 04:10:09 2013   Jenne | Update | Locking | MICH_CTRL drifting away??

Koji is elogging separately about his exploration of different configurations. The lock stretch that I'm looking at here uses the same parameters as Koji had for the PRMI sb lock, using AS55Q for MICH and REFL33I for PRCL, with a MICH gain of -0.8 and a PRCL gain of 0.05.

All of these plots are the same few second lock stretch, with different zooming.  Jamie's super-sweet getdata python script only accepts integers for the start time and duration parameters, so lots of this zooming happened by hand, but I tried to always keep the time axis aligned within each screenshot.  Sometimes the plot axis labels say differently, but they're lying to you.
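As a hedged aside (not the getdata script itself), this is roughly how one would pull the same stretch by GPS time with gwpy; the NDS host and channel names here are assumptions:

from gwpy.timeseries import TimeSeriesDict

start, dur = 1050915916, 6                                # GPS start and duration from Plot 1
chans = ['C1:LSC-MICH_CTRL_DQ', 'C1:LSC-MICH_IN1_DQ']     # placeholder channel names
data = TimeSeriesDict.fetch(chans, start, start + dur, host='nds40.ligo.caltech.edu')
data.plot().show()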

Plot 1:  gps start time is 1050915916, duration = 6 seconds.  Overall view of the lock stretch.


Plot 2:  gps start time is 1050915921, duration = 1 second.  We're looking at the lockloss that happens at the left side of the plots.


Plot 3:  zoomed in (along the time-axis) version of plot 2, so much shorter time duration.  Some zooming on y-axes.


Plot 4:  zoomed in (along y-axes) version of plot 2.


It seems to me from these plots that maybe MICH CTRL is drifting away?  It seems like we lose the MICH lock, and that destroys the whole thing. 

Koji made some comments to me earlier, regarding his work this evening, that the MICH signal quality is poor in general, and that we should calculate/think about changing our Schnupp asymmetry.

  11499   Wed Aug 12 16:39:46 2015   Ignacio | Update | IOO | MISO Wiener (T240-X and T240-Y) FF of MCL

Last night I performed some MISO FF on MCL using the T240-X and T240-Y as witnesses. Here are the results:
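For context, a minimal MISO Wiener sketch of the kind of filter fit being described (my own illustration, not Ignacio's actual code): stack time-shifted copies of both seismometer witnesses and least-squares fit the MCL target, which is the time-domain equivalent of the Wiener solution.

import numpy as np

def miso_wiener_fir(witnesses, target, ntaps=256):
    # witnesses: list of 1-D witness arrays (e.g. T240-X, T240-Y); target: MCL array
    n = len(target) - ntaps + 1
    cols = [np.column_stack([w[i:i + n] for i in range(ntaps)]) for w in witnesses]
    A = np.hstack(cols)                              # regressor matrix
    y = target[ntaps - 1:]
    taps, *_ = np.linalg.lstsq(A, y, rcond=None)
    return taps.reshape(len(witnesses), ntaps)       # one FIR per witness

# prediction to subtract from the target:
# pred = sum(np.convolve(w, h[::-1], 'valid') for w, h in zip(witnesses, taps))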

[Inline figures: Wiener filter TFs for the T240-X and T240-Y witnesses; training data with the predicted FIR and IIR subtraction; online subtraction results for MCL and YARM; subtraction performance.]

Attachment 1: stsx.png
Attachment 2: stsy.png
Attachment 3: performance.png
Attachment 4: sub.png
Attachment 5: mcliir.png
Attachment 6: yarmiir.png
  11547   Sun Aug 30 23:47:02 2015   Ignacio | Update | IOO | MISO Wiener Filtering of MCL

I decided to give MISO Wiener filtering a try again. This time around I managed to get working filters. The overall performance of these MISO filters is much better than that of the SISO filters I constructed in elog:11541.

The procedure I used to develop the SISO filters did not work well for the construction of these MISO filters. I found a way, even more systematic than what I had before, to work around Vectfit's annoyances and get the filters into working condition. I'll explain it in another elog post.

Anyways, here are the MISO filters for MCL using the T240-X and T240-Y as witnesses:

Now the theoretical offline prediction:

The online subtractions for MCL, YARM and XARM. I show the SISO subtraction for reference.

 And the subtraction performance:

  11549   Mon Aug 31 09:36:05 2015   Ignacio | Update | IOO | MISO Wiener Filtering of MCL

MISO Wiener filters for MCL kept the mode cleaner locked for a good 8+ hours.

  15363   Tue Jun 2 14:05:24 2020   Hang | Update | BHD | MM telescope actuation range requirements

We computed the required actuation range for the telescope design in elog:15357. The result is summarized in the table below. Here we assume we misalign an IFO mirror by 1 urad, and then compute how many urad we need to move the (AS1, AS4) or (LO1, LO2) mirrors to simultaneously correct for the two Gouy phases.

Actuation requirement in urad per urad misalignment
[urad/urad]   ITMX   ITMY   ETMX   ETMY     BS    PRM    PR2    PR3    SR3    SRM
AS1            1.9    2.1   -5.0   -5.5    0.5    0.5   -0.3    0.2    0.1    0.6
AS4            2.9    2.0   -8.8   -5.5   -5.9   -0.7    1.3   -0.7   -0.5    0.7
LO1           -4.0   -3.9   11.0   10.4    1.9   -0.4   -0.2    0.1    0.0   -1.1
LO2           -5.0   -3.7   15.1   10.4    8.7    0.8    1.9    1.1    0.7   -1.3

The most demanding IFO mirrors are the ETMs and the BS: for every 1 urad of misalignment, the telescope needs to move 10-15 urad to correct for it. However, it is unlikely for those mirrors to move more than 100 nrad in a locked IFO with ASC engaged. Thus a few urad of actuation should be sufficient. For the recycling mirrors, every 1 urad of misalignment also requires ~1 urad of actuation.

As a result, if we can afford a 10 urad actuation range for each telescope suspension, then the Gouy phase separations we have should be fine.

================================================================

Edits:

We looked at the oplev spectra from gps 1274418500 for 512 sec. This should be a period when the ifo was locked in the PRFPMI state according to elog:15348. We just focused on the yaw data for now. Please see the attached plots. The solid traces are for the ASD, and the dotted ones are the cumulative rms. The total rms for each mirror is also shown in the legend. 

I am now confused... The ITMs looked somewhat reasonable in that at least the < 1 Hz motion was suppressed. The total rms is ~ 0.1 urad, which was what I would expect naively (~ x100 times worse than aLIGO). 

There seems to be no low-freq suppression on the ETMs though... Is there no arm ASC at the moment???
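As a side note on how the dotted cumulative-rms traces are typically built from an ASD (a generic sketch, not the exact script used here): integrate the power spectral density from the high-frequency end downward.

import numpy as np

def cumulative_rms(freq, asd):
    # freq [Hz] ascending, asd in units/rtHz; rms accumulated from high frequency down
    psd = np.asarray(asd) ** 2
    df = np.diff(freq)
    var = np.concatenate(([0.0], np.cumsum((psd[:-1] + psd[1:]) / 2 * df)))  # trapezoid rule
    return np.sqrt(var[-1] - var)        # integral from f up to the top of the band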

Attachment 1: TM_OL_spec_1274418500_512.pdf
Attachment 2: CORNER_OL_spec_1274418500_512.pdf
  15386   Tue Jun 9 14:55:43 2020   Jon | Update | BHD | MM telescope actuation range requirements

I don't think we ever discussed why the angular RMS of the ETMs is so much higher than the ITMs. Maybe that's a separate matter because, even assuming the worst case, the actuation range requirement is

(0.82 μrad RMS) x (15 μrad/μrad) x (10 safety factor) = 0.12 mrad

which is still only order 1% of the pitch/yaw pointing range of the Small Optic Suspensions, according to P1600178 (sec. IV. A). Can we check this requirement off the list?
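Reproducing that arithmetic explicitly (numbers taken from this entry):

rms_etm  = 0.82e-6    # worst-case ETM yaw rms, rad
coupling = 15.0       # urad of telescope motion per urad of ETM misalignment
safety   = 10.0
required = rms_etm * coupling * safety
print(f"required actuation range ~ {required * 1e3:.2f} mrad")   # ~0.12 mrad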

Quote:

We computed the required actuation range for the telescope design in elog:15357. The result is summarized in the table below. Here we assume we misalign an IFO mirror by 1 urad, and then compute how many urad we need to move the (AS1, AS4) or (LO1, LO2) mirrors to simultaneously correct for the two Gouy phases.

Actuation requirement in urad per urad misalignment
[urad/urad]   ITMX   ITMY   ETMX   ETMY     BS    PRM    PR2    PR3    SR3    SRM
AS1            1.9    2.1   -5.0   -5.5    0.5    0.5   -0.3    0.2    0.1    0.6
AS4            2.9    2.0   -8.8   -5.5   -5.9   -0.7    1.3   -0.7   -0.5    0.7
LO1           -4.0   -3.9   11.0   10.4    1.9   -0.4   -0.2    0.1    0.0   -1.1
LO2           -5.0   -3.7   15.1   10.4    8.7    0.8    1.9    1.1    0.7   -1.3

The most demanding IFO mirrors are the ETMs and the BS: for every 1 urad of misalignment, the telescope needs to move 10-15 urad to correct for it. However, it is unlikely for those mirrors to move more than 100 nrad in a locked IFO with ASC engaged. Thus a few urad of actuation should be sufficient. For the recycling mirrors, every 1 urad of misalignment also requires ~1 urad of actuation.

As a result, if we can afford a 10 urad actuation range for each telescope suspension, then the Gouy phase separations we have should be fine.

================================================================

Edits:

We looked at the oplev spectra from gps 1274418500 for 512 sec. This should be a period when the ifo was locked in the PRFPMI state according to elog:15348. We just focused on the yaw data for now. Please see the attached plots. The solid traces are for the ASD, and the dotted ones are the cumulative rms. The total rms for each mirror is also shown in the legend. 

I am now confused... The ITMs looked somewhat reasonable in that at least the < 1 Hz motion was suppressed. The total rms is ~ 0.1 urad, which was what I would expect naively (~ x100 times worse than aLIGO). 

There seems to be no low-freq suppression on the ETMs though... Is there no arm ASC at the moment???

 

  8084   Thu Feb 14 10:42:41 2013   Jamie | Summary | Alignment | MMT, curved TTs do not explain beam ellipticity at Faraday

After using alamode to calculate the round-trip mode of the beam at the Faraday exit after retro-reflection from the PRM, I'm not able to blame the MMT and TT curvature for the beam ellipticity.

I assume an input waist at the mode cleaner of [0.00159, 0.00151] m (in [T, S]). Propagating this through the MMT to the PRM, then retro-reflecting back with flat TTs, I get

w_t/w_s = 0.9955,  e = 0.0045

If I give the TTs a -600 m curvature, I get:

w_t/w_s = 1.0419,  e = 0.0402

That's just a 4% ellipticity, which is certainly less than we see.  I would have to crank up the TT curvature to -100m or so to see an ellipticity of 20%.  We're seeing something that looks bigger than 50% to me.
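For reference, a generic q-parameter / ABCD sketch of this kind of calculation (alamode was used for the real numbers; the distances and curvatures below are placeholders, not the actual MMT layout):

import numpy as np

lam = 1064e-9

def q_from_waist(w0):                     # q at the waist
    return 1j * np.pi * w0**2 / lam

def propagate(q, *elements):              # elements are 2x2 ABCD matrices, applied in order
    for (A, B), (C, D) in elements:
        q = (A * q + B) / (C * q + D)
    return q

def spot(q):                              # beam radius from 1/q = 1/R - i*lam/(pi*w^2)
    return np.sqrt(-lam / (np.pi * np.imag(1.0 / q)))

space  = lambda L: ((1.0, L), (0.0, 1.0))
mirror = lambda R: ((1.0, 0.0), (-2.0 / R, 1.0))

# compare the tangential and sagittal planes, e.g. with a -600 m optic acting in one plane only
q_t = propagate(q_from_waist(1.59e-3), space(2.0), mirror(-600.0), space(5.0))
q_s = propagate(q_from_waist(1.51e-3), space(2.0), space(5.0))
print("w_t/w_s =", spot(q_t) / spot(q_s))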

Below are the beam sizes through the MMT + PRM retro-reflection, with TT RoC = -600 m:

 

[Plots: T.pdf (tangential), S.pdf (sagittal)]

  3578   Wed Sep 15 16:12:35 2010   koji, steve | Update | MOPA | MOPA Controller is taken out of the PSL rack

We removed the Lightwave MOPA Controller, PA#102, and the NPRO206 power supply to make room for the IOO chassis at the 1X1 (south) rack.

The umbilical cord was a real pain to take out. It is shedding its plastic cover. The unused Minco was disconnected and removed.

The ref. cavity ion pump controller / power supply was also temporarily taken out.

Attachment 1: P1060843.JPG
  663   Sun Jul 13 17:19:29 2008   rana | Summary | PSL | MOPA SLOWM Calibration
John, Rana

We first unlocked the FSS and ramped the SLOW actuator. With the PMC locked we observed the PMC PZT voltage
as a function of SLOWM (SLOW loop actuator voltage). We believed this to be ~1-5 GHz / V. Since this is
not so precise we then ran a slow (2 min. period) triangle wave into the slow actuator and looked at the
ref cav transmission peaks to calibrate it.

The plot is attached.

We assume that the reference cavity length is 203.2 mm, so the FSR = 737.7 MHz. Looking at the plot
and measuring by eye, the SLOWM calibration is 1054 +/- 30 MHz/V. The uncertainty is probably dominated by
the by-eye method.

Note: we tried to get the length from T010159-00-R (Michele, Weinstein, Dugolini). In that doc,
the length used is 203.3 mm, whereas it's 203.2 mm in the PSL FDD (?). The calculation of the FSR is also
incorrect (it looks like they used c = 299460900 instead of 299792458 m/s). We took the length from the PSL FDD
(T990025-00-D) but not the FSR, since they also did not use the right value of 'c'. I guess that the speed
of light just ain't what it used to be.
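A worked check of the numbers above (a linear reference cavity has FSR = c / 2L):

c = 299792458.0          # m/s
L = 0.2032               # m (203.2 mm from the PSL FDD)
print(c / (2 * L) / 1e6, "MHz")          # ~737.7 MHz
# with the incorrect c = 299460900 m/s one would get ~736.9 MHz instead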
Attachment 1: SLOWDCcalibration.png
  1573   Mon May 11 11:49:20 2009   steve | Update | PSL | MOPA cooling water lines are backwards

Quote:
This is 8 days of 10-minute trend.

DTEC is just the feedback control signal required to keep the NPRO's pump diode at a constant temperature.
It's not the amplifier or the actual NPRO crystal's temperature readout.

There is no TEC for the amplifier. It looks to me like, by opening up the flow to the NPRO some more,
we have reduced the flow to the amplifier (which is the one that needs it) and created these temperature
fluctuations.

What we need to do is choke down the needle valve and ream out the NPRO block.




I have measured the "input" line temperature at the MOPA box to be 10 C and the "out" line to be 8 C.

This must be corrected.

However, look at the 80-day plot of operation: the head temperature variation is nothing new.
Attachment 1: htempvar80d.jpg
  1281   Fri Feb 6 16:20:52 2009   Yoichi | Update | PSL | MOPA current slider fixed

I fixed the broken slider to change the current of the PA.

The problem was that the EPICS database assigned a wrong channel of the DAC to the slider.

I found that the PA current adjustment signal lines are connected to the CH3 &CH4 of VMIC4116 #1. However in the database file (/cvs/cds/caltech/target/c1psl/psl.db), the slider channel (C1:PSL-126MOPA_DCAMP) was assigned to CH2. I fixed the database file and rebooted c1psl. Then the PA current started to follow the slider value.

I moved the slider back and forth by +/-0.3V while the ISS loop was on. I observed that the amount of the low frequency fluctuation of the MOPA power changed with the slider position. At some current levels, the ISS instability problem went away.

Kakeru is now taking open-loop TFs and current shunt responses at different slider settings.

  2133   Thu Oct 22 15:44:16 2009   Zach | Update | WIKI-40M Update | MOPA diagram

 I have updated the PSL Diagram wiki page to include MOPA. As with the PSL diagram, clicking the photo on the main page takes you to a larger image. The inventory is pretty meager as I didn't have time to sit and read labels (if indeed there are any). I will look through the documentation at the 40m to see if there is a record of what is there. Again, if you know something, please amend the list!!

http://lhocds.ligo-wa.caltech.edu:8000/40m/PSL_Table_Diagram

  1254   Wed Jan 28 12:42:51 2009   Yoichi | Update | PSL | MOPA dying
Yoichi, Jenne, Peter

As most of you know, the MOPA output power has been declining rapidly since Jan 21 (see Attachment 1).
There was also an increase in the NPRO power observed in LMON, which is an internal power monitor of the NPRO.
A similar trend can be seen in 126MON, which picks up some scattered light from the NPRO, but there may be some contribution from the PA output.

The drop in the AMPMON, LMON and CURMON (NPRO current) from the middle of Jan 26 to the end of Jan 27 was caused by me.
I tried to decrease the NPRO current to put the NPRO power back to the level when the MOPA output was higher, but it did not bring back the MOPA power.
So I put the current back after an hour. This caused the sharp power drop on Jan 26.
By mistake, I did not fully recover the current at that time and left it like that for a day. This accounts for the long power drop that continued until Jan 27.

Shortly after I tweaked the current, the MOPA output power started to fluctuate a lot. This drives the ISS crazy.
To see if this was caused by the NPRO or power amplifier,
we decided to fix the 126MON to monitor the real NPRO power.
We opened the MOPA box and installed a mirror to direct a picked off NPRO beam to the outside of the box through an unused hole.
We set up a lens and a PD outside of the MOPA box to receive this beam. The output from the PD is connected to the 126MON cable.
So 126MON is now serving as the real monitor of the NPRO power. It has not yet been calibrated.

The second attachment shows a short time series of the MOPA power and the NPRO power. When the beam is blocked, 126MON goes to -22.
So the RIN of the NPRO is less than 1%, whereas the MOPA power fluctuates by about 5%. There is also no clear correlation between the power fluctuations of the MOPA and the NPRO. So the MOPA power fluctuation is probably not caused by the NPRO.
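For reference, this is how that ~1% vs ~5% comparison would be computed from the time series (channel names and the handling of the -22 blocked-beam offset are my assumptions):

import numpy as np

def rin(x, dark=0.0):
    p = np.asarray(x, float) - dark      # subtract the dark / blocked-beam offset
    return np.std(p) / np.mean(p)

# rin(npro_126mon, dark=-22.0)   -> ~0.01  (NPRO)
# rin(mopa_amppd)                -> ~0.05  (MOPA)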

At this moment, all the feedback signals (current shunt, slow and fast actuators) are physically disconnected from MOPA box so that we can see the behavior of MOPA itself.
Attachment 1: Recent10Days.png
Attachment 2: 126_MOPA.png
  3132   Tue Jun 29 10:20:58 2010   rana | Update | MOPA | MOPA is NOT dead

Not dead. It just had a HT fault. You can tell by reading the front panel. Cycling the power usually fixes this.

  3137   Tue Jun 29 16:44:12 2010   Jenne, rana | Update | MOPA | MOPA is NOT dead, was just asleep

Quote:

Not dead. It just had a HT fault. You can tell by reading the front panel. Cycling the power usually fixes this.

MOPA is back online. Rana found that the fuse in the AC power connector had blown. This was evident from smelling all of the inputs and outputs of the MOPA controller. The power cord we were using for this was only rated for 10 A and therefore was a safety hazard. The fuse should be rated to blow before the power cord catches on fire. The power cord end was slightly melted. I don't know why it hadn't failed in the last 12 years, but I guess the MOPA was drawing a lot of extra current for the DTEC or something, due to the high temperature of the head.

We got some new fuses from Todd @ Downs. 

The ones we got, however, were fast-blow, and that's what we want. The fuses are 10 A, 250 V, ~.08 inches long, and 0.2 inches in diameter.

  3130   Tue Jun 29 08:41:06 2010   steve | Update | MOPA | MOPA is dead

I found the laser dead this morning.

The crane people are here to unjam it.

Laser hazard mode is lifted and LASER SAFE MODE is in place. No safety glasses but CRANE HAZARD is still active.

Stay out of the 40m lab !

 

 

Attachment 1: laserisdead.jpg
  1042   Mon Oct 13 11:32:50 2008   Yoichi | Update | PSL | MOPA is in trouble now
Steve, Alberto, Yoichi

A quick update.
The MOPA output went down to zero on Sunday early morning (00:28 AM).
We found that the NPRO beam is mis-aligned on the power monitoring PD (126MON).
We don't know yet if it is also mis-aligned to the power amplifier (PA) because the mechanical shutter is not working (always closed).
Most likely the beam is not aligned to the PA.
A mystery is that although the beam misses the monitor PD badly (by more than half an inch), it still goes through the two Faradays.
Another mystery is that the NPRO output power has now increased to 600 mW.

The power drop was a very fast phenomenon (less than 1/16 sec).
We are trying to figure out what happened.
The first step is to fix the mechanical shutter. We have a spare on hand.
Attachment 1: powerdrop.png
  1044   Mon Oct 13 13:56:03 2008   Yoichi | Update | PSL | MOPA is not that much in trouble now
The problem turned out to be entirely with the shutter.
The shutter started to work again after a while, apparently for no clear reason.
The alignment to the PA was actually not screwed up, and the MOPA output is now slowly increasing.
We figured out that the 126MON PD has been mis-aligned for a long time. It was just picking up
scattered light from the output of the PA. So when the shutter is closed, it is natural that 126MON also goes down to zero.
It is a bit difficult to center the beam on the PD because there is not much room for moving the PD.
However, Alberto came up with a configuration (flip the PD and reflect the beam back onto it with a mirror) which seems
feasible. We will do this modification when the MOPA is confirmed to be ok.

Here is more detail about the shutter problem:
The shutter is controlled by the MOPA power supply. There are three ways to command the power supply.
The switch on the front panel of the power supply, the EPICS switch (through a XYCOM XY220), and the interlock.
The ribbon cable from the power supply's back is connected to J1 of the cross connect. The pin 59 of the cable is the one
controlling the shutter. It is then routed to J12 pin 36. The interlock and a XYCOM switch are both connected to this
pin.
Now what happened was: while tracking down those cables, I pushed on some connectors, especially the ones on the XYCOM.
After that, I was able to open the shutter from the EPICS button.
Steve and Alberto tried the EPICS button many times in the morning without success.
My guess is that it was some malfunctioning of the XY220 accidentally fixed by my pushing of the cables.
But I cannot exclude the possibility of the interlock malfunctioning.



Quote:
Steve, Alberto, Yoichi

A quick update.
The MOPA output went down to zero on Sunday early morning (00:28 AM).
We found that the NPRO beam is mis-aligned on the power monitoring PD (126MON).
We don't know yet if it is also mis-aligned to the power amplifier (PA) because the mechanical shutter is not working (always closed).
Most likely the beam is not aligned to the PA.
A mystery is that although the beam is terribly (more than a half inch) missing the monitor PD, the beam still goes through two faradays.
Another mystery is that the NPRO output power is now increased to 600mW.

The power drop was a very fast phenomenon (less than 1/16 sec).
We are trying to figure out what happened.
The first step is to fix the mechanical shutter. We have a spare in our hand.
  1625   Tue May 26 17:05:44 2009   rob | Update | PSL | MOPA re-activated

steve, rob, alberto

 

Steve installed two rotary flow meters into the MOPA chiller system--one at the chiller flow output and one in the NPRO cooling line. After some hijinks, we discovered that the long, insulated chiller lines have the same labels at each end. This means that if you match up the labels at the chiller end, at the MOPA end you need to switch labels: out goes to in and vice versa. This means that, indubitably, we have at some point had the flow going backwards through the MOPA, though I'm not sure if that would make much of a difference.

Steve also installed a new needle valve in the NPRO cooling line, which works as expected as confirmed by the flow meter. 

We also re-discovered that the 40m procedures manual contains an error.  To turn on the chiller in the MOPA start-up process, you have to press ON, then RS-232, then ENTER.  The proc man says ON, RS-232, RUN/STOP.

The laser power is at 1.5W and climbing.

Attachment 1: DSC_0513.JPG
Attachment 2: DSC_0517.JPG
  1626   Tue May 26 17:34:14 2009   rob | Update | PSL | MOPA re-deactivated

Quote:

steve, rob, alberto

 

Steve installed two rotary flow meters into the MOPA chiller system--one at the chiller flow output and one in the NPRO cooling line.  After some hijinks, we discovered that the long, insulated chiller lines have the same labels at each end.  This means that if you match up the labels at the chiller end, at the MOPA end you need switch labels: out goes to in and vice-versa.  This means that, indubitably, we have at some point had the flow going backwards through the MOPA, though I'm not sure if that would make much of a difference. 

Steve also installed a new needle valve in the NPRO cooling line, which works as expected as confirmed by the flow meter. 

We also re-discovered that the 40m procedures manual contains an error.  To turn on the chiller in the MOPA start-up process, you have to press ON, then RS-232, then ENTER.  The proc man says ON, RS-232, RUN/STOP.

The laser power is at 1.5W and climbing.

 Rob, Alberto

The chiller HT alarm started blinking, as the water temperature had reached 40 degrees C, and was still rising.  We turned off the MOPA and the chiller.  Maybe we need to open the needle valve a bit more?  Or maybe the flow needs to be reversed?  The labels on the MOPA are backwards?

Attachment 1: laser_temp.jpg
  1621   Fri May 22 17:03:14 2009   rob, steve | Update | PSL | MOPA takes a holiday

The MOPA is taking the long weekend off.

Steve went out to wipe off the condensation inside the MOPA and found beads of water inside the NPRO box, perilously close to the PCB. He then measured the water temperature at the chiller head, which is 6 C. We decided to "reboot" the MOPA/chiller combo, on the off chance that would get things synced up. Upon turning off the MOPA, the Neslab chiller display immediately started showing the correct temperature--about 6 C. The 22 C number must come from the MOPA controller. We thus tentatively narrowed the possible space of problems down to: a broken MOPA controller and/or a clog in the cooling line going to the power amplifier. We decided to leave the MOPA off for the weekend, and start plumbing on Tuesday. It is of course possible that the controller is the problem, but we think leaving the laser off over the weekend is the best course of action.

 

 

  537   Wed Jun 18 00:19:29 2008   rob | Update | PSL | MOPA trend
15-day trend of MOPA channels. The NPRO temperature fluctuations are real, and are causing the PMC to consistently run up against its rails. The cause of the temperature fluctuations is unknown. This, combined with the MZ glitches and Miller kicking off DC power supplies, is making locking rather tetchy tonight. Hopefully Yoichi will find the problem with the laser and fix it by tomorrow night.
Attachment 1: MOPAtrend.png
  113   Fri Nov 16 18:46:49 2007   steve | Bureaucracy | PSL | MOPA was turned off & on
The "Mohana" boys scouts and their parents visited the 40m lab today.
The laser was turned off for their safety.
It is back on !
  901   Fri Aug 29 15:01:45 2008   steve | Update | PSL | MOPA_HTEMP is increasing
The laser chiller temp is 21.9 C (it should be 20.0 C).
Control room temp 73F ok, no obvious block

Oops, there is a piece of paper blocking the intake of the chiller.

This is a four-day plot. The paper was blocking the air flow all day.
Attachment 1: htcl.jpg
  1027   Mon Oct 6 10:00:49 2008   steve | Update | MOPA | MOPA_HTEMP is up
Monday morning conditions:

The laser head temp is up to 20.5 C
The laser shut down on Friday without any good reason.
I was expecting the temp to come down slowly. It did not.
The control room temp is 73-74 F, and Matt Evans' air deflector is in perfect position.
The laser chiller temp is 22.2 C.

The ISS is saturating and its alarm is on. Turning the gain down from 7 to 2 pleases the alarm handler.

The c1LSC computer is down.
Attachment 1: htup.jpg
  1282   Fri Feb 6 16:23:54 2009   steve | Update | MOPA | MOPAs of 7 years

MOPAs and their settings and powers over 7 years at the 40m.

Attachment 1: 7ymopas.jpg
  9785   Fri Apr 4 18:51:29 2014   ericq | Summary | LSC | MORE CARM related modeling

In today's ISC call, Kiwamu was comparing two ways to approach resonance: 

  • "C-Type": The scheme we currently think about; zero DARM offset and slowly reduce the CARM offset
  • "D-Type": Start with no CARM offset, but a DARM offset and reduce that. 

D-type might be interesting to check out, since things change a little less dramatically when you reduce the DARM offset. Maybe this makes signal hopping easier? Signal recycling may complicate things, though. 

So, I've simulated CARM and DARM offset effects on CARM and DARM signals. (As with the previous plots, this is for the PRFPMI configuration.) From moving both offsets around, it looks like the resonance peak is about 5x wider in DARM than in CARM, so I simulated a 50pm offset range for CARM and a 250pm offset range for DARM. 

Here are some CARM signal transfer functions subject to CARM offsets in the top plot, and DARM offsets in the bottom plot. 

[CARM transfer function plots: carm2REFL11.pdf, carm2REFL55.pdf, carm2REFLDC.pdf, carm2TRX.pdf]

 

It looks like the DARM offset changes cause much less dramatic changes in the CARM plant features. It's conceivable that this would make CARM locking easier.

Here are some DARM plant transfer functions. 

[DARM transfer function plots: darm2AS11.pdf, darm2AS55.pdf, darm2ASDC.pdf, darm2TRX.pdf]

In these plots, I did something kind of artificial: when we move the CARM offset, it changes the proper demodulation phase needed to get DARM into the Q of the AS 1F RFPDs. So, at each CARM offset, I re-phased the AS 1F demodulators, to show the total DARM information available at the AS RFPDs at each offset, rather than what one would actually see in them with a static demod phase.
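A minimal sketch of that re-phasing step (my own illustration, with hypothetical variable names): given the complex DARM->I and DARM->Q transfer functions at one CARM offset, scan the demod phase for the rotation that maximizes the DARM content in Q.

import numpy as np

def best_demod_phase(H_I, H_Q, n=3600):
    # H_I, H_Q: complex DARM->AS 1F transfer functions at a single CARM offset
    th = np.linspace(0.0, np.pi, n)
    HQ_rot = -np.sin(th) * H_I + np.cos(th) * H_Q     # Q quadrature after rotating by th
    return np.degrees(th[np.argmax(np.abs(HQ_rot))])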

[Plot: ASDemodAngles.pdf]

  11250   Sat Apr 25 22:17:49 2015   rana | Update | CDS | MXstream restart script working (beta)

Since python from crontab seemed intractable, I replaced autoMX.py with a soft link that points at autoMX.sh.

This is a simple BASH script that looks at the LSC FB status (C1:DAQ-DC0_C1LSC_STATUS) and runs the mxstream restart script if it's non-zero.

So far it's run 5 times successfully. I guess this is good enough for now. Later on, someone ought to make it loop over the other FEs, but this ought to catch 99% of the FB issues.

  11312   Tue May 19 17:03:34 2015   Koji | Update | CDS | MXstream restart script working (beta)

AutoMX is resetting mx_stream every 5 minutes. Basically every time AutoMX is called,
it resets mx_stream. Is mx_stream really stalling that often? Or is the script detecting false alarms?


> tail -200 /opt/rtcds/caltech/c1/scripts/cds/autoMX.log

Tue May 19 16:43:01 PDT 2015
LSC - FB bad. Runnning restart:
 * Stopping mx_stream ...                                                 [ ok ]
 * Starting mx_stream ...                                                 [ ok ]
Connection to c1sus closed.
 * Stopping mx_stream ...                                                 [ ok ]
 * Starting mx_stream ...                                                 [ ok ]
Connection to c1lsc closed.
 * Stopping mx_stream ...                                                 [ ok ]
 * Starting mx_stream ...                                                 [ ok ]
Connection to c1ioo closed.
 * Stopping mx_stream ...                                                 [ ok ]
 * Starting mx_stream ...                                                 [ ok ]
Connection to c1iscex closed.
 * Stopping mx_stream ...                                                 [ ok ]
 * Starting mx_stream ...                                                 [ ok ]
Connection to c1iscey closed.
0
Tue May 19 16:48:02 PDT 2015
LSC - FB bad. Runnning restart:
 * Stopping mx_stream ...                                                 [ ok ]
 * Starting mx_stream ...                                                 [ ok ]
Connection to c1sus closed.
 * Stopping mx_stream ...                                                 [ ok ]
 * Starting mx_stream ...                                                 [ ok ]
Connection to c1lsc closed.
ssh_exchange_identification: read: Connection reset by peer
 * Stopping mx_stream ...                                                 [ ok ]
 * Starting mx_stream ...                                                 [ ok ]
Connection to c1iscex closed.
 * Stopping mx_stream ...                                                 [ ok ]
 * Starting mx_stream ...                                                 [ ok ]
Connection to c1iscey closed.
0
Tue May 19 16:53:01 PDT 2015
LSC - FB bad. Runnning restart:
 * Stopping mx_stream ...                                                 [ ok ]
 * Starting mx_stream ...                                                 [ ok ]
Connection to c1sus closed.
 * Stopping mx_stream ...                                                 [ ok ]
 * Starting mx_stream ...                                                 [ ok ]
Connection to c1lsc closed.
 * Stopping mx_stream ...                                                 [ ok ]
 * Starting mx_stream ...                                                 [ ok ]
Connection to c1ioo closed.
 * Stopping mx_stream ...                                                 [ ok ]
 * Starting mx_stream ...                                                 [ ok ]
Connection to c1iscex closed.
 * Stopping mx_stream ...                                                 [ ok ]
 * Starting mx_stream ...                                                 [ ok ]
Connection to c1iscey closed.

  11314   Tue May 19 18:38:33 2015   rana | Update | CDS | MXstream restart script working (beta)

Good catch; that was some seriously bad programming on my part. I had some undeclared-variable garbage going on. I fixed it and re-implemented the script in cron on megatron. The log file shows that it has detected no problems for the last several checks. I'll check it again tomorrow to see how it's doing.

  186   Mon Dec 10 19:08:03 2007   tobin | Configuration | PSL | MZ
The MZ seems finicky today--it keeps unlocking and relocking.

I've temporarily blocked one of the MZ arms while I work on the ISS.
  1925   Tue Aug 18 15:52:27 2009   Jenne | Update | PSL | MZ
I tweaked up the MZ alignment.  The reflection had been around 0.550, which kept the MEDM indicator green, but was still too high.  I fiddled with BS1, and a little bit with BS2.  When I had the doors of the PSL table open, I got as low as 0.320.  When I closed up and came back to the control room, the MZ refl had drifted up to 0.354.  But it's good again now.

In the future, mirrors shouldn't be placed so close together that you can't get at their knobs to adjust them. No good. I ended up blocking the beam coming out of the PMC to avoid sticking my hand in a live beam while making the adjustment, then removing the dump. It worked in a safe way, but it was obnoxious.

  1926   Tue Aug 18 19:57:47 2009   rana, Jenne | Update | PSL | MZ

- we finished the MZ alignment; the contrast is good.

- we did the RFAM tuning using a new technique: a bubble-balanced analyzer cube and the StochMon RFPD. This technique worked well, and there's basically no 33 or 166 RFAM. The 133 and 199 are as expected.

- the MC locked right up and then we used the periscope to align to it; the transmission was ~75% of max before periscope tuning. So the beam pointing after the MC should be fine now.

- the Xarm locked up with TRX = 0.97 (no xarm alignment).

 

If Rob/Yoichi say the alignment is now good, then we absolutely must center the IOO QPDs, IP POS, IP ANG and MC TRANS today so that we have good references.

 

-----------------------------------

The first photo is of our nifty new setup to get the beam to the StochMon PD.  The MZ transmitted beam enters the photo from the bottom right corner, and hits the PBS (which we leveled using a bubble level).  The P-polarization light is transmitted through the cube, and the S-polarization is reflected to the left.  The pure S-polarized light hits a Beam Splitter, which we are using as a pickoff to reduce the amount of light which gets to the PD.  Most of the light is dumped on an aluminum dump.  The remaining light hits a steering mirror (Y1 45-S), goes through a lens, and then hits the StochMon PD.  While aligning the MZ to maximize visibility, we look at the small amount of P-polarized light which passes through the PBS on an IR card, and minimize it (since we want to be sending purely S-polarized light through the EOMs and into the MC).

The second photo is of a spectrum analyzer which is directly connected to the RF out of the StochMon PD.  To minimize the 33MHz and 166MHz peaks, we adjust the waveplates before each of the EOMs, and also adjusted the tilt of the EOM holders.

The final photo is of the EOMs themselves with the Olympus camera.

Once we finished all of our MZ aligning, we noticed that the beam input to the MC wasn't perfect, so Rana adjusted the lower periscope mirror to get the pointing a little better.  

The MZ refl is now at 0.300 when locked.  When Rana reduced the modulation depth, the MZ refl was about 0.050 .  Awesome!

 

Attachment 1: MZ_RFAMmon_setup_small.jpg
Attachment 2: MZ_RFAMmon_SpecAnalyzer_small.jpg
Attachment 3: MZ_EOM_IRrefl2_small.jpg
  2148   Tue Oct 27 01:45:02 2009   rob | Update | Locking | MZ

Quote:
Tonight we also encountered a large peak in the frequency noise around 485 Hz. Changing the MZ lock point (the spot in the PZT range) solved this.


This again tonight.

It hindered the initial acquisition, and made the DD signal handoff fail repeatedly.
  1853   Fri Aug 7 11:39:13 2009   Alberto | Update | PSL | MZ Alignment
For the last couple of days we've been trying to find the cause that is preventing us from getting more than 0.85 for the arm power.
After re-aligning the reference cavity yesterday, today I went for the MZ. I slightly changed the alignment of the mirror in pitch. I was able to increase the MZ-TRANPD to 4.8 (from about 3).
Unfortunately the same increase didn't show up at the MC transmission (that is, the IFO input) because changing the MZ also changed the alignment to the MC cavity. A little tuning of the MZ periscope was necessary to adjust the beam to the MC.

After all this, MC-TRANS read 7.2 vs 7.0 before: not much of an improvement.
 
The arm power is still below 0.85.
 
Next step: measuring the MC length. Maybe it changed a lot after the MC satellite was recently hit by the people who were installing seismometers and accelerometers on it.

 

  1195   Fri Dec 19 11:29:16 2008   Alberto, Yoichi | Configuration | MZ | MZ Trans PD
Lately, it seems that the matching of the input beam to the Mode Cleaner has changed. Also, it is drifting such that it has become necessary to continuously adjust the MC cavity alignment for it to lock properly.

Looking for causes we stopped on the Mach Zehnder. We found that the monitor channel:
C1:PSL-MZ_MZTRANSPD

which supposedly reads the voltage from some photodiode measuring the transmitted power from the Mach Zehnder, is totally unreliable and actually not related to any beam at all.

Blocking either the MZ input or output beam does not change the channel's readout. The reflection channel readout responds well, so it seems ok.
  2035   Thu Oct 1 13:12:41 2009   Koji | Update | MZ | MZ Work from 13:00-

I will investigate the MZ board. I will unlock MZ (and MC).

  1395   Thu Mar 12 18:44:02 2009   Yoichi | Update | PSL | MZ aligned
The MC somehow lost its alignment this afternoon.
So I thought it was a good time to touch the MZ, because I had to align the MC using the periscope anyway.

I mainly touched the mirror with a PZT. The MZ reflection went down from 0.5 to 0.3.
  604   Mon Jun 30 15:08:52 2008   John | Summary | PSL | MZ alignment
I adjusted the alignment of the Mach-Zehnder's two North mirrors (downstream of the EOMs).

MZ REFL is reduced from 0.54 to 0.43. The largest improvement was due to pitch on the PZT mirror.
Attachment 1: mzalign.png
  607   Mon Jun 30 18:36:01 2008   Yoichi | Update | PSL | MZ alignment again
John, Yoichi

We re-adjusted the MZ alignment. The reason behind this is to make sure that the MZ dark port is not sitting on a strange fringe, where it is dark only at the dark port PD. This can happen when the two beams overlap poorly.
We tried both minimizing the MZ dark PD signal and maximizing the MZ transmission at the same time.
We also placed another PD in the MZ dark port, at a different distance from the original dark PD, and tried to minimize this too.
If the MZ dark port is at a strange fringe, one of the dark PDs can be dark while the other one is still bright.
If both of the dark PDs get dark, the overlap between the beams should be ok.
We tweaked only the two mirrors of the MZ after the EOMs (mainly the one with a PZT).

Right now, the MZ dark power is 0.432.
BTW, we should change the name of the MZ dark port on the MEDM screen (it is currently labeled MZ reflection, but it is not a reflection).
I will try to change it later.

We wanted to put the beam position on the IOO-QPD_POS_* back to the original (before John tweaked the MZ alignment earlier).
However, the trends of IOO-QPD_POS_* show a lot of fluctuation and jumps, of which we don't know the cause. So we could not find reasonable original values.
We suspect a circuit problem in IOO-QPD_POS, especially because the jumps are very strange.
We will do this investigation later too.
  1876   Mon Aug 10 16:37:27 2009   rob | Update | PSL | MZ alignment touched

I aligned the MZ. The reflection went from 0.86 to 0.374.

  1099   Wed Oct 29 12:23:04 2008   Yoichi | Configuration | PSL | MZ alignment touched and the alarm level changed
Since the MZ reflection is alarming all the time, I tried to improve the MZ alignment by touching the folding mirror.
I locked the X-arm and monitored the transmitted light power while tweaking the mirror alignment to ensure that the output beam pointing is not changed.
I changed the alignment only a little, almost like just touching the knob.
The reflected power monitor was around 0.6 this morning and now it is about 0.525. Still large.
I changed the alarm level (HIGH) from 0.5 to 0.55.
  1874   Mon Aug 10 15:24:17 2009   rob | Summary | PSL | MZ bad

I think the MZ PZT is broken/failing. I'm not sure how else to explain this behavior.

The first bit of the time series is a triangle wave into the DC offset (output) field, over approximately the whole range (0-250 V). You can see the fringe visibility is quite small. The triangle wave is stopped, and I then maxed out the offset slider to get to the "high" power point from the triangle wave sweep. Then, for a little while, the PZT is held still, and the power still increases. The MZ is then locked, and you can see the PZT voltage stay about the same while the power continues to rise over the next ~10 minutes or so.

Attachment 1: brokenMZpzt.png
  1875   Mon Aug 10 15:56:12 2009   rob | Summary | PSL | MZ bad redux

Quote:

I think the MZ pzt is broken/failing.  I'm not sure how else to explain this behavior.

 

The first bit of the time series is a triangle wave into the DC offset (output) field, over approximately the whole range (0-250V).   You can see the fringe visbility is quite small.  The triangle wave is stopped, and I then maxed out the offset slider to get to the "high" power point from the triangle wave sweep. Then for a little while with the PZT is held still, and the power still increases.  The MZ is then locked, and you can see the PZT voltage stay about the same but the power continues to rise over the next ~10 minutes or so.

 

 

 

This plot answers the previous question, and raises a new one--what the heck is MZTRANSPD?  I'd guess the pins are unconnected--it's just floating, and somehow picking up the MZ_PZT signal.

 

 

Attachment 1: badMZtrans.png
  2349   Mon Nov 30 19:23:50 2009   Jenne | Update | MZ | MZ down

Came back from dinner to find the Mach Zehnder unlocked.  The poor IFO is kind of having a crappy day (computers, MZ, and I think the Mode Cleaner alignment might be bad too).

ELOG V3.1.3-