ID   Date   Author   Type   Category   Subject
  14286   Fri Nov 9 15:00:56 2018   gautam   Update   IOO   No IFO beam as TT1 UL hijacked for REFL55 check

This problem resurfaced. I'm doing the debugging.

6:30pm - "Solved" using the same procedure of stepping through the whitening gains with a small (10 DAC cts pk) signal applied. Simply stepping through the gains with input grounded doesn't seem to do the trick.

Attachment 1: REFL55_wht_chk.png
  14288   Sat Nov 10 17:32:33 2018   gautam   Update   LSC   Nulling MICH->PRCL and MICH->SRCL

With the DRMI locked, I drove a line in MICH using the sensing matrix infrastructure. Then I looked at the error points of MICH, PRCL and SRCL. Initially, the sensing line oscillator output matrix for MICH was set to drive only the BS. Subsequently, I changed the --> PRM and --> SRM matrix elements until the line height in the PRCL and SRCL error signals was minimized (i.e. the change to PRCL and SRCL due to the BS moving, which is a geometric effect, is cancelled by applying the opposite actuation to the PRM/SRM respectively). Then I transferred these to the LSC output matrix (old numbers in brackets).

MICH--> PRM = -0.335 (-0.2655)

MICH--> SRM = -0.35 (+0.25)

I then measured the loop TFs - all 3 loops had UGFs around 100 Hz, coinciding with the peaks of the phase bubbles. I also ran some sensing lines and did a sensing matrix measurement, Attachment #1 - looks similar to what I have obtained in the past, although the relative angles between the DoFs make no sense to me. I guess the AS55 demod phase can be tuned up a bit.

The demodulation was done offline - I mixed the time series of the actuator and sensor signals with a "local oscillator" cosine wave - but instead of using the entire 5 minute time series and low-passing the mixer output, I divvied up the data into 5 second chunks, windowed with a Tukey window, and have plotted the mean value of the resulting mixer output.
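For reference, here is a minimal sketch of that chunked demodulation (the sample rate, line frequency, and channel handling below are placeholders, not the exact values used above):

import numpy as np
from scipy.signal.windows import tukey   # scipy.signal.tukey in older scipy versions

fs = 16384.0            # assumed DQ sample rate [Hz]
f_line = 311.1          # placeholder drive frequency [Hz]
chunk = int(5 * fs)     # 5 second chunks, as above

def demod_in_chunks(x, fs, f_line, chunk, alpha=0.25):
    """Mix a time series against a local-oscillator cosine/sine, Tukey-window
    each 5 s chunk, and return the mean I/Q of each chunk."""
    t = np.arange(len(x)) / fs
    lo_i = np.cos(2 * np.pi * f_line * t)
    lo_q = np.sin(2 * np.pi * f_line * t)
    w = tukey(chunk, alpha)
    out = []
    for k in range(len(x) // chunk):
        s = slice(k * chunk, (k + 1) * chunk)
        out.append([np.mean(w * x[s] * lo_i[s]), np.mean(w * x[s] * lo_q[s])])
    return np.array(out)    # one (I, Q) pair per chunk; magnitude/phase follow from these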

Unrelated to this work: I re-aligned the PMC on the PSL table, mostly in Pitch.

Attachment 1: sensMat_2018-11-10.pdf
  14289   Sat Nov 10 17:40:00 2018   aaron   Update   IOO   IMC problematic

Gautam was doing some DRMI locking, so I replaced the photodiode at the AS port to begin loss measurements again.

I increased the resolution on the scope by selecting Average (512) mode. I was a bit confused by this, since Yuki was correct that I had only 4 digits recorded over ethernet, which made me think this was an i/o setting. However the sample acquisition setting was the only thing I could find on the tektronix scope or in its manual about improving vertical resolution. This didn't change the saved file, but I found the more extensive programming manual for the scope, which confirms that using average mode does increase the resolution... from 9 to 14 bits! I'm not even getting that many.

There's another setting, DATa:WIDth, which sets the number of bytes per data point transferred from the scope.

I tried using the *.25 scope instead, with no better results. Changing the vertical resolution directly doesn't change this either. I've also tried changing most of the ethernet settings. I don't think it's something on the scripts side, because I'm using the same scripts that apparently generated the most recent of Johannes' and Yuki's files; I did look through e.g. tds3014b.py, and didn't see the resolution explicitly set. Indeed, I get 7 bits of resolution as that function specifies, but most of them aren't filled by the scope. This makes me think the problem is in the scope settings.
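For completeness, here is a hedged sketch of setting the acquisition mode and transfer width programmatically. The command strings are the standard Tektronix forms (ACQuire:MODe, ACQuire:NUMAVg, DATa:WIDth from the TDS3000-series programming manual); the raw-socket transport, port, and IP are assumptions - the existing tds3014b.py script may talk to the scope differently, so treat this as a starting point only:

import socket

def scope_send(host, cmds, port=4000):
    # port 4000 is a guess; some Tek scopes expose a web/VXI-11 interface instead
    s = socket.create_connection((host, port), timeout=5)
    for c in cmds:
        s.sendall((c + '\n').encode())
    s.close()

scope_send('192.168.113.25', [      # placeholder scope IP
    'ACQuire:MODe AVErage',         # average mode -> more effective vertical bits
    'ACQuire:NUMAVg 512',
    'DATa:WIDth 2',                 # transfer 2 bytes per data point instead of 1
    'DATa:ENCdg RIBinary',          # signed binary transfer encoding
])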

  14290   Mon Nov 12 13:53:20 2018   rana   Update   IOO   loss measurement: oscope vs CDS DAQ

Stop using the scope, and just put the signal into the DAQ with some whitening. You'll get 16 bits.

Quote:

I increased the resolution on the scope by selecting Average (512) mode. I was a bit confused by this, since Yuki was correct that I had only 4 digits recorded over ethernet, which made me think this was an i/o setting. However the sample acquisition setting was the only thing I could find on the tektronix scope or in its manual about improving vertical resolution. This didn't change the saved file, but I found the more extensive programming manual for the scope, which confirms that using average mode does increase the resolution... from 9 to 14 bits! I'm not even getting that many.

 

  14291   Tue Nov 13 16:15:01 2018   Steve   Update   VAC   rga scan pd81 at day 119

 

 

Attachment 1: pd81-d119.png
Attachment 2: pd81-560Hz-d119.png
  14292   Tue Nov 13 18:09:24 2018   gautam   Update   LSC   Investigation of SRCL-->MICH coupling

Summary:

I've been looking into the cross-coupling from the SRCL loop control point to the Michelson error point.

[Attachment #1] - Swept sine measurement of transfer function from SRCL_OUT_DQ to MICH_IN1_DQ. Details below.

[Attachment #2] - Attempt to measure time variation of coupling from SRCL control point to MICH error point. Details below.

[Attachment #3] - Histogram of the data in Attachment #2.

[Attachment #4] - Spectrogram of the duration in which data in #2 and #3 were collected, to investigate the occurrence of fast glitches.

Hypothesis: (so that people can correct me where I'm wrong - 40m tests are on DRMI so "MICH" in this discussion would be "DARM" when considering the sites)

  • SRM motion creates noise in MICH.
  • The SRM motion may be naively decomposed into two contributions -
    • Category #1: "sensing noise induced" motion, which comes about because of the SRCL control loop moving the SRM due to shot noise (or any other sensing noise) of the SRCL PDH photodiode, and
    • Category #2: all other SRM motion.
  • We'd like to cancel the former contribution from DARM.
  • The idea is to measure the transfer function from SRCL control point to the MICH error point. Knowing this, we can design a filter so that the SRCL control signal is filtered and summed in at the MICH error point to null the SRCL coupling to MICH.
  • Caveats/questions:
    • Introducing this extra loop actually increases the coupling of the "all other" category of SRM motion to MICH. But the hypothesis is that the MICH noise at low frequencies, which is where this increased coupling is expected to matter, will be dominated by seismic/other noise contributions, and so we are not actually degrading the MICH sensitivity.
    • Knowing the noise budget for MICH and SRCL, can we AC couple the feedforward loop such that we are only doing stuff at frequencies where Category #1 is the dominant SRCL noise?

Measurement details and next steps:

Attachment #1

  • This measurement was done using DTT swept sine.
  • Plotted TF is from SRCL_OUT to MICH_IN, so the SRCL loop shape shouldn't matter.
  • I expect the pendulum TF of the SRM to describe this shape - I've overlaid a 1/f^2 shape, it's not quite a fit, and I think the phase profile is due to a delay, but I didn't fit this.
  • I had to average at each datapoint for ~10 seconds to get coherence >0.9.
  • The whole measurement takes a few minutes.

Attachments #2 and #3

  • With the DRMI locked, I drove a sine wave at 83.13 Hz at the SRCL error point using awggui.
  • I ramped up the amplitude till I could see this line with an SNR of ~10 in the MICH error signal.
  • Then I downloaded ~10mins of data, demodulated it digitally, and low-passed the mixer output.
  • I had to use a pretty low corner frequency (0.1 Hz, second order butterworth) on the LPF, as otherwise, the data was too noisy.
  • Even so, the observed variation seems too large - can the coupling really change by x100?
  • The scatter is huge - part of the problem is that there are numerous glitches while the DRMI is locked.
  • As discussed at the meeting today, I'll try another approach of doing multiple swept-sines and using Craig's TFplotter utility to see what scatter that yields.

Attachment #4

  • Spectrogram generated with 1 second time strides, for the duration in which the 83 Hz line was driven.
  • There are a couple of large fast glitches visible.
Attachment 1: TF_sweptSineMeas.pdf
Attachment 2: digitalDemod.pdf
Attachment 3: digitalDemod_hist.pdf
Attachment 4: DRMI_LSCspectrogram.pdf
  14293   Tue Nov 13 21:53:19 2018   gautam   Update   CDS   RFM errors

This problem resurfaced, which I noticed when I couldn't get the single arm locks going.

The fix was NOT restarting the c1rfm model, which just brought the misery of all vertex FEs crashing and the usual dance to get everything back.

Restarting the sender models (i.e. c1scx and c1scy) seems to have done the trick though.

Attachment 1: RFMerrors.png
  14294   Wed Nov 14 14:35:38 2018   Steve   Update   ALARM   emergency calling list for 40m Lab

It is posted at the 40m wiki with Gautam's help. Printed copies are posted around the doors also.

  14295   Wed Nov 14 18:58:35 2018   aaron   Update   DAQ   New DAC for the OMC

I began moving the AA and AI chassis over to 1X1/1X2 as outlined in the elog.

The chassis were mostly filled with empty cables. There was one cable attached to the output of a QPD interface board, but there was nothing attached to the input so it was clearly not in use and I disconnected it.

I also attach a picture of some of the SMA connectors I had to rotate to accommodate the chassis in their new locations.

Update:

The chassis are installed, and the anti-imaging chassis can be seen second from the top; the anti-aliasing chassis can be seen 7th from the top.

I need to break out the SCSI on the back of the AA chassis, because the ADC breakout board only has a DB36 adapter available; the other cables are occupied by the signals from the WFS dewhitening outputs.

Attachment 1: 6D079592-1350-4099-864B-1F61539A623F.jpeg
Attachment 2: 5868D030-0B97-43A1-BF70-B6A7F4569DFA.jpeg
  14297   Thu Nov 15 10:21:07 2018   aaron   Update   IOO   IMC problematic

I ran a BNC from the PD on the AS table along the cable rack to a free ADC channel on the LSC whitening board. I laid the BNC on top of the other cables in the rack, so as not to disturb anything. I also was careful not to touch the other cables on the LSC whitening board when I plugged in my BNC. The PD now reads out to... a mystery channel. The mystery channel then goes to c1lsc ADC0 channels 9-16 (since the BNC goes to input 8, it should be #16). To find the channel, I opened the c1lsc model and found that adc0 channel 15 (0-indexed in the model) goes to a terminator.

Rather than mess with the LSC model, Gautam freed up C1:ALS-BEATY_FINE_I, and I'm reading out the AS signal there.

I misaligned the x-arm then re-installed the AS PO PD, using the scope to center the beam then connecting it to the BNC to (first the mystery channel, then BEATY). I turned off all the lights.

I went to misalign the x-arms, but some of the control channels are white-boxed. The only working screen is on pianosa.

The noise on the AS signal is much larger than that on the MC trans signal, and the DC difference for misaligned vs locked states is much less than the RMS (spectrum attached); the coherence between MC trans and AS is low. However, after estimating that for ~30ppm the locked vs misaligned states should only be ~0.3-0.4% different, and double checking that we are well above ADC and dark noise (blocked the beam, took another spectrum) and not saturating the PD, these observations started to make more sense.

To make the measurement in cds, I also made the following changes to a copy of Johannes' assess_armloss_refl.py that I placed in /opt/rtcds/caltech/c1/scripts/lossmap_scripts/armloss_cds/ (a sketch of the CDS-side averaging is below this list):

  • function now takes as argument the number of averages, averaging time, channel of the AS PD, and YARM|XARM|DARK.
  • made the data save to my directory, in /users/aaron/40m/data/armloss/
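A rough sketch of the kind of CDS-side averaging the modified script does (the channel name, averaging time, and output path below are placeholders; I'm assuming the standard cdsutils.avg() call and the gpstime package):

import cdsutils
from gpstime import gpsnow

def take_averages(chan='C1:ALS-BEATY_FINE_I_OUT', t_avg=2, n_avg=10,
                  outfile='/users/aaron/40m/data/armloss/test.txt'):
    with open(outfile, 'a') as f:
        for i in range(n_avg):
            t0 = gpsnow()                    # GPS time at the start of this average
            val = cdsutils.avg(t_avg, chan)  # t_avg-second mean of the channel
            f.write('%.2f %.6e\n' % (t0, val))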

I started taking a measurement, but quickly realized that the mode cleaner had been locked to a higher order mode for about an hour, so I spent some time moving the MC. It would repeatedly lock on the 00 mode, but the alignment must be bad because the transmission fluctuates between 300 and 1400, and the lock only lasts about 5 minutes.

Attachment 1: 181115_chansDown.png
Attachment 2: PD_noise.png
  14298   Fri Nov 16 00:47:43 2018   gautam   Update   LSC   More DRMI characterization

Summary:

  • More DRMI characterization was done.
  • I was working on trying to improve the stability of the DRMI locks as this is necessary for any serious characterization.
  • Today I revived the PRC angular feedforward - this was a gamechanger, the DRMI locks were much more stable. It's probably worth spending some time improving the POP LSC/ASC sensing optics/electronics looking towards the full IFO locking.
  • Quantitatively, the angular fluctuations as witnessed by the POP QPD are lowered by ~2x with the FF on compared to off [Attachment #1, references are with FF off, live traces are with FF on].
  • The first DRMI lock I got is already running 15 mins, looking stable.
    • Update: Out of the ~1 hour i've tried DRMI locking tonight, >50 mins locked!
  • I think the filters can be retrained and this performance improved, something to work on while we are vented.
  • Ran sensing lines, measured loop TFs, analysis tomorrow, but I think the phasing of the 1f PDs is now okay.
    • MICH in AS55 Q, demod phase = -92deg, +6dB wht gain.
    • PRCL in REFL11 I, demod phase = +18 deg, +18dB wht gain.
    • SRCL in REFL55 I, demod phase = -175 deg, +18dB wht gain.
  • Also repeated the line in SRCL-->witness in MICH test.
    • At least 10 minutes of data available, but I'm still collecting since the lock is holding.
    • This time I drove the line at ~124 Hz with awggui, since this is more a regime where we are sensing noise dominated.

Prep for this work:

  • Reboots of c1psl, c1iool0, c1susaux.
  • Removed AS port PD loss measurement PD.
  • Initial alignment procedure as usual: single arms --> PRMI locked on carrier --> DRMI

I was trying to get some pics of the optics as a zeroth-level reference for the pre-vent loss with the single arms locked, but since our SL7 upgrade, the sensoray won't work anymore. I'll try fixing this during the daytime.

Attachment 1: PRCff.pdf
Attachment 2: DRMI_withPRCff.png
  14299   Fri Nov 16 10:26:12 2018   Steve   Update   VAC   single viton O-rings

The 40m vacuum envelope has one large single O-ring, on the OOC west side. All other doors have double O-rings with annuli.

There are 3 spacers to protect the O-ring. They should not be removed!

The cryo pump static seal to VC1 is also Viton. All gate valves and right-angle valve plates have single Viton O-ring seals.

Small single Viton O-rings are on all optical-quality viewports.

Helium will permeate through these quickly, so leak-checking time is limited to 5-10 minutes.

All other seals are copper gaskets. We have 2 manual right-angle valves with metal dynamic seals [VATRING], VV1 & RV1.

 

Attachment 1: Single-O-ring.png
  14300   Fri Nov 16 10:53:07 2018   aaron   Update   IOO   IMC problematic

Back to loss measurements.

I replaced the PD I've been using for the AS beam.

I misaligned the x arm.

I tried to lock the y arm, but the PRC was locked so I was unable to. Gautam reminded me where the config scripts are.

The armloss measurement script needed two additional modifications:

  • It was setting the initial offset of the PIT and YAW demod signals to 0, but due to the clipping on the heater we are operating at an offset. I commented out these lines.
  • When the script ran UNFREEZE_DITHER, it was running it using medmrun. The scope script hadn't been using this, and it seemed that when it ran UNFREEZE_DITHER in this way the YARM_ASS servo was passing only '0'. I don't really know why this was, but when I removed the call to medmrun it worked.

I successfully ran the loss measurement script for the x and y arms. I'm getting losses of ~100 ppm from the first estimates.

I made the following changes to the lossmap script:

  • make the averaging time an input to the script, so we can exceed 2 second averages
  • remove anything about getting data from the scope, replace it with the correct analogues to save the averages for POX/POY refl, MC trans, op lev P/Y, and ASDC signal.
  • record the GPS time in the file with the cds averages (this way I can grab the full data)
  • Added a step in the lossmap script to misalign the optic, so we can continue getting data for the 'misaligned' state, both for the centered and not-centered measurements (that is, for every position on the lossmap).

When the optic doesn't align itself at the ideal position, I'm noticing that it often locks on a 01 mode. When the cavity is then misaligned and restored, it can no longer obtain lock. To fix this, I've moved my 'save' commands to just before the loop begins. This means the script may take longer to run, but as long as the cavity is initially locked and well aligned, this should make it more robust against wandering off and never reacquiring lock.

I left the lossmap script running for the x-arm. Next would be to run it for the y arm, but I see that after stepping to a few positions the lock is again lost. It's still trying to run, but if you want to stop it no data already taken will be lost. To stop it, go to the remaining terminal open on rossa and ctrl+c

the analysis needs (a rough sketch of these steps follows the list):

  • Windowing
  • Filter, don't average
  • detrend to get rid of the linear drifts in lock that we see.
    • Is this the right thing?
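A minimal sketch of what those steps could look like (the sample rate and corner frequency are placeholders, and whether detrending is the right thing is still the open question above):

import numpy as np
from scipy.signal import detrend, butter, filtfilt, welch

fs = 16384.0                                  # assumed sample rate of the saved data

def condition_segment(x, fs):
    x = detrend(x, type='linear')             # remove the slow drift seen during the lock
    b, a = butter(2, 0.1 / (fs / 2.0))        # e.g. 0.1 Hz 2nd-order Butterworth low-pass
    return filtfilt(b, a, x)                  # filtered level, instead of a plain boxcar average

# windowing matters when turning a segment into a spectrum instead:
# f, pxx = welch(detrend(x), fs=fs, window='hann', nperseg=int(8 * fs))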
Attachment 1: Screenshot_from_2018-11-16_19-22-34.png
  14302   Sat Nov 17 18:59:01 2018   aaron   Update   IOO   IMC problematic

I made additional measurements on the x and y arms, at 5 offset positions for each arm (along with 6 measurements at the "zeroed" position).

  14303   Sun Nov 18 00:59:33 2018   gautam   Update   General   Vent prep

I've begun prepping the IFO for the vent, and completed most of the IFO related items on the checklist. The power into the MC has been cut, but the low-power autolocker has not been checked. I will finish up tomorrow and post the go ahead. PSL shutter is closed for tonight.

  14304   Sun Nov 18 17:09:02 2018   gautam   Update   General   Vent prep

Vent prep

Following the checklist, I did these:

  • Both arms were locked to IR, TRY and TRX maximized using ASS.
  • GTRY and GTRX were also maximized.
  • ITM/ETM Oplevs centered with TRX/TRY maximized, PRM/SRM/BS Oplevs were centered once the DRMI was locked and aligned.
  • Attachment #1 summarizes the above 3 bullets.
  • Sensoray was made to work with Donatella (the Raspberry Pi video server idea is good, but although the sensoray drivers look to have installed correctly, the red light doesn't come on when the Sensoray unit is plugged into the RPi USB port, so I opted not to spend too much time on it for the moment).
  • Photos of all ports in various locked configurations are saved in /users/sensoray/SensorayCaptures/Nov2018Vent
  • PSL power into the IMC was cut from 1.07 W (measured after G&H mirror) to 97 mW. I opted to install a new HWP+PBS after the PMC to cut the power, so we don't have to fiddle around so much with the PMC locking settings [Attachment #3, this was the only real candidate location as the IMC wants s-polarized light].
  • 2" R=10% BS in the IMC REFL path was replaced with a 2" Y1 HR mirror, so there is no MCREFL till we turn the power back up.
  • IMC was locked.
  • Low power MC autolocker works [Attachment #2]. The reduction in MCREFL is because of me manually aligning the cavity, WFS servos are disabled in low power mode since there is no light incident on the WFS heads.
  • Updated the SUS driftmon values (though I'm not really sure how useful this is).
  • PSL shutter will remain closed, but I have not yet installed a manual beam block in the beam path on the PSL table.

@Steve & Chub, we are ready to vent tomorrow (Monday Nov 19). 

Attachment 1: VentPrepNov2018.png
Attachment 2: MCautolocker_lowPower.png
Attachment 3: IMG_7163.JPG
  14305   Mon Nov 19 14:59:48 2018   Chub   Update   VAC   Vent 81

Vent 80 is nearly complete; the instrument is almost to atmosphere.  All four ion pump gate valves have been disconnected, though the position sensors are still connected, and all annulus valves are open.  The controllers of TP1 and TP3 have been disconnected from AC power. VC1 and VC2 have been disconnected and must remain closed. Currently, the RGA is being vented through the needle valve; the RGA had been shut off at the beginning of the vent preparations.  VM1 and VM3 could not be actuated.  The condition status is still listed as Unidentified because of the disconnected valves.

  14306   Mon Nov 19 17:09:00 2018   Steve   Update   VAC   Vent 81

Gautam, Aaron, Chub and Steve,

Quote:

Vent 80 is nearly complete; the instrument is almost to atmosphere.  All four ion pump gate valves have been disconnected, though the position sensors are still connected,and all annulus valves are open.  The controllers of TP1 and TP3 have been disconnected from AC power. VC1 and VC2 have been disconnected and must remained closed. Currently, the RGA is being vented through the needle valve and the RGA had been shut off at the beginning of the vent preparations.  VM1 and VM3 could not be actuated.  The condition status is still listed as Unidentified because of the disconnected valves. 

The vent 81 is completed.

4 ion pumps and cryo pump are at ~ 1-4 Torr (estimated as we have no gauges there), all other parts of the vacuum envelope are at atm. P2 & P3 gauges are out of order.

V1 and VM1 are in a locked state. We suspect this is because of some interlock logic.

TP1 and TP3 controllers are turned off.

Valve conditions as  shown: ready to be opened or closed or moved or rewired. To re-iterate: VC1, VC2, and the Ion Pump valves shouldn't be re-connected during the vac upgrade.

Thanks for all of your help.

Attachment 1: beforeVent82.png
Attachment 2: vent81completed.png
  14307   Mon Nov 19 22:01:50 2018   gautam   Update   VAC   Loose nut on valve

As I was turning off the lights in the VEA, I heard a rattling sound from near the PSL enclosure. I followed it to a valve - I couldn't see a label on this valve in my brief effort to find one, but it is on the south-west corner of the IMC table, so maybe VABSSCI or VABSSCO? The power cable is somehow spliced with an attachment that looks to be bringing gas in/out of the valve (see Attachment #1), and the nut on the bottom was loose; the whole power cable + metal attachment was responsible for the rattling. I finger-tightened the nut and the sound went away.

Attachment 1: IMG_7171.JPG
  14310   Tue Nov 20 13:13:01 2018   gautam   Update   VAC   IMC alignment is okay

I checked the IMC alignment following the vent, for which the manual beam block placed on the PSL table was removed. The alignment is okay, after minor touchup, the MC Trans was ~1200 cts which is roughly what it was pre-vent. I've closed the PSL shutter again.

  14311   Tue Nov 20 17:38:13 2018   rana   Update   Upgrade   New Coffee Machine

Rana, Aaron, Gautam

The old Zojirushi has died. We have received and comissioned our new Technivoorm Mocha Master today. It is good.

  14312   Tue Nov 20 20:33:11 2018   aaron   Update   OMC   OMC scanning/aligning script

I finished running the cabling for the OMC, which involved running 7x 50ft DB9 cables from the OMC_NORTH rack to the 1X2 rack, laying cables over others on the tray. I tried not to move other cables to the extent I could, and I didn't run the new cables under any old cables. I attach a sketch diagram of where these cables are going, not inclusive of the entire DAC/ADC signal path.

I also had to open up the AA board (D050387, D050374), because it had an IPC connector rather than the DB37 that I needed to connect. The DAC sends signals to a breakout board that is in use (D080302) and had a DB37 output free (though note this carries only 4 DAC channels). I opened up the AA board and it had two IPC 40s connected to an adapter to the final IPC 70 output. I replaced the IPC40 connectors with DB37 breakouts, and made a new slot (I couldn't find a DB37 punch, so this is not great...) on the front panel for one of them, so I can attach it to the breakout board.

I noticed there were many unused wires, so I had to confirm that I had the wiring correct (still haven't confirmed by driving the channels, but will do). There was no DCC for D080302, but I grabbed the diagrams for the whitening boards it was connected to (D020432) and for the AA board I was opening up as well as checked out elog 8814, and I think I got it. I'll confirm this manually and make a diagram if it's not fake news.

Attachment 1: pathwaysketch.pdf
Attachment 2: IMG_0094.JPG
Attachment 3: IMG_0097.JPG
  14313   Wed Nov 21 09:59:26 2018   gautam   Update   LSC   LSC feedforward block diagram

Attachment #1 is a block diagram depicting the pathway by which the vertex DOF control signals can couple into DARM (adapted from a similar diagram in Gabriele's Virgo note on the subject). I've also indicated some points where noise can couple into either loop. In general, there are sensing noises that couple in at the error point of the loop, and actuation noises that couple in at the control point. In this linear picture, each block represents a (possibly time varying) transfer function. So we can write out the node-to-node transfer functions and evaluate the various couplings.
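To make this concrete, here is the kind of node-to-node bookkeeping I have in mind (my notation, not labels from the diagram): for sensing noise n_{\mathrm{aux}} entering the auxiliary (e.g. SRCL) loop with controller C_{\mathrm{aux}} and open loop gain G_{\mathrm{aux}}, a control-to-DARM coupling plant P_{\mathrm{x}}, a DARM loop with open loop gain G_{\mathrm{D}}, and a feedforward filter FF summing the auxiliary control signal in at the DARM error point, the contribution at the DARM error point is

\frac{d_{\mathrm{err}}}{n_{\mathrm{aux}}} = \frac{1}{1+G_{\mathrm{D}}}\,\left(P_{\mathrm{x}} + FF\right)\,\frac{C_{\mathrm{aux}}}{1+G_{\mathrm{aux}}}

so the re-injected sensing noise is nulled when FF = -P_{\mathrm{x}} (up to sign conventions), which is what the "FF" block is meant to realize.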

The motivation is to see if we can first simulate with some realistic noise and time-varying couplings (and then possibly test on the realtime system) the effectiveness of the filter denoted by "FF" in canceling out the shot noise from the auxiliary loop being re-injected into the DARM loop via the DARM sensor. Does this look correct?

Attachment 1: IMG_7173.JPG
  14314   Wed Nov 21 16:48:11 2018   gautam   Update   COC   EY mini cleanroom setup

With Chub's help, I've setup a mini cleanroom at EY - Attachment #1. The HEPA unit is running on high now. All surfaces were wiped with isopropanol, we can wipe everything down again on Monday and replace the foil.

Attachment 1: IMG_7174.JPG
  14316   Mon Nov 26 10:22:16 2018   aaron   Update   General   projector light bulb replaced

I replaced the projector bulb. Previous bulb was shattered.

  14317   Mon Nov 26 15:43:16 2018   aaron   Update   OMC   OMC scanning/aligning script

I've started testing the OMC channels I'll use.

I needed to update the model, because I was getting "Unable to setup testpoint" errors for the DAC channels that I had created earlier, and didn't have any ADC channels yet defined. I attach a screenshot of the new model. I ran

rtcds make c1omc
rtcds install c1omc
rtcds start c1omc
 
without errors.
Attachment 1: c1omc.png
  14318   Mon Nov 26 15:58:48 2018   Steve   Update   VAC   Vent 81

Gautam, Aaron, Chub & Steve,

ETMY heavy door replaced by light one.

We did the following: measured 950 particles/cf-min of 0.5 micron at the SP table, wiped the crane and its cable, wiped the chamber, placed the heavy door on a clean merostate-covered stand, dry-wiped the o-rings, and isopropanol-wiped the aluminum light cover.

Quote:

Gautam, Aaron, Chub and Steve,

Quote:

Vent 80 is nearly complete; the instrument is almost to atmosphere.  All four ion pump gate valves have been disconnected, though the position sensors are still connected,and all annulus valves are open.  The controllers of TP1 and TP3 have been disconnected from AC power. VC1 and VC2 have been disconnected and must remained closed. Currently, the RGA is being vented through the needle valve and the RGA had been shut off at the beginning of the vent preparations.  VM1 and VM3 could not be actuated.  The condition status is still listed as Unidentified because of the disconnected valves. 

The vent 81 is completed.

4 ion pumps and cryo pump are at ~ 1-4 Torr (estimated as we have no gauges there), all other parts of the vacuum envelope are at atm. P2 & P3 gauges are out of order.

V1 and VM1 are in a locked state. We suspect this is because of some interlock logic.

TP1 and TP3 controllers are turned off.

Valve conditions as  shown: ready to be opened or closed or moved or rewired. To re-iterate: VC1, VC2, and the Ion Pump valves shouldn't be re-connected during the vac upgrade.

Thanks for all of your help.

 

  14319   Mon Nov 26 17:16:27 2018   gautam   Update   SUS   EY chamber work

[steve, rana, gautam]

  • PSL and EY 1064nm laser (physical) shutters on the head were closed so that we and sundance crew could work without laser safety goggles. EY oplev laser was also turned off.
  • Cylindrical heater setup removed:
    • heater wiring meant the heater itself couldn't be easily removed from the chamber
    • two lenses and Al foil cylinder removed from chamber, now placed on the mini-cleanroom table.
  • Parabolic heater is untouched for now. We can re-insert it once the test mass is back in, so that we can be better informed about the clipping situation.
  • ETMY removed from chamber.
    • EQ stops were engaged.
    • Pictures were taken
    • OSEMs were removed from cage, placed in foil holders.
    • Cage clamps were removed after checking that marker clamps were in place.
    • Optic was moved first to NW corner of table, then out of the vacuum onto the mini-cleanroom desk Chub and I had setup last week.
    • Hopefully there isn't an earthquake. EY has been marked as off-limits to avoid accidental bumping / catastrophic wire/magnet/optic breaking.
    • We sealed up the mini cleanroom with tape. F.C. cleaning tomorrow or at another opportune moment.
    • Light door was put back on for the evening.

Rana pointed out that the OSEM cabling, because of the lack of plastic shielding, is grounded directly to the table on which it is resting. A glass baking dish at the base of the seismic stack prevents electrical shorting to the chamber. However, there are some LEMO/BNC cables as well on the east side of the stack, whose BNC ends are just lying on the base of the stack. We should use this opportunity to think about whether anything needs to be done / what the influence of this kind of grounding is (if any) on actuator noise.

Steve also pointed out that we should replace the rubber pads which the vacuum chamber is resting on (Attachment #1, not from this vent, but just to indicate what's what). These serve the purpose of relieving small amounts of strain the chamber may experience relative to the beam tube, thus helping preserve the vacuum joints b/w chamber and tube. But after (~20?) years of being under compression, Steve thinks that the rubber no longer has any elasticity, and so should be replaced.

Attachment 1: IMG_5251.JPG
  14321   Tue Nov 27 10:50:20 2018   Steve   Update   PEM   earthquake Mexico

Nothing tripped.

 

Attachment 1: 5.5M.Mexico.png
  14323   Thu Nov 29 08:13:33 2018   Steve   Update   PEM   EQ 3.9m So CA

EQ did not trip anything. atm1

Just a REMINDER: our vacuum system is at atm to help the vacuum upgrade to Acromag.

Exceptions: cryo pump and 4 ion pumps

It is our first rainy day of the season. The roof is not leaking.

Vac status: The vac rack power was recycled yesterday and power to controllers TP1, 2 and 3 was restored. atm3

VME is OFF. Power to all other instruments is ON. 23.9 Vdc, 0.2 A

ETMY sus tower with locked optic in HEPA tent at east end is standing by for action.

 

Attachment 1: 3.9mSoCA.png
Attachment 2: Vac_as_today.png
Attachment 3: as_is.png
  14324   Thu Nov 29 17:46:43 2018   gautam   Update   General   Some to-dos

[koji, gautam, jon, steve]

  • We suspect the analog voltage from the N2 pressure gauge is connected to the interfacing Omega controller with the 'wrong' polarity (i.e. the pressure appears to rise over ~4 days and then rapidly fall, instead of the other way around). This should be fixed.
  • N2 check script logic doesn't seem robust. Indeed, it has not been sending out warning emails (threshold is set to 60 psi, it has certainly gone below this threshold even with the "wrong" polarity pressure gauge hookup). Probably the 40m list is rejecting the email because controls isn't a part of the 40m group.
  • Old frames have to be re-integrated from JETSTOR to the new FB in order to have long timescale lookback.
  • N2 cylinder pressure gauges (at the cylinder end) need a power supply - @ Steve, has this been purchased? If not, perhaps @ Chub can order it.
  • MEDM vacuum screen should be updated to have gate valves be a different color to the spring-loaded valves. Manual valve between TP1 and V1 should also be added.
  • P2, P3 and P4 aren't returning sensible values (they should all be reading ~760torr as is P1). @ Steve, any idea if these gauges are broken?
  • Hornet gauges (CC and Pirani) should be hooked up to the new vacuum system.
  • Add slow channels for the foreline pressures of TP2 & TP3, and for C1:Vac-IG1_status_pressure.
  14326   Fri Nov 30 19:37:47 2018   gautam   Update   LSC   LSC feedforward block diagram

I wanted to set up an RTCDS model to understand this problem better. Attachment #1 is the simulink diagram of the signal flow. The idea will be to put in the appropriate filter shapes into the various filter blocks denoting the DARM and auxiliary DoF plants, controllers and actuators, and then use awggui / diaggui to inject some noises and see if in this idealized model I can achieve good subtraction. Then we can build up to applying a time varying cross coupling between DARM and the vertex DoF, and see how good the adaptive FF works. Still need to setup some MEDM screens to make working with the test system easier.

I figured c1omc would be the least invasive model to set this upon without risking losing any of our IR/green alignment references. Compile and install went smooth, see Attachment #2. The c1omc model was clocking 4us before, now it's using 7us.

Attachment #3 shows the top level of the OMC model, while Attachment #4 shows the MEDM screen.

* Note to self: when closing a loop inside the realtime model, there has to be a delay block somewhere in the loop, else a compilation error is thrown.

Attachment 1: LSC_FF_tester.png
Attachment 2: Screenshot_from_2018-11-30_19-41-07.png
Attachment 3: Screenshot_from_2018-12-10_15-31-23.png
Attachment 4: SimLSC.png
  14328   Sun Dec 2 17:26:58 2018   gautam   Update   IMC   IMC ringdown fitting

Recently we wondered at the meeting what the IMC round trip loss was. I had done several ringdowns in the winter of 2017, but because the incident light on the cavity wasn't being extinguished completely (the AOM 0th order beam is used), the full Isogai et al. analysis could not be applied (there were FSS induced features in the reflection ringdown signal). Nevertheless, I fitted the transmission ringdowns. They looked like clean exponentials, and judging by the reflection signals (see previous elogs in this thread), the first ~20us of data is a clean exponential, so I figured we may get some rough value of the loss by just fitting the transmission data. 

The fitted storage time is 60.8 \pm 2.7 \mu s. However, this number isn't commensurate with the 40m IMC spec of a critically coupled cavity with 2000ppm transmissivity for the input and output couplers.

Attachment #1: Expected storage time for a lossless cavity, with round-trip length ~27m. MC2 is assumed to be perfectly reflecting. The IMC length is known to better than 100 Hz uncertainty because the marconi RF modulation signal is set accordingly. For the 40m spec, I would expect storage times of ~40 usec, but I measure almost 30% longer, at ~60 usec.
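For reference, the lossless expectation follows from (assuming \tau_{\mathrm{storage}} is the field amplitude decay time, i.e. the power ringdown goes as e^{-2t/\tau_{\mathrm{storage}}}):

\tau_{\mathrm{storage}} \approx \frac{2 L_{\mathrm{RT}}}{c\,(T_1 + T_2 + T_3 + \mathcal{L}_{\mathrm{RT}})} \approx \frac{2 \times 27\,\mathrm{m}}{(3\times 10^{8}\,\mathrm{m/s}) \times 4000\,\mathrm{ppm}} \approx 45\,\mu\mathrm{s}

for the 2000 ppm + 2000 ppm spec with T_2 and the round-trip loss neglected - the same order as the ~40 \mu s quoted above (the exact value depends on the assumed length and loss). By the same formula, the measured ~60 \mu s would imply a total (T_1 + T_3 + \mathcal{L}_{\mathrm{RT}}) of roughly 3000 ppm, which is why it doesn't square with the spec.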

Attachment #2: Fits and residuals from the 10 datasets I had collected. This isn't a super informative plot because there are 10 datasets and fits, but to the eye, the fits are good, and the diagonal elements of the covariance matrix output by scipy's curve_fit back this up. The function used to fit the t > 0 portions of these signals (because the light was extinguished at t=0 by actuating on the AOM) is \text{Transmission} = Ae^{-\frac{2t}{\tau_{\mathrm{storage}}}}, where A and tau are the fitted parameters. In the residuals, the same artefacts visible in the reflection signal are seen.
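The fit itself is just the following (a minimal sketch with synthetic stand-in data; variable names are mine):

import numpy as np
from scipy.optimize import curve_fit

def ringdown(t, A, tau):
    return A * np.exp(-2.0 * t / tau)     # transmitted power, t in seconds

t = np.linspace(0, 200e-6, 2000)          # stand-in time vector
trans = ringdown(t, 1.0, 60e-6) + 0.01 * np.random.randn(t.size)   # stand-in data

popt, pcov = curve_fit(ringdown, t, trans, p0=[1.0, 40e-6])
tau_storage, tau_err = popt[1], np.sqrt(pcov[1, 1])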

Attachment #3: Scatter plot of the data. Width of circles is proportional to the fit error on individual measurements (I just scaled the marker size arbitrarily to be able to visually see the difference in uncertainty; the width doesn't exactly indicate the error), while the dashed lines are the global mean and +/- 1 sigma levels.

Attachment #4: Cavity pole measurement. Using this, I get an estimate of the loss that is a much more believable 300 \pm 20\, \mathrm{ppm}.

Attachment 1: tauTheoretical.pdf
Attachment 2: ringdownFit.pdf
Attachment 3: ringdownScatter.pdf
Attachment 4: cavPole.pdf
  14329   Sun Dec 2 19:32:35 2018   rana   Update   IOO   fit times

need to vary start/stop times in fit to test for systematics

  14332   Thu Dec 6 11:16:28 2018   aaron   Update   OMC   OMC channels

I need to hookup +/- 24 V supplies to the OMC whitening/dewhitening boxes that have been added to 1X2.

There are trailing +24V fuse slots, so I will extend that row to leave the same number of slots open.

While removing one +24V wire to add to the daisy chain, I let the wire brush an exposed conductor on the ground side, causing a spark. FSS_PCDRIVE and FSS_FAST are at different levels than before this spark. The 24V sorensens have the same currents as before according to the labels. Gautam advised me to remove the final fuse in the daisy chain before adding additional links.

gautam: we peeled off some outdated labels from the Sorensens in 1X1 such that each unit now has only 1 label visible reflecting the voltage and current. Aaron will post a photo after his work.

  14334   Fri Dec 7 12:51:06 2018   gautam   Update   IMC   IMC ringdown fitting

I started putting together some code to implement some ideas we discussed at the Tuesday meeting here. The pipeline isn't set up yet, but I think it's commented okay, so if people want to play around with it, the code lives on the 40m gitlab.

Model parameters:

  • T+ --- average transmission of MC1 and MC3.
  • T- --- difference in transmission between MC1 and MC3 (this basis is used rather than T1 and T3, because the assumption is that since they were coated in the same coating run, the difference in transmission should be small, even if there is considerable uncertainty in the actual average transmission number).
  • T2 --- MC2 transmission.
  • Lrt --- Round trip loss in the cavity.
  • "sigma" --- a nuisance parameter quantifying the error in the time domain ringdown data.

Simulation:

  • Using these model parameters, calculate some simulated time-domain ringdowns. Optionally, add some noise (assumed Gaussian).
  • Try and back out the true values of the model parameters using emcee (a minimal sketch of the sampler setup follows this list) - priors were assumed to be uniformly distributed, with a +/- 20% uncertainty around the central value.
  • For a first test, see if there is any improvement in the parameter estimation uncertainty using only transmission ringdown vs both transmission and reflection.
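Here is the minimal sketch referred to above (the central values, the transmission-only likelihood, and the emcee-3 style API are mine; the real code on the 40m gitlab is the reference):

import numpy as np
import emcee

c, L_cav = 299792458.0, 27.0                    # round-trip length assumed ~27 m

def trans_model(t, Tp, Tm, T2, Lrt):
    # T1 + T3 = 2*Tp; Tm only enters the reflection model, so it is unconstrained here
    tau = 2.0 * L_cav / (c * (2.0 * Tp + T2 + Lrt))
    return np.exp(-2.0 * t / tau)

theta_true = np.array([2000e-6, 50e-6, 20e-6, 100e-6, 0.01])   # made-up central values
t = np.linspace(0, 200e-6, 500)
data = trans_model(t, *theta_true[:4]) + theta_true[4] * np.random.randn(t.size)

def log_prob(theta):
    Tp, Tm, T2, Lrt, sigma = theta
    # uniform priors, +/- 20% around the central values; sigma is the noise nuisance parameter
    if not all(0.8 * c0 <= p <= 1.2 * c0 for p, c0 in zip(theta, theta_true)):
        return -np.inf
    resid = data - trans_model(t, Tp, Tm, T2, Lrt)
    return -0.5 * np.sum(resid**2 / sigma**2 + np.log(2 * np.pi * sigma**2))

ndim, nwalkers = 5, 32
p0 = theta_true * (1 + 1e-3 * np.random.randn(nwalkers, ndim))
sampler = emcee.EnsembleSampler(nwalkers, ndim, log_prob)
sampler.run_mcmc(p0, 2000)
samples = sampler.get_chain(discard=500, flat=True)    # feed this to corner.corner()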

Initial results and conclusions:

  • Attachment #1 - Simulated time series used for this study. The "fit" trace is computed using the median values from the monte-carlo.
  • Attachment #2 - Corner plots showing the distribution of the estimated parameter values, using only transmission ringdown. The "true" values are indicated using the thick blue lines.
  • Attachment #3 - Corner plots showing the distribution of the estimated parameter values, using both transmission and reflection ringdowns.
  • The overall approach seems to work okay. There seems to be only marginal improvement in the uncertainty in estimated parameters using both ringdown signals, at least in the simulation.
  • However, everything seems pretty sensitive to the way the likelihood and priors are coded up - need to explore this a bit more.

Next steps:

  • Add more simulated measurements, see if we can constrain these parameters more tightly. 
  • Use linear error analysis to see if that tells us which measurements we should do, without having to go through the emcee.

There still seems to be some data quality issues with the ringdown data I have, so I don't think we really gain anything from running this analysis on the data I have already collected - but in the future, we can do the ringdown with complete extinguishing of the input light, and repeat the analysis.

As for whether we should clean the IMC mirrors - I'm going to see how much power comes out at the REFL port (with PRM aligned) this afternoon, and compare to the input power. This technique suffers from uncertainty in the Faraday insertion loss, isolation and IMC parameters, but I am hoping we can at least set a bound on what the IMC loss is.

Attachment 1: time_reflAndTrans.pdf
Attachment 2: corner_transOnly.pdf
Attachment 3: corner_reflAndTrans.pdf
  14335   Fri Dec 7 17:04:18 2018   gautam   Update   IOO   IMC transmission
  • Power just before PSL shutter on PSL table = 97 +/- 1 mW. Systematic error unknown.
  • Power from IFO REFL on AP table = 40 +/- 1 mW. Systematic error unknown.

Both were measured using the FieldMate power meter. I was hesitant to use the Ophir power meter as there is a label on it that warns against exceeding 100 mW. I can't find anything in the elog/wiki about the measured insertion loss / isolation of the input Faraday, but this seems like a pretty low amount of light to get back from PRM. The IMC visibility using the MC_REFL DC values is ~87%. Assuming perfect transmission of the 87% of the 97mW that's coupled into the IMC, and assuming a further 5% loss between the Faraday rejected port and the AP table, the Faraday insertion loss would be ~30%. Realistically, the IMC transmission is lower. There is also some part of the light picked off for IPPOS. Judging by the shape of the REFL spot on the camera, it doesn't look clipped to me.
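Spelling out that estimate (f is the single-pass Faraday insertion loss, with the IMC and PRM treated as lossless):

0.87 \times 97\,\mathrm{mW} \times (1-f)^2 \times 0.95 = 40\,\mathrm{mW} \;\Rightarrow\; (1-f)^2 \approx 0.50 \;\Rightarrow\; f \approx 0.29

i.e. roughly 30% insertion loss per pass, which is where the ~half number below comes from.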

Either way, seems like we are only getting ~half of the 1W we send in on the back of PRM. So maybe it's worth it to investigate the situation in the IOO chamber during this vent.


c1psl, c1susaux, c1iool0, caux crates were keyed. Also, the physical shutter on the PSL NPRO, which was closed last Monday for the Sundance crew filming, was opened and the PMC was locked. PMC remains locked, but there is no light going into the IMC.

  14337   Mon Dec 10 12:11:28 2018   aaron   Update   OMC   Aligning the OMC

I did some ray tracing and determined that the aux beam will enter the OMC after losing some power in reflection on OMPO (couldn't find this spec on the wiki, I remember something like 90-10 or 50-50) and the SRM (R~0.9), and then transmission through OMPO. This gives us something like 8%-23% of the aux light going to the OMC, depending on the OMPO transmission. This elog tells me the aux power before the recombination BS is ~37mW, ~3.7mW onto SRM, which is consistent with the OMPO being 90-10, and would mean the aux power onto the OMC is ~3mW, plenty for aligning into the OMC.
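Explicitly, with R_{\mathrm{SRM}} \approx 0.9 and the two OMPO bounces:

90/10:\; 0.1 \times 0.9 \times 0.9 \approx 8\%, \qquad 50/50:\; 0.5 \times 0.9 \times 0.5 \approx 23\%

and 8% of the ~37 mW is the ~3 mW onto the OMC quoted above.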

Since the dewhitening board I'd intended to use isn't working (see elog), I'm going to scan the OMC length with a function generator while adjusting the alignment by hand, as was briefly attempted during the last vent.

I couldn't identify a PD on the AP table that was the one I had used during the last vent, I suspect I coopted the very same PD for the arm loss measurements. It is a PDA520, which has a large (100mm^2) area so I've repurposed it again to catch the OMC prompt reflection during the mode scans. I've mounted it approximately where I expect the refl beam to exit the AS chamber.

I brought over the cart that usually lives at 1X1 to help me organize materials near the OMC chamber for opening.

I replaced the banana connectors we'd been using to send HV to the HV driver with soldered wires going to the final locking connector only, so now the 150V is on a safe cable.

I powered up the DCPD sat box and again confirmed that it's working. I sent a 500Hz sine wave through the sat box and confirmed that I can see the signal in the DCPD channels I've defined in cds. I gave the TT and OMC-L PZT channels bad assignments on the ADC (right now, what reads as 'OMC_PZT_MON' is actually the unfiltered output from the sat box, while the DCPD channels are for the filtered outputs of the box), because of the way the signals are grouped on the cables, I can't attach all of them at once. For this vent, I'll only really need the DCPD outputs, and since I have confirmed that I can read out both of those I'll fix up the HV driver mon channels later.

Attachment 1: B9DCF55F-1355-410C-8A29-EE45D43A56A4.jpeg
  14338   Mon Dec 10 12:29:05 2018   aaron   Update   OMC   OMC channels

I kept having trouble keeping the power LEDs on the dewhitening board 'on'. I did the following:

1. I noticed that the dewhitening board was drawing a lot of current (>500mA), so I initially thought that the indicators were just turning on until I blew the fuse. I couldn't find the electronics diagrams for this board, so I was using analogous boards' diagrams and wasn't sure how much current to expect to draw. I swapped out for 1A fuses (only for the electronics I was adding to the system).

2. Now the +24V indicator on the dewhitening board wasn't turning on, and the -24V supply was alternatively drawing ~500mA and 0mA in a ~1Hz square wave. Thinking I could be dropping voltage along the path to the board, I swapped out the cables leading to the whitening/dewhitening boards with 16AWG (was 18AWG). This didn't seem to help.

3. Since the whitening board seemed to be consistently powered on, I removed the dewhitening board to see if there was a problem with it. Indeed, I'd burned out the +24V supply electronics--two resistors were broken entirely, and the breadboard near the voltage regulator had been visibly heated.

  1. I identified that the resistors were 1Ohm, and replaced them (though I couldn't find 1Ohm surface mount resistors). I also replaced the voltage regulator in case it was broken. I couldn't find the exact model, so I replaced the LM2940CT-12 with an LM7812, which I think is the newer 12V regulator.
  2. Though this replacement seemed to work when the power board was disconnected from the dewhitening board, connecting to the dewhitening board again resulted in a lot of current draw.
  3. I depowered the board and decided to take a different approach (see)

I noticed that the +/-15V currents are slightly higher than the labels, but didn't notice whether they were already different before I began this work.

I also noticed one pair of wires in the area of 1X1 I was working that wasn't attached to power (or anything). I didn't know what it was for, so I've attached a picture.

Attachment 1: 52DE723A-02A4-4C62-879B-7B0070AE8A00.jpeg
Attachment 2: 545E5512-D003-408B-9F00-55F985966A16.jpeg
Attachment 3: DFF34976-CC49-4E4F-BFD1-A197E2072A32.jpeg
  14339   Mon Dec 10 15:53:16 2018   gautam   Update   LSC   Swept-sine measurement with DTT

Disclaimer: This is almost certainly some user error on my part.

I've been trying to get this running for a couple of days, but am struggling to understand some behavior I've been seeing with DTT.

Test:

I wanted to measure some transfer functions in the simulated model I set up.

  • To start with, I put a pendulum (f0 = 1Hz, Q=5) TF into one of the filter modules (the expected shape is sketched just after this list).
  • Isolated it from the other interconnections (by turning off the MEDM ON/OFF switches).
  • Set up a DTT swept-sine measurement
    • EXC channel was C1:OMC-TST_AUX_A_EXC
    • Monitored channels were C1:OMC-TST_AUX_A_IN2 and C1:OMC-TST_AUX_A_OUT.
    • Transfer function being measured was C1:OMC-TST_AUX_A_OUT/C1:OMC-TST_AUX_A_IN2.
    • Coherence between the excitation and output were also monitored.
  • Sweep parameters:
    • Measurement band was 0.1 - 900 Hz
    • Logarithmic, downward.
    • Excitation amplitude = 1ct, waveform = "Sine"
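For comparison with the DTT result, the expected shape of that filter-module TF is just (a quick sketch, using the f0 and Q above):

import numpy as np
from scipy import signal

f0, Q = 1.0, 5.0
w0 = 2 * np.pi * f0
pend = signal.TransferFunction([w0**2], [1.0, w0 / Q, w0**2])   # w0^2 / (s^2 + s*w0/Q + w0^2)
f = np.logspace(-1, np.log10(900), 500)                         # 0.1 - 900 Hz, as in the sweep
w, mag, phase = signal.bode(pend, 2 * np.pi * f)                # magnitude [dB] and phase [deg]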

Unexplained behavior:

  • The transfer function measurement fails with a "Synchronization error" at ~15 Hz.
    • I don't know what is special about this frequency, but it fails repeatedly at the same point in the measurement.
  • Coherence is not 1 always
    • Why should the coherence deviate from 1 since everything is simulated? I think numerical noise would manifest when the gain of the filter is small (i.e. high frequencies for the pendulum), but the measurement and coherence seem fine down to a few tens of Hz.

To see if this is just a feature in the simulated model, I tried measuring the "plant" filter in the C1:LSC-PRCL filter bank (which is also just a pendulum TF), and run into the same error. I also tried running the DTT template on donatella (Ubuntu12) and pianosa (SL7), and get the same error, so this must be something I'm doing wrong with the way the measurement is being run / setup. I couldn't find any mention of similar problems in the SimPlant elogs I looked through, does anyone have an idea as to what's going on here?

* I can't get the "import" feature of DTT to work - I go through the GUI prompts to import an ASCII txt file exported from FOTON but nothing selectable shows up in DTT once the import dialog closes (which I presume means that the import was successful). Are we using an outdated version of DTT (GDS-2.15.1)?  But Attachment #1 shows the measured part of the pendulum TF, and is consistent with what is expected until the measurement terminates with a synchronization error.


The import problem is fixed - when importing, you have to give names to the two channels that define the TF you're importing (these can be arbitrary since the ASCII file doesn't have any channel name information). Once I did that, the import works. You can see that while the measurement ran, the foton TF matches the DTT measured counterpart.


11 Dec 2pm: After discussing with Jamie and Gabriele, I also tried changing the # of points, start frequency etc, but run into the same error (though admittedly I only tried 4 combinations of these, so not exhaustive).

Attachment 1: SimTF.pdf
  14340   Mon Dec 10 19:47:06 2018   aaron   Update   OMC   OMC channels

Taking another look at the datasheet, I don't think LM7812 is an appropriate replacement and I think the LM2940CT-12 is supposed to supply 1A, so it's possible the problem actually is on the power board, not on the dewhitening board. The board takes +/- 15V, not +/- 24...

Quote:
 
  1. I identified that the resistors were 1Ohm, and replaced them (though I couldn't find 1Ohm surface mount resistors). I also replaced the voltage regulator in case it was broken. I couldn't find the exact model, so I replaced the LM2940CT-12 with an LM7812, which I think is the newer 12V regulator.

 

  14341   Tue Dec 11 13:42:44 2018   Koji   Update   OMC   OMC channels

FYI:

D050368 Anti-Imaging Chassis
https://dcc.ligo.org/LIGO-D050368

https://labcit.ligo.caltech.edu/~tetzel/files/equip

D050368 Adl SUS/SEI Anti-Image filter board 
S/N 100-102 Assembled by screaming circuits. Begin testing 4/3/06 
S/N xxx Mohana returned it to the shop. No S/N or traveler. Put in shop inventory 4/24/06 
S/N 103 Rev 01. Returned from Screaming circuits 7/10/06. complete except for C28, C29 
S/N 104-106 Rev 01. Returned from Screaming circuits 7/10/06. complete except for C28, C29 Needs DRV-135’s installed 
S/N 107-111 Rev 02 (32768 Hz) Back from assembly 7/14/06 
S/N 112-113 Rev 03 (65536 Hz) assembled into chassis and waiting for test 1/29/07 
S/N 114 Rev 03 (65536 Hz) assembled and ready for test 020507 


D050512 RBS Interface Chassis Power Supply Board (Just an entry. There is no file)

https://dcc.ligo.org/LIGO-D050512

RBS Interface Chassis Power Board D050512-00

https://labcit.ligo.caltech.edu/~rolf/jayfiles/drawings/D050512-00.pdf
 

 

  14342   Tue Dec 11 13:48:04 2018   aaron   Update   OMC   OMC channels

Koji gave me some tips on testing this board that I wanted to write down, notes probably a bit intermingled with my thoughts. Thanks Koji, also for the DCC and equipment logging!

  • Test the power and AI boards separately with an external supply, ramping the voltage up slowly for each.
  • If it seems the AI board is actually drawing too much current, may need to check its TPs for where a problem might be
    • If it's really extensive may use an IR camera to see what elements are getting too hot
    • Testing in segments will prevent breaking more components
  • Check the regulator that I've replaced
  • The 1 Ohm resistors may have been acting as extra 1A fuses. I need to make sure the resistors I've used to replace them are rated for >1W, if this is the case.
  • Can check the resistance between +-12V and Gnd inputs on the AI board, if there is a short drawing too much current it may show up there.
  • The 7812 may be an appropriate regulator, but the input voltage may need to be somewhat higher than with the low drop regulator that was used before.
  • I want to double check the diagram on the DCC
  14343   Tue Dec 11 14:24:18 2018   aaron   Update   OMC   Aligning the OMC

I set up a function generator to drive OMC-L, and have the two DCPD mons and the OMC REFL PD sent to an oscilloscope. I need to select a cds channel over which to read the REFL signal.

The two DCPD mon channels have very different behaviors on the PD mons at the sat box (see attachment). PD1 has an obvious periodicity, PD2 has less noise overall and looks more white. I don't yet understand this, and whether it is caused by real light, something at the PDs, or something at the sat box.

I've again gone through the operations that will happen with the OMC chamber vented. Here's how it'll go, with some of the open questions that I'm discussing with Gautam or whoever is around the 40m:

  1. Function generator is driving OMC-L. Right now there is one 150V Kepco supply in use, located on the ground just to the right of the OMC rack. I only have plans to power it on while scanning OMC-L, and until the OMC is fully in use the standard practice will be to use this HV with two people in the lab and shut it off after the immediate activities.
    1. To do: Is a second drive necessary for the TT drivers? I don't think it is during this vent, because we will want to align into the OMC with the TTs in a 'neutral' state. I recall that the way the TT drivers are set up, 0V from the dac to the driver is the 'centered' position for all TTs. Unless we want to compensate for some known shift of the chambers during pumpdown, I think this is the TT position we should use while aligning the OMMT into the OMC.
    2. To do: make sure I'm driving the right pins with the function generator. Update: Seems I was driving the right channels, here's the pinout.
  2. We will use the reflection of aux from the SRM to align into the OMC.
    1. Gautam pointed out that I hadn't accounted for the recombination BS for the aux beam being 90-10. This means there's actually something like 300uW of aux onto the OMC, rather than ~3mW. This should still be enough to see on a card, so it is fine.
    2. However, the aux beam is aligned to be colinear with the AS beam when the SRM is misaligned. So the question is whether the wedge on the SRM makes the SRM-reflected aux beam not colinear with the AS beam

 

---------

Talked with Gautam for a good while about the above plan. In trying to figure out why the DCPD sat box appears to have a different TF for the two PDs (seems to be some loose cabling problem at the mons, because wiggling the cables changed this), we determined that the AA chassis also wasn't behaving as expected--driving the expected channels (28-31) with a sine wave yields some signal at the 100Hz driving frequency, but all save ch31 were noisy. We also still saw the 100Hz when the chassis was unplugged. I will continue pursuing this, but in the meantime I'm making an IDE40 to DB37 connector so I can drive the ADC channels directly with the DAC channels I've defined (need to match pinouts for D080303 to D080302). I also will make a new SCSI to DB37 adapter that is more robust than mentioned here. I also need to replace the cable carrying HV to the OMC-L driver, so that it doesn't have a wire-to-wire solder joint.

We moved a razor blade on the AP table so it is no longer blocking the aux beam. We checked the alignment of aux into the AS port. AUX and AS are not collinear anywhere on the AP table, and despite confirming that the main AS beam is still being reflected off of the OMC input mirror, the returning AUX beam does not reach the AP table (and probably is not reaching the OMC). AUX needs to be realigned so that it is collinear with the AS beam. It would be good if, in this configuration, the SRM were held close to its position when the interferometer is locked, but the TTs should provide us some (~2.5 mrad) actuation range. Gautam will do this alignment, and I will calculate whether the TTs will be able to compensate for any misalignment of the SRM.

Here is the new plan and minimal things to do for the door opening tomorrow:

  1. Function generator is driving OMC-L.
    1. The PZT mon channel is sent to the oscilloscope.
    2. To do: confirm again that the triangle wave I send in results in the expected triangle wave going to the OMC, using this mon channel.
  2. The OMC REFL signal is being sent to the AP PD. See photo.
    1. Need to align into this PD, but this alignment can be done in air on the AP table.
  3. Monitor the DCPD signals using the TPs from the sat box going to the oscilloscope.
    1. There may be further problems with the sat box, but for the initial alignment into the OMC only the REFL signal is necessary.
    2. Not strictly necessary, but the sat box needs a new case. It has a front, back, and bottom, but no main case, so the board is exposed.
  4. I will move the OMMT-to-OMC steering mirrors while watching the scope for flashes in the REFL signal (a minimal sketch for picking dips out of a saved REFL trace is included below).

That is the first, minimal sequence of steps, which I plan to complete tomorrow. Once aligned into the OMC, the alignment into the DCPDs shouldn't need modification. Barring work needed to align from the OMC to the DCPDs, I think most other work with the OMC can be done in air.
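For step 4, a quick way to pick candidate flashes out of a saved REFL trace while OMC-L is being swept would be something like the following (a sketch only; it assumes the scope trace is saved as a two-column time/voltage text file, and the filename and prominence threshold are placeholders to be tuned):

import numpy as np
from scipy.signal import find_peaks

# hypothetical scope dump of OMC REFL while OMC-L is being swept
t, refl = np.loadtxt('omc_refl_sweep.txt', unpack=True)

# resonance flashes appear as dips in REFL, so look for peaks in the negated trace;
# the prominence threshold (10% of the full swing) is a guess
dips, props = find_peaks(-refl, prominence=0.1 * (refl.max() - refl.min()))

print("found %d candidate flashes" % len(dips))
for i in dips:
    print("t = %.4f s, REFL = %.4f V" % (t[i], refl[i]))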

  14344   Tue Dec 11 14:33:29 2018 gautamUpdateCDSNDScope

NDscope is now running on pianosa. To be really useful, we need the templates, so I've made /users/Templates/NDScope_templates where these will be stored. Perhaps someone can write a parser to convert dataviewer .xml to something ndscope can understand. To get it installed, I had to run:

sudo yum install ndscope
sudo yum install python34-gpstime
sudo yum install python34-dateutil
sudo yum install python34-requests

 I also changed the PYTHONPATH variable in .bashrc to include the python3.4 site-packages directory.
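As a quick sanity check that the new path is actually picked up (a sketch; the site-packages directory name is an assumption and may differ on pianosa):

import sys
# the python3.4 site-packages directory added via PYTHONPATH should appear here
print([p for p in sys.path if 'site-packages' in p])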

Quote:

https://alog.ligo-wa.caltech.edu/aLOG/index.php?callRep=44971

Let's install Jamie's new Data Viewer

Attachment 1: ndscope.png
ndscope.png
  14345   Tue Dec 11 18:20:59 2018 gautamUpdateOptical LeversBS/PRM HeNe is dead

I found that the BS/PRM OL SUM channels were reading close to 0. So I went to the optical table and found that there was no beam from the HeNe. I tried power-cycling the controller; there was no effect. From the trend data, it looks like there was a slow decay over ~400000 seconds (~5 days) and then an abrupt shutoff. This is not ideal, because we would have liked to use the Oplevs as a DC alignment reference during the vent. I plan to use the AS camera to recover some sort of good Michelson alignment, and then, if we want to, we can switch out the HeNe.

*How can I export PDF from NDscope?

Attachment 1: BSOL_dead.png
BSOL_dead.png
  14346   Tue Dec 11 22:50:07 2018 aaronUpdateOMCAligning the OMC

I did the following:

  • Noticed that the OMC rack's power has +-18V, but I had tested the HV driver with +-15V. Maybe fine, something to watch.
  • Checked that nothing but the OMC driver board was in use on the OMC's Sorensen (the QPD whitening board in the OMC rack is not in use, and anyway is labeled +-15V), then turned down the rack voltage from 18 to 15V. Photos attached of AUX_OMC_S Sorensen bank.
  • I hadn't used the alternative dither before. I started by driving the alternative dither with a 10Vpp sine wave at 1-10 Hz. I have both the DC and AC driver mons on a scope.
    • Initially, I only give it 10V at the HV input. I don't see much, nor at 30V, while driving with 0-10V sine waves between 0.1 and 100 Hz.
    • In my last log, I hadn't been using the alternative dither.
  • Instead, I switch over to the main piezo drive, which is sent over DB9. Now I see the following on the AC/DC piezo mon channels:
    • Increasing the HV supply voltage in steps from 10 to 50V yields 1V at the DC piezo mon for 50V at the HV input.
    • Driving below a few hundred Hz results in no change at the AC dither mon. Driving <1 Hz results in a small response (~10% for a 10Vpp drive) at the HV output. I didn't take a full transfer function, but that is the thing to do with CDS (a rough offline estimate is sketched after this list).
    • Changing the drive amplitude changes the AC mon amplitude proportionally.
    • At a few kHz, the 10Vpp drive saturates the AC mon.
    • Photos are in order:
      • 1Hz drive, visible on the DC mon channel in green
      • 1kHz drive 10Vpp, visible on the AC mon channel in violet
      • 1kHz drive 5Vpp
      • 5kHz drive 10Vpp, saturates the AC mon channel
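Regarding the transfer function remark above: until it is done properly with CDS, a rough estimate could be made offline from simultaneously saved drive and AC-mon scope traces, along these lines (a sketch, assuming two-column time/voltage text files and a broadband or swept drive; the filenames and segment length are placeholders):

import numpy as np
from scipy import signal

# hypothetical simultaneously saved traces of the drive and the AC piezo mon
t, drive = np.loadtxt('pzt_drive.txt', unpack=True)
_, acmon = np.loadtxt('pzt_ac_mon.txt', unpack=True)

fs = 1.0 / np.median(np.diff(t))

# H1 estimator: cross-spectrum of drive->mon divided by the drive PSD
f, Pxy = signal.csd(drive, acmon, fs=fs, nperseg=8192)
f, Pxx = signal.welch(drive, fs=fs, nperseg=8192)
H = Pxy / Pxx

print("|H| at a few frequencies [V/V]:")
for fi in (1, 10, 100, 1000):
    i = np.argmin(np.abs(f - fi))
    print("%6.1f Hz: %.3g" % (f[i], np.abs(H[i])))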
Attachment 1: 8323029A-970E-4BEA-833E-77E709300446.jpeg
8323029A-970E-4BEA-833E-77E709300446.jpeg
Attachment 2: C52735BB-0C56-41A1-B731-678CDDCEC921.jpeg
C52735BB-0C56-41A1-B731-678CDDCEC921.jpeg
Attachment 3: 3F8A0B3B-5C7C-4876-B3A7-332F560D554D.jpeg
3F8A0B3B-5C7C-4876-B3A7-332F560D554D.jpeg
Attachment 4: 3DC88B6A-4213-4ABD-A890-7EC317D9EED0.jpeg
3DC88B6A-4213-4ABD-A890-7EC317D9EED0.jpeg
Attachment 5: C0C4F9C0-9574-4A17-9414-B99D6E27025F.jpeg
C0C4F9C0-9574-4A17-9414-B99D6E27025F.jpeg
Attachment 6: A191E1DE-552F-42A5-BED7-246001248BBD.jpeg
A191E1DE-552F-42A5-BED7-246001248BBD.jpeg
  14347   Wed Dec 12 11:53:29 2018 aaronUpdateGeneralPower Outage

At 11:13 am there was a ~2-3 second interruption of all power at the 40m.

I checked that nobody was in any of the lab areas at the time of the outage.

I walked along both arms of the 40m and looked for any indicator lights or unusual activity. I took photos of the power supplies that I encountered, attached. I tried to be somewhat complete, but didn't have a list of things in mind to check, so I may have missed something. 

I noticed an electrical buzzing that seemed to emanate from one of the AC adapters on the vacuum rack. I've attached a photo of which one; the buzzing changes when I touch the case of the adapter. I did not modify anything on the vacuum rack. There is also 

Most of the cds channels are still down. I am going through the wiki for what to log when the power goes off, and will follow the procedures here to get some useful channels.

Attachment 1: IMG_0033.HEIC
Attachment 2: IMG_1027.HEIC
Attachment 3: IMG_2605.HEIC
  14349   Thu Dec 13 01:26:34 2018 gautamUpdateGeneralPower Outage recovery

[koji, gautam]

After several combinations of soft/hard reboots for FB, FEs and expansion chassis, we managed to recover the nominal RTCDS status post power outage. The final reboots were undertaken by the rebootC1LSC.sh script while we went to Hotel Constance. Upon returning, Koji found all the lights to be green. Some remarks:

  1. It seems that we need to first turn on FB
    • Manually start the open-mx and mx services using
      sudo systemctl start open-mx.service 
      sudo systemctl start mx.service
    • Check that the system time returned by gpstime matches the gpstime reported by internet sources (a quick comparison sketch is included after this list).
    • Manually start the daqd processes using
      sudo systemctl start daqd_*
  2. Then fully power cycle (including all front and rear panel power switches/cables) the FEs and the expansion chassis.
    • This seems to be a necessary step for models run on c1sus (as reported by the CDS MEDM screen) to pick up the correct system time (the FE itself seems to pick up the correct time; not sure what's going on here).
    • This was necessary to clear 0x4000 errors.
  3. Power on the expansion chassis.
  4. Power on the FE.
  5. Start the RTCDS models in the usual way
    • For some reason, there is a 1 second mismatch between the gpstime returned on the MEDM screen for a particular CDS model status, and that in the terminal for the host machine.
    • This in itself doesn't seem to cause any timing errors. But see remark about c1sus above in #2.
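Regarding the gpstime check in item 1: one quick way to compare FB's notion of GPS time against its NTP-synced UNIX clock is something like the following (a sketch, assuming the python gpstime package with gpsnow()/unix2gps() is available on FB; the final comparison against an internet source is still done by hand):

import time
import gpstime

local_gps = gpstime.gpsnow()                  # GPS time from the local clock + leap-second table
from_unix = gpstime.unix2gps(time.time())     # same instant converted from UNIX time

print("gpsnow():         %.1f" % local_gps)
print("unix2gps(time()): %.1f" % from_unix)
# compare either number against a web GPS time source; a discrepancy of more than
# ~1 s suggests the FB clock or leap-second data is off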

The PSL (Edwin) remains in an interlock-triggered state. We are not sure what is causing this, but the laser cannot be powered on until this is resolved.

  14350   Thu Dec 13 10:03:07 2018 ChubUpdateGeneralOMC chamber

Bob, Aaron, and I removed the door from the OMC chamber this morning.  Everything went well.
