  40m Log, Page 205 of 326
ID   Date   Author   Type   Category   Subject
  14192   Tue Sep 4 10:14:11 2018   gautam   Update   CDS   CDS status update

c1lsc crashed again. I've contacted Rolf/JHanks for help since I'm out of ideas on what can be done to fix this problem.

Quote:

Starting c1cal now, let's see if the other c1lsc FE models are affected at all... Moreover, since MC1 seems to be well-behaved, I'm going to restore the nominal eurocrate configuration (sans extender board) tomorrow.

  14194   Thu Sep 6 14:21:26 2018   gautam   Update   CDS   ADC replacement in c1lsc expansion chassis

Todd E. came by this morning and gave us (i) 1x new ADC card and (ii) 1x roll of 100m (2017 vintage) PCIe fiber. This afternoon, I replaced the old ADC card in the c1lsc expansion chassis, and have returned the old card to Todd. The PCIe fiber replacement is a more involved project (Steve is acquiring some protective tubing to route it from the FE in 1X6 to the expansion chassis in 1Y3), but hopefully the problem was the ADC card with the red indicator light, and replacing it has solved the issue. CDS is back to what is now the nominal state (Attachment #1) and Yarm is locked for Jon to work on his IFO coupling study. We will monitor the stability in the coming days.

Quote:

(i) to replace the old generation ADC card in the expansion chassis which has a red indicator light always on and (ii) to replace the PCIe fiber (2010 make) running from the c1lsc front-end machine in 1X6 to the expansion chassis in 1Y3, as the manufacturer has suggested that pre-2012 versions of the fiber are prone to failure. We will do these opportunistically and see if there is any improvement in the situation.

Attachment 1: CDSoverview.png
  14195   Fri Sep 7 12:35:14 2018   gautam   Update   CDS   ADC replacement in c1lsc expansion chassis

Looks like the ADC was not to blame, same symptoms persist.

Quote:

The PCIe fiber replacement is a more involved project (Steve is acquiring some protective tubing to route it from the FE in 1X6 to the expansion chassis in 1Y3), but hopefully the problem was the ADC card with red indicator light, and replacing it has solved the issue.

Attachment 1: Screenshot_from_2018-09-07_12-34-52.png
  14198   Mon Sep 17 12:28:19 2018   gautam   Update   IOO   PMC and IMC relocked, WFS inputs turned off

The PMC and IMC were unlocked. Both were re-locked, and alignment of both cavities were adjusted so as to maximize MC2 trans (by hand, input alignment to PMC tweaked on PSL table, IMC alignment tweaked using slow bias voltages). I disabled the inputs to the WFS loops, as it looks like they are not able to deal with the glitching IMC suspensions. c1lsc models have crashed again but I am not worrying about that for now.

9pm: The alignment is wandering all over the place so I'm just closing the PSL shutter for now.

  14202   Thu Sep 20 11:29:04 2018   gautam   Update   CDS   New PCIe fiber housed

[steve, yuki, gautam]

The plastic tubing/housing for the fiber arrived a couple of days ago. We routed ~40m of fiber through roughly that length of the tubing this morning, using some custom implements Steve sourced. To make sure we didn't damage the fiber during this process, I'm now testing the vertex models with the plastic tubing just routed casually (= illegally) along the floor from 1X4 to 1Y3 (NOTE THAT THE WIKI PAGE DIAGRAM IS OUT OF DATE AND NEEDS TO BE UPDATED), and have plugged in the new fiber to the expansion chassis and the c1lsc front end machine. But I'm seeing a DC error (0x4000), which is indicative of some sort of timing error (Attachment #1) **. Needs more investigation...

Pictures + more procedural details + proper routing of the protected fiber along cable trays after lunch. If this doesn't help the stability problem, we are out of ideas again, so fingers crossed...

** In the past, I have been able to fix the 0x4000 error by manually rebooting fb (simply restarting the daqd processes on fb using sudo systemctl restart daqd_* doesn't seem to fix the problem). Sure enough, this seems to have done the job this time as well (Attachment #2). So my initial impression is that the new fiber is functioning alright.

Quote:

The PCIe fiber replacement is a more involved project (Steve is acquiring some protective tubing to route it from the FE in 1X6 to the expansion chassis in 1Y3)

Attachment 1: PCIeFiberSwap.png
Attachment 2: PCIeFiberSwap_FBrebooted.png
  14203   Thu Sep 20 16:19:04 2018   gautam   Update   CDS   New PCIe fiber install postponed to tomorrow

[steve, gautam]

This didn't go as smoothly as planned. While there were no issues with the new fiber over the ~3 hours that I left it plugged in, I didn't realize the fiber has distinct ends for the "HOST" and "TARGET" (-5 points to me I guess). So while we had plugged in the ends correctly (by accident) for the pre-lunch test, we switched the ends while routing the fiber on the overhead cable tray (because the "HOST" end of the cable was close to the reel, and we felt it would be easier to do the routing the other way around).

Anyway, we will fix this tomorrow. For now, the old fiber was re-connected, and the models are running. IMC is locked.

Quote:

Pictures + more procedural details + proper routing of the protected fiber along cable trays after lunch. If this doesn't help the stability problem, we are out of ideas again, so fingers crossed...

  14206   Fri Sep 21 16:46:38 2018   gautam   Update   CDS   New PCIe fiber installed and routed

[steve, koji, gautam]

We took another pass at this today, and it seems to have worked - see Attachment #1. I'm leaving CDS in this configuration so that we can investigate stability. IMC could be locked. However, due to the vacuum slow machine having failed, we are going to leave the PSL shutter closed over the weekend.

Attachment 1: PCIeFiber.png
Attachment 2: IMG_5878.JPG
  14207   Fri Sep 21 16:51:43 2018   gautam   Update   VAC   c1vac1 is unresponsive

Steve pointed out that some of the vacuum MEDM screen fields were reporting "NO COMM". Koji confirmed that this is a c1vac1 problem, likely the same as reported here and can be fixed using the same procedure.

However, Steve is worried that the interlock won't kick in in case of a vacuum emergency, so we are leaving the PSL shutter closed over the weekend. The problem will be revisited on Monday.

  14215   Mon Sep 24 15:06:10 2018   gautam   Update   VAC   c1vac1 reboot + TP1 controller replacement

[steve, gautam]

Following the procedure in this elog, we effected a reset of the vacuum slow machines. Usually, I just turn the key on these crates to do a power cycle, but Steve pointed out that for the vacuum machines, we should only push the "reset" button.

While TP1 was spun down, we took the opportunity to replace the TP1 controller with a spare unit the company has sent us for use while our unit is sent to them for maintenance. The procedure was in principle simple (I only list the additional ones, for the various valve closures, see the slow machine reset procedure elog):

  • Turn power off using switch on rear.
  • Remove 4 connecting cables on the back.
  • Switch controllers.
  • Reconnect 4 cables on the back panel.
  • Turn power back on using switch on rear.

However, we were foiled by a Philips screw on the DB37 connector labelled "MAG BRG", whose head was completely stripped. We had to make a cut in this screw using a saw blade, and use a flathead screwdriver to get this troublesome screw out. Steve suspects this is a metric gauge screw, and will request the company to send us a new one; we will replace it when re-installing the maintained controller. 

Attachments #1 and #2 show the Vacuum MEDM screen before and after the reboot respectively - evidently, the fields that were reading "NO COMM" now read numbers. Attachment #3 shows the main volume pressure during this work.

Quote:

The problem will be revisited on Monday.

Attachment 1: beforeReboot.png
Attachment 2: afterReboot.png
Attachment 3: CC1.png
  14222   Mon Oct 1 20:39:09 2018   gautam   Configuration   ASC   c1asy

We need to set up a copy of the c1asx model (which currently runs on c1iscex), to be named c1asy, on c1iscey for the green steering PZTs. The plan discussed at the meeting last Wednesday was to rename the existing model c1tst into c1asy, and recompile it with the relevant parts copied over from c1asx. However, I suspect this will create some problems related to the "dcuid" field in the CDS params block (I ran into this issue when I tried to use the dcuid for an old model which no longer exists, called c1imc, for the c1omc model).

From what I can gather, we should be able to circumvent this problem by deleting the .par file corresponding to the c1tst model living at /opt/rtcds/caltech/c1/target/gds/param/, renaming the model to c1asy, and recompiling it. But I thought I should post this here to check if anyone knows of other potential conflicts that will need to be managed before I start poking around and breaking things. Alternatively, there are plenty of cores available on c1iscey, so we could just set up a fresh c1asy model...

 
  • (To do: write code to automate the alignment control)
  14223   Mon Oct 1 22:20:42 2018   gautam   Update   SUS   Prototyping HV Bias Circuit

Summary:

I've been plugging away in Altium, prototyping the high-voltage bias idea; this is meant to be a progress update.

Details:

I need to get footprints for some of the more uncommon parts (e.g. PA95) from Rich before actually laying this out on a PCB, but in the meantime, I'd like feedback on (but not restricted to) the following:

  1. The top-level diagram: this is meant to show how all this fits into the coil driver electronics chain.
    • The way I'm imagining it now, this (2U) chassis will perform the summing of the fast coil driver output with the slow bias signal using some Dsub connectors (the existing slow path series resistance would simply be removed). 
    • The overall output connector (DB15) will go to the breakout board which sums in the bias voltage for the OSEM PDs and then to the satellite box.
    • The obvious flaw in summing in the two paths using a piece of conducting PCB track is that if the coil itself gets disconnected (e.g. we disconnect cable at the vacuum flange), then the full HV appears at TP3 (see pg2 of schematic). This gets divided down by the ratio of the series resistance in the fast path to slow path, but there is still the possibility of damaging the fast-path electronics. I don't know of an elegant design to protect against this.
  2. Ground loops: I asked Johannes about the Acromag DACs, and apparently they are single ended. Hopefully, because the Sorensens power Acromags, and also the eurocrates, we won't have any problems with ground loops between this unit and the fast path.
  3. High-voltage precautions: I think I've taken the necessary precautions in protecting against HV damage to the components / interfaced electronics using dual-diodes and TVSs, but someone more knowledgeable should check this. Furthermore, I wonder if a Molex connector is the best way to bring in the +/- HV supply onto the board. I'd have liked to use an SHV connector but can't find a compatible board-mountable connector.
  4.  Choice of HV OpAmp: I've chosen to stick with the PA95, but I think the PA91 has the same footprint so this shouldn't be a big deal.
  5.  Power regulation: I've adapted the power regulation scheme Rich used in D1600122 - note that the HV supply voltage doesn't undergo any regulation on the board, though there are decoupling caps close to the power pins of the PA95. Since the PA95 is inside a feedback loop, the PSRR should not be an issue, but I'll confirm with LTspice model anyways just in case.
  6. Cost: 
    • Each of the metal film resistors that Rich recommended costs ~$15.
    • The voltage rating on these demands that we have 6 per channel, and if this works well, we need to make this board for 4 optics.
    • The PA95 is ~$150 each, and presumably the high voltage handling resistors and capacitors won't be cheap.
    • Steve will update about his HV supply investigations (on a secure platform, NOT the elog), but it looks like even switching supplies cost north of $1200.
    • However, as I will detail in a separate elog, my modeling suggests that among the various technical noises I've modeled so far, coil driver noise is still the largest contribution. It actually exceeds the unsqueezed shot noise of ~8e-19 m/rtHz (for 1W input power and PRG 40 with 20ppm RT arm losses) by a smidge (~9e-19 m/rtHz), once we take into account the fast and slow path noises and the fact that we are not exactly Johnson noise limited.

I also don't have a good idea of what the PCB layer structure (2 layers? 3 layers? or more?) should be for this kind of circuit, I'll try and get some input from Rich.

*Updated with current noise (Attachment #2) at the output for this topology, with a series resistance of 25 kohm in this path. Modeling was done (in LTspice) with a noiseless 25 kohm resistor, and then I included the Johnson noise contribution of the 25k in quadrature. For this choice, we are below 1pA/rtHz from this path in the band we care about. I've also tried to estimate (Attachment #3) the contribution due to (assumed flat in ASD) ripple in the HV power supply (i.e. voltage rails of the PA95) to the output current noise; it seems totally negligible for any reasonable power supply spec I've seen, switching or linear.
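As a quick sanity check of the quadrature sum described above, here is a short sketch. The 0.3 pA/rtHz "modeled" figure is a made-up placeholder standing in for the LTspice result, not the actual number:

```python
import math

kB = 1.380649e-23   # Boltzmann constant [J/K]
T = 300.0           # assumed room temperature [K]
R = 25e3            # series resistance in the slow path [ohm]

# Johnson current noise of the 25 kohm series resistor [A/rtHz]
i_johnson = math.sqrt(4 * kB * T / R)

# placeholder for the modeled (noiseless-R) current noise from LTspice
i_model = 0.3e-12   # [A/rtHz], invented value for illustration

# total current noise: quadrature sum of model and Johnson contributions
i_total = math.sqrt(i_model**2 + i_johnson**2)
print(f"Johnson: {i_johnson * 1e12:.2f} pA/rtHz, total: {i_total * 1e12:.2f} pA/rtHz")
```

For a 25 kohm resistor at room temperature the Johnson contribution comes out around 0.8 pA/rtHz, so a total below 1 pA/rtHz is consistent with the statement above.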

Attachment 1: CoilDriverBias.pdf
Attachment 2: currentNoise.pdf
Attachment 3: PSRR.pdf
  14225   Tue Oct 2 23:57:16 2018   gautam   Update   PonderSqueeze   Squeezing scenarios

[kevin, gautam]

We have been working on double checking the noise budget calculations. We wanted to evaluate the amount of squeezing for a few different scenarios that vary in cost and time. Here are the findings:

Squeezing scenarios

Sqz [dBvac]   fmin [Hz]   PPRM [W]   PBS [W]   TPRM [%]   TSRM [%]
     -0.41         215        0.8        40      5.637      9.903
     -0.58         230        1.7        80      5.637      9.903
     -1.05         250        1.7       150      1         17
     -2.26         340       10         900      1         17

All calculations done with

  • 4.5kohm series resistance on ETMs, 15kohms on ITMs, 25kohm on slow path on all four TMs.
  • Detuning of SRC = -0.01 deg.
  • Homodyne angle = 89.5 deg.
  • Homodyne QE = 0.9. 
  • Arm losses is 20ppm RT.
  • LO beam assumed to be extracted from PR2 transmission, and is ~20ppm of circulating power in PRC.

Scenarios:

  1. Existing setup, new RC folding mirrors for PRG of ~45.
  2. Existing setup, send Innolight (Edwin) for repair (= diode replacement?) and hope we get 1.7 W on back of PRM.
  3. Repair Innolight, new PRM and SRM, former for higher PRG, latter for higher DARM pole.
  4. Same as #3, but with 10 W input power on back of PRM (i.e. assuming we get a fiber amp).

Remarks:

  • The errors on the small dB numbers are large - a 1% change in model parameters (e.g. arm losses, PRG, coil driver noise etc.) can mean no observable squeezing. 
  • Actually, this entire discussion is moot unless we can get the RIN of the light incident on the PRM lower than the current level (estimated from MC2 transmission, filtered by CARM pole and ARM zero) by a factor of 60dB.
    • This is because even if we have 1mW contrast defect light leaking through the OMC, the beating of this field (in the amplitude quadrature) with the 20mW LO RIN (also almost entirely in the amplitude quad) yields significant noise contribution at 100 Hz (see Attachment #1).
    • Actually, we could have much more contrast defect leakage, as we have not accounted for asymmetries like arm loss imbalance.
    • So we need an ISS that has 60dB of gain at 100 Hz. 
    • The requirement on LO RIN is consistent with Eq 12 of this paper.
  • There is probably room to optimize SRC detuning and homodyne angle for each of these scenarios - for now, we just took the optimized combo for scenario #1 for evaluating all four scenarios.
  • OMC displacement noise seems to only be at the level of 1e-22 m/rtHz, assuming that the detuning for s-pol and p-pol is ~30 kHz if we were to lock at the middle of the two resonances.
    • This assumes 0.02 deg difference in amplitude reflectivity b/w polarizations per optic, other parameters taken from aLIGO OMC design numbers.
    • We took OMC displacement noise from here.

Main unbudgeted noises:

  • Scattered light.
  • Angular control noise reinjection (not sure about the RP angular dynamics for the higher power yet).
  • Shot noise due to vacuum leaking from sym port (= DC contrast defect), but we expect this to not be significant at the level of the other noises in Atm #1.
  • Osc amp / phase.
  • AUX DoF cross coupling into DARM readout.
  • Laser frequency noise (although we should be immune to this because of our homodyne angle choice).

Threat matrix has been updated.

Attachment 1: PonderSqueeze_NB_LORIN.pdf
  14233   Fri Oct 5 17:47:55 2018   gautam   Configuration   ASC   Y-end table upgrade

What about just copying the Xend layout? I think it has good MM (per calculations), reasonable (in)sensitivity to component positions, good Gouy phase separation, and I think it is good to have the same layout at both ends. Since the green waist has the same size and location in the doubling crystal, it should be possible to adapt the X end solution to the Yend table pretty easily I think.

Quote:

The setup I designed is here. It can bring 100% mode-matching and good separation of the TEM01 degrees of freedom, however I found a problem. A picture of the setup is in Attachment #3. You can see the reflection angle at Y7 and Y8 is not appropriate. I will consider the schematic again.

  14235   Sun Oct 7 16:51:03 2018   gautam   Configuration   LSC   Yarm triggering changed

To facilitate Yuki's alignment of the EY green beam into the Yarm cavity, I have changed the LSC triggering and PowNorm settings to use only the reflected light from the cavity to do the locking of Arm Cavity length to PSL. Running the configure script should restore the usual TRY triggering settings. Also, the X arm optics were macroscopically misaligned in order to be able to lock in this configuration.

  14238   Mon Oct 8 18:56:52 2018   gautam   Configuration   ASC   c1asx filter coefficient file missing

While pointing Yuki to the c1asx servo system, I noticed that the filter file for c1asx is missing in the usual chans directory. Why? Backups for it exist in the filter_archive subdirectory, but there is no current file. Clearly this doesn't seem to affect the realtime code execution, as the ASX model seems to run just fine. I copied the latest backup version from the archive area into the chans directory for now.

  14239   Tue Oct 9 16:05:29 2018   gautam   Configuration   ASC   c1tst deleted, c1asy deployed.

Setting up c1asy:

  • Backed up old c1tst.mdl as c1tst_old_bak.mdl in /opt/rtcds/userapps/release/cds/c1/models
  • Copied the c1tst model to /opt/rtcds/userapps/release/isc/c1/models/c1asy.mdl as this is where the c1asx.mdl file resides.
  • Backed up original c1rfm.mdl as c1rfm_old.mdl in /opt/rtcds/userapps/release/cds/c1/models (since the old c1tst had an RFM block which is unnecessary).
  • Deleted offending RFM block from c1rfm.mdl.
  • Recompiled and re-installed c1rfm.mdl. Model has not yet been restarted, as I'd like suspension watchdogs to be shutdown, but c1susaux EPICS channels are presently not responsive.
  • Removed c1tst model (C-node91) from /opt/rtcds/caltech/c1/target/gds/param/testpoints.
  • Removed /opt/rtcds/caltech/c1/target/gds/param/tpchn_c1tst.par (at this point, DCUID 91 is free for use by c1asy).
  • Moved c1tst line in /opt/rtcds/caltech/c1/target/daqd/master to "old model definitions models" section.
  • Added /opt/rtcds/caltech/c1/target/gds/param/tpchn_c1asy.par to the master file.
  • Edited /diskless/root.jessie/etc/rtsystab to allow c1asy to be run on c1iscey.
  • Finally, I followed the instructions here to get the channels into frames and make all the indicators green.

Now Yuki can work on copying the simulink model (copy c1asx structure) and implementing the autoalignment servo.

Attachment 1: CDSoverview_ASY.png
  14264   Wed Oct 31 17:54:25 2018   gautam   Update   VAC   CC1 hornet power connection restored

Steve reported to me that the CC1 Hornet gauge was not reporting the IFO pressure after some cable tracing at EX. I found that the power to the unit had been accidentally disconnected. I re-connected the power and manually turned on the HV on the CC gauge (perhaps this can be automated in the new vacuum paradigm). IFO pressure of 8e-6 torr is being reported now.

Attachment 1: cc1_Hornet.png
  14269   Fri Nov 2 19:25:16 2018   gautam   Update   Computer Scripts / Programs   loss measurements

Some facts which should be considered when doing this measurement and the associated uncertainty:

  1. When Johannes did the measurement, there was no light from the AS port diverted to the OMC. This represents ~70% loss in the absolute amount of power available for this measurement. I estimate ~1W*Tprm * Ritm * Tbs * Rbs * Tsrm * OMCsplit ~ 300uW which should still be plenty, but the real parameter of interest is the difference in reflected power between locked/no cavity situations, and how that compares to the RMS of the scope readout. For comparison, the POX DC light level is expected to be ~20uW, assuming a 600ppm AR coating on the ITMs.
  2. Even though the reflection from the arm not being measured may look completely misaligned on the AS camera, the PDA520 used at the AS port has a large active area, so one must check on the oscilloscope that the other arm's beam is truly misaligned off the photodiode, to avoid interference effects artificially bloating the uncertainty.
  3. The PDA255 monitoring the MC transmission has a tiny active area. I'm not sure the beam has been centered on it anytime recently. If the beam is not well centered on that PD, and you normalize the measurements by "MC Transmission", you're likely to end up with larger error.
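The power estimate in point 1 can be checked with a quick back-of-envelope script. All optic parameters below are assumed/representative values for illustration, not measurements:

```python
# Back-of-envelope check of the power available at the AS port.
# All optic parameters are assumed/representative, not measured:
T_prm = 0.05637    # PRM transmission
R_itm = 0.986      # ITM reflectivity (assuming T_itm ~ 1.4%)
T_bs = 0.5         # 50/50 main beamsplitter
R_bs = 0.5
T_srm = 0.09903    # SRM transmission
omc_split = 0.30   # fraction of AS light NOT diverted to the OMC path

P_in = 1.0         # [W] power incident on PRM
P_as = P_in * T_prm * R_itm * T_bs * R_bs * T_srm * omc_split
print(f"Estimated AS-port power: {P_as * 1e6:.0f} uW")   # same ballpark as ~300 uW
```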
Quote:

This result has about 40% uncertainty in XARM and 33% in YARM (so big... no).

  14275   Tue Nov 6 15:23:48 2018   gautam   Update   IOO   IMC problematic

The IMC has been misbehaving for the last 5 hours. Why? I turned the WFS servos off. AFAIK, Aaron was the last person to work on the IFO, so I'm not taking any further debugging steps so as to not disturb his setup.

Attachment 1: MCwonky.png
  14279   Tue Nov 6 23:19:06 2018   gautam   Update   VAC   c1vac1 FAIL lights on (briefly)

Jon and I stuck an extender card into the eurocrate at 1X8 earlier today (~5pm PT), to see if the box was getting +24V DC from the Sorensen or not. Upon sticking the card in, the FAIL LEDs on all the VME cards came on. We immediately removed the extender card. Without any intervention from us, after ~1 minute, the FAIL LEDs went off again. Judging by the main volume pressure (Attachment #1) and the Vacuum MEDM screen (Attachment #2), this did not create any issues, and the c1vac1 computer is still responsive.

But Steve can perhaps run a check in the AM to confirm that this activity didn't break anything.

Is there a reason why extender cards shouldn't be stuck into eurocrates?

Attachment 1: Screenshot_from_2018-11-06_23-18-23.png
Attachment 2: Screenshot_from_2018-11-06_23-19-26.png
  14283   Wed Nov 7 19:20:53 2018   gautam   Update   Computers   Paola Battery Error

The VEA vertex laptop, paola, has a flashing orange indicator which I take to mean some kind of battery issue. When the laptop is disconnected from its AC power adaptor, it immediately shuts down. So this machine is kind of useless for its intended purpose of being a portable computer we can work with at the optical tables. The actual battery diagnostics (using upower) don't report any errors. 

  14284   Wed Nov 7 19:42:01 2018   gautam   Update   General   IFO checkup and DRMI locking prep

Earlier today, I rebooted a few unresponsive VME crates (susaux, auxey).

The IMC has been unhappy for a couple of days - the glitches in the MC suspensions are more frequent. I reset the dark offsets, minimized MCREFL by hand, and then re-centered the beam on the MC2 Trans QPD. In this config, the IMC has been relatively stable today, although judging by the control room StripTool WFS control signal traces, the suspension glitches are still happening. Since we have to fix the attenuator issue anyways soon, we can do a touch-up on IMC WFS.

I removed the DC PD used for loss measurements. I found that the AS beam path was disturbed - the alignment needs to be redone, which just makes it more work to get back to IFO locking, as I have to check the alignment onto the AS55 and AS110 PDs.

Single arm locking worked with minimal effort - although the X arm dither alignment doesn't do the intended job of maximizing the transmission. Needs a checkup.

PRMI locking (carrier resonant) was also pretty easy. Stability of the lock is good - locks held for ~20 minutes at a time, and only broke because I was mucking around. However, when the carrier is resonant, I notice a smeared scatter pattern on the ITMX camera that I don't remember from before. I wonder if the FF idea can be tested in the simpler PRMI config.

After recovering these two simpler IFO configurations, I improved the cavity alignment by hand and with the ASS servos that work. Then I re-centered all the Oplev beams onto their respective QPDs and saved the alignment offsets. I briefly attempted DRMI locking, but had little success; I'm going to try a little later in the evening, so I'm leaving the IFO with the DRMI flashing about, LSC mode off.

  14285   Wed Nov 7 23:07:11 2018   gautam   Update   LSC   DRMI locking recovered

I had some success today. I hope that the tweaks I made will allow working with the DRMI during the day as well, though it looks like the main limiting factor in lock duty cycle is angular stability of the PRC.

  • Since there has been some change in the light levels / in vacuum optical paths, I decided to be a bit more systematic.
  • Initial guess of locking gains / demod phases was what I had last year.
  • Then I misaligned SRM, and locked PRMI, for the sideband resonant in the PRC (but still no arm cavities, and using 1f Refl error signals).
  • Measured loop TFs, adjusted gains, re-enabled boosts.
  • Brought the SRM back into the picture. Decided to trigger SRCL loop on AS110I rather than the existing POP22I (because why should 2f1 signal buildup carry information about SRCL?). New settings saved to the configure script. Reduced MICH gain to account for the SRC cavity gain.
  • Re-measured loop TFs, re-adjusted gains. More analysis about the state of the loops tomorrow, but all loops have UGF ~100-120 Hz.
  • Ran some sensing lines - need to check my sensing matrix making script, and once I get the matrix elements, I can correct the error signal demod phasing as necessary.

[Attachment #1]: Repeatable and reliable DRMI locks tonight, stability is mainly limited by angular glitches - I'm not sure yet if these are due to a suspect Oplev servo on the PRM, or if they're because of the tip-tilt PR2/PR3/SR2/SR3.

[Attachment #2]: A pass at measuring the TF from SRCL error point to MICH error point via control noise re-injection. I was trying to measure down to 40 Hz, but lost the lock, and am calling it for the night.

[Attachment #3]: Coherence between PRM oplev error point and beam spot motion on POP QPD.

Note that the MICH actuation is not necessarily optimally de-coupled by actuating on the PRM and SRM yet (i.e. the latter two elements of the LSC output matrix are not precisely tuned yet).

What is the correct way to make feedforward filters for this application? Swept-sine transfer function measurement? Or drive broadband noise at the SRCL error point and then do time-domain Wiener filter construction using SRCL error as the witness and MICH error as the target? Or some other technique? Does this even count as "feedforward" since the sensor is not truly "outside" the loop?
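For the time-domain Wiener option raised above, the standard recipe is to solve the Wiener-Hopf normal equations for an FIR filter from the witness auto-correlation and the witness-to-target cross-correlation. A minimal sketch on synthetic data (the coupling filter taps and noise levels are invented for illustration, not taken from the IFO):

```python
import numpy as np
from scipy.linalg import solve_toeplitz
from scipy.signal import lfilter

def fir_wiener(witness, target, ntaps):
    """Solve the Wiener-Hopf normal equations for a causal FIR filter h
    that minimizes |target - h * witness|^2 in the time domain."""
    n = len(witness)
    # biased auto-correlation of the witness, and cross-correlation to the target
    r = np.array([np.dot(witness[:n - k], witness[k:]) for k in range(ntaps)]) / n
    p = np.array([np.dot(witness[:n - k], target[k:]) for k in range(ntaps)]) / n
    return solve_toeplitz((r, r), p)

# synthetic demo: target = FIR-filtered witness + independent sensor noise
rng = np.random.default_rng(0)
w = rng.standard_normal(20000)
true_h = np.array([0.5, -0.3, 0.1])                 # invented coupling filter
t = lfilter(true_h, 1.0, w) + 0.01 * rng.standard_normal(20000)

h = fir_wiener(w, t, ntaps=8)
residual = t - lfilter(h, 1.0, w)                   # "cleaned" target
print(np.std(t), np.std(residual))
```

In this framing the witness would be the SRCL error (or control) signal and the target the MICH error signal; whether that counts as true feedforward given the in-loop sensor is exactly the open question above.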

Attachment 1: Screenshot_from_2018-11-07_23-05-58.png
Attachment 2: SRCL2MICH_crosscpl.pdf
Attachment 3: PRCangularCoh_rot.pdf
  14286   Fri Nov 9 15:00:56 2018   gautam   Update   IOO   No IFO beam as TT1 UL hijacked for REFL55 check

This problem resurfaced. I'm doing the debugging.

6:30pm - "Solved" using the same procedure of stepping through the whitening gains with a small (10 DAC cts pk) signal applied. Simply stepping through the gains with input grounded doesn't seem to do the trick.

Attachment 1: REFL55_wht_chk.png
  14288   Sat Nov 10 17:32:33 2018   gautam   Update   LSC   Nulling MICH->PRCL and MICH->SRCL

With the DRMI locked, I drove a line in MICH using the sensing matrix infrastructure. Then I looked at the error points of MICH, PRCL and SRCL. Initially, the sensing line oscillator output matrix for MICH was set to drive only the BS. Subsequently, I changed the --> PRM and --> SRM matrix elements until the line height in the PRCL and SRCL error signals was minimized (i.e. the change to PRCL and SRCL due to the BS moving, which is a geometric effect, is cancelled by applying the opposite actuation to the PRM/SRM respectively). Then I transferred these to the LSC output matrix (old numbers in brackets).

MICH--> PRM = -0.335 (-0.2655)

MICH--> SRM = -0.35 (+0.25)

I then measured the loop TFs - all 3 loops had UGFs around 100 Hz, coinciding with the peaks of the phase bubbles. I also ran some sensing lines and did a sensing matrix measurement, Attachment #1 - it looks similar to what I have obtained in the past, although the relative angles between the DoFs make no sense to me. I guess the AS55 demod phase can be tuned up a bit.

The demodulation was done offline - I mixed the time series of the actuator and sensor signals with a "local oscillator" cosine wave - but instead of using the entire 5 minute time series and low-passing the mixer output, I divvied up the data into 5 second chunks, windowed with a Tukey window, and have plotted the mean value of the resulting mixer output.
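The chunked offline demodulation described above might look something like this. This is a minimal sketch on synthetic data; the sampling rate, line amplitude and noise level are made up, not the actual measurement values:

```python
import numpy as np
from scipy.signal import windows

def chunked_demod(x, fs, f_line, t_chunk=5.0):
    """Demodulate x at f_line; return one complex amplitude per
    Tukey-windowed chunk of length t_chunk seconds."""
    t = np.arange(len(x)) / fs
    mixer = x * np.exp(-2j * np.pi * f_line * t)   # complex local oscillator
    n = int(t_chunk * fs)
    win = windows.tukey(n)
    amps = []
    for k in range(len(x) // n):
        seg = mixer[k * n:(k + 1) * n] * win
        amps.append(2 * seg.mean() / win.mean())   # 2x: single-sided amplitude
    return np.array(amps)

# synthetic demo: a line of amplitude 1e-3 buried in white noise
fs, f0 = 2048.0, 83.13
t = np.arange(int(60 * fs)) / fs
rng = np.random.default_rng(1)
x = 1e-3 * np.cos(2 * np.pi * f0 * t + 0.3) + 1e-4 * rng.standard_normal(len(t))
a = chunked_demod(x, fs, f0)
print(np.abs(a).mean())   # recovered per-chunk line amplitude
```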

Unrelated to this work: I re-aligned the PMC on the PSL table, mostly in Pitch.

Attachment 1: sensMat_2018-11-10.pdf
  14292   Tue Nov 13 18:09:24 2018   gautam   Update   LSC   Investigation of SRCL-->MICH coupling

Summary:

I've been looking into the cross-coupling from the SRCL loop control point to the Michelson error point.

[Attachment #1] - Swept sine measurement of transfer function from SRCL_OUT_DQ to MICH_IN1_DQ. Details below.

[Attachment #2] - Attempt to measure time variation of coupling from SRCL control point to MICH error point. Details below.

[Attachment #3] - Histogram of the data in Attachment #2.

[Attachment #4] - Spectrogram of the duration in which data in #2 and #3 were collected, to investigate the occurrence of fast glitches.

Hypothesis: (so that people can correct me where I'm wrong - 40m tests are on DRMI so "MICH" in this discussion would be "DARM" when considering the sites)

  • SRM motion creates noise in MICH.
  • The SRM motion may be naively decomposed into two contributions -
    • Category #1: "sensing noise induced" motion, which comes about because of the SRCL control loop moving the SRM due to shot noise (or any other sensing noise) of the SRCL PDH photodiode, and
    • Category #2: all other SRM motion.
  • We'd like to cancel the former contribution from DARM.
  • The idea is to measure the transfer function from SRCL control point to the MICH error point. Knowing this, we can design a filter so that the SRCL control signal is filtered and summed in at the MICH error point to null the SRCL coupling to MICH.
  • Caveats/questions:
    • Introducing this extra loop actually increases the coupling of the "all other" category of SRM motion to MICH. But the hypothesis is that the MICH noise at low frequencies, which is where this increased coupling is expected to matter, will be dominated by seismic/other noise contributions, and so we are not actually degrading the MICH sensitivity.
    • Knowing the noise budgets for MICH and SRCL, can we AC couple the feedforward loop such that we are only doing stuff at frequencies where Category #1 is the dominant SRCL noise?
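As a sanity check of the cancellation idea: the achievable suppression is set by how well the FF filter matches the true coupling. With toy numbers (stand-ins, not the measured TFs), a 10% filter error caps the cancellation at 20 dB:

```python
import numpy as np

# Toy SRCL-control -> MICH coupling: pendulum-like rolloff above 1 Hz.
# These shapes are stand-ins for illustration, not the measured TF.
f = np.logspace(0, 3, 400)
C = 1.0 / (1.0 + (f / 1.0) ** 2)

# Feedforward filter = -(estimate of C); give the estimate a 10% gain error.
C_hat = 1.10 * C
residual = np.abs(C - C_hat)        # coupling left after the FF path sums in

suppression_dB = 20 * np.log10(np.abs(C) / residual)
# -> 20 dB at every frequency: a fractional error eps limits suppression to 20*log10(1/eps)
```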

Measurement details and next steps:

Attachment #1

  • This measurement was done using DTT swept sine.
  • Plotted TF is from SRCL_OUT to MICH_IN, so the SRCL loop shape shouldn't matter.
  • I expect the pendulum TF of the SRM to describe this shape - I've overlaid a 1/f^2 shape, it's not quite a fit, and I think the phase profile is due to a delay, but I didn't fit this.
  • I had to average at each datapoint for ~10 seconds to get coherence >0.9.
  • The whole measurement takes a few minutes.

Attachments #2 and #3

  • With the DRMI locked, I drove a sine wave at 83.13 Hz at the SRCL error point using awggui.
  • I ramped up the amplitude till I could see this line with an SNR of ~10 in the MICH error signal.
  • Then I downloaded ~10mins of data, demodulated it digitally, and low-passed the mixer output.
  • I had to use a pretty low corner frequency (0.1 Hz, second-order Butterworth) on the LPF, as otherwise the data was too noisy.
  • Even so, the observed variation seems too large - can the coupling really change by x100?
  • The scatter is huge - part of the problem is that there are numerous glitches while the DRMI is locked.
  • As discussed at the meeting today, I'll try another approach of doing multiple swept-sines and using Craig's TFplotter utility to see what scatter that yields.

Attachment #4

  • Spectrogram generated with 1 second time strides, for the duration in which the 83 Hz line was driven.
  • There are a couple of large fast glitches visible.
Attachment 1: TF_sweptSineMeas.pdf
TF_sweptSineMeas.pdf
Attachment 2: digitalDemod.pdf
digitalDemod.pdf
Attachment 3: digitalDemod_hist.pdf
digitalDemod_hist.pdf
Attachment 4: DRMI_LSCspectrogram.pdf
DRMI_LSCspectrogram.pdf
  14293   Tue Nov 13 21:53:19 2018 gautamUpdateCDSRFM errors

This problem resurfaced, which I noticed when I couldn't get the single arm locks going.

The fix was NOT restarting the c1rfm model, which just brought the misery of all vertex FEs crashing and the usual dance to get everything back.

Restarting the sender models (i.e. c1scx and c1scy) seems to have done the trick though.

Attachment 1: RFMerrors.png
RFMerrors.png
  14298   Fri Nov 16 00:47:43 2018 gautamUpdateLSCMore DRMI characterization

Summary:

  • More DRMI characterization was done.
  • I was working on trying to improve the stability of the DRMI locks as this is necessary for any serious characterization.
  • Today I revived the PRC angular feedforward - this was a gamechanger, the DRMI locks were much more stable. It's probably worth spending some time improving the POP LSC/ASC sensing optics/electronics looking towards the full IFO locking.
  • Quantitatively, the angular fluctuations as witnessed by the POP QPD are lowered by ~2x with the FF on compared to off [Attachment #1, references are with FF off, live traces are with FF on].
  • The first DRMI lock I got is already running 15 mins, looking stable.
    • Update: Out of the ~1 hour I've tried DRMI locking tonight, >50 mins locked!
  • I think the filters can be retrained and this performance improved, something to work on while we are vented.
  • Ran sensing lines, measured loop TFs, analysis tomorrow, but I think the phasing of the 1f PDs is now okay.
    • MICH in AS55 Q, demod phase = -92deg, +6dB wht gain.
    • PRCL in REFL11 I, demod phase = +18 deg, +18dB wht gain.
    • SRCL in REFL55 I, demod phase = -175 deg, +18dB wht gain.
  • Also repeated the line in SRCL-->witness in MICH test.
    • At least 10 minutes of data available, but I'm still collecting since the lock is holding.
    • This time I drove the line at ~124 Hz with awggui, since this is more a regime where we are sensing noise dominated.

Prep for this work:

  • Reboots of c1psl, c1iool0, c1susaux.
  • Removed AS port PD loss measurement PD.
  • Initial alignment procedure as usual: single arms --> PRMI locked on carrier --> DRMI

I was trying to get some pics of the optics as a zeroth-level reference for the pre-vent loss with the single arms locked, but since our SL7 upgrade, the sensoray won't work anymore. I'll try fixing this during the daytime.

Attachment 1: PRCff.pdf
PRCff.pdf
Attachment 2: DRMI_withPRCff.png
DRMI_withPRCff.png
  14303   Sun Nov 18 00:59:33 2018 gautamUpdateGeneralVent prep

I've begun prepping the IFO for the vent, and completed most of the IFO related items on the checklist. The power into the MC has been cut, but the low-power autolocker has not been checked. I will finish up tomorrow and post the go ahead. PSL shutter is closed for tonight.

  14304   Sun Nov 18 17:09:02 2018 gautamUpdateGeneralVent prep

Vent prep

Following the checklist, I did these:

  • Both arms were locked to IR, TRY and TRX maximized using ASS.
  • GTRY and GTRX were also maximized.
  • ITM/ETM Oplevs centered with TRX/TRY maximized, PRM/SRM/BS Oplevs were centered once the DRMI was locked and aligned.
  • Attachment #1 summarizes the above 3 bullets.
  • Sensoray was made to work with Donatella (the Raspberry Pi video server idea is good, but although the sensoray drivers look to have installed correctly, the red light doesn't come on when the Sensoray unit is plugged into the RPi USB port, so I opted not to spend too much time on it for the moment).
  • Photos of all ports in various locked configurations are saved in /users/sensoray/SensorayCaptures/Nov2018Vent
  • PSL power into the IMC was cut from 1.07 W (measured after G&H mirror) to 97 mW. I opted to install a new HWP+PBS after the PMC to cut the power, so we don't have to fiddle around so much with the PMC locking settings [Attachment #3, this was the only real candidate location as the IMC wants s-polarized light].
  • 2" R=10% BS in the IMC REFL path was replaced with a 2" Y1 HR mirror, so there is no MCREFL till we turn the power back up.
  • IMC was locked.
  • Low power MC autolocker works [Attachment #2]. The reduction in MCREFL is because of me manually aligning the cavity, WFS servos are disabled in low power mode since there is no light incident on the WFS heads.
  • Updated the SUS driftmon values (though I'm not really sure how useful this is).
  • PSL shutter will remain closed, but I have not yet installed a manual beam block in the beam path on the PSL table.

@Steve & Chub, we are ready to vent tomorrow (Monday Nov 19). 

Attachment 1: VentPrepNov2018.png
VentPrepNov2018.png
Attachment 2: MCautolocker_lowPower.png
MCautolocker_lowPower.png
Attachment 3: IMG_7163.JPG
IMG_7163.JPG
  14307   Mon Nov 19 22:01:50 2018 gautamUpdateVACLoose nut on valve

As I was turning off the lights in the VEA, I heard a rattling sound from near the PSL enclosure. I followed it to a valve - I couldn't see a label on this valve in my brief effort to find one, but it is on the south-west corner of the IMC table, so maybe VABSSCI or VABSSCO? The power cable is somehow spliced with an attachment that looks to be bringing gas in/out of the valve (see Attachment #1), and the nut on the bottom was loose; the whole power cable + metal attachment was responsible for the rattling. I finger-tightened the nut and the sound went away.

Attachment 1: IMG_7171.JPG
IMG_7171.JPG
  14310   Tue Nov 20 13:13:01 2018 gautamUpdateVACIMC alignment is okay

I checked the IMC alignment following the vent, for which the manual beam block placed on the PSL table was removed. The alignment is okay, after minor touchup, the MC Trans was ~1200 cts which is roughly what it was pre-vent. I've closed the PSL shutter again.

  14313   Wed Nov 21 09:59:26 2018 gautamUpdateLSCLSC feedforward block diagram

Attachment #1 is a block diagram depicting the pathway by which the vertex DOF control signals can couple into DARM (adapted from a similar diagram in Gabriele's Virgo note on the subject). I've also indicated some points where noise can couple into either loop. In general, there are sensing noises that couple in at the error point of the loop, and actuation noises that couple in at the control point. In this linear picture, each block represents a (possibly time varying) transfer function. So we can write out the node-to-node transfer functions and evaluate the various couplings.

The motivation is to see if we can first simulate with some realistic noise and time-varying couplings (and then possibly test on the realtime system) the effectiveness of the filter denoted by "FF" in canceling out the shot noise from the auxiliary loop being re-injected into the DARM loop via the DARM sensor. Does this look correct?
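As a numerical stand-in for the block diagram (all shapes below are assumptions for illustration, not fits to the 40m), the re-injected sensing noise can be propagated through the loop algebra per frequency:

```python
import numpy as np

f = np.logspace(0, 3, 300)
s = 2j * np.pi * f
w0, Q = 2 * np.pi * 1.0, 5.0

# Assumed dynamics for illustration only:
P_aux = 1.0 / (1 + (s / w0) / Q + (s / w0) ** 2)   # aux plant: 1 Hz pendulum, Q=5
K_aux = 50.0 * (1 + 2 * np.pi * 5 / s)             # aux controller: gain + boost
G_aux = P_aux * K_aux                              # aux open-loop TF

# Sensing noise n at the aux error point shows up on the control signal as
# u = K/(1+G) * n, and reaches the DARM error point through the coupling C.
C = 1e-3 * P_aux                       # toy aux-control -> DARM coupling
n_to_darm = C * K_aux / (1 + G_aux)    # without feedforward

# FF path: an imperfect estimate of -C (5% gain error) summed at the DARM error point.
F = -1.05 * C
n_to_darm_ff = (C + F) * K_aux / (1 + G_aux)
# residual/original = 0.05 at every frequency: the FF filter quality sets the floor
```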

Attachment 1: IMG_7173.JPG
IMG_7173.JPG
  14314   Wed Nov 21 16:48:11 2018 gautamUpdateCOCEY mini cleanroom setup

With Chub's help, I've setup a mini cleanroom at EY - Attachment #1. The HEPA unit is running on high now. All surfaces were wiped with isopropanol, we can wipe everything down again on Monday and replace the foil.

Attachment 1: IMG_7174.JPG
IMG_7174.JPG
  14319   Mon Nov 26 17:16:27 2018 gautamUpdateSUSEY chamber work

[steve, rana, gautam]

  • PSL and EY 1064nm laser (physical) shutters on the head were closed so that we and sundance crew could work without laser safety goggles. EY oplev laser was also turned off.
  • Cylindrical heater setup removed:
    • heater wiring meant the heater itself couldn't be easily removed from the chamber
    • two lenses and Al foil cylinder removed from chamber, now placed on the mini-cleanroom table.
  • Parabolic heater is untouched for now. We can re-insert it once the test mass is back in, so that we can be better informed about the clipping situation.
  • ETMY removed from chamber.
    • EQ stops were engaged.
    • Pictures were taken
    • OSEMs were removed from cage, placed in foil holders.
    • Cage clamps were removed after checking that marker clamps were in place.
    • Optic was moved first to NW corner of table, then out of the vacuum onto the mini-cleanroom desk Chub and I had setup last week.
    • Hopefully there isn't an earthquake. EY has been marked as off-limits to avoid accidental bumping / catastrophic wire/magnet/optic breaking.
    • We sealed up the mini cleanroom with tape. F.C. cleaning tomorrow or at another opportune moment.
    • Light door was put back on for the evening.

Rana pointed out that the OSEM cabling, because of lack of a plastic shielding, is grounded directly to the table on which it is resting. A glass baking dish at the base of the seismic stack prevents electrical shorting to the chamber. However, there are some LEMO/BNC cables as well on the east side of the stack, whose BNC ends are just lying on the base of the stack. We should use this opportunity to think about whether anything needs to be done / what the influence of this kind of grounding is (if any) on actuator noise.

Steve also pointed out that we should replace the rubber pads which the vacuum chamber is resting on (Attachment #1, not from this vent, but just to indicate what's what). These serve the purpose of relieving small amounts of strain the chamber may experience relative to the beam tube, thus helping preserve the vacuum joints b/w chamber and tube. But after (~20?) years of being under compression, Steve thinks that the rubber no longer has any elasticity, and so should be replaced.

Attachment 1: IMG_5251.JPG
IMG_5251.JPG
  14324   Thu Nov 29 17:46:43 2018 gautamUpdateGeneralSome to-dos

[koji, gautam, jon, steve]

  • We suspect the analog voltage from the N2 pressure gauge is connected to the interfacing Omega controller with the 'wrong' polarity (i.e. pressure appears to rise over ~4 days and then fall rapidly, instead of the other way around). This should be fixed.
  • N2 check script logic doesn't seem robust. Indeed, it has not been sending out warning emails (threshold is set to 60 psi, it has certainly gone below this threshold even with the "wrong" polarity pressure gauge hookup). Probably the 40m list is rejecting the email because controls isn't a part of the 40m group.
  • Old frames have to be re-integrated from JETSTOR to the new FB in order to have long timescale lookback.
  • N2 cylinder pressure gauges (at the cylinder end) need a power supply - @ Steve, has this been purchased? If not, perhaps @ Chub can order it.
  • MEDM vacuum screen should be updated to have gate valves be a different color to the spring-loaded valves. Manual valve between TP1 and V1 should also be added.
  • P2, P3 and P4 aren't returning sensible values (they should all be reading ~760torr as is P1). @ Steve, any idea if these gauges are broken?
  • Hornet gauges (CC and Pirani) should be hooked up to the new vacuum system.
  • Add slow channels for the foreline pressures of TP2 & TP3 and for C1:Vac-IG1_status_pressure.
  14326   Fri Nov 30 19:37:47 2018 gautamUpdateLSCLSC feedforward block diagram

I wanted to set up an RTCDS model to understand this problem better. Attachment #1 is the simulink diagram of the signal flow. The idea will be to put in the appropriate filter shapes into the various filter blocks denoting the DARM and auxiliary DoF plants, controllers and actuators, and then use awggui / diaggui to inject some noises and see if in this idealized model I can achieve good subtraction. Then we can build up to applying a time varying cross coupling between DARM and the vertex DoF, and see how good the adaptive FF works. Still need to setup some MEDM screens to make working with the test system easier.

I figured c1omc would be the least invasive model to set this up on without risking losing any of our IR/green alignment references. Compile and install went smoothly, see Attachment #2. The c1omc model was clocking 4us before, now it's using 7us.

Attachment #3 shows the top level of the OMC model, while Attachment #4 shows the MEDM screen.

* Note to self: when closing a loop inside the realtime model, there has to be a delay block somewhere in the loop, else a compilation error is thrown.

Attachment 1: LSC_FF_tester.png
LSC_FF_tester.png
Attachment 2: Screenshot_from_2018-11-30_19-41-07.png
Screenshot_from_2018-11-30_19-41-07.png
Attachment 3: Screenshot_from_2018-12-10_15-31-23.png
Screenshot_from_2018-12-10_15-31-23.png
Attachment 4: SimLSC.png
SimLSC.png
  14328   Sun Dec 2 17:26:58 2018 gautamUpdateIMCIMC ringdown fitting

Recently we wondered at the meeting what the IMC round trip loss was. I had done several ringdowns in the winter of 2017, but because the incident light on the cavity wasn't being extinguished completely (the AOM 0th order beam is used), the full Isogai et al. analysis could not be applied (there were FSS induced features in the reflection ringdown signal). Nevertheless, I fitted the transmission ringdowns. They looked like clean exponentials, and judging by the reflection signals (see previous elogs in this thread), the first ~20us of data is a clean exponential, so I figured we may get some rough value of the loss by just fitting the transmission data.

The fitted storage time is 60.8 \pm 2.7 \mu s. However, this number isn't consistent with the 40m IMC spec of a critically coupled cavity with 2000ppm transmissivity for the input and output couplers.

Attachment #1: Expected storage time for a lossless cavity, with round-trip length ~27m. MC2 is assumed to be perfectly reflecting. The IMC length is known to better than 100 Hz uncertainty because the marconi RF modulation signal is set accordingly. For the 40m spec, I would expect storage times of ~40 usec, but I measure almost 30% longer, at ~60 usec.
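The expectation in Attachment #1 can be reproduced with a two-line calculation (amplitude storage time of a lossless, critically coupled cavity with MC2 perfectly reflecting); inverting the same relation for the fitted tau gives the implied round-trip loss:

```python
c = 299792458.0          # m/s
L_rt = 27.0              # IMC round-trip length [m]
T_rt = L_rt / c          # round-trip time, ~90 ns

# Amplitude decays by Lambda/2 per round trip (Lambda = total round-trip power
# loss, here just T1 + T3 = 2 x 2000 ppm), so tau = 2*T_rt/Lambda.
Lambda_spec = 2 * 2000e-6
tau_spec = 2 * T_rt / Lambda_spec    # ~45 us, in the ballpark of the ~40 us quoted above

# The fitted tau = 60.8 us implies a smaller total round-trip loss:
Lambda_meas = 2 * T_rt / 60.8e-6     # ~3000 ppm
```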

Attachment #2: Fits and residuals from the 10 datasets I had collected. This isn't a super informative plot because there are 10 datasets and fits, but to the eye, the fits are good, and the diagonal elements of the covariance matrix output by scipy's curve_fit back this up. The function used to fit the t > 0 portions of these signals (because the light was extinguished at t=0 by actuating on the AOM) is \text{Transmission} = Ae^{-\frac{2t}{\tau_{\mathrm{storage}}}}, where A and tau are the fitted parameters. In the residuals, the same artefacts visible in the reflection signal are seen.

Attachment #3: Scatter plot of the data. The width of each circle is proportional to the fit error on that measurement (I just scaled the marker size arbitrarily to be able to visually see the difference in uncertainty; the width doesn't exactly indicate the error), while the dashed lines are the global mean and +/- 1 sigma levels.

Attachment #4: Cavity pole measurement. Using this, I get an estimate of the loss that is a much more believable 300 \pm 20\, \mathrm{ppm}.
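The fit described above is a standard two-parameter exponential; a minimal version with scipy's curve_fit on synthetic data (the numbers below are made up to mimic one dataset, with time in microseconds):

```python
import numpy as np
from scipy.optimize import curve_fit

def ringdown(t, A, tau):
    # fit model from above: transmitted power A*exp(-2t/tau)
    return A * np.exp(-2 * t / tau)

# Synthetic stand-in for one transmission dataset (t in microseconds)
rng = np.random.default_rng(0)
t = np.linspace(0, 300, 500)
data = ringdown(t, 1.0, 60.8) + 0.005 * rng.standard_normal(t.size)

popt, pcov = curve_fit(ringdown, t, data, p0=[0.5, 40.0])
A_fit, tau_fit = popt
tau_err = np.sqrt(pcov[1, 1])   # 1-sigma from the covariance diagonal
```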

Attachment 1: tauTheoretical.pdf
tauTheoretical.pdf
Attachment 2: ringdownFit.pdf
ringdownFit.pdf
Attachment 3: ringdownScatter.pdf
ringdownScatter.pdf
Attachment 4: cavPole.pdf
cavPole.pdf
  14331   Tue Dec 4 18:24:05 2018 gautamOmnistructureGeneralN2 line disconnected

[jon, gautam]

In the latest installment in this puzzler: turns out that maybe the trend of the "N2 pressure" channel increasing over the ~3 day timescale it takes a cylinder of N2 to run out is real, and is a feature of the way our two N2 cylinder lines/regulators are setup (for the automatic switching between cylinders when one runs out). In order to test this hypothesis, we'd like to have the line pressure be 0 initially, and then just have 1 cylinder hooked up. When we went into the drill-press area, we heard a hiss, turns out that one of the cylinders is leaking (to be fair, this was labelled, but i thought it isn't great to have a higher N2 concentration in an enclosed space). Since we don't need any actuation ability, I valved off the leaky cylinder, and disconnected the other properly functioning one. Attachment #1 shows the current state.

Attachment 1: IMG_7195.JPG
IMG_7195.JPG
  14334   Fri Dec 7 12:51:06 2018 gautamUpdateIMCIMC ringdown fitting

I started putting together some code to implement some ideas we discussed at the Tuesday meeting here. Pipeline isn't setup yet, but i think it's commented okay so if people want to play around with it, the code lives on the 40m gitlab

Model parameters:

  • T+ --- average transmission of MC1 and MC3.
  • T- --- difference in transmission between MC1 and MC3 (this basis is used rather than T1 and T3 because the assumption is that, since they were coated in the same coating run, the difference in transmission should be small, even if there is considerable uncertainty in the actual average transmission number).
  • T2 --- MC2 transmission.
  • Lrt --- Round trip loss in the cavity.
  • "sigma" --- a nuisance parameter quantifying the error in the time domain ringdown data.

Simulation:

  • Using these model parameters, calculate some simulated time-domain ringdowns. Optionally, add some noise (assumed Gaussian).
  • Try and back out the true values of the model parameters using emcee - priors were assumed to be uniformly distributed, with a +/- 20% uncertainty around the central value.
  • For a first test, see if there is any improvement in the parameter estimation uncertainty using only transmission ringdown vs both transmission and reflection.
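The actual analysis uses emcee; as a self-contained illustration of the same idea (uniform prior, Gaussian likelihood, posterior from MCMC), here is a one-parameter random-walk Metropolis sketch with made-up numbers:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy problem: infer the storage time tau (in us) from a noisy ringdown.
tau_true, sigma = 60.0, 0.01
t = np.linspace(0, 300, 601)
data = np.exp(-2 * t / tau_true) + sigma * rng.standard_normal(t.size)

def log_prob(tau):
    if not (0.8 * tau_true < tau < 1.2 * tau_true):   # uniform +/-20% prior
        return -np.inf
    resid = data - np.exp(-2 * t / tau)
    return -0.5 * np.sum((resid / sigma) ** 2)        # Gaussian likelihood

# Plain random-walk Metropolis (emcee's ensemble sampler plays this role)
samples, tau = [], 55.0
lp = log_prob(tau)
for _ in range(5000):
    prop = tau + 0.5 * rng.standard_normal()
    lp_prop = log_prob(prop)
    if np.log(rng.uniform()) < lp_prop - lp:
        tau, lp = prop, lp_prop
    samples.append(tau)
tau_med = np.median(samples[1000:])   # discard burn-in
```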

Initial results and conclusions:

  • Attachment #1 - Simulated time series used for this study. The "fit" trace is computed using the median values from the monte-carlo.
  • Attachment #2 - Corner plots showing the distribution of the estimated parameter values, using only transmission ringdown. The "true" values are indicated using the thick blue lines.
  • Attachment #3 - Corner plots showing the distribution of the estimated parameter values, using both transmission and reflection ringdowns.
  • The overall approach seems to work okay. There seems to be only marginal improvement in the uncertainty in estimated parameters using both ringdown signals, at least in the simulation.
  • However, everything seems pretty sensitive to the way the likelihood and priors are coded up - need to explore this a bit more.

Next steps:

  • Add more simulated measurements, see if we can constrain these parameters more tightly. 
  • Use linear error analysis to see if that tells us which measurements we should do, without having to go through the emcee.

There still seems to be some data quality issues with the ringdown data I have, so I don't think we really gain anything from running this analysis on the data I have already collected - but in the future, we can do the ringdown with complete extinguishing of the input light, and repeat the analysis.

As for whether we should clean the IMC mirrors - I'm going to see how much power comes out at the REFL port (with PRM aligned) this afternoon, and compare to the input power. This technique suffers from uncertainty in the Faraday insertion loss, isolation and IMC parameters, but I am hoping we can at least set a bound on what the IMC loss is.

Attachment 1: time_reflAndTrans.pdf
time_reflAndTrans.pdf
Attachment 2: corner_transOnly.pdf
corner_transOnly.pdf
Attachment 3: corner_reflAndTrans.pdf
corner_reflAndTrans.pdf
  14335   Fri Dec 7 17:04:18 2018 gautamUpdateIOOIMC transmission
  • Power just before PSL shutter on PSL table = 97 +/- 1 mW. Systematic error unknown.
  • Power from IFO REFL on AP table = 40 +/- 1 mW. Systematic error unknown.

Both were measured using the FieldMate power meter. I was hesitant to use the Ophir power meter as there is a label on it that warns against exceeding 100 mW. I can't find anything in the elog/wiki about the measured insertion loss / isolation of the input faraday, but this seems like a pretty low amount of light to get back from PRM. The IMC visibility using the MC_REFL DC values is ~87%. Assuming perfect transmission of the 87% of the 97mW that's coupled into the IMC, and assuming a further 5% loss between the Faraday rejected port and the AP table, the Faraday insertion loss would be ~30%. Realistically, the IMC transmission is lower. There is also some part of the light picked off for IPPOS. Judging by the shape of the REFL spot on the camera, it doesn't look clipped to me.
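The ~30% figure comes from the following arithmetic (same assumptions as stated above: 87% coupled into the IMC, lossless IMC transmission, 5% loss to the AP table, and two passes through the Faraday):

```python
P_in = 97e-3     # W, measured before the PSL shutter
P_refl = 40e-3   # W, measured at IFO REFL on the AP table

# (single-pass Faraday power transmission)^2 = what survives two passes
eta_sq = P_refl / (P_in * 0.87 * 0.95)
insertion_loss = 1 - eta_sq ** 0.5    # ~0.29, i.e. ~30% per pass
```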

Either way, seems like we are only getting ~half of the 1W we send in on the back of PRM. So maybe it's worth it to investigate the situation in the IOO chamber during this vent.


The c1psl, c1susaux, c1iool0, and c1aux crates were keyed. Also, the physical shutter on the PSL NPRO, which was closed last Monday for the Sundance crew filming, was opened and the PMC was locked. PMC remains locked, but there is no light going into the IMC.

  14339   Mon Dec 10 15:53:16 2018 gautamUpdateLSCSwept-sine measurement with DTT

Disclaimer: This is almost certainly some user error on my part.

I've been trying to get this running for a couple of days, but am struggling to understand some behavior I've been seeing with DTT.

Test:

I wanted to measure some transfer functions in the simulated model I set up.

  • To start with, I put a pendulum (f0 = 1Hz, Q=5) TF into one of the filter modules
  • Isolated it from the other interconnections (by turning off the MEDM ON/OFF switches).
  • Set up a DTT swept-sine measurement
    • EXC channel was C1:OMC-TST_AUX_A_EXC
    • Monitored channels were C1:OMC-TST_AUX_A_IN2 and C1:OMC-TST_AUX_A_OUT.
    • Transfer function being measured was C1:OMC-TST_AUX_A_OUT/C1:OMC-TST_AUX_A_IN2.
    • Coherence between the excitation and output were also monitored.
  • Sweep parameters:
    • Measurement band was 0.1 - 900 Hz
    • Logarithmic, downward.
    • Excitation amplitude = 1ct, waveform = "Sine"

Unexplained behavior:

  • The transfer function measurement fails with a "Synchronization error", at ~15 Hz.
    • I don't know what is special about this frequency, but it fails repeatedly at the same point in the measurement.
  • Coherence is not 1 always
    • Why should the coherence deviate from 1 since everything is simulated? I think numerical noise would manifest when the gain of the filter is small (i.e. high frequencies for the pendulum), but the measurement and coherence seem fine down to a few tens of Hz.

To see if this is just a feature in the simulated model, I tried measuring the "plant" filter in the C1:LSC-PRCL filter bank (which is also just a pendulum TF), and run into the same error. I also tried running the DTT template on donatella (Ubuntu12) and pianosa (SL7), and get the same error, so this must be something I'm doing wrong with the way the measurement is being run / setup. I couldn't find any mention of similar problems in the SimPlant elogs I looked through, does anyone have an idea as to what's going on here?

* I can't get the "import" feature of DTT to work - I go through the GUI prompts to import an ASCII txt file exported from FOTON but nothing selectable shows up in DTT once the import dialog closes (which I presume means that the import was successful). Are we using an outdated version of DTT (GDS-2.15.1)?  But Attachment #1 shows the measured part of the pendulum TF, and is consistent with what is expected until the measurement terminates with a synchronization error.


The import problem is fixed - when importing, you have to give names to the two channels that define the TF you're importing (these can be arbitrary since the ASCII file doesn't have any channel name information). Once I did that, the import works. You can see that while the measurement ran, the foton TF matches the DTT measured counterpart.


11 Dec 2pm: After discussing with Jamie and Gabriele, I also tried changing the # of points, start frequency etc, but run into the same error (though admittedly I only tried 4 combinations of these, so not exhaustive).

Attachment 1: SimTF.pdf
SimTF.pdf
  14344   Tue Dec 11 14:33:29 2018 gautamUpdateCDSNDScope

NDscope is now running on pianosa. To be really useful, we need the templates, so I've made /users/Templates/NDScope_templates where these will be stored. Perhaps someone can write a parser to convert dataviewer .xml to something ndscope can understand. To get it installed, I had to run:

sudo yum install ndscope
sudo yum install python34-gpstime
sudo yum install python34-dateutil
sudo yum install python34-requests

I also changed the PYTHONPATH variable in .bashrc to include the python3.4 site-packages directory.

Quote:

https://alog.ligo-wa.caltech.edu/aLOG/index.php?callRep=44971

Let's install Jamie's new Data Viewer

Attachment 1: ndscope.png
ndscope.png
  14345   Tue Dec 11 18:20:59 2018 gautamUpdateOptical LeversBS/PRM HeNe is dead

I found that the BS/PRM OL SUM channels were reading close to 0. So I went to the optical table, and found that there was no beam from the HeNe. I tried power-cycling the controller, there was no effect. From the trend data, it looks like there was a slow decay over ~400000 seconds (~5 days) and then an abrupt shutoff. This is not ideal, because we would have liked to use the Oplevs as a DC alignment reference during the vent. I plan to use the AS camera to recover some sort of good Michelson alignment, and then if we want to, we can switch out the HeNe.

*How can I export PDF from NDscope?

Attachment 1: BSOL_dead.png
BSOL_dead.png
  14349   Thu Dec 13 01:26:34 2018 gautamUpdateGeneralPower Outage recovery

[koji, gautam]

After several combinations of soft/hard reboots for FB, FEs and expansion chassis, we managed to recover the nominal RTCDS status post power outage. The final reboots were undertaken by the rebootC1LSC.sh script while we went to Hotel Constance. Upon returning, Koji found all the lights to be green. Some remarks:

  1. It seems that we need to first turn on FB
    • Manually start the open-mx and mx services using
      sudo systemctl start open-mx.service 
      sudo systemctl start mx.service
    • Check that the system time returned by gpstime matches the gpstime reported by internet sources.
    • Manually start the daqd processes using
      sudo systemctl start daqd_*
  2. Then fully power cycle (including all front and rear panel power switches/cables) the FEs and the expansion chassis.
    • This seems to be a necessary step for models run on c1sus (as reported by the CDS MEDM screen) to pick up the correct system time (the FE itself seems to pick up the correct time, not sure what's going on here).
    • This was necessary to clear 0x4000 errors.
  3. Power on the expansion chassis.
  4. Power on the FE.
  5. Start the RTCDS models in the usual way
    • For some reason, there is a 1 second mismatch between the gpstime returned on the MEDM screen for a particular CDS model status, and that in the terminal for the host machine.
    • This in itself doesn't seem to cause any timing errors. But see remark about c1sus above in #2.

The PSL (Edwin) remains in an interlock-triggered state. We are not sure what is causing this, but the laser cannot be powered on until this is resolved.

  14351   Thu Dec 13 12:06:35 2018 gautamUpdateGeneralPower Outage recovery

I did a walkaround and checked the status of all the interlock switches I could find based on the SOP and interlock wiring diagram, but the PSL remains interlocked. I don't want to futz around with AC power lines so I will wait for Koji before debugging further. All the "Danger" signs at the VEA entry points aren't on, suggesting to me that the problem lies pretty far upstream in the wiring, possibly at the AC line input? The Red lights around the PSL enclosure, which are supposed to signal if the enclosure doors are not properly closed, also do not turn on, supporting this hypothesis...

I confirmed that there is nothing wrong with the laser itself - I manually shorted the interlock pins on the rear of the controller and the laser turned on fine, but I am not comfortable operating in this hacky way, so I have restored the interlock connections until we decide the next course of action...

Quote:
 

The PSL (Edwin) remains in an interlock-triggered state. We are not sure what is causing this, but the laser cannot be powered on until this is resolved.

  14352   Thu Dec 13 18:12:47 2018 gautamUpdateIOOND filter on AS camera changed

In order to see the AS beam a bit more clearly in our low-power config, I swapped out the ND=1.0 filter on the AS camera for ND=0.5.

  14356   Thu Dec 13 22:56:28 2018 gautamUpdateCDSFrames

[koji, gautam]

We looked into the /frames situation a bit tonight. Here is a summary:

  1. We have already lost some second trend data since the new FB has been running from ~August 2017.
  2. The minute trend data is still safe from that period, we believe.
  3. The Jetstor has ~2TB of trend data in the /frames/trend folder.
    • This is a combination of "second", "minute_raw" and "minute".
    • It is not clear to us what the distinction is between "minute_raw" and "minute", except that the latter seems to go back farther in time than the former.
    • Even so, the minute trend folder from October 2011 is empty - how did we manage to get the long term trend data?? From the folder volumes, it appears that the oldest available trend data is from ~July 24 2015.

Plan of action:

  1. The wiper script needs to be tweaked a bit to allow more storage for the minute trends (which we presumably want to keep for long term).
  2. We need to clear up some space on FB1 to transfer the old trend data from Jetstor to FB1.
  3. We need to revive the data backup via LDAS. Also summary pages.

BTW - the last chiara (shared drive) backup was October 16 6 am. dmesg showed a bunch of errors, Koji is now running fsck in a tmux session on chiara, let's see if that repairs the errors. We missed the opportunity to swap in the 4TB backup disk, so we will do this at the next opportunity.

  14362   Sat Dec 15 20:04:03 2018 gautamUpdateIOOTT1/TT2 stepping

I'm running a script that moves TT1 and TT2 randomly in some restricted P/Y space to try and find an alignment that gets some light onto the TRY PD. Test started at gpstime 1228967990, should be done in a few hours. The IMC has to remain locked for the duration of this test. I will close the PSL shutter once the test is done. Not sure if the light level transmitted through the ITM, which I estimate to be ~30uW, will be enough to show up on the TRY PD, but worth a shot I figure.
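The search itself is just a seeded random sampling of the restricted pitch/yaw space, keeping the best TRY readback. A sketch (the setter/readback functions below are hypothetical placeholders, with a fake quadratic peak standing in for the real TRY signal; the real script writes to the TT offset channels via EPICS):

```python
import random

random.seed(0)

def set_tts(p1, y1, p2, y2):
    pass  # hypothetical: would write the four TT pitch/yaw offsets via EPICS

def read_try(p1, y1, p2, y2):
    # hypothetical readback: a fake transmission peak at the origin
    return max(0.0, 1.0 - (p1 ** 2 + y1 ** 2 + p2 ** 2 + y2 ** 2))

best, best_pt = -1.0, None
for _ in range(2000):
    pt = tuple(random.uniform(-1, 1) for _ in range(4))  # restricted P/Y space
    set_tts(*pt)
    val = read_try(*pt)
    if val > best:
        best, best_pt = val, pt
# best_pt is the candidate alignment if 'best' cleared some SNR threshold
```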

Test was completed and PSL shutter was closed at 1228977122.

  14366   Wed Dec 19 00:12:46 2018 gautamUpdateOMC40m OMC DCC node

I made a node to collect drawings/schematics for the 40m OMC, added the length drive for now. We should collect other stuff (TT drivers, AA/AI, mechanical drawings etc) there as well for easy reference.

Some numbers FTR:

  • OMC length PZT capacitance was measured to be 209 nF.
  • Series resistance in the HV path of the OMC length PZT driver is 10 kohms, so this creates an LP with corner 1/(2π × 10 kΩ × 200 nF) ≈ 80 Hz.
  • Per Rob's thesis, the length PZT has DC actuation coefficient of 8.3 nm/V, ∼ 2 µm range. 
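For reference, the quoted corner frequency follows directly from the numbers above (using the measured 209 nF capacitance):

```python
import math

R = 10e3        # ohm, series resistance in the HV path
C_pzt = 209e-9  # F, measured length-PZT capacitance

f_c = 1 / (2 * math.pi * R * C_pzt)   # ~76 Hz, consistent with the quoted ~80 Hz
```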