40m Log, Page 220 of 339
14593 | Fri May 3 12:51:58 2019 | gautam | Update | PSL | PSL turned on again

Per instructions from Coherent, I made some changes to the NPRO settings. The values we were operating at are in the column labelled "Operating value", while those from the Innolight test datasheet are in the rightmost column. I changed the Xtal temp and pump current to the values Innolight tested them at (but not the diode temps, as they were close and require a screwdriver to adjust), and turned the laser on again at ~1245pm local time. The Acromag channels are recording the diagnostic information.

Update 2:30pm - looking at the trend, I saw that the D2 TGuard channel was reporting 0 V. This wasn't the case before. Suspecting a loose contact, I tightened the D-sub connectors at the controller and Acromag box ends. Now it too reports ~10 V, which according to the manual signals normal operation. So if one sees an abrupt change in this channel in the long trend since 1245pm, that's me re-seating the connector. According to the manual, an error state would be signalled by a negative voltage at this pin, up to -12 V. Also, the Innolight manual says pin 13 of the diagnostics connector indicates the "Interlock" state, but doesn't say what the "expected" voltage should be. The newer manual Coherent sent me has pin 13 listed as "Do not use".

Setting              Operating value   Value Innolight tested at
Diode 1 temp [C]     20.74             21.98
Diode 2 temp [C]     21.31             23.01
Xtal temp [C]        29.39             25.00
Pump current [A]     2.05              2.10

14594 | Fri May 3 15:40:33 2019 | gautam | Update | General | CVI 2" beamsplitters delivered

Four new 2" CVI 50/50 beamsplitters (2 for p-pol and 2 for s-pol) were delivered. They have been stored in the optics cabinet, along with the "Test Data" sheets from CVI.

14595 | Mon May 6 10:51:43 2019 | gautam | Update | PSL | PSL turned off again

As we have seen in the last few weeks, the laser turned itself off after a few hours of running. So bypassing the lab interlock system / reverting laser crystal temperature to the value from Innolight's test datasheet did not fix the problem.

I do not understand why the "Interlock" and "TGUARD" channels revert to the values they had while the laser was lasing a few minutes after the shutoff. Is this just an artefact of the way the diagnostics are set up, or is this telling us something about what is causing the shutoff?

Attachment 1: NPROshutoff.png
14599 | Thu May 9 19:50:04 2019 | gautam | Update | PSL | PSL turned off again

This time, it stayed on for ~24 hours. I am not going to turn it on again today as the crane inspection is tomorrow and we plan to keep the VEA a laser safe area for speedy crane inspection.

But what is the next step? If these diode temps maximize the power output of the NPRO, then it isn't a good idea to raise the TEC setpoint further, so should I just turn it on again with the same settings?

I did not turn the HEPA down on the PSL enclosure. I also turned off the NPROs at EX and EY so now all the four 1064nm lasers in the VEA are turned OFF (for crane inspection).

Quote:

locked PMC at 1900 PT; let's see how long it lasts.

My hunch is that the TECs are working too hard and can't offload the heat onto the heat sinks. As the diodes degrade, more of the electrical power is converted to heat in the diodes rather than 808 nm photons. So hopefully the increased airflow will help.

 
Attachment 1: Screenshot_from_2019-05-09_19-49-29.png
14602 | Fri May 10 15:18:04 2019 | gautam | Update | PSL | Some work on/around PSL table
  1. In anticipation of installing the new fan on the PSL, I disconnected the old fan and finally removed the bench power supply from the top shelf.
  2. Moved said bench supply to under the south-west corner of the PSL table.
  3. Installed the temporary Acromag crate, now with two ADC cards, under the PSL table and hooked it up to the bench supply (+15 VDC). Also ran an ethernet cable from 1X3 to the box over the overhead cable tray and connected it.
  4. Brought other end of 25-pin D-sub cable used to monitor the NPRO diagnostics channels from 1X4/1X5 to the PSL table. Rolled the excess length up and cable tied it, the excess is sitting on top of the PSL enclosure. Key parts of the setup are shown in Attachments #1-3. This is not an ideal setup and is only meant to get us through to the install of the new c1psl/c1ioo Acromag crate.
  5. Edited the modbus config file at /cvs/cds/caltech/target/c1psl2/npro_config.cmd to add Jon's new ADC card to the list.
  6. Edited EPICS database file at /cvs/cds/caltech/target/c1psl2/psl.db to add entries for the C1:PSL-FSS_RMTEMP and C1:PSL-PMC_PMCTRANSPD channels.
  7. Hooked up said channels to the physical ADC inputs via a DB15 cable and breakout board on the PSL table.
    CH0 --- FSS_RMTEMP (Pins 5/18 of the DB25 connector on the interface box to pins 1/9 of the Acromag DB15 connector)
    CH1 --- PMC TRANS (BNC cable from PD to pomona minigrabber to pins 2/10 of the Acromag DB15 connector)
    CH2-6 are unused currently and are available via the DB15 breakout board shown in Attachment #3. CH7 is not connected at the time of writing.
    The pin-out for the temperature sensor interface box may be found here. I restarted the modbus process, and the channels are now being recorded, see Attachment #4 - although when checking the status of the modbus process, I get some error message, not sure what that's about.
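As a quick sanity check that the new channels are being served after the modbus restart, something like the following can be run from a workstation (a minimal sketch, assuming pyepics is available; the channel names are the ones defined in psl.db above):

# Poll the two newly added slow channels; a None return usually means the
# modbusPSL IOC isn't serving them (or the CA environment is wrong).
import time
import epics

for ch in ["C1:PSL-FSS_RMTEMP", "C1:PSL-PMC_PMCTRANSPD"]:
    val = epics.caget(ch, timeout=2.0)
    if val is None:
        print(f"{ch}: no response -- check the modbusPSL process")
    else:
        print(f"{time.ctime()}  {ch} = {val:.3f}")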

So now we can monitor both the temperature of the enclosure (as reported by the sensor on the PSL table) and the NPRO diagnostics channels. The new fan for the controller has not been installed yet, due to us not having a good mounting solution for the new fans, all of which have a bigger footprint than the installed fan. But since the laser isn't running right now, this is probably okay.

modbusPSL.service - ModbusIOC Service via procServ
   Loaded: loaded (/etc/systemd/system/modbusPSL.service; disabled)
   Active: active (running) since Fri 2019-05-10 13:17:54 PDT; 2h 13min ago
  Process: 8824 ExecStop=/bin/kill -9 ` cat /run/modbusPSL.pid` (code=exited, status=1/FAILURE)
 Main PID: 8841 (procServ)
   CGroup: /system.slice/modbusPSL.service
           ├─8841 /usr/bin/procServ -f -L /home/controls/modbusPSL.log -p /run/modbusPSL.pid 8009 /cvs/cds/rtapps/epics-3.14.12.2_long/module...
           ├─8842 /cvs/cds/rtapps/epics-3.14.12.2_long/modules/modbus/bin/linux-x86_64/modbusApp /cvs/cds/caltech/target/c1psl2/npro_config.c...
           └─8870 caRepeater

May 10 13:17:54 c1auxex systemd[1]: Started ModbusIOC Service via procServ.

Attachment 1: IMG_7427.JPG
Attachment 2: IMG_7428.JPG
Attachment 3: IMG_7429.JPG
Attachment 4: newPSLAcro.png
14603 | Fri May 10 18:24:29 2019 | gautam | Update | NoiseBudget | aligoNB

I pulled the aligoNB git repo to /ligo/GIT/aligoNB/aligoNB. There isn't a reqs.txt file in the repo, so installing the dependencies on individual workstations to get this running is a bit of a pain. I found the easiest thing to do was to set up a virtual environment for the python3 stuff; this way we can still run python2 for the cdsutils package (hopefully that gets updated soon). I'm setting up a C1 directory in there - the plan is to budget some subsystems like Oplev and ALS for now, and develop the code for the eventual IFO locking. As a test, I ran the H1 noise budget (./aligonb H1), which works, so it looks like I got all the dependencies...

14605 | Mon May 13 10:45:38 2019 | gautam | Update | PSL | PSL turned ON again

I used some double-sided tape to attach a San Ace 60 9S0612H4011 to the Innolight controller (Attachment #1). This particular fan is rated to run with up to 13.8V, but I'm using a +15V Sorensen output - at best, this shortens the lifespan of the fan, but I don't have a better solution for now. Then I turned the laser on again (~1040 local time), using the same settings Rana configured earlier in this thread. PMC was locked, and the IMC also could be locked but I closed the shutter for now while the laser frequency/intensity stabilizes after startup. The purpose is to facilitate completion of the pre-vent alignment checklist in prep for the planned vent tomorrow. PMC Trans reports 0.63 after alignment was optimized, which is ~15% lower than in Oct 2016.

Attachment 1: IMG_7431.JPG
14606 | Mon May 13 18:48:32 2019 | gautam | Update | General | Vent prep
  1. c1auxey and c1aux VME crates were keyed.
  2. EX and EY NPROs were turned on.
  3. Y arm was aligned to the IR - best effort TRY ~0.75.
  4. EY green was aligned to the Y arm cavity. The spot is on the lower right quadrant on the CCD monitor, but GTRY ~0.35.
  5. #3 and #4 were repeated for XARM.
  6. All beams were centered on Oplev and IP POS QPDs with this reference alignment - see Attachment #1. SOS Optic and TT DC bias positions were saved to burt snap files.
  7. I've never really used it but I updated all the SUS "driftmon" values - Attachment #2.
  8. Power going into the IMC was cut from 945 mW to 100 mW (both numbers measured with FieldMate power meter) by rotating the HWP installed last time for this purpose from 244 degrees (OLD) to 208 degrees (NEW). There was no beam dump for the reflected port of the PBS used to cut power, so I installed one, see Attachment #4.
  9. The T=90% BeamSplitter in the MC REFL path was replaced with a 2" HR mirror as is the norm for the low power IMC locking. Alignment of the MC REFL beam onto the MC REFL PD was tweaked.
  10. init.d file was edited and MCautolocker initctl process was restarted on Megatron to adopt the low power settings. It was locked, MCT ~1350 counts, see Attachment #3. Also adjusted the threshold level above which to have the slow PID offloading of FSS PZT voltage from 10000 to 1000.

I believe this completes the non-Chub portions of the pre-vent checklist, we will start letting air into the main volume ASAP tomorrow morning after crossing off the remaining items.

Main goal of this vent is to investigate the oddness of the YARM suspensions. I leave the PSL NPRO on overnight in the interest of data gathering, it's been running ~10 hrs now - I suspect it'll turn itself off before we are ready to vent in the AM.

Attachment 1: ventPrep_20190514.png
Attachment 2: driftMon_20190514.png
Attachment 3: lowPowIMC.png
Attachment 4: IMG_7434.JPG
14607 | Tue May 14 10:35:58 2019 | gautam | Update | General | Vent underway
  1. PSL had stayed on overnight. There was an EQ (M 4.6 near Costa Rica) which showed up on the Seis BLRMS, and I noticed that several optics were reporting Oplev spots off their QPDs (I had just centered these yesterday). So I did a quick alignment check:
    • IMC was readily locked
    • After moving test mass bias sliders to bring Oplev spots back to the center, the EX and EY green beams were readily locked to a TEM00 mode
    • IR flashes could be seen in TRX and TRY (though their levels are low, since we are operating with 1/10th the nominal power)
    • The IP-POS QPD channels were reporting a "segmentation fault" so I keyed the c1iscaux crate and they came back. Still the QPD was reporting a low SUM value, but this too is because of the lower power. Conveniently, there was an ND2.0 filter in the beam path on a flip mount which I just flipped out of the way for the low-power tracking.
    • Then, PSL and green shutters were closed and Oplev loops were disengaged.
  2. Checked that we have an RGA scan from today
  3. During the walkthrough to check the jam nuts, Chub noticed that the outer nuts on the bellows between the OMC chamber and the IMC chamber were loose to the finger! He is tightening them now and checking the remaining jam nuts. AFAIK, Steve made it sound like this was always a formality. Should we be concerned? The other jam nuts are fine according to Chub.
  4. We valved off the pumpspool from the main volume and annuli, and started letting Nitrogen into the main volume at ~1045am.
  5. Started letting instrument grade air into the main volume at ~1130am. We are aiming for a pressure increase of 3 torr/min
  6. 4 cylinders of dry air were exhausted by ~330pm. It actually looks like we over-pressured the main volume by ~20 torr - this is bad; we should have stopped letting air in at ~700 torr and then let the volume equilibrate to lab air pressure.
  7. At some point during the vent, the main volume pressure exceeded the working range of the cold cathode gauge CC1. It reports "Current Fail" on its LED display, which I'm assuming means it auto-shut off its HV to protect itself; Jon tells me the vacuum code isn't responsible for initiating any such shutoff.
  8. A new vacuum state was added to reflect these conditions (pumpspool under vacuum, main volume at atmosphere).
  9. The annuli remain under vacuum for now. Tomorrow, when we remove the EY door, we will vent the EY annulus.

IMC was locked, MC2T ~1200 cts after some alignment touch-ups. The test mass oplevs indicate some drift, ~100 urad. I didn't realign them.

The EY door removal will only be done tomorrow. I will take some free-swinging ETMY data today (suspension was kicked at 1241919438) to see if anything has changed (it shouldn't have). I need to think up a systematic debugging plan in the meantime.
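For reference, a minimal sketch of the kind of analysis one could do on this free-swinging data (assuming the python nds2-client and scipy are available on the workstation; the channel names, server host, and port below are my assumptions, not verified here):

# Pull the ringdown data after the kick and look for suspension eigenmode
# peaks in each OSEM shadow sensor spectrum.
import nds2
import numpy as np
from scipy.signal import welch

t0 = 1241919438 + 60          # GPS kick time quoted above, plus a settling margin
dur = 4096                    # seconds of free-swinging data to use
# channel names assumed
chans = [f"C1:SUS-ETMY_{osem}SEN_OUT_DQ" for osem in ("UL", "UR", "LR", "LL", "SD")]

conn = nds2.connection("fb", 8088)          # 40m framebuilder host/port assumed
for buf in conn.fetch(t0, t0 + dur, chans):
    fs = buf.channel.sample_rate
    f, pxx = welch(buf.data, fs=fs, nperseg=int(512 * fs))   # ~2 mHz resolution
    band = (f > 0.5) & (f < 1.2)
    fpk = f[band][np.argmax(pxx[band])]
    print(f"{buf.channel.name}: strongest peak near {fpk:.3f} Hz")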

Attachment 1: vent.png
Attachment 2: Screenshot_from_2019-05-14_16-35-16.png
14608 | Wed May 15 00:40:19 2019 | gautam | Update | SUS | ETMY diagnosis plan

I collected some free-swinging data from earlier today evening. There are still only 3 peaks visible in the ASDs, see Attachment #1.

Plan for tomorrow:

TBH, I don't have any clear ideas as to what we are supposed to do to fix the problem (or even what the problem is). So here is my plan for now:

  1. Take pictures of relative position of magnet and OSEM coil for all five coils
  2. Inspect positions of all EQ stops - back them well out if any look suspiciously close
  3. Inspect suspension wire for any kinks
  4. Inspect position of suspension wire in standoff

I anticipate that these will throw up some more clues 

Attachment 1: ETMY_sensorSpectra.pdf
14609 | Wed May 15 10:56:47 2019 | gautam | Update | PSL | PSL turned ON again

To test the hypothesis that the fan replacement had any effect on the NPRO shutoff phenomena, I turned the HEPA on the PSL table down to the nominal 30% setting at ~10am.

Tomorrow I will revert the laser crystal temperature to whatever the nominal value was. If the NPRO runs in that configuration (i.e. the only change from March 2019 are the diode TEC setpoints and the new fan on the back of the controller), then hurray.

14610 | Wed May 15 10:57:57 2019 | gautam | Update | SUS | EY chamber opened

[chub, gautam]

  1. Vented the EY annulus.
  2. Took the heavy door off, put it on the wooden rack, put a light door on at ~11am.
14611 | Wed May 15 17:46:24 2019 | gautam | Update | SUS | ETMY inspection

I setup the usual mini-cleanroom setup around the ETMY chamber. Then I carried out the investigative plan outlined here.

Main finding: I saw a fiber of what looks like first contact on the bottom left (as viewed from the HR side) of ETMY, connecting the optic to the cage. See Attachment #1. I don't know whether this can explain the problem with the missing eigenmode, since it's not a hard constraint, but it seems like something that should be addressed in any case. How do we want to remove this? Just use a tweezer and pull it off, or apply a larger FC patch and then pull it off? I'm pretty sure it's first contact and not a piece of PEEK mesh because I can see it is adhered to the HR side of the optic, but I couldn't capture that detail in a photo.

There weren't any obvious problems with the magnet positioning inside the OSEMs, or with the suspension wire. All the EQ stop tips were >3 mm away from the optic.

I also backed out the bottom EQ stops on the far (south side) of the optic by ~2 full turns of the screw. Taking another free-swinging dataset now to see if anything has changed. I will upload all the photos I took, with annotations, to the gPhotos later today eve. Light doors back on at ~1730.

Update 10pm: the photos have been uploaded. I've added a "description" to each photo which should convey the message of that particular shot; it shows up in my browser on the bottom left of the photo but can also be accessed by clicking the "info" icon. Please have a look and comment if something sticks out as odd / requires correction.

Update 1045pm: I looked at the freeswinging data from earlier today. Still only 3 peaks around 1 Hz.

The following optics were kicked:
ETMY
Wed May 15 17:45:51 PDT 2019
1242002769
Attachment 1: firstContactFiber.JPG
Attachment 2: ETMY_sensorSpectra.pdf
14613 | Thu May 16 13:07:14 2019 | gautam | Update | SUS | First contact residue removal

I  used a pair of tweezers to remove the stray fiber of first contact. As Koji predicted, this was rather dry and so it didn't have the usual elasticity, so while I was able to pull most of it off, there is a small spot remaining on the HR surface of the ETM. We will remove this with a fresh application of a small patch of FC.

In the meantime, I'm curious whether this has actually fixed the suspension woes, so yet another round of free-swinging data collection is ongoing. From the first 5 mins it looks positive - I see 4 peaks around 1 Hz!

The following optics were kicked:
ETMY
Thu May 16 13:06:39 PDT 2019
1242072418

Update 730pm: There are now four well-defined peaks around 1 Hz. Together with the Bounce and Roll modes, that makes six. The peak at 0.92 Hz, which I believe corresponds to the Yaw eigenmode, is significantly lower than the other three. I want to get some info about the input matrix but there was some NDS dropout and large segments of data aren't available using the python nds fetch method, so I am trying again, kicked ETMY at 1828 PDT. It may be that we could benefit from some adjustment of the OSEM positions, the coupling of bounce mode to LL is high. Also the SIDE/POS resonances aren't obviously deconvolved. The stray first contact has to be removed too. But overall I think it was a successful removal, and the suspension characteristics are more in line with what is "expected". 

Attachment 1: etmy_sensors.pdf
Attachment 2: etmy_BRmode.pdf
14614 | Thu May 16 22:58:25 2019 | gautam | Update | ASS | In air ASS test with green?

We were wondering yesterday if we can somehow test the ASS system in air. Though the arm cavity can be locked with the low power IMC transmission, I think the dither would render the POY lock unstable. But I wonder if we can use the green beam for a test. The steering PZTs installed by Yuki can serve the role of TT1/TT2 and we can dither the arm cavity mirrors while the green TEM00 mode is locked to the arm no problem. This would at least give us confidence that the actuation of ETMY/ITMY are okay (in addition to the other suspension tests). Then on the sensing side, after pumping down, the only thing we'd be foiled by is in-vacuum clipping or some major gunk on ETMY - everything else should be de-buggable even after pumping down?

I think most of the CDS infrastructure for this is already in place.

14615 | Thu May 16 23:31:55 2019 | gautam | Update | SUS | ETMY suspension characterization

Here is my analysis. I think there are still some problems with this suspension.

Attachment #1: Time domain plots of the ringdown. The LL coil has peak response ~half of the other face OSEMs. I checked that the signal isn't being railed, the lowest level is > 100 cts.

Attachment #2: Complex TF from UL to the other coils. While there are four peaks now, looking at the phase information, it isn't possible to clearly disentangle PIT or YAW motion - in fact, for all peaks, there are at least three face shadow sensors which report the same phase. The gains are also pretty poorly balanced - e.g. for the 0.77 Hz peak, the magnitude of UR->UL is ~0.3, while LR->UL is ~3. Is it reasonable that there is a factor of 10 imbalance?

Attachment #3: Nevertheless, I assumed the following mapping of the peaks (quoted f0 is from a Lorentzian fit) and attempted to find the input matrix that best converts the Sensor basis into the Euler basis.

DoF f0 [Hz]
POS 1.004
PIT 0.771
YAW 0.920
SIDE 0.967

Unsurprisingly, the elements of this matrix are very different from unity (I have to fix the normalization of the rows).

Attachment #4: Pre and post diagonalization spectra. The null stream certainly looks cleaner, but then again, this is by design so I'm not sure if this matrix is useful to implement.

Next steps:

  1. Repeat the actuator diagonality test detailed here.
  2. ???

In case anyone wants to repeat the analysis, the suspension was kicked at 1828 PDT today and this analysis uses 15000 seconds of data from then onwards.
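For anyone repeating this: the matrix construction itself is just linear algebra on the (sign-resolved) sensor amplitudes at the eigenfrequencies. A minimal sketch is below, with an idealized placeholder sensing matrix standing in for the measured peak amplitudes; the sensor ordering and the row-normalization convention are my assumptions, not necessarily what the damping screens expect:

import numpy as np

sensors = ["UL", "UR", "LR", "LL", "SD"]
dofs = ["POS", "PIT", "YAW", "SIDE"]

# Sensing matrix S[i, j]: response of sensor i to DOF j at its eigenfrequency.
# In practice these numbers come from the peak amplitudes (with signs from the
# relative phases) in the free-swinging spectra; ideal values used as placeholders.
S = np.array([[1.0,  1.0,  1.0, 0.0],
              [1.0,  1.0, -1.0, 0.0],
              [1.0, -1.0, -1.0, 0.0],
              [1.0, -1.0,  1.0, 0.0],
              [0.0,  0.0,  0.0, 1.0]])

# Input matrix: least-squares map from the 5 sensors back onto the 4 DOFs,
# with each row rescaled so its entries sum (in magnitude) to 1.
M_in = np.linalg.pinv(S)
M_in /= np.abs(M_in).sum(axis=1, keepdims=True)

for dof, row in zip(dofs, M_in):
    print(dof, np.round(row, 3))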

Update 18 May 3pm: Attachment #5 is a better presentation of the data shown in Attachment #2; the remark about the odd phasing of the coils is more clearly seen in this zoomed-in view. Attachment #6 shows Lorentzian fits to the peaks - the Qs are comparable to those seen for the other optics, although the Q for the 0.77 Hz peak is rather low.

Attachment 1: ETMY_sensors_timeDomain.pdf
Attachment 2: ETMY_cplxTF.pdf
Attachment 3: matrixDiag.png
Attachment 4: ETMY_diagComp.pdf
Attachment 5: ETMY_cplxTF.pdf
Attachment 6: ETMY_pkFitNaive.pdf
14617 | Fri May 17 10:57:01 2019 | gautam | Update | SUS | IY chamber opened

At ~930am, I vented the IY annulus by opening VAEV. I checked the particle count, seemed within the guidelines to allow door opening so I went ahead and loosened the bolts on the ITMY chamber.

Chub and I took the heavy door off with the vertex crane at ~1015am, and put the light door on.

Diagnosis plan is mainly inspection for now: take pictures of all OSEM/magnet positionings. Once we analyze those, we can decide which OSEMs we want to adjust in the holders (if any). I shut down the ITMY and SRM watchdogs in anticipation of in-chamber work.

Not related to this work: Since the annuli aren't being pumped on, the pressure has been slowly rising over the week. The unopened annuli are still at <1 torr, and the PAN region is at ~2 mtorr.

14618 | Fri May 17 16:07:25 2019 | gautam | Summary | Equipment loan | Borrowed component

ZHL-3A (2 units) ---> QIL

Quote:

I borrowed one Marconi (2023 B) from 40 m lab to QIL lab.

14620 | Fri May 17 17:01:08 2019 | gautam | Update | SUS | ETMY suspension characterization

To investigate my mapping of the eigenfrequencies to eigenmodes, I checked the Oplev spectra for the last few hours, when the Oplev spot has been on the QPD (but the optic is undamped).

  1. Based on Attachment #1, I can't figure out which peak corresponds to what motion.
    • The most prominent peak (judged by peak height) is at 0.771 Hz for both PITCH and YAW
    • Assuming the peak at 0.92 Hz is the other angular mode, the PIT/YAW decoupling is poor in both peaks, only ~factor of 2 in both cases.
  2. Why are the POS and SIDE resonances sensed so asymmetrically in the PIT and YAW channels? There's a factor of 10 difference there...

So, while I conclude that my first-contact residue removal removed a constraint from the system (hence the pendulum dynamics are accurate and there are 6 eigenmodes), more thought is needed in judging what is the appropriate course of action.

Attachment 1: etmy_oplevs.pdf
14623 | Mon May 20 11:33:46 2019 | gautam | Update | SUS | ITMY inspection

With Chub providing illumination via the camera viewport, I was able to take photos of ITMY this morning. All the magnets look well clear of the OSEMs, with the possible exception of UR. I will adjust the position of this OSEM slightly. To test if this fix is effective, I will then cycle the bias voltage to the ITM between 0 and the maximum allowed, and check if the optic gets stuck.

14624 | Mon May 20 13:16:57 2019 | gautam | Summary | Computers | new laptop setup: ASIA - ndscope and diaggui

Following instructions here, I installed ndscope on this machine. DTT still could not be run from this machine, and I want to use it today - so I ran the following commands from the K. Thorne setup instructions.

yum clean metadata
yum update
yum install cds-workstation pcaspy subversion redhat-lsb  gnuradio google-chrome-stable xorg-x11-drv-nvidia epel-release redhat-lsb

Now diaggui can be opened, and spectra can be made. I'm moving this laptop to its new home at EY.

Quote:
  • now running 'yum install gds-all' to see if we need more local libraries to run GDS from the shared disks...
14625 | Mon May 20 17:12:57 2019 | gautam | Update | SUS | ETMY LL adjustment

Following the observation that the response in the LL shadow sensor was lower than that of the others, I decided to pull it out a little so that the signal level with the nominal DC bias voltage applied was closer to half the open voltage. I also chose to rotate the SIDE OSEM by ~20 degrees CCW in its holder (viewed from the south side of the EY chamber), to match more closely its position in a photo taken prior to the haphazard vent of the summer of 2018. For the SIDE OSEM, the theoretical "best" alignment in order to be insensitive to POS motion is with the shadow sensor beam horizontal - but without some shimming of the OSEM in the holder, I can't get the magnet clear of the teflon inside the OSEM.

While I was inside the chamber, I attempted to minimize the Bounce/Roll mode coupling to the LL and SIDE OSEM channels, by rotating the Coil inside the holder while keeping the shadow sensor voltage at half-light. To monitor the coupling "live", I set up DTT with 0.3 Hz bandwidth and 3 exponentially weighted averages. For the LL coil, I went through pi radians of rotation either side of the equilibrium, but saw no significant change in the coupling - I don't understand why.

In any case, this wasn't the most important objective so I pushed ahead with recovering half-light levels for all the shadow sensors and closed up with the light doors. I kicked the optic again at 1712:14 PDT, let's see what the matrix looks like now.


Before starting this work, I had to key the unresponsive c1auxey VME crate.

14627 | Mon May 20 22:06:07 2019 | gautam | Update | SUS | ITMY also kicked

For good measure:

The following optics were kicked:
ITMY
Mon May 20 22:05:01 PDT 2019
1242450319
14628 | Tue May 21 00:15:21 2019 | gautam | Update | SUS | Main objectives of vent achieved (?)

Summary:

  1. ETMY now shows four suspension eigenmodes, with sensible phasing between signals for the angular DoFs. However, the eigenfrequencies have shifted by ~10% compared to 16 May 2019.
  2. PIT and YAW for ETMY as witnessed by the Oplev are now much better separated.
  3. ITMY can have its bias voltage set to zero and back to nominal alignment without it getting stuck.
  4. The sensing matrix for ETMY that I get doesn't make much sense to me. Nevertheless, the optic damps even with the "naive" input matrix.

So the primary vent objectives have been achieved, I think. 


Details:

  1. ETMY free-swinging data after adjusting LL and SIDE coils such that these were closer to half-light values
    • Attachment #1 - oplev witnessing the angular motion of the optic. PIT and YAW are well decoupled.
    • Attachment #2 - complex TF between the suspension coils. There is still considerable imbalance between coils, but at least the phasing of the signals make sense for PIT and YAW now.
    • Attachment #3 - DoFs sensed using the naive and optimized sensing matrices.
    • Attachment #4 - sensing matrix that the free swinging data tells me to implement. If the local damping works with the naive input matrix but we get better diagonality in the actuation matrix, I think we may as well stick to the naive input matrix.
  2. BR mode coupling minimization:
    • As alluded to in my previous elog, I tried to reduce the bounce mode coupling into the shadow sensor by rotating the OSEM in its holder.
    • However, I saw negligible change in the coupling, even going through a full pi radian rotation. I imagine the coupling will change smoothly so we should have seen some change in one of the ~15 positions I sampled in between, but I saw none.
    • The anomalously high coupling of the bounce mode to the shadow sensor readout is telling us something - I'm just not sure what yet.
  3. ITMY:
    • The offender was the LL OSEM, whose rotational orientation was causing the magnet to get stuck to the teflon part of the OSEM coil when the bias voltage was changed by a sufficiently large amount.
    • I rectified this (required adjustment of all 5 OSEMs to get everything back to half light again).
    • After this, I was able to zero the bias voltage to the PIT/YAW DoFs and not have the optic get stuck - huzzah 😀 
    • While I have the chance, I'm collecting the free-swinging data to see what kind of sensing matrix this optic yields.

Tomorrow and later this week:

  1. Prepare ETMY for first contact cleaning to remove the residual piece. 
    • Drag wipe the HR surface with dehydrated acetone 
    • Apply F.C. as usual, inspect the HR face after peeling for improvement if any.
    • This will give us a chance to practise the F.C.ing with the optic EQ-stopped (moving cage etc).
  2. Confirm ETMY actuation makes sense.
    • Use the green beam for an ASS proxy implementation?
  3. High quality close out pictures of OSEMs and general chamber layout.
  4. Anything else? Any other tests we can do to convince ourselves the suspensions are well-behaved?

While we have the chance:

  1. Fix the IPANG alignment? Because the TT drift/hysteresis problem is still of unknown cause.
  2. Check that the AS beam is centered on OMs 1-6?
  3. Recover the 70% AS light that is being diverted to the OMC?

Unrelated to this work: megatron is responding to ping but isn't ssh-able. I also noticed earlier today that the IMC autolocker blinky wasn't blinking, so it probably requires a hard reboot. I have left the lab for tonight so I'll reboot it tomorrow, but there is no nds data access in the meantime...

Attachment 1: etmy_oplevs_20190520.pdf
Attachment 2: ETMY_cplxTF.pdf
Attachment 3: ETMY_diagComp.pdf
Attachment 4: Screen_Shot_2019-05-21_at_12.37.08_AM.png
14629 | Tue May 21 21:33:27 2019 | gautam | Update | SUS | ETMY HR face cleaned

[koji, gautam]

We executed this plan. Photos are here. Summary:

  1. Optic was EQ-stopped (face stops only), with the OSEMs in situ. We tried to do this as evenly as possible to avoid any magnets getting stuck on OSEMs.
  2. We used the specially procured acetone from Chub to drag wipe the HR face. This was a definite improvement, we should always get the correct grade of solvents when we attempt cleaning optics.
  3. It was observed that drag-wiping did not really have the desired cleaning effect. So Koji went in with hemostat / lens tissue soaked in acetone and wiped the HR face. This improved the situation.
  4. Applied a layer of F.C. Waited for it to dry, and then peeled it off. Under the green flashlight, the optic still looks horrific - but we decided against further drag-wiping/first-contacting. If the loss is truly 50 ppm, this is totally not a show-stopper for now.
  5. Suspension cage was replaced. EQ stops were released. Bias voltages were adjusted to bring the Oplev spot back to the center of the QPD. Now a free-swinging data collection is ongoing...
The following optics were kicked:
ETMY
Tue May 21 22:58:18 PDT 2019
1242539916

So if nothing else, we got to practise this new wiping technique with the OSEMs in situ successfully.

Quote:
 
  1. Prepare ETMY for first contact cleaning to remove the residual piece. 
    • Drag wipe the HR surface with dehydrated acetone 
    • Apply F.C. as usual, inspect the HR face after peeling for improvement if any.
    • This will give us a chance to practise the F.C.ing with the optic EQ-stopped (moving cage etc).
14630 | Wed May 22 11:53:50 2019 | gautam | Update | SUS | ETMY EQ stops backed out

Yesterday we noticed that the POS and SIDE eigenmodes were degenerate (with 1mHz spectral resolution). Moreover, the YAW peak had shifted down by ~500 mHz compared to earlier this week, although there was still good separation between PIT and YAW in the Oplev error signals. Ideas were (i) check if EQ stops were not backed out sufficiently, and (ii) look for any fibers/other constraints in the system. Today morning, I inspected the optic again. I felt the EQ stop viton tips were a bit close to the optic, so I backed them out further. Apart from this, I adjusted the LR and SIDE OSEM position in their respective holders to make the sensor voltages closer to half-light. Kicked the optic again just now, let's see if there is any change.

Remaining tasks:

  1. Check EY table leveling.
  2. Check EY actuation matrix diagonality using this technique.
  3. Check that IR resonances are seen (and all the usual pre-pumpdown alignment checks).
  4. Take close out pictures.
  5. Heavy doors on, pump down.

If everything goes smoothly, I think we should plan for the heavy doors going back on and commencing the pumpdown tomorrow. After discussion with Koji, we came to the conclusion that it isn't necessary to investigate IPANG (high likelihood of it falling off the steering optics during the pumpdown) / AS beam clipping (no strong evidence that this is a problem) for this vent.

Update 1235: Indeed, the eigenmodes are back to their positions from earlier this week. In fact, the POS and SIDE modes are now better separated! So the OSEM/magnet and EQ-stop/optic interactions are non-negligible in the analysis of the dynamics of the pendulum.

Attachment 1: ETMY_eigenmodes.pdf
14631 | Wed May 22 22:50:13 2019 | gautam | Update | VAC | Pumpdown prep

I did the following:

  1. Checked the ETMY OSEM sensing matrix and OSEM actuation matrix - more on this later, but everything seems much more reasonable than it was prior to this vent.
  2. Checked that the IMC could be locked with the low-power beam
  3. Aligned the Y-arm cavity using the green beam. Then tweaked the TT1/TT2 alignment until I saw IR flashes in TRY.
  4. Repeated #3 for the X arm, using the BS to control the beam pointing.
  5. Confirmed that the AS beam makes it out of the vacuum. It is only ~30uW in a large (~1cm dia) beam, so not the clearest spot on an IR card, but looks pretty clean, no evidence of clipping. I removed an ND filter on the AS port camera in order to better see the beam on the CRT monitor, this should be re-installed prior to ramping the input power to the IMC again.
  6. With the PRM aligned, I confirmed that I could see resonant flashes in the POP QPD.
  7. With the SRM aligned, I confirmed that I could see SRC cavity flashes on the AS camera.

I think this completes the pre-pumpdown alignment checks we usually do. The detailed plan for tomorrow is here: please have a look and lmk if I missed something. 

14634 | Thu May 23 15:30:56 2019 | gautam | Update | VAC | Pumpdown underway - so far so good!

[chub, koji, gautam]

  1. We executed the pre-pumpdown tasks per the checklist - heavy doors were on by ~1030am.
  2. We were thwarted by the display of c1vac becoming unresponsive - the mouse cursor moves, but we could not interact with any screens. Connecting to c1vac by ssh with the -X option, we could interact with everything. Using top, we saw that the load average was reporting ~8 - this is pretty high! The most demanding processes were the modbus IOC and some python processes, presumably connected with the interlocks. We tried stopping the interlock systemctl process, kill -9ing the heavy processes, but to no avail. Next, we tried killing the X display proces, but this also did not fix the problem. Finally, we did a soft reboot of c1vac - the machine came back up, but still no interactivity. So we moved asia, the EY laptop, to the vacuum station for this pumpdown. We will fix the situation once the vacuum is in the nominal state.
  3. The actual pumpdown commenced by first evacuating the EY and IY annular volumes with the roughing pump. There is an interlock condition that prevents V6 from being opened if the PRP gauge reports < 0.25 torr (this is to protect against oil backstreaming from the roughing pumps I believe). To get around this, we gave the roughing pumps some work by exposing the annular line to the atmospheric pressure of the EY and IY annuli. In a few minutes, both of these reported < 1 torr.
  4. Main volume pumping started around noon - the pressure has been coming down steadily at ~3 torr/min (Koji made a nice python utility that calculates the rate from the pressure channel; a minimal sketch of such a rate calculation is given after this list).
  5. At the time of writing, after ~3.5 hrs of pumping, we are at 25 torr. I will keep going till ~1 torr, and then valve off the main volume until tomorrow, when Chub and I will work on getting the turbo pumps exposed to the main volume. Pausing at 355pm while I go for the colloquium. Resumed later in the evening, stopping for today at 500 mtorr.
  6. In preparation for the increased load on TP2 and TP3, I spun them up to the "high RPM mode" from their nominal "Standby mode".
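A minimal sketch of the kind of rate calculation mentioned in item 4 (the real utility is Koji's; the channel name and polling interval here are placeholders):

# Print the main-volume pressure and its rate of change in torr/min.
import time
import epics

CHAN = "C1:Vac-P1a_pressure"   # main volume gauge -- channel name assumed
DT = 60.0                      # seconds between samples

p_prev = epics.caget(CHAN)
while True:
    time.sleep(DT)
    p_now = epics.caget(CHAN)
    if p_now is None or p_prev is None:
        p_prev = p_now
        continue
    rate = (p_now - p_prev) / (DT / 60.0)
    print(f"{time.ctime()}  P = {p_now:8.3f} torr   dP/dt = {rate:+7.3f} torr/min")
    p_prev = p_now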

Close up photos of the EY and IY chambers may be found here.


Update on the display manager of c1vac: I was able to get it working again by running sudo systemctl restart display-manager. Now I can interact with the MEDM screens on c1vac. It is a bit annoying that this machine doesn't have the users directory so I don't have access to the many convenient StripTool templates though - maybe I'll make local copies tomorrow for the pumpdown.

Attachment 1: pumpdownPres.png
14636 | Fri May 24 11:47:15 2019 | gautam | Update | VAC | IFO is almost at nominal vacuum

[chub, gautam]

Overnight, the pressure of the main volume only rose by 10 mtorr, so there was no need to run the roughing pumps again. So we went straight to the turbos - hooked up the AUX drypump and set it up to back TP2. Initially, we tried having both TP2 and TP3 act as backing pumps for TP1, but the current of the wimpy TP3 kept exceeding the interlock threshold. So we decided to pump down with TP3 valved off, only TP2 backing TP1. This went smoothly - we had to keep an eye on P2 to make sure it stayed below 1 torr. It took ~1 hour to go from 500 mtorr to 100 mtorr, but after that, I could almost immediately open up RV2 completely. A safe setting to run at seems to be to have RV2 open by between 0.5 and 1 turn (out of the full range of 7 turns) until the pressure drops to ~100 mtorr. Then we can crank it open. We are, at the time of writing, at ~8e-5 torr and the pressure is coming down steadily.

I had to manually clear the IG error on the CC1 gauge, and re-enabled the High Voltage, so that we have a readback of the main volume pressure in that range. I made a script to do this (enable the HV, the IG error still has to be cleared by pushing the appropriate buttons on the Hornet), it lives at /opt/target/python/serial/turnHornetON.py. I guess it'll take a few days to hit 8e-6 torr, but I don't see any reason to not leave the turbos running over the weekend.
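For the record, a very rough sketch of how such a pyserial-based script can be structured (the serial device node, baud rate, and the Hornet command string below are placeholders, not the real protocol - consult the gauge manual or the actual script at the path above):

# Send an "enable high voltage" command to the Hornet gauge and print the reply.
# All device-specific strings here are placeholders.
import serial

PORT = "/dev/ttyUSB0"                                          # placeholder device node
CMD_HV_ON = b"<HV-enable command from the Hornet manual>\r"    # placeholder command

with serial.Serial(PORT, baudrate=19200, timeout=1.0) as gauge:   # baud rate assumed
    gauge.write(CMD_HV_ON)
    print("Hornet replied:", gauge.readline())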

Remaining tasks are (i) disconnect the roughing pump line and (ii) pump down the annuli, which will be done later today. Both were done at ~2pm, now we are in the vacuum normal config. I'll turn the two small turbos to run on "Standby Mode" before I head home today. I think TP3 may be close to end-of-life - the TP3 current went up to 1A even while evacuating the small volume of the annular line (which was already at 1 torr) with the AUX drypump backing it. The interlock condition is set to trip at 1.2A, and this pump is nominally supposed to be able to back TP1 during the pumpdown of the main volume from 500 mtorr, which it wasn't able to do.

Attachment 1: pumpdown_20190524.png
14637 | Fri May 24 17:50:19 2019 | gautam | Update | IOO | IFO recovery

At ~4pm, the main volume pressure (CC1) was reported to be ~5e-5 torr. So I replaced the HR mirror in the MC REFL path with the usual 10% beamsplitter, and aligned the beam onto MCREFL photodiode. I also replaced the ND filter on the AS port camera, and in front of the IPPOS QPD.

Then I turned up the power by HWP rotation - at the input to the IMC, I now measured 960 mW with the Coherent power meter, so the NPRO power has certainly decayed by ~10% from 2018 July. Normal high-power IMC autolocker script was re-enabled on megatron (and the slow servo enable threshold raised from 1000 cts to 8000cts). IMC was readily locked, after some hand alignment, I got a maximum of 14500 cts transmission. I was then able to lock the Y-arm. The dither alignment servo did not work with the nominal settings, but by hand alignment, I was able to get TRY up to 0.6 (I didn't try too hard to optimize this in any systematic way). X arm was also locked.

AUX drypump valved off and shutdown at ~610pm. I also switched both TP2 and TP3 to their lower rotation "standby" mode. So overall no major mishaps this time around. I am leaving the PSL shutter open over the long weekend. For in-air vs vacuum suspension spectra comparison, I kicked the ETMY optic at Fri May 24 18:26:10 PDT 2019.

14640 | Mon May 27 11:37:13 2019 | gautam | Update | VAC | c1vac is unresponsive

I've been monitoring the status of the pumpdown remotely with ndscope lookbacks of C1:Vac-CC1_pressure. Today morning, I saw that the channel was putting out a constant value (signature of EPICS server being frozen). caget did not work either. Then I tried ssh-ing into c1vac to see if there were any issues but I was unable to. The machine isn't responding to ping either. The EPICS value has been frozen since ~1030pm PDT 26 May 2019.

I will try and head to campus later today to check on it. Isn't an email alert or something supposed to be sent out in such an event?
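Something like the following could serve as that kind of alert (a minimal sketch, assuming pyepics and a working local mail relay on a machine independent of c1vac; addresses and thresholds are placeholders):

# Email if C1:Vac-CC1_pressure stops updating for more than an hour.
import time
import smtplib
from email.message import EmailMessage
import epics

CHAN = "C1:Vac-CC1_pressure"
POLL, FROZEN_AFTER = 600, 3600          # seconds

last_val, last_change = epics.caget(CHAN), time.time()
while True:
    time.sleep(POLL)
    val = epics.caget(CHAN, timeout=5.0)
    if val is not None and val != last_val:
        last_val, last_change = val, time.time()
    elif time.time() - last_change > FROZEN_AFTER:
        msg = EmailMessage()
        msg["Subject"] = f"{CHAN} frozen at {last_val}"
        msg["From"], msg["To"] = "vac-watchdog@example.org", "fortym@example.org"   # placeholders
        msg.set_content("EPICS readback unchanged for an hour -- check c1vac.")
        with smtplib.SMTP("localhost") as s:
            s.send_message(msg)
        last_change = time.time()        # rearm so we don't spam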

14641 | Tue May 28 09:51:33 2019 | gautam | Update | VAC | c1vac hard-rebooted

The vacuum itself was fine - CC1 gauge reported a pressure of 1.3e-5 torr. Note to self: the C1:Vac-CC1_HORNET_PRESSURE channel, which is the analog readback of the Hornet gauge and which is hooked up to an Acromag ADC in the c1auxex chassis, is independent of the status of the c1vac machine, and so can serve as a diagnostic.

However, I was unable to interact with c1vac in any way, the monitor hooked up directly to it was showing a frozen display. So I hard-rebooted the system. It took a few minutes to come back online - but even after 10 minutes of waiting, still no display. In the process of the reboot, several valves were closed off - when the EPICS processes restart, there are momentary instances where the readback channels get an "undefined" value, which prompts the main interlock process to transition to a "SAFE" state. 

Running df -h, I saw that the /var partition was completely full. Maybe this was somehow interfering with the machine running smoothly? Two files in particular, daemon.log and daemon.log.1 were ~1GB each. The contents of these files seemed to be just the readbacks for the caget and caput commands. So I cleared both these files, and now the /var partition usage is only 26%. I also got the display back up and running on the physical monitor hooked up to the c1vac machine's VGA port. Let's see if this has improved the stability situation. The CPU load is still high (~6-7), with most of this coming from the modbus process. Why is this so high? c1susaux has more Acromag units but claims a much lower load of 0.71. Is the CPU of the c1vac machine somehow inferior?

In the meantime, I ssh-ed into c1vac and restored the "Vacuum normal" valve config. During this little escapade, the main volume pressure rose to ~6e-5 torr. It's coming back down smoothly.


Unrelated to this work: we had turned the RGA off for the vent, I powered it back on and re-initialized it this morning.

Attachment 1: Screen_Shot_2019-05-31_at_12.44.54_PM.png
14642 | Tue May 28 17:41:13 2019 | gautam | Update | General | IFO status

[chub, gautam]

Today, we tried to resuscitate the c1iscaux2 channels by swapping the existing, failed VME crate with the newly freed up crate from c1susaux. In summary, the crate gets power, and the EPICS server gets started, but I am unable to switch the whitening gain on the whitening boards. I believe that this has to do with the FAIL LEDs that are on for the XVME-220 units. We were careful to preserve the location of the various cards in the VME crates during the swap. Rather than do detailed debugging with custom RJ45 cables and terminal emulators, I think we should just focus the efforts on getting the Acromag system up and running.

Our work must have bumped a cable to the c1lsc expansion chassis in the same rack - the c1lsc FE had crashed. I rebooted it using the script - everything came back gracefully.

Attachment 1: IMG_7444.JPG
14643 | Wed May 29 18:13:25 2019 | gautam | Update | ALS | Fiber beam-splitters are now PM

To maintain PM fibers all the way through to the photodiode, I had ordered some PM versions of the 50/50 fiber beamsplitters from AFW technologies. They arrived some days ago, and today I installed them in the BeatMouth. Before installation, I checked that the ends of the fibers were clean with the fiber microscope. I also did a little cleanup of the NW corner of the PSL table, where the 1um MZ setup was completely disassembled. We now have 4 non-PM fiber beamsplitters which may be useful for non-polarization-sensitive applications - they are stored in the glass-door cabinet slightly east of the IY chamber along the Y arm, together with all the other fiber-related hardware.

Anjali had changed the coupling of the beam to the slow axis for her experiment but I ordered beamsplitters which have the slow axis blocked (because that was the original config). I need to revert to this config, and then make a measurement of the ALS noise - if things look good, I'll also patch up the Y arm ALS. We made several changes to the proposed timeline for the summer but I'd like to see this ALS thing through to the end while I still have some momentum before embarking on the BHD project. More to follow later in the eve.

Quote:

Get a fiber BS that is capable of maintaining the beam polarization all the way through to the beat photodiode. I've asked AFW technologies (the company that made our existing fiber BS parts) if they supply such a device, and Andrew is looking into a similar component from Thorlabs.

14645 | Fri May 31 15:55:16 2019 | gautam | Update | ALS | PSL + X beat restored

Coupling into the fast axis of the fiber:

The PM couplers I bought require that the light is coupled to the fast axis. The Thorlabs part that Andrew ordered, and which Anjali was using for the MZ experiment, was the opposite configuration, and so the input coupler K6XS mount was rotated to accommodate this polarization. The HWP was also rotated to cut the power into the fiber. I undid these changes. Mode-matching is ~65% (2.42mW/3.70mW) which isn't stellar, but good enough. The PER is ~15dB (ratio of power in fast axis to slow axis is ~40), which I verified using another collimator at the output, and a PBS + two photodiodes. Again isn't stellar but good enough.

EX laser temperature adjustment:

Rana adjusted the temperature of the main laser to 30.61 C. According to the calibration, the EX laser temperature needed to be ~32.8 C. It was ~31.2 C. I made the change by rotating the dial on the front panel of the EX laser controller. Fine adjustment was done using the temperature slider on the ALS screen. With an offset of ~+610 counts, I found a beat at ~80 MHz.

First look at PM beamsplitters:

From my initial test, the beat amplitude was stable against my moving the fibers around. The NF1611 DC monitor reports 2.6 V DC with only the EX light, and 3.15 V DC with only the PSL light, so I should probably cut the PSL power a little to improve the contrast. Assuming the 10 kohm DC transimpedance spec can be believed, the expected signal level is 4*sqrt(260uA * 315uA)*700V/A ~ 0.8 Vpp, and I see ~0.9 Vpp, so roughly things add up (this is actually more consistent with an RF transimpedance of 800 V/A, which is maybe not unreasonable). The RF amp for routing this signal to the delay line has been borrowed for the 2um frequency noise experiment - I will reacquire it today and check the ALS noise performance.
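For reference, the expected beat amplitude quoted above follows from the two DC photocurrents and the assumed RF transimpedance Z_RF, assuming perfect mode overlap of the two fields:

V_{\mathrm{pp}} = 4\sqrt{I_{\mathrm{PSL}}\, I_{\mathrm{EX}}}\; Z_{\mathrm{RF}} \approx 4\sqrt{315\,\mu\mathrm{A} \times 260\,\mu\mathrm{A}} \times 700\,\mathrm{V/A} \approx 0.8\,\mathrm{V}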

So overall, I am happy with the performance of the current iteration of the BeatMouth.

14647 | Mon Jun 3 16:46:31 2019 | gautam | Update | IOO | IMC not locking

Since ~ 2 hours ago, the IMC autolocker has not been able to keep the IMC locked. I don't see any obvious trends in the wall StripTool that may point to what's going on. For the brief periods in which a TEM00 mode is locked, the PC Drive RMS level is ~5x what the nominal level is, and while the autolocker is trying to lock the IMC, the PC drive RMS level is hovering around 4V DC, which is high. The PMC Error and Control signal spectra show huge 60 Hz (and harmonics) peaks, and indeed this is visible in the time domain signals as well (on ndscope or on the oscilloscope on the PSL table), but this is not a new feature in the last two hours. Usually, this kind of problem signals that either/both the c1psl or c1iool0 slow machines need to be power-cycled, but I confirmed that both machines are online and telnet-able. Possibilities: (i) some card in the c1psl / c1ioo crates have failed or (ii) something in the MC/FSS electronics chain has failed or (iii) there is a huge amount of excess high-frequency noise from the NPRO.

I am leaving the PSL shutter closed.

Attachment 1: PCdrive_RMS.png
14652 | Tue Jun 4 00:17:15 2019 | gautam | Update | BHD | Preliminary BHD calculations

Summary:

Attachment #1 shows the RIN and phase noise requirements for the 40m BHD for measuring Ponderomotive squeezing.

Some details:

  1. The interferometer topology is not systematically optimized - I just picked values which are likely close to what we will eventually choose. Namely, P_{\mathrm{PRM}} = 8\,\mathrm{W}, \phi_{\mathrm{SRC}} = 0.275^{\circ}, \zeta_{\mathrm{homodyne}} = 88^{\circ}, \mathcal{L}_{\mathrm{rt}}^{\mathrm{arm}} = 30\,\mathrm{ppm}, G_{\mathrm{PRC}} \approx 40. Nevertheless, I think these requirements will not change by more than 30% for changes to the interferometer config.
  2. The requirements are evaluated using the following criterion: assuming that the dominant noises are (i) coil driver at mid-frequencies and (ii) quantum noise at high frequencies, what do the RIN and phase noise on the LO have to be such that the equivalent displacement noise is a factor of 10 below? I opted for a safety factor of 10, this can be relaxed. 
  3. An unknown is how much contrast defect light we will end up having due to the mismatch between arms. I assumed a few representative values.
  4. The calculations were done analytically. This paper provides a good summary of the relations - although my RIN requirement is more stringent because of the safety factor of 10, and phase noise requirement is less stringent (despite the same safety factor) because we plan to read out at nearly the amplitude quadrature.
  5. Since we are discussing the possibility of delivering the LO field using a fiber-coupled pickoff of the laser prior to RF sidebands being added, these requirements do not benefit from passive filtering from the cavity transfer functions. Consequently, the requirements are pretty challenging I think.

Conclusions:

  1. The RIN requirement looks very challenging - we will need a shot noise limited ISS with 100 mW DC sensing light, and will likely have to relax the safety factor depending on how much contrast defect light we end up having. This actually sets some requirement on the amount of filtering we need from the OMC (next step).
  2. The phase noise requirement also looks very challenging - I need to look up what is possible with the double-pass through fiber technique.

Next steps:

  1. Evaluate the pointing stability requirement on the LO field (IFO output is filtered by the OMC).
  2. We still need to think of a control scheme for the LO phase - likely, I think we will need a suspended optic between the fiber collimator delivering the light to the BHD setup with some kind of length actuation capability. 
  3. Numerical validation of this analytic study. I believe Finesse is still missing some capabilities that allow us to calculate these couplings, but I'll ask the experts to be sure.
  4. Build up the requirements on the OMC cavity:
    • Backscatter requirement (related = OFI isolation requirement, relative length noise between SRM and OMC, OFI and SRM). Does the OFI also have to be suspended?
    • Filtering requirement
    • Pointing stability requirement
    • Length noise requirement 
Attachment 1: LOreqs.pdf
14653 | Tue Jun 4 10:56:31 2019 | gautam | Update | IOO | IMC diagnostics

I briefly managed to lock the IMC today - it stayed locked for ~10 minutes. Attachment #1 shows spectra of a few error and control signals for today's lock, and from a stretch yesterday before the problems surfaced*. The 60 Hz lines are much bigger, and MC_F shows broadband excess noise above a few Hz. I suspect a problem somewhere in the electronics.

*I confess the comparison isn't entirely valid because I had to tweak the FSS FAST gain from its nominal value of 22 to 25 in order to get the PC drive RMS down to the ~1.5V level. At the nominal gain setting, with the laser frequency locked to the cavity length, the PC Drive RMS was ~4 V. Still, indicative of something being off in the electronics.

Attachment 1: IMCdiag.pdf
14655 | Tue Jun 4 23:41:13 2019 | gautam | Update | Cameras | Steps to interact with GigE

caget/caput probably does the job.

Quote:

Still not sure about how to modify the exposure time (other than using the pylon app, the only technique I know so far is to adjust the exposure manually on the medm screen and then run the scripts as described in the previous elog). 

14658 | Thu Jun 6 18:49:22 2019 | gautam | Update | BHD | Preliminary BHD calculations

Summary:

I did some more calculations based on our discussions at the meeting yesterday. Posting preliminary results here for comments.

Details:

Attachment #1 - Schematic illustration for the scattering scenarios. For all three scenarios, we would like for the scattered field to be lower than unsqueezed vacuum (safety factor to be debated).

Attachment #2 - Requirements on a fraction \epsilon_{\mathrm{bs}} = 10 \, \mathrm{ppm} of the counter-propagating resonant mode of the OMC scattering back into the antisymmetric port, as a function of RIN and phase noise on this field (y-axis) and amount of field (depends on the amount of contrast defect light which can become resonant in the counter propagating mode). I don't encode any frequency dependence here.

Attachment #3 - Requirements on the direct scatter from the arm cavity resonant field (assumed to dominate any contribution from the PRC) onto the OMC DCPDs, for some assumed phase noise (y-axis) and fraction of the field that makes it onto the OMC DCPDs. This is a pretty stringent requirement. But the probability is low (it is the product of three presumably small numbers, (i) probability of the beam scattering out of the TEM00 mode, (ii) BRDF of the scattering surface, (iii) probability of scattering back towards the DCPDs), so maybe feasible? I didn't model any RIN on this field, which would be an additional noise term to contend with. The range of the y-axis was chosen because I think these are reasonable amplitudes for chamber wall / other scattering surface motion at acoustic frequencies.

Attachment 1: darkPortScatter.pdf
Attachment 2: OMCbackscatter.pdf
Attachment 3: directScatter.pdf
14687 | Sun Jun 23 08:09:53 2019 | gautam | Update | IOO | NPRO diagnostics

Summary:

Over the last few days, I've been doing some (complementary) measurements to what Aaron and Koji have been looking at. The motivation was to identify if the problems we are seeing are optical (i.e. imprinted on the PSL light) or electronic. My findings:

  1. 60 Hz line noise in PMC REFL and PMC TRANS is heavily dependent on whether I connect cables between the measuring PDs and the Acromag ADC or not - but even with the Acromag cable disconnected, the 60 Hz RIN is HUGE - 10 mVpp out of 670 mV DC, and the lines are much dirtier if you have connections to the SLOW ADCs. The measurement was made by looking at the time-domain signals on a battery powered Tektronix oscilloscope. See Attachment #1. I believe this line noise is higher than it was. The cause is unknown to me at this point.
  2. The NPRO noise eater seems to function as advertised. The measured RIN with the noise eater enabled (our nominal operating condition) is in line with what the manual tells us it should be. See Attachment #2.
  3. There isn't strong evidence of excess frequency noise (measured with a PLL) out to 100 kHz. I haven't measured the high-frequency part yet, but maybe I'm doing something wrong with the PLL setup, which should be corrected first. See Attachments #3, #4.
  4. The beat note frequency between the free-running PSL and EX NPROs is definitely slewing more than the quadrature sum of the two advertised 1 MHz/min slew rates from the manual.

Evidence:

Attachment #1: Time domain look at PMC Refl and Trans signals under various operating conditions. During this work, I took the chance to remove ~4 BNC T connectors that were connected on the PMC TRANS photodiode (Thorlabs). Now, there is one cable going to the Acromag ADC, and one going to the Oscilloscope used to monitor these signals. Any further T-ing can be done at the oscilloscope.

Attachment #2: RIN measurement of the NPRO light. I opted to place a Thorlabs PDA55 in the IR ALS pickoff light path. This is before the light sees the PMC. A DC block was inserted between the PDA55 and the AG4395 used to make the measurement. The DC level of the PD output was 3.1 V into high-Z, and I used half this value to normalize the measurement made by the 50-ohm-input AG4395 into RIN units. The measurement was made with the PZT and slow temperature controls to the NPRO connected/disconnected, but I saw no significant difference.
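The conversion of the analyzer output into RIN was done along these lines (a sketch, not the actual script; the file name is a placeholder):

import numpy as np

# The AG4395A input is 50 ohm, so the PD DC level it sees is half of the
# high-Z value quoted above.
V_dc = 3.1 / 2.0                                     # effective DC level at the analyzer [V]

f, asd = np.loadtxt("ag4395_rin.txt", unpack=True)   # [Hz], [V/rtHz]
rin = asd / V_dc                                     # RIN [1/rtHz]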

Attachment #3: Frequency noise measurement via PLL. This shows the loop transfer function for the PLL. Some details of the setup:

  • The beat note for locking the PLL was made between the PSL NPRO and the EX NPRO (output of the IR ALS BeatMouth). ~4dBm beatnote.
  • Local oscillator was sourced by a Marconi, f_carrier=33 MHz, RF level = +10dBm.
  • Level 7 Mixer and LB1005 controller from the mode-spectroscopy PLL setup.
  • PLL control signal routed to EX NPRO PZT via Heliax cable running along south arm. 
  • Why EX and not the PSL or Marconi FM? The latter has limited range, ~1/10th of that offered by the NPRO PZT, and the PSL PZT has a 2.9 Hz corner-frequency Pomona box in its path. I could disconnect this for the purpose of PLL locking, but I thought it might be interesting to see if there are any hints of the problem being electrical by looking at PLL spectra with / without the Pomona box. The expected delay due to cabling is only ~400 ns, so not really a limiting factor for the PLL bandwidth.
  • LB 1005 settings:
    • PI corner = 3 kHz.
    • G = 2.30 (I could not increase this further - with the PSL+Lightwave NPRO PLL we could achieve a UGF of ~60 kHz, but in this setup I can't do much better than ~7 kHz before the loop starts oscillating. I am not sure whether the fact that the PZT actuation coefficient of the Innolight is ~5x lower than that of the Lightwave is enough to explain this).
    • LFGL = 90 dB.
  • Mixer output had a maximum value of 800 mVpp => PLL discriminant is 400 mV/rad.
  • The "eye fit" is just the transfer function of two poles at DC (one for the frequency-to-phase conversion in the PLL and one for the LB1005 integrator) and a zero at 3 kHz (the PI corner). I scaled the gain until the "fit" and measurement lined up, and then used this model to undo the loop suppression of the error signal to extract the frequency noise, without worrying about the limited frequency vector of the measurement (a sketch of this procedure follows this list).
  • Once again, slow temperature control and PZT controls to the PSL NPRO were disconnected so this measurement was made with two free-running NPROs.
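The "eye fit" model and the de-suppression of the error signal were done along these lines (a sketch with made-up variable names and a placeholder spectrum, not the actual analysis script):

import numpy as np

f = np.logspace(1, 5, 1000)            # Hz
f_z = 3e3                              # LB1005 PI corner [Hz]
K = 1.0                                # overall gain, scaled by hand until the model overlays the measured OLTF

# two poles at DC (frequency-to-phase conversion + LB1005 integrator), one zero at the PI corner
G = K * (1 + 1j * f / f_z) / (1j * f)**2

# undo the loop suppression: error point [V/rtHz] -> phase [rad/rtHz] -> frequency [Hz/rtHz]
V_err = np.ones_like(f) * 1e-6         # placeholder for the measured error point spectrum
K_phi = 0.4                            # PLL discriminant [V/rad]
nu_free = (V_err / K_phi) * np.abs(1 + G) * f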

Attachment #4: Frequency noise measurement via PLL. This shows the frequency noise. I've overlaid the expected frequency noise between two free-running NPROs; the model used is given in the text box in the plot. There isn't strong evidence of excess high frequency noise in this measurement. The fact that the "LB 1005 input terminated" trace is below all the others supports the hypothesis that I'm measuring real frequency noise. The bump around a few kHz could indicate some gain peaking?

However, I'm unable to find good agreement between the frequency noise measured at the error point and at the control point. For the former, I used the PLL discriminant of 400 mV/rad mentioned above and undid the loop suppression; for the latter I used a PZT discriminant of 1.7 MHz/V. There is still a constant scale difference between these two traces, so I'm presumably doing something wrong.
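For bookkeeping, the consistency check I am attempting is roughly (my notation, a sketch rather than the exact analysis): well within the PLL bandwidth the two estimates of the free-running frequency noise should agree,

\nu(f) \approx \frac{f\,|1+G(f)|}{K_{\phi}}\,V_{\mathrm{err}}(f) \approx K_{\mathrm{PZT}}\,V_{\mathrm{ctrl}}(f),

with K_{\phi} = 0.4 V/rad and K_{\mathrm{PZT}} = 1.7 MHz/V, so a frequency-independent offset between the two traces presumably points at one of these two calibration numbers (or a stray factor of 2\pi in the phase-to-frequency conversion).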

Next steps:

  1. More interpretation of the PLL measurement results required.
  2. Measure the PLL error signal spectrum to higher frequencies using the AG4395. 
  3. ???

I've not disturbed the PLL setup in case anyone else wants to repeat these measurements, but I have restored the normal electrical connections to the PSL PZT and temperature control.

Some other activity:

  1. Alignment into the PMC was tweaked.
  2. NPRO laser pump current was increased from 1.9 A to 2.0 A.
  3. PMC servo gain was changed from +18 to +17 to prevent the servo from oscillating.
Attachment 1: consolidatedOscopeScreenCaps.pdf
consolidatedOscopeScreenCaps.pdf
Attachment 2: RINcomp.pdf
RINcomp.pdf
Attachment 3: PLL_OLTF.pdf
PLL_OLTF.pdf
Attachment 4: PLLnoise.pdf
PLLnoise.pdf
  14688   Sun Jun 23 09:36:32 2019 gautamUpdateIOOIMC is locking normally again

After typing up the elog, I decided to try locking the IMC again - now it locks again with the "OLD" gain settings. I tested it ~5 times; each time the autolocker brought the lock back and the PC drive levels were normal. IMC transmission and MC REFL DC light levels in lock are normal. The PC Drive RMS voltage is <1V. What's more, there is no longer any evidence of 60 Hz line harmonics in the PMC diagnostics channels. Compare Attachment #1 to this elog.

WTF.

I undid the changes Koji made to the autolocker gains, and am trying the old settings again. Let's see how stable or otherwise the config is. I must've jiggled some poor cable connection back into a good spot while working on the PSL table?

Anyway, this helps Kruthi and Milind.

Attachment 1: PMCdiag.pdf
PMCdiag.pdf
  14690   Mon Jun 24 08:12:10 2019 gautamUpdateIOOIMC is locking normally again

Over the last 24 hours, the IMC autolocker was able to keep the MC locked ~60% of the time. This is not particularly good, but is an improvement on ~2 weeks ago when the IMC couldn't be locked.
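The ~60% number was estimated from the MC transmission record; a sketch of the kind of calculation (the channel name, NDS server and lock threshold are assumptions on my part):

import numpy as np
import nds2    # LIGO nds2-client python bindings

conn = nds2.connection('fb', 8088)            # 40m framebuilder (assumed)
t0, t1 = 1245300000, 1245386400               # placeholder GPS times spanning ~24 hours
buf = conn.fetch(t0, t1, ['C1:IOO-MC_TRANS_SUM'])[0]
locked = buf.data > 1.0e4                     # threshold separating locked / unlocked (assumed)
print("IMC duty cycle: %.1f %%" % (100.0 * locked.mean()))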

There are two periods, which I've indicated by vertical cursors, between which the autolocker was doing something strange - usually this kind of trend is caused by one or more of the VME crates being unresponsive and the autolocker gets stuck, but I confirmed that both c1psl and c1iool0 are telnet-able. So I conclude that the stability and reliability of the IMC loop is still not as good as it used to be.

Note also that while the PC drive RMS level mostly hovers around 1 V, there are several excursions above that level. This in itself isn't a new phenomenon. I will do some more characterization by measuring the in-loop error signal spectrum and maybe the OLTF of the IMC locking loop.

Quote:
 

Let's see how stable or otherwise the config is. I must've jiggled some poor cable connection back into a good spot while working on the PSL table? Or the NPRO decided to be less noisy on Sunday.

Attachment 1: IMCdutycycle.png
IMCdutycycle.png
  14691   Mon Jun 24 11:48:35 2019 gautamUpdateIOOIMC in-loop error spectra and OLTF

Attachment #1 - In loop error spectra, measured as Koji posted end of last week.

  • Main difference is that the line noise seems much lower.
  • For the "dark" measurement, I set the IN1 gain of the servo board to the value of +4 dB, which is what it is in lock.
  • As Koji mentioned, this isn't an apple-to-apple comparison as the IMC loop will squish the plotted orange trace.
  • Nevertheless, the fact that the blue trace is above orange everywhere gives confidence that we are in fact measuring frequency noise.
  • For the higher frequency measurement, I used the AG4395 analyzer, which has a 50 ohm input impedance; to get these measurements to line up with those from the SR785, I multiplied the AG4395 data by a factor of 2.
  • For calibrating the error signal spectra into frequency noise, I used the value of 13 kHz/V for the PDH discriminant, which is what I measured it to be last year (but I didn't check it again today). A sketch of this calibration is given after this list.
  • Note that the IMC locking loop OLTF has not been undone, so this isn't the actual laser frequency noise on the transmitted beam. In order to measure the latter, we'd have to use (for example) an arm cavity as an analyzer.
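The calibration referenced above, as a sketch (file names are placeholders; the numbers are the ones quoted in the list):

import numpy as np

f_sr, v_sr = np.loadtxt("sr785_err.txt", unpack=True)      # [Hz], [V/rtHz], high-Z input
f_ag, v_ag = np.loadtxt("ag4395_err.txt", unpack=True)     # [Hz], [V/rtHz], 50 ohm input

v_ag *= 2.0                   # undo the factor-of-2 loading by the 50 ohm input
nu_sr = v_sr * 13e3           # PDH discriminant 13 kHz/V -> Hz/rtHz
nu_ag = v_ag * 13e3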

Attachment #2 - OLTF of the IMC loop.

  • The measurement was made using the IN1/IN2 method; the injection was done at the "A EXC" front panel BNC input (a sketch of the reduction to an OLTF is given after this list).
  • For comparison, I've overlaid a measurement from the 2017 IMC loop investigations. Doesn't seem to be significantly different.
  • UGF and phase margin are in the ballpark of what they were reported to be in the past.
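The reduction referenced above, as a sketch (the file name is a placeholder, and which monitor signal goes in the numerator, as well as the overall sign, depends on exactly where the excitation is summed in relative to the two monitor points, so treat this as indicative only):

import numpy as np

f, re1, im1, re2, im2 = np.loadtxt("imc_in1_in2.txt", unpack=True)
in_a = re1 + 1j * im1         # first monitor point
in_b = re2 + 1j * im2         # second monitor point
oltf = in_a / in_b            # ratio of the two monitor signals with the excitation running
mag_db = 20 * np.log10(np.abs(oltf))
phase_deg = np.angle(oltf, deg=True)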

Attachment #3 - Photo courtesy Koji showing the bank of BNC connectors used for these measurements.

Clearly, these measurements were taken at a time when the IMC was "well behaved". How do we characterize what's happening when this isn't the case?

Attachment 1: IMCfreqNoise.pdf
IMCfreqNoise.pdf
Attachment 2: IMC_OLTF.pdf
IMC_OLTF.pdf
Attachment 3: IMC_CMboard.jpg
IMC_CMboard.jpg
  14696   Tue Jun 25 15:32:16 2019 gautamUpdateIOOPMC and IMC locked again, some MEDM maintenance

Aaron complained to me earlier that the PMC could not be locked. It turned out to be a classic sticky slider problem: I keyed the c1psl VME crate and did the usual burtrestore trick. After that, I could immediately lock the PMC and IMC with the nominal gain settings.

I also looked at the wiring at the rack. An SLP250 was installed at the mixer IF output, in parallel with a 50 ohm terminator to ground. I removed this, because as Aaron pointed out, the PMC servo card "FP1 TEST" input is already 50 ohm, and has two cascaded LC filter stages immediately after to filter out the 2f component, so the extra low-pass filtering is superfluous (in any case, 250 MHz is much too high a cutoff to be using for cutting out the 2f component which will be at ~70 MHz).

Finally, in the last ~2 weeks, we have been running with a PMC servo gain of +17 (as opposed to +18 from before). The old gain is too high, as noted by Milind. But the MEDM field for this gain goes RED unless the gain is +18. I adjusted the value of the C1:PSL-STAT_PMC_NOM_GAIN channel to +17, so that this is no longer the case. I also edited the PMC MEDM screen to get rid of my comment that the "SLOW ADC IS DEAD" for the PMC TRANS field, since I have now hooked up the PMC trans photodiode to our temporary Acromag box.
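For the record, this was just a write to the existing EPICS channel, e.g. (sketch; equivalently caput C1:PSL-STAT_PMC_NOM_GAIN 17 from the shell):

from epics import caput   # pyepics

caput("C1:PSL-STAT_PMC_NOM_GAIN", 17)   # the MEDM nominal-gain field now accepts +17 without going RED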

Attachment 1: PMCctrl.png
PMCctrl.png
  14703   Wed Jun 26 20:45:03 2019 gautamUpdateCamerasField of view options

For the beam spot position tracking, I am wondering if there is any benefit to going for a wider field of view and getting the OSEMs in the frame? They may provide some "anchor points" against which an algorithm could calibrate the spot position. But there are also several point scatterers visible in the current view, and perhaps the Gaussian beam profile moving over them, and tracking the scattered intensity from these point scatterers, serves the same function? I don't know of a good solution for a "switchable" field of view configuration in the already cramped camera enclosure though.

Also, I think it may be useful to have a cron job take a picture of MC2 and archive it (once a week? or daily?) to have some long term diagnostic of how the scattered light received by the camera changes over several months.
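A sketch of what such a job could call (paths and archive location are placeholders, and in practice the existing GigE scripts could probably be reused; the script itself would be run from a crontab entry on the camera-server machine, e.g. something like 0 8 * * 1 python archive_MC2.py for a weekly snapshot):

import time
import numpy as np
from pypylon import pylon     # Basler pylon python bindings

# Grab a single frame from the first camera pylon finds and archive it with a date stamp.
cam = pylon.InstantCamera(pylon.TlFactory.GetInstance().CreateFirstDevice())
cam.Open()
res = cam.GrabOne(1000)                            # 1 s timeout
if res.GrabSucceeded():
    img = res.Array                                # pixel values as a numpy array
    np.save(time.strftime("MC2_%Y%m%d.npy"), img)  # archive path is a placeholder
cam.Close()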

Quote:

The GigE is focused now and I have closed the lid. I'm attaching a picture of the MC2 beam spot, captured using GigE at an exposure time of 400µs

  14704   Wed Jun 26 21:01:26 2019 gautamUpdateLSCPOX and POY locking

Now that the IMC is remaining locked for extended periods of time, the next problem to attack is the ASS dither alignment system. As a start, I decided to try and get the POX and POY locking working again, as we have not fully recovered the interferometer alignment after the most recent pumpdown. I spent a couple of hours tweaking the alignment of the arm cavity mirrors, BS, and TTs to try and recover the maximum possible TRX and TRY - however, my best efforts only yielded TRX ~0.8, TRY ~0.75. Moreover, the beam axis is such that the spot is significantly off in YAW on both ETMs, as evidenced by the camera views (also true, but less obvious, on the ITMs). However, trying to bring the beam back to the center of the optics yields TRY and TRX values lower than the maxima reported above. The EX green beam is currently unavailable to verify the arm cavity alignment because of my hijacking of the EX NPRO's PZT control for PLL investigations, but with the Y arm I'm able to lock a TEM00 mode. Probably this just needs more careful, systematic alignment, but I'm not pursuing it tonight.

  14705   Thu Jun 27 14:28:12 2019 gautamUpdateLSCPOX and POY locking

After a more systematic alignment effort, I was able to get the spots better centered on the optics (judged by eye from the analog camera views). TRY ~0.7, TRX~1.15. The X-arm dither alignment system seems to work out-of-the-box with the existing settings, I was able to run it and maximize the X-arm transmission.

Other work: I also cleaned up the area around MC2 a little - the laptop sitting on top of the vacuum chamber was removed, and a rogue ethernet cable was also removed. This resulted in some misalignment of the IMC, which I corrected by manual alignment. Now the IMC is locked again with nominal transmission levels.

On the PSL table, I re-routed the RF output from the BeatMouth to the regular IR-ALS electronics chain (it had been hijacked for PLL investigations). At EX, I disconnected the cable running from the LB1005 to the EX NPRO laser PZT (again, it was being used for PLL locking), and re-connected the output from the green uPDH box to allow some ALS tests to be done. I could then lock the EX green beam to the X-arm, and achieved GTRX ~ 0.35 using the ASX system. More to follow on ALS tests later today.

  14716   Mon Jul 1 20:27:44 2019 gautamUpdateASCASX tuning

Summary:

To practise the dither alignment servo tuning, I decided to make the ASX system work again (mainly because it has fewer DoFs, so I thought it'd be easier to manage). The setup is: dither the PZT mirrors on the EX table --> demodulate the green transmission at the dither frequencies --> servo the error signals to 0 with an integrator.

Details:

  1. Started by checking the dither lines are showing up with good SNR in GTRX. They are, see Attachment #1. The dither lines are at 18.23 Hz, 27.13 Hz, 53.49 Hz and 41.68 Hz, and all of them show up with SNR ~100.
  2. Hand-aligned the beam until I got a maximum of GTRX ~ 0.35. This is lower than the usual ~0.5 I am used to - possibilities are (i) in the process of plugging the BNC cable into the rear of the EX laser for my PLL investigations, I disturbed the alignment into the SHG crystal ever so slightly, so I now have less green light going into the cavity, or (ii) the green beam is getting clipped on an iris on the EX table just before it goes into the vacuum. IIRC, I had centered the GTRX camera view such that the spot was well centered in the field of view, but now I see substantial mis-centering in pitch, so the cavity alignment for IR could also be sub-optimal (although I saw TRX ~1.15). Anyway, I decided to push on.
  3. Introduced a deliberate offset in a given DoF, e.g. M1 PIT. Then I looked at the demodulated error signals (filtered through an RLP0.5 filter post demodulation, so the 2f component should be attenuated by at least 100 dB), and tuned the demod phase until most of the signal appeared in the I-phase, which is what is used for servoing (a sketch of this phase rotation is given after this list). The Q-phase signals were ~10x lower than their I-phase counterparts after the tuning.
  4. Checked the linearity of the error signal in response to misalignment of a given DoF. I judged it to be sufficiently linear for all four DoFs about the quadratic part of the GTRX variation.
  5. Tweaked the overall servo gains to have the error signals be driven to 0 in ~10 seconds.
  6. There was quite significant cross-coupling between the DoFs - why should this be? I can understand the PIT->YAW coupling because of imperfect mounting of the PZT mounted mirror in a rotational sense, but I don't really understand the M1->M2 coupling.
  7. Nevertheless, the servo appears to work - see Attachment #2.
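The demod phase rotation tuned in step 3, as an offline illustration (not the actual front-end code; the numbers are placeholders): rotate the demodulated I/Q pair until the response to a deliberate misalignment shows up in I only.

import numpy as np

def rotate_iq(I, Q, phi_deg):
    phi = np.deg2rad(phi_deg)
    I_rot = I * np.cos(phi) + Q * np.sin(phi)
    Q_rot = -I * np.sin(phi) + Q * np.cos(phi)
    return I_rot, Q_rot

# e.g. with a static offset applied to M1 PIT, pick the phase that minimizes |Q_rot|
I, Q = 0.8, 0.3                               # placeholder demodulated outputs
phis = np.linspace(-90, 90, 181)
best = min(phis, key=lambda p: abs(rotate_iq(I, Q, p)[1]))
print("demod phase that puts the signal in I:", best, "deg")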

The adjusted demod phases and servo gains were saved to the .snap file which gets called when we run the "DITHER ON" script. I also updated the StripTool template.

I plan to repeat similar characterization on the IR dither alignment servos. I think the tuning of the ASS settings can be done independently of figuring out the mystery of why the TRY level is so low.

Attachment 1: ASX_ditherlines.pdf
ASX_ditherlines.pdf
Attachment 2: ASX.png
ASX.png
  14718   Tue Jul 2 12:30:53 2019 gautamUpdateElectronicsAcromag crate switched to Sorensens

[chub, gautam]

We crossed off another couple of bullets today.

It took me ~1 hour to realize that c1susaux requires sudo /sbin/ifup eth0 to be run in order to see the martian network - why???

Activity:

  1. Stopped the c1susaux machine:
    • Moved alignment sliders of ITMX and ITMY to 0 as a precaution.
    • Shutdown the c1susaux machine so that it doesn't become unhappy with the missing Acromags when we power the unit down.
  2. Dialled down supply voltages on the +/- 15 V and +/- 20 V DC Sorensens. Current draw became 0 A on the front panel indicators.
  3. Chub tapped some new terminal blocks for +15 V DC and +20 V DC
    • This required some additional daisy chaining, which is why we dialled down the Sorensens.
    • New cables were made using the "standard" LIGO color scheme, which isn't really applicable in this case because we are using +15 V DC (orange sheath wire) and +20 V DC (yellow sheath wire), whereas the closest LIGO standard voltages are +18 V DC and +24 V DC.
    • A test cable, presumably meant to be used in the electronics area (orange for +15 V DC), was destroyed for this work as we opted for speed rather than making a new cable.
  4. Disconnected bench power supplies that were powering the Acromags, and connected the new cables.
    • I opted to use 5 A fuses in the terminal blocks for these supplies as the current draw is pretty significant.
  5. Dialled the Sorensens back up to the nominal voltages:
    • Attachment #1 shows the front panels of the Sorensens before and after this work.
    • The current limit on the +20 V DC Sorensen had to be raised, because the Acromag box draws ~2.3 A on its own, whereas the previous current draw was 2.8 A.
  6. Brought the c1susaux machine back online. Took me a while to get to the bottom of why I wasn't able to see c1susaux on the martian, but eventually, I figured out the whole sbin/ifup thingy. 

I don't understand the exact chain of causation, but during this work the fast c1sus model crashed. I had to go through a few iterations of the scripted vertex machine rebooting, but things seem to be back in a normal state now; see Attachment #2. I should probably run the IFO test suite to make sure everything is a-okay, but for now I am able to lock the IMC, so I'm moving on.

The main task remaining here is to take new pictures of everything and upload them to the wiki. We also need to update the Sorensen labels to reflect the current values; some of them are outdated.

Quote:
  • Take photos of the new setup, cabling.
  • Remove the old c1susaux crate from the rack to free up space, possibly put the PSL monitoring acromag chassis there.
  • Test that the OSEM PD whitening switching is working for all 8 vertex optics.(verified as of 5/3/19 5pm)
  • New 15V and 24V power cables with standard LIGO connectors need to be run from the Sorensen supplies in 1X5. The chassis is currently powered by bench supplies sitting on a cart behind the rack.
  • All 24 new DB-37 signal cables need to be labeled.
  • New 96-pin DIN connectors need to be put on two ribbon cables (1Y5_80 B, 1Y5_81) in the 1X4 rack. We had to break these connectors to remove them from the back of the eurocrates.
  • General cleanup of any cables, etc. left around the rack. We cleaned up most things this evening.
  • Rename the host computer c1susaux2 --> c1susaux, and update the DNS lookup tables on chiara.
Attachment 1: 1X5Sorensens.pdf
1X5Sorensens.pdf
Attachment 2: CDS_20190702.png
CDS_20190702.png