ID   Date   Author   Type   Category   Subject
  14566   Wed Apr 24 16:06:44 2019   gautam   Update   PSL   Innolight NPRO shutoff

After discussing with Koji, I turned the NPRO back on again at ~4PM local time. I first dialled the injection current down to 0 A, then turned the control unit back ON, and then ramped the power back up by turning the front panel dial. Lasing started at 0.5 A, and I saw no abrupt swings in the power (I used PMC REFL as a monitor; the dips in the power are mode flashes, and the x-axis is in units of time, not pump current). The PMC was relocked and the IMC autolocker locked the IMC almost immediately.

Now we wait and watch I guess.

Attachment 1: PMCrefl.png
  14567   Wed Apr 24 17:07:39 2019   gautam   Update   SUS   c1susaux in-situ testing [and future of IFOtest]

[jon, gautam]

For the in-situ test, I decided that we will use the physical SRM to test the c1susaux Acromag replacement crate functionality for all 8 optics (PRM, BS, ITMX, ITMY, SRM, MC1, MC2, MC3). To facilitate this, I moved the backplane connector of the SRM SUS PD whitening board from the P1 connector to P2, per Koji's mods, at ~5:10PM local time. The watchdog was shut down, and the backplane connector for the SRM coil driver board was also disconnected (this is now interfaced to the Acromag chassis).

I had to remove the backplane connector for the BS coil driver board in order to have access to the SRM backplane connector. Room in the back of these eurocrate boxes is tight in the existing config...

At ~6pm, I manually powered down c1susaux (as I did not know of any way to turn off the EPICS server run by the old VME crate in a software way). The point was to be able to easily interface with the MEDM screens. So the slow channels prefixed C1:SUS-* are now being served by the Supermicro called c1susaux2.

A critical wiring error was found. The channel mapping prepared by Johannes lists the watchdog enable BIO channels as "C1:SUS-<OPTIC>_<COIL>_ENABLE", which go to pins 23A-27A on the P1 connector, with returns on the corresponding C pins. However, we use the "TEST" inputs of the coil driver boards for sending in the FAST actuation signals. The correct BIO channels for switching this input are actually "C1:SUS-<OPTIC>_<COIL>_TEST", which go to pins 28A-32A on the P1 connector. For today's tests, I voted to fix this inside the Acromag crate for the SRM channels only, and do our tests. Chub will unfortunately have to fix the remaining 7 optics; see Attachment #1 for the corrections required. I apportion 70% of the blame to Johannes for the wrong channel assignment, and accept 30% for not checking it myself.

The good news: the tests for the SRM channels all passed!

  • Attachment #2: Output of Jon's testing code. My contribution is the colored logs courtesy of python's coloredlogs package, but this needs a bit more work - mainly the PASS message needs to be green. This test applies bias voltages to PIT/YAW, and looks for the response in the PDmon channels. It backs out the correct signs for the four PDs based on the PIT/YAW actuation matrix, and checks that the optic has moved "sufficiently" for the applied bias (a minimal sketch of this check is given after this list). You can also see that the PD signals move with consistent signs when PIT/YAW misalignment is applied. Additionally, the DC values of the PDMon channels reported by the Acromag system are close to what they were using the VME system. I propose calling the next iteration of IFOtest "Sherlock".
  • Attachment #3: Confirmation (via spectra) that the SRM OSEM PD whitening can still be switched even after my move of the signals from the P1 connector to the P2 connector. I don't have an explanation right now for the shape of the SIDE coil spectrum.
  • Attachment #4: Applied 100 cts (~ 100*10/2**15/2 ~ 15mV at the monitor point) offset at the bias input of the coil output filters on SRM (this is a fast channel). Looked for the response in the Coil Vmon channels (these are SLOW channels). The correct coil showed consistent response across all 5 channels.
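The sketch referenced in the Attachment #2 bullet above: a minimal version of the PIT/YAW bias vs. PDmon sign check, assuming EPICS access via pyepics. The channel names, sign convention, step size, and threshold are illustrative assumptions, not necessarily what the actual test code uses.

# Minimal sketch of the PIT/YAW bias -> OSEM PDmon sign-consistency check.
# All channel names, the sign pattern, and thresholds below are assumptions.
import time
import numpy as np
from epics import caget, caput

OPTIC = "SRM"
PDS = ["UL", "UR", "LR", "LL"]
# assumed face-OSEM sign pattern for a pure PIT (top vs bottom) and YAW (left vs right) move
SIGNS = {"PIT": np.array([+1, +1, -1, -1]), "YAW": np.array([+1, -1, -1, +1])}

def pd_readings():
    return np.array([caget(f"C1:SUS-{OPTIC}_{pd}PDMon") for pd in PDS])

def check_dof(dof, step=100, settle=5.0, min_response=0.05):
    """Step the slow DC bias for one DoF and check the PDmon responses."""
    bias_ch = f"C1:SUS-{OPTIC}_{dof}_COMM"       # assumed slow bias channel name
    nominal = caget(bias_ch)
    ref = pd_readings()
    caput(bias_ch, nominal + step)
    time.sleep(settle)                           # let the pendulum settle
    delta = pd_readings() - ref
    caput(bias_ch, nominal)                      # restore the alignment
    ok = np.all(np.sign(delta) == SIGNS[dof]) and np.all(np.abs(delta) > min_response)
    print(f"{dof}: dPDmon = {np.round(delta, 3)} V -> {'PASS' if ok else 'FAIL'}")
    return ok

for dof in ("PIT", "YAW"):
    check_dof(dof)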

Additionally, I confirmed that the watchdog tripped when the RMS OSEM PD voltage exceeded 200 counts. Ideally we'd have liked to test the stability of the EPICS server, but we have shut it down and brought the crate back out to the electronics bench for Chub to work on tomorrow.

I restarted the old VME c1susaux at 9:15pm local time as I didn't want to leave the watchdogs in an undefined state. Unsurprisingly, ITMY is stuck. Also, the BS (cable #22) and SRM (cable #40) coil drivers are physically disconnected at the front DB15 output because of the undefined backplane inputs. I also re-opened the PSL shutter.

Attachment 1: 2019-04-24_20-29.pdf
Attachment 2: Screenshot_from_2019-04-24_20-05-54.png
Attachment 3: SRM_OSEMPD_WHT_ACROMAG.pdf
Attachment 4: DCVmon.png
  14568   Wed Apr 24 17:39:15 2019   Yehonathan   Summary   Loss Measurement   Basic analysis of loss measurement

Motivation

  • Getting myself familiar with Python.
  • Characterize statistical errors in the loss measurement.

Summary

​The precision of the measurement is excellent. We should move on to look for systematic errors. 

In Detail

According to Johannes and Gautam (see T1700117_ReflectionLoss.pdf in Attachment 1), the cavity loss is obtained by measuring the light reflected from the cavity when it is locked and when it is misaligned. From these two measurements, and using the known transmissions of the cavity mirrors, the roundtrip loss is extracted.

I wrote a Python notebook (AnalyzeLossData.ipynb in Attachment 1) that extracts the raw data from the measurement file (data20190216.hdf5 in Attachment 1) and analyzes the statistics of the measurement and its PSD.

Attachment 2 shows the raw data. 

Attachment 3 shows the histogram of the measurement. It can be seen that the distribution is very close to being Gaussian.

The loss in the cavity per roundtrip is measured to be 73.7 +/- 0.2 parts per million. This error only accounts for the scatter in the PD measurement; including the uncertainty in the transmissions of the cavity mirrors should give a much bigger error.
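A minimal sketch of the statistical part of this calculation (quoting mean +/- standard error from the repeated measurements); the dataset name inside the HDF5 file is an assumption, the real analysis is in the attached notebook:

# Sketch: mean roundtrip loss and its statistical error from repeated measurements.
# "loss_ppm" is a placeholder dataset name; see AnalyzeLossData.ipynb for the real one.
import h5py
import numpy as np

with h5py.File("data20190216.hdf5", "r") as f:
    loss = np.asarray(f["loss_ppm"])                 # repeated loss estimates [ppm]

mean = loss.mean()
sem = loss.std(ddof=1) / np.sqrt(loss.size)          # standard error of the mean
print(f"roundtrip loss = {mean:.1f} +/- {sem:.1f} ppm (statistical error only)")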

Attachment 4 shows the noise PSD of the PD readings. It can be seen that the noise spectrum is quite flat, so there would be no big improvement from chopping the signal.

The situation might be different when the measurement is taken from the cavity lock PD where the signal is much weaker.

Attachment 1: LossMeasurementAnalysis.zip
Attachment 2: LossMeasurement_RawData.pdf
Attachment 3: LossMeasurement_Hist.pdf
Attachment 4: LossMeasurement_PSD.pdf
  14569   Thu Apr 25 00:30:45 2019   gautam   Update   SUS   ETMY BR mode

We briefly talked about the bounce and roll modes of the SOS optic at the meeting today. 

Attachment #1: BR modes for ETMY from my free-swinging run on 17 April. The LL coil has a very different behavior from the others.

Attachment #2: BR modes for ETMY from my free-swinging run on 18 April, which had a macroscopically different bias voltage for the PIT/YAW sliders. Here too, the LL coil has a very different behavior from the others.

Attachment #3: BR modes for ETMX from my free-swinging run on 27 Feb. There are many peaks in addition to the prominent ones visible here, compared to ITMY. The OSEM PD noise floor for UR and SIDE is mysteriously x2 lower than for the other 3 OSEMs???

In all three cases, a bounce mode around 16.4 Hz and a roll mode around 24.0 Hz are visible. The ratio between these is not sqrt(2), but is ~1.46, which is ~3% larger. But when I look at the database, I see that in the past, the bounce and roll modes were in fact at close to these frequencies.

In conclusion:

  1. the evidence thus far says that ETMY has 5 resonant modes in the free-swinging data between 0.5 Hz and 25 Hz.
  2. Either two modes are exactly degenerate, or there is a constraint in the system which removes 1 degree of freedom.
  3. How likely is the latter? Any mechanical constraint that removes one degree of freedom would presumably also damp the Qs of the other modes more than what we are seeing.
  4. Can some large piece of debris on the barrel change the PIT/YAW eigenvectors such that the eigenvalues become exactly degenerate?
  5. Furthermore, the AC actuation vectors for PIT and YAW are not close to orthogonal, but are rotated ~45 degrees relative to each other.

Because of my negligence and rushing the closeout procedure, I don't have a great close-out picture of the magnet positions in the face OSEMs, the best I can find is Attachment #4. We tried to replicate the OSEM arrangement (orientation of leads from the OSEM body) from July 2018 as closely as possible.

I will investigate the side coil actuation strength tomorrow, but if anyone can think of more in-air tests we should do, please post your thoughts/poetry here.

Attachment 1: ETMY_sensorSpectra_BRmode.pdf
Attachment 2: ETMY_sensorSpectra_BRmode.pdf
Attachment 3: ETMX_sensorSpectra_BRmode.pdf
Attachment 4: IMG_5993.JPG
  14570   Thu Apr 25 01:03:29 2019   gautam   Update   PSL   MC trans is ~1000 cts (~7%) lower than usual

When dialing up the current, I went up to 2.01 A on the front panel display, which is what I remember it being. The label on the controller is from when the laser was still putting out 2W, and says the pump current should be 2.1 A. Anyhow, the MC transmission is ~7% lower now (14500 cts compared to the usual 15000-15500 cts), even after tweaking the PMC alignment to minimize PMC REFL. Potentially there is less power coming out of the NPRO. I will measure it at the window tomorrow with a power meter.

  14571   Thu Apr 25 03:32:25 2019   Anjali   Update   Frequency noise measurement   MZ interferometer ---> DAQ
  • Attachment #1 shows the time domain output from this measurement. The contrast between the maximum and minimum is better in this case compared to the previous trials.
  • We also tried to extract the frequency noise of the laser from this measurement. Attachment #2 shows the frequency noise spectrum. The experimental result is compared with the theoretical value of the frequency noise. Above 10 Hz, the trend is comparable to the expected 1/f characteristic, but there are other peaks appearing as well. Similarly, below 10 Hz, the experimentally observed value is higher than the theory.
  • One of the uncertainties in this result comes from the length fluctuation of the fiber. The phase fluctuation in the system could be either because of the frequency noise of the laser or because of the length fluctuation of the fiber. So, one of the reasons for the discrepancy between the experimental result and theory could be fiber length fluctuation. Also, no locking method was applied to keep the MZI in its linear range.
  • The next step would be to do a heterodyne measurement. Attachment #3 shows the schematic for the heterodyne measurement. A free-space AOM can be inserted in one of the arms to apply the frequency shift. At the output of the photodiode, an RF heterodyne method as shown in Attachment #3 can be applied to separate the in-phase and quadrature components. These components need to be saved with a deep memory system. Then the phase, and thus the frequency noise, can be extracted (a minimal sketch of this extraction follows this list).
  • Attachment #4 shows the noise budget prepared for the heterodyne setup. The length of the fiber considered is 60 m and the photodiode is a PDA255. I still have to add the frequency noise of the RF driver and the intensity noise of the laser to the noise budget.
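As referenced in the heterodyne bullet above, a minimal sketch of how the phase (and hence the frequency noise) could be extracted from the saved in-phase/quadrature data; the sample rate, file names, and offset-removal method are illustrative assumptions:

# Sketch: recover the differential phase from heterodyne I/Q data and convert it
# to frequency noise. File names and the sample rate are placeholders.
import numpy as np
from scipy.signal import welch

fs = 16384.0                                    # assumed sample rate [Hz]
I = np.load("I_data.npy")                       # demodulated in-phase time series
Q = np.load("Q_data.npy")                       # demodulated quadrature time series
t = np.arange(I.size) / fs

phase = np.unwrap(np.arctan2(Q, I))             # differential phase [rad]
# remove the linear drift due to the heterodyne (AOM) frequency offset
phase -= np.polyval(np.polyfit(t, phase, 1), t)
freq = np.gradient(phase, 1.0 / fs) / (2.0 * np.pi)   # frequency deviation [Hz]

f, psd = welch(freq, fs=fs, nperseg=1 << 14)
asd = np.sqrt(psd)                              # frequency noise ASD [Hz/rtHz]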
Quote:
  1. Delay fiber was replaced with 5m (~30 nsec delay)
    • The fringing of the MZ was way too large even with the free running NPRO (~3 fringes / sec)
    • Since the V/Hz is proportional to the delay, I borrowed a 5m patch cable from Andrew/ATF lab, wrapped it around a spool, and hooked it up to the setup
    • Much more satisfactory fringing rate (~1 wrap every 20 sec) was observed with no control to the NPRO
  2. MZ readout PDs hooked up to ALS channels
    • To facilitate further quantitative study, I hooked up the two PDs monitoring the two ports of the MZ to the channels normally used for ALS X.
    • ZHL3-A amps inputs were disconnected and were turned off. Then cables to their outputs were highjacked to pipe the DC PD signals to the 1Y3 rack
    • Unfortunately there isn't a DQ-ed fast version of this data (would require a model restart of c1lsc which can be tricky), but we can already infer the low freq fringing rate from overnight EPICS data and also use short segments of 16k data downloaded "live" for the frequency noise measurement.
    • Channels are C1:ALS-BEATX_FINE_I_IN1 and C1:ALS-BEATX_FINE_Q_IN1 for 16k data, and C1:ALS-BEATX_FINE_I_INMON and C1:ALS-BEATX_FINE_Q_INMON for 16 Hz.

At some point I'd like to reclaim this setup for ALS, but meantime, Anjali can work on characterization/noise budgeting. Since we have some CDS signals, we can even think of temperature control of the NPRO using pythonPID to keep the fringe in the linear regime for an extended period of time.

Attachment 1: Time_domain_output.pdf
Attachment 2: Frequency_noise.pdf
Attachment 3: schematic_heterodyne_setup.png
Attachment 4: Noise_budget_1_micron_in_Hz_per_rtHz.pdf
  14572   Thu Apr 25 10:13:15 2019   Chub   Update   General   Air Handler Out of Commission

The air handler on the roof of the 40M that supplies the electronics shop and computer room is out of operation until next week.  Adding insult to injury, there is a strong odor of Liquid Wrench oil (a creeping oil for loosening stuck bolts that has a solvent additive) in the building.  If you don't truly need to be in the 40M, you may want to wait until the environment is back to being cool and "unscented".  On a positive note, we should have a quieter environment soon!

  14573   Thu Apr 25 10:25:19 2019   gautam   Update   Frequency noise measurement   Homodyne v Heterodyne

If I understand correctly, the Mach-Zehnder readout port power is only a function of the differential phase accumulated between the two interfering light beams. In the homodyne setup, this phase difference can come about because of either fiber length change OR laser frequency change. We cannot directly separate the two effects. Can you help me understand what advantage, if any, the heterodyne setup offers in this regard? Or is the point of going to heterodyne mainly for the feedback control, as there is presumably some easy way to combine the I and Q outputs of the heterodyne measurement to always produce an error signal that is a linear function of the differential phase, as opposed to the sin^2 in the free-running homodyne setup? What is the scheme for doing this operation in a high bandwidth way (i.e. what is supposed to happen to the demodulated outputs in Attachment #3 of your elog)? What is the advantage of the heterodyne scheme over applying temperature feedback to the NPRO with 0.5 Hz tracking bandwidth so that we always stay in the linear regime of the homodyne readout?

Also, what is the functional form of the curve labelled "Theory" in Attachment #2? How did you convert from voltage units in Attachment #1 to frequency units in Attachment #2? Does it make sense that you're apparently measuring laser frequency noise above 10 Hz? i.e. where do the "Dark Current Noise" and "Shot Noise" traces for the experiment lie relative to the blue curve in Attachment #2? Can you point to where the data is stored, and also add a photo of the setup?

  14574   Thu Apr 25 10:32:39 2019   Jon   Update   VAC   Vac interlocks updated

I slightly cleaned up Gautam's disabling of the UPS-predicated vac interlock and restarted the interlock service. This interlock is intended to protect the turbo pumps after a power outage, but it has proven disruptive to normal operations with too many false triggers. It will be reenabled once a new UPS has been installed. For now, as it has been since 2001, the vac pumps are unprotected against an extended power outage.

  14575   Thu Apr 25 11:27:11 2019   gautam   Update   VAC   PSL shutter re-opened

This activity seems to have closed the PSL shutter (actually I'm not sure why that happened - the interlock should only trip if P1a exceeds 3 mtorr, and looking at the time series for the last 2 hours, it did not ever exceed this threshold). I saw no reason for it to remain closed so I re-opened it just now.

I vote for not remotely rebooting any of the vacuum / PSL subsystems. In the event of something going catastrophically wrong, someone should be on hand to take action in the lab.

  14576   Thu Apr 25 15:47:54 2019   Anjali   Update   Frequency noise measurement   Homodyne v Heterodyne

My understanding is that the main advantage of going to the heterodyne scheme is that we can extract the frequency noise information without worrying about locking to the linear region of the MZI. The arctan of the ratio of the in-phase and quadrature components gives us the phase as a function of time, with a frequency offset. We need to correct for this frequency offset; then the frequency noise can be deduced. But the extracted frequency noise would still have contributions from both the frequency noise of the laser and the fiber length fluctuation. I have not understood the method of giving temperature feedback to the NPRO; I would like to discuss this.

The functional form used for the curve labeled "Theory" is 5×10^4/f. The power spectral density (V²/Hz) of the data in Attachment #1 is found using the pwelch function in Matlab, and the square root of this gives the y axis in V/rtHz. From the experimental data, we get the values of Vmax and Vmin. To go from Vmax to Vmin, the corresponding phase change is pi; from this, the V/rad calibration can be calculated. This value is then multiplied by 2*pi*(time delay) to get the quantity in V/Hz. Dividing the V/rtHz spectrum by this V/Hz value gives the y axis in Hz/rtHz. The calculated values of shot noise and dark current noise are way below (of the order of 10^-4 Hz/rtHz) in this frequency range.
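For reference, a minimal Python equivalent of this calibration chain (the original analysis used Matlab's pwelch); the sample rate, input file, and delay are stand-ins, and the fringe-slope estimate follows the simple Vmax/Vmin argument described above:

# Sketch: calibrate the MZ output voltage spectrum into frequency noise.
# fs, the input file, and tau are placeholder values.
import numpy as np
from scipy.signal import welch

fs = 16384.0                              # assumed sample rate [Hz]
v = np.load("mz_output.npy")              # MZ photodiode output [V]
tau = 30e-9                               # delay quoted for the 5 m patch cable earlier in this thread [s]

f, psd = welch(v, fs=fs, nperseg=1 << 14)           # V^2/Hz
asd_v = np.sqrt(psd)                                # V/rtHz

Vmax, Vmin = v.max(), v.min()
v_per_rad = (Vmax - Vmin) / np.pi                   # simple slope estimate; mid-fringe slope would be (Vmax - Vmin)/2
v_per_hz = v_per_rad * 2.0 * np.pi * tau            # valid for frequencies << 1/tau

asd_f = asd_v / v_per_hz                            # frequency noise [Hz/rtHz]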

I forgot to take a picture of the setup at that time. Now Andrew has taken the fiber beam splitter back for his experiment. Attachment #1 shows the current state of the setup. The data from the previous trial is saved in /users/anjali/MZ/MZdata_20190417.hdf5

 

Quote:

If I understand correctly, the Mach-Zehnder readout port power is only a function of the differential phase accumulated between the two interfering light beams. In the homodyne setup, this phase difference can come about because of either fiber length change OR laser frequency change. We cannot directly separate the two effects. Can you help me understand what advantage, if any, the heterodyne setup offers in this regard? Or is the point of going to heterodyne mainly for the feedback control, as there is presumably some easy way to combine the I and Q outputs of the heterodyne measurement to always produce an error signal that is a linear function of the differential phase, as opposed to the sin^2 in the free-running homodyne setup? What is the scheme for doing this operation in a high bandwidth way (i.e. what is supposed to happen to the demodulated outputs in Attachment #3 of your elog)? What is the advantage of the heterodyne scheme over applying temperature feedback to the NPRO with 0.5 Hz tracking bandwidth so that we always stay in the linear regime of the homodyne readout?

Also, what is the functional form of the curve labelled "Theory" in Attachment #2? How did you convert from voltage units in Attachment #1 to frequency units in Attachment #2? Does it make sense that you're apparently measuring laser frequency noise above 10 Hz? i.e. where do the "Dark Current Noise" and "Shot Noise" traces for the experiment lie relative to the blue curve in Attachment #2? Can you point to where the data is stored, and also add a photo of the setup?

 

Attachment 1: Experimental_setup.JPG
  14577   Thu Apr 25 17:31:56 2019   gautam   Update   PSL   Innolight NPRO shutoff

The NPRO shut off at ~1517 local time this afternoon. Again, not many clues from the NPRO diagnostics channels, but to my eye, the D1_POW channel shows the first deviation from the "steady state", followed by the other channels. This is ~0.1 sec before the other channels register any change, so I don't know how much we can trust the synchronization of the EPICS data streams. I won't turn it on again for now. I did check that the little fan on the back of the NPRO controller is still rotating.

gautam 10am 4/29: I also added a longer term trend of these diagnostic channels, no clear trends suggesting a fault are visible. The y-axis units for all plots are in Volts, and the data is sampled at 16 Hz.

Quote:

Now we wait and watch I guess.

Attachment 1: EdwinShutoff20190425.png
Attachment 2: EdwinShutdown_zoomOut.png
  14578   Thu Apr 25 18:14:42 2019   Anjali   Update   PSL   Door broken

It was noticed that one of the doors (door #2) of the PSL table is broken. Attachment #1 shows an image.

Attachment 1: IMG_6069.JPG
  14579   Fri Apr 26 12:10:08 2019   Anjali   Update   Frequency noise measurement   Frequency noise measurement of 1 micron source

From the earlier results with the homodyne measurement, the Vmax and Vmin values observed were comparable with the expected values. So, in the time interval between these two points, the MZI is assumed to be in the linear region, and I tried to find the frequency noise based on the data available in this region. This result is not significantly different from what we got before, when we took the complete time series to calculate the frequency noise. Attachment #1 shows the time domain data considered and Attachment #2 shows the frequency noise extracted from it.

As discussed, we will be trying the heterodyne method next. Initially, we will be trying to save the data with a two-channel ADC at a 16 kHz sampling rate. With this setup, we can get information only up to 8 kHz.

Attachment 1: Time_domain_data.pdf
Attachment 2: Frequency_noise_from_data_in_linear_region.pdf
  14580   Fri Apr 26 12:32:35 2019   Jon   Update   PSL   modbusPSL service shut down

Gautam and I are removing the prototype Acromag chassis from the 1x4 rack to make room for the new c1susuax hardware. I shut down and disabled the modbusPSL service running on c1auxex, which serves the PSL diagnostic channels hosted by this chassis. The service will need to be restarted and reenabled once the chassis has been reinstalled elsewhere.

  14581   Fri Apr 26 19:35:16 2019   Jon   Update   SUS   New c1susaux installed, passed first round of scripted testing

[Jon, Gautam]

Today we installed the c1susaux Acromag chassis and controller computer in the 1X4 rack. As noted in 14580 the prototype Acromag chassis had to first be removed to make room in the rack. The signal feedthroughs were connected to the eurocrates by 10' DB-37 cables via adapters to 96-pin DIN.

Once installed, we ran a scripted set of suspension actuation tests using PyIFOTest. BS, PRM, SRM, MC1, MC2, and MC3 all passed these tests. We were unable to test ITMX and ITMY because both appear to be stuck. Gautam will shake them loose on Monday.

Although the new c1susaux is now mounted in the rack, there is more that needs to be done to make the installation permanent:

  • New 15V and 24V power cables with standard LIGO connectors need to be run from the Sorensen supplies in 1X5. The chassis is currently powered by bench supplies sitting on a cart behind the rack.
  • All 24 new DB-37 signal cables need to be labeled.
  • New 96-pin DIN connectors need to be put on two ribbon cables (1Y5_80 B, 1Y5_81) in the 1X4 rack. We had to break these connectors to remove them from the back of the eurocrates.
  • General cleanup of any cables, etc. left around the rack. We cleaned up most things this evening.
  • Rename the host computer c1susaux2 --> c1susaux, and update the DNS lookup tables on chiara.

On Monday we plan to continue with additional scripted tests of the suspensions.


gautam - some more notes:

  • Backplane connectors for the SUS PD whitening boards, which now only serve the purpose of carrying the fast BIO signals used for switching the whitening, were moved from the P1 connector to P2 connector for MC1, MC2, MC3, ITMX, ITMY, BS and PRM.
  • In the process, the connectors for BS and PRM were detached from the ribbon cable (there wasn't any good way to unseat the connector from the shell that I know of). These will have to be repaired by Chub, and the signal integrity will have to be checked (as it will have to be for the connectors that are allegedly intact).
  • While we were doing the wiring, I disconnected the outputs of the coil driver board going to the satellite box (front panel DB15 connector on D010001). These were restored after our work for the testing phase.
  • The backplane cables to the eurocrate housing the coil driver boards were also disconnected. They are currently just dangling, but we will have to clean it up if the new crate is performing alright.
  • In general the cable routing cleanliness has to be checked and approved by Chub or someone else qualified. In particular, the power leads to the eurocrate are in the way of the DIN96-DB37 adaptor board of Johannes' design, particularly on the SUS PD eurocrate.
  • Tapping new power rails for the Acromag chassis will have to be done carefully. Ideally we shouldn't have to turn off the Sorensens.
  • There are some software issues we encountered today connected with the networking that have to be understood and addressed in a permanent way.
  • Sooner rather than later, we want to reconnect the Acromag crate that was monitoring the PSL channels, particularly given the NPRO's recent flakiness.
  • The NPRO was turned back on (following the same procedure of slowly dialing up the injection current). Primary motivation to see if the mode cleaner cavity could be locked with the new SUS electronics. Looks like it could. I'm leaving it on over the weekend...
Attachment 1: IMG_3254.jpg
Attachment 2: IMG_3256.jpg
  14582   Sun Apr 28 16:00:17 2019   gautam   Update   Computer Scripts / Programs   List of suspension test

Here are some tests we should script.

  1. Checking Coil Vmons, OSEM PDmons, and watchdog enable wiring
    • Disable input to all the coil output filter modules using C1:SUS-<OPTIC>_<COIL>_SWSTAT (this is to prevent the damping loop control signals from being sent to the suspension)
    • Set ramptimes for these filter modules to 0 seconds.
    • Apply a step of 100 cts (~15 mV) using the offset field of this filter module (so the test signal is being generated by the fast CDS system).
    • Confirm that the step shows up in the correct coil Vmon channel with the appropriate size (in volts), and that other Vmons don't show any change (need to check the sign as well, based on the overall gain in this filter module).
    • Confirm that the largest response in the PDmon signals is for the same OSEM. There will be some cross-coupling but I think we always expect the largest response to be in the OSEM whose magnet we pushed provided the transimpedances are the same across all 5 coils (which is true except for PRM side), so this should be a robust criterion.
    • Take the step off using the watchdog enable field, C1:SUS-<OPTIC>_<COIL>_COMM. This allows us to confirm the watchdog signal wiring as well.
    • Reset ramptimes, watchdogs, input states to filter modules, and offsets to their pre-test values.
    • This test should tell us that the wiring assignments are correct, and that the Acromag ADC inputs are behaving as expected and are calibrated to volts.
    • This test should be done one channel at a time to check that the wiring assignments are correct (a minimal sketch of this test is given after this list).
  2. Checking the SUS PD whitening
    • Measure spectrum of individual PD input (fast CDS) channels above 30 Hz with the whitening in a particular state
    • Toggle the whitening state.
    • Confirm that the whitened sensor noise above 30 Hz is below the unwhitened case (which is presumably ADC noise).
    • This test should be done one channel at a time to check the wiring assignments are correct.
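As referenced in test 1, here is a minimal sketch of the coil Vmon step test, assuming EPICS access via pyepics. _SWSTAT and _COMM come from the description above; _TRAMP, _OFFSET, the Vmon channel names, step size, and pass threshold are illustrative assumptions rather than the exact implementation.

# Sketch of test 1 above: step one coil's output filter offset and check that
# only the corresponding slow Vmon responds. Several channel names are assumed.
import time
import numpy as np
from epics import caget, caput

OPTIC = "SRM"
COILS = ["UL", "UR", "LR", "LL", "SD"]

def read_vmons():
    return np.array([caget(f"C1:SUS-{OPTIC}_{c}_VMON") for c in COILS])

def step_one_coil(coil, step_cts=100, settle=3.0):
    base = f"C1:SUS-{OPTIC}_{coil}"
    caput(base + "_SWSTAT", 0)          # disable damping input to this filter module
    caput(base + "_TRAMP", 0)           # zero ramp time so the step is immediate
    ref = read_vmons()
    caput(base + "_OFFSET", step_cts)   # apply the 100 ct step via the offset field
    time.sleep(settle)
    delta = read_vmons() - ref
    caput(base + "_COMM", 0)            # take the step off via the watchdog enable
    caput(base + "_OFFSET", 0)          # restore offset, ramp time, input, watchdog
    caput(base + "_TRAMP", 2)
    caput(base + "_SWSTAT", 1)
    caput(base + "_COMM", 1)
    return delta

for i, coil in enumerate(COILS):
    d = np.abs(step_one_coil(coil))
    ok = np.argmax(d) == i and d[i] > 5 * np.delete(d, i).max()
    print(f"{coil}: |dVmon| = {np.round(d, 3)} V -> {'PASS' if ok else 'FAIL'}")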

Checking the Acromag DAC calibration is more complicated I think. There are measurements of the actuator calibration in units of nm/ct for the fast DACs, but these are only valid above the pendulum resonance frequency, and I'm not sure we can synchronously drive a 10 Hz sine wave using the EPICS channels. The current test, which drives the PIT/YAW DoFs with a DC misalignment and measures the response in the PDmon channels, is a bit ad hoc in the way we set the "expected" response which is the PASS/FAIL criterion for the test. Moreover, the cross-coupling between the PDmon channels may be quite high. Needs some thought...

  14583   Mon Apr 29 16:25:22 2019   gautam   Update   PSL   PSL turned on again

I turned the 2W NPRO back on again at ~4pm local time, dialing the injection current up from 0-2A in ~2 mins. I noticed today that the lasing only started at 1A, whereas just last week, it started lasing at 0.5A. After ~5 minutes of it being on, I measured 950 mW after the 11/55 MHz EOM on the PSL table. The power here was 1.06 W in January, so ~💯  mW lower now. 😮 

I found out today that the way the python FSS SLOW PID loop is scripted, if it runs into an EZCA error (due to the c1psl slow machine being dead), it doesn't handle this gracefully (it just gets stuck). I rebooted the crate for now and the MC autolocker is running fine again.
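For reference, a minimal sketch of the kind of retry wrapper that would let the SLOW loop ride through such errors instead of hanging (sketched with pyepics rather than the EZCA bindings the script actually uses; the channel name and retry parameters are placeholders):

# Sketch: make slow-loop EPICS reads robust to the c1psl machine being down.
import time
from epics import caget

def robust_caget(channel, retries=5, wait=10.0, timeout=2.0):
    """Return the channel value, or None if the slow machine stays unresponsive."""
    for _ in range(retries):
        value = caget(channel, timeout=timeout)   # caget returns None on a failed connection
        if value is not None:
            return value
        time.sleep(wait)
    return None    # caller can then hold its output / skip this cycle instead of getting stuck

value = robust_caget("C1:PSL-FSS_FAST")   # hypothetical channel name, for illustration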

NPRO turned off again at ~8pm local time after Anjali was done with her data taking. I measured the power again, it was still 950mW, so at least the output power isn't degrading over 4 hours by an appreciable amount...

  14584   Mon Apr 29 16:34:27 2019   gautam   Update   Electronics   ITMX/ITMY mis-labelling fixed at 1X4 and 1X5

After the X and Y arm naming conventions were changed, the labelling of the electronics in the eurocrates was not changed 😞 😔 😢 . This meant that when we hooked up the new Acromag crate, all the slow ITMX channels were in fact connected to the physical ITMY optic. I ♦️fixed♦️ the labelling now - Attachments #1 and #2 show the coil driver boards and SUS PD whitening boards correctly labelled. Our electronics racks are in desperate need of new photographs.

The "Y" arm runs in the EW direction, while the "X" arm runs in the NW direction as of April 29 2018.

ITMX was freed. ITMY is also free now.

Attachment 1: IMG_7400.JPG
Attachment 2: IMG_7401.JPG
  14585   Mon Apr 29 19:23:49 2019   Jon   Update   Computer Scripts / Programs   Scripted tests of suspension VMons using fast system

I've added a scripted VMon/coil-enable test to PyIFOTest following the suggestion in #14582. Basically, a DC offset is added to one fast coil output at a time, and all the VMon responses are checked.

After resolving the swapped ITMX/ITMY eurocrate slots described in #14584, I ran the new scripted VMon test on all eight optics managed by c1susaux. All of them passed: SRM, BS, MC1, MC2, MC3, PRM, ITMX, ITMY. This is not the final suspension test we plan to do, but it gives me reasonably good confidence that all channels are connected correctly.

  14586   Tue Apr 30 17:27:35 2019   Anjali   Update   Frequency noise measurement   Frequency noise measurement of 1 micron source

We repeated the homodyne measurement to check whether we are measuring the actual frequency noise of the laser. The idea was to repeat the experiment when the laser is not locked and when the laser is locked to the IMC. The frequency noise of the laser is expected to be reduced at higher frequencies (the expected value is about 0.1 Hz/rtHz at 100 Hz) when it is locked to the IMC. In this measurement, the fiber beam splitter used is non-PM. Following are the observations:

1. Time domain output_laser unlocked.pdf: Time domain output when the laser is not locked. The frequency noise is estimated from data corresponding to the linear regime. The following time intervals are considered to calculate the frequency noise: (a) 104-116 s (b) 164-167 s (c) 285-289 s

2. Frequency_noise_laser_unlocked.pdf: Frequency noise when the laser is not locked. The model used has the functional form 5×10^4/f, as before. Compared to our previous results, the experimental results are less close to the model in this measurement. In both cases, we have the uncertainty due to fiber length fluctuation. Moreover, this measurement could also be affected by polarisation fluctuations.

3. Time domain output_laser locked.pdf: Time domain output when the laser is locked. The following time intervals are considered to calculate the frequency noise: (a) 70-73 s (b) 142-145 s (c) 266-269 s.

4. Frequency_noise_laser_locked.pdf: Frequency noise when the laser is locked

5. Frequency noise_comparison.pdf: Comparison of the frequency noise in the two cases. The two values are not significantly different above 10 Hz. We would expect a reduction in frequency noise at higher frequencies once the laser is locked to the IMC, so this result may indicate that we are not really measuring the actual frequency noise of the laser.

Attachment 1: Homodyne_repeated_measurement.zip
  14587   Thu May 2 10:41:50 2019   gautam   Update   SUS   SOS Magnet polarity

A concern was raised about the two ETMs and ITMX having the opposite response (relative to the other 7 SOS optics) in the OSEM PDmon channels in response to a given polarity of PIT/YAW offset being applied to the coils. Jon has taken into account all the digital gains in the actuation part of the CDS system in reaching this conclusion. I raised the possibility of the OSEM coil winding direction being opposite on the 15 OSEMs of the ETMs and ITMX, but I think it is more likely that the magnets are just glued on opposite to what they are "supposed" to be. See Attachment #6 of this elog (you'll have to rotate the photo either in your head or in your viewer) and note that it is opposite to what is specified in the assembly procedure, page 8. The net magnetic quadrupole moment is still 0, but the direction of actuation in response to current in a given direction through the coil would be opposite. I can't find magnet polarities for all 10 SOS optics, but this hypothesis fits all the evidence so far.

  14588   Thu May 2 10:59:58 2019   Jon   Update   SUS   c1susaux in situ wiring testing completed

Summary

Yesterday Gautam and I ran final tests of the eight suspensions controlled by c1susaux, using PyIFOTest. All of the optics pass a set of basic signal-routing tests, which are described in more detail below. The only issue found was with ITMX having an apparent DC bias polarity reversal (all four front coils) relative to the other seven susaux optics. However, further investigation found that ETMX and ETMY have the same reversal, and there is documentation pointing to the magnets being oppositely-oriented on these two optics. It seems likely that this is the case for ITMX as well. 

I conclude that all the new c1susaux wiring/EPICS interfacing works correctly. There are of course other tests that can still be scripted, but at this point I'm satisfied that the new Acromag machine itself is correctly installed. PyIFOTest has been morphed into a powerful general framework for automating IFO tests. Anything involving fast/slow IO can now be easily scripted. I highly encourage others to think of more applications this may have at the 40m.

Usage and Design

The code is currently located in /users/jon/pyifotest although we should find a permanent location for it. From the root level it is executed as

$ ./IFOTest <PARAMETER_FILE>

where PARAMETER_FILE is the filepath to a YAML config file containing the test parameters. I've created a config file for each of the suspended optics. They are located in the root-level directory and follow the naming convention SUS-<OPTIC>.yaml.

The code climbs a hierarchical "ladder" of actuation/readback-paired tests, with the test at each level depending on signals validated in the preceding level. At the base is the fast data system, which provides an independent reference against which the slow channels are tested. There are currently three scripted tests for the slow SUS channels, listed in order of execution:

  1. VMon test:  Validates the low-frequency sensing of SUS actuation (VMon channels). A DC offset is applied in the final filter module of the fast coil outputs, one coil at a time. The test confirms that the VMon of the actuated coil, and only this VMon, senses the displacement, and that the response has the correct polarity. The screen output is a matrix showing the change in VMon responses with actuation of each coil. A passing test, roughly, is diagonal values >> 0 and off-diagonal values << diagonal (a minimal sketch of this criterion is given after this list).

  2. Coil Enable test:  Validates the slow watchdog control of the fast coil outputs (Coil-Enable channels). Analogously to (1), this test also applies a DC offset via the fast system to one coil at a time and analyzes the VMon responses. However, in this case, the offset is enabled to all five coils simultaneously and only one coil output is enabled at a time. The screen output is again a ΔVMon matrix interpreted in the same way as above.

     

  3. PDMon/DC Bias test:  Validates slow alignment control and readback (BiasAdj and PDMon channels). A DC misalignment is introduced first in pitch, then in yaw, with the OSEM PDMon responses measured in both cases. Using the gains from the PIT/YAW---> COIL output coupling matrix, the script verifies that each coil moves in the correct direction and by a sufficiently large magnitude for the applied DC bias. The screen output shows the change in PDMon responses with a pure pitch actuation, and with a pure yaw actuation. The output filter matrix coefficients have already been divided out, so a passing test is a sufficiently large, positive change under both pitch and yaw actuations.
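A minimal sketch of the ΔVMon pass criterion referenced in tests 1 and 2 above (the threshold values are illustrative assumptions, not the ones PyIFOTest uses):

# Sketch: evaluate a delta-VMon matrix (rows = actuated coil, columns = VMon
# channel). Pass when the diagonal dominates; thresholds are assumptions.
import numpy as np

def vmon_matrix_passes(dvmon, min_diag=0.05, max_ratio=0.2):
    m = np.abs(np.asarray(dvmon))
    diag = np.diag(m)
    off = m - np.diag(diag)
    return np.all(diag > min_diag) and np.all(off < max_ratio * diag[:, None])

# example: a clean 5x5 response (only the actuated coil's VMon moves)
print(vmon_matrix_passes(0.5 * np.eye(5)))   # True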

     

  14589   Thu May 2 15:15:15 2019   Jon   Omnistructure   Computers   susaux machine renamed

Now that the replacement susaux machine is installed and fully tested, I renamed it from c1susaux2 to c1susaux and updated the DNS lookup tables on chiara accordingly.

  14590   Thu May 2 15:35:54 2019   Jon   Omnistructure   Upgrade   c1susaux upgrade documentation

For future reference:

  • The updated list of c1susaux channel wiring (includes the "coil enable" --> "coil test" digital outputs change)
  • Step-by-step instructions on how to set up an Acromag system from scratch
  14591   Fri May 3 09:12:31 2019   gautam   Update   SUS   All vertex SUS watchdogs were tripped

I found the 8 vertex watchdogs tripped today morning. The ETMs were fine, suggesting this was not an actual earthquake. I suspect it was connected to this remote work? Was there a reason why they were left tripped?

On a side note - I don't think we log the watchdog state explicitly. We can infer whether the optic is damped by looking at the OSEM sensor time series, but do we want to record the watchdog state to frames?

Attachment 1: SUSwatchdogs.png
  14592   Fri May 3 12:48:40 2019   gautam   Update   SUS   1X4/1X5 cable admin

Chub and I crossed off some of these items this morning. The last bullet was addressed by Jon yesterday. I added a couple of new bullets.

The new power connectors will arrive next week, at which point we will install them. Note that there is no 24V Sorensen available, only 20V.

I am running a test on the 2W Mephisto for which I wanted the diagnostics connector plugged in again and Acromag channels to record them. So we set up the highly non-ideal but temporary set up shown in Attachment #1. This will be cleaned up by Monday evening latest.

update 1630 Monday 5/6: the sketchy PSL acromag setup has been disassembled.

Quote:
 
  • Take photos of the new setup, cabling.
  • Remove the old c1susaux crate from the rack to free up space, possibly put the PSL monitoring acromag chassis there.
  • Test that the OSEM PD whitening switching is working for all 8 vertex optics. (verified as of 5/3/19 5pm)
  • New 15V and 24V power cables with standard LIGO connectors need to be run from the Sorensen supplies in 1X5. The chassis is currently powered by bench supplies sitting on a cart behind the rack.
  • All 24 new DB-37 signal cables need to be labeled.
  • New 96-pin DIN connectors need to be put on two ribbon cables (1Y5_80 B, 1Y5_81) in the 1X4 rack. We had to break these connectors to remove them from the back of the eurocrates.
  • General cleanup of any cables, etc. left around the rack. We cleaned up most things this evening.
  • Rename the host computer c1susaux2 --> c1susaux, and update the DNS lookup tables on chiara.
Attachment 1: D38CC485-1EB6-4B34-9EB1-2CB1E809A21A.jpeg
  14593   Fri May 3 12:51:58 2019   gautam   Update   PSL   PSL turned on again

Per instructions from Coherent, I made some changes to the NPRO settings. The values we were operating at are in the column labelled "Operating value", while those from the Innolight test datasheet are in the rightmost column. I changed the Xtal temp and pump current to the values Innolight tested them at (but not the diode temps, as they were close and they require a screwdriver to adjust), and turned the laser on again at ~12:45pm local time. The Acromag channels are recording the diagnostic information.

update 2:30pm - looking at the trend, I saw that the D2 TGuard channel was reporting 0V. This wasn't the case before. Suspecting a loose contact, I tightened the DSub connectors at the controller and Acromag box ends. Now it too reports ~10V, which according to the manual signals normal operation. So if one sees an abrupt change in this channel in the long trend since 12:45pm, that's me re-seating the connector. According to the manual, an error state would be signalled by a negative voltage at this pin, up to -12V. Also, the Innolight manual says pin 13 of the diagnostics connector indicates the "Interlock" state, but doesn't say what the "expected" voltage should be. The newer manual Coherent sent me has pin 13 listed as "Do not use".

Setting              Operating value    Value Innolight tested at
Diode 1 temp [C]     20.74              21.98
Diode 2 temp [C]     21.31              23.01
Xtal temp [C]        29.39              25.00
Pump current [A]     2.05               2.10

  14594   Fri May 3 15:40:33 2019   gautam   Update   General   CVI 2" beamsplitters delivered

Four new 2" CVI 50/50 beamsplitters (2 for p-pol and 2 for s-pol) were delivered. They have been stored in the optics cabinet, along with the "Test Data" sheets from CVI.

  14595   Mon May 6 10:51:43 2019   gautam   Update   PSL   PSL turned off again

As we have seen in the last few weeks, the laser turned itself off after a few hours of running. So bypassing the lab interlock system / reverting laser crystal temperature to the value from Innolight's test datasheet did not fix the problem.

I do not understand why the "Interlock" and "TGUARD" channels revert to their values from when the laser was lasing, a few minutes after the shutoff. Is this just an artefact of the way the diagnostics are set up, or is this telling us something about what is causing the shutoff?

Attachment 1: NPROshutoff.png
  14596   Mon May 6 11:05:23 2019   Jon   Update   SUS   All vertex SUS watchdogs were tripped

Yes, this was a consequence of the systemd scripting I was setting up. Unlike the old susaux system, we decided for safety NOT to allow the modbus IOC to automatically enable the coil outputs. Thus when the modbus service starts/restarts, it automatically restores all state except the watchdog channels, which are left in their default disabled state. They then have to be manually enabled by an operator, as I should have done after finishing testing.

Quote:

I found the 8 vertex watchdogs tripped today morning. The ETMs were fine, suggesting this was not an actual earthquake. I suspect it was connected to this remote work? Was there a reason why they were left tripped?

  14597   Wed May 8 19:04:20 2019   rana   Update   PSL   PSL turned on again
  1. Increased PSL HEPA Variac from 30 to 100% to get more airflow.
  2. All of the TEC setpoints seem cold to me, so I increased the laser crystal temperature to 30.6 C
  3. Adjusted the diode TEC setpoints individually to optimize the PMC REFL power (unlocked). DTEC A = 22.09 C, DTEC B = 21.04 C
  4. locked PMC at 1900 PT; let's see how long it lasts.

My hunch is that the TECs are working too hard and can't offload the heat onto the heat sinks. As the diodes degrade, more of the electrical power is converted to heat in the diodes rather than 808 nm photons. So hopefully the increased airflow will help.

I tried to increase the DTEC setpoints, but that seems to detune them too far from the laser absorption band, so that's not very efficient for us. In any case, if we end up changing the laser temperature, we'll have to adjust the ALS lasers to match, and that will be annoying.

 

The office area was very cold and the HVAC air flow stronger than usual. I changed the setpoint on the thermostat near Steve's desk from 71 to 73F at 1830 today.

  14598   Wed May 8 22:11:46 2019   rana   Summary   Computers   new laptop setup: ASIA - yum issues
  • setup controls user using K Thorne LLO CDS offsite workstation instructions
  • modified /etc/fstab ala pianosa to NFS mount disks
  • set up symlinks as other workstations
  • troubles with libsasl2 and libmetaio libraries as usual for SL7 - doing symlink tricks
  • setup shared .bashrc
  • now running 'yum install gds-all' to see if we need more local libraries to run GDS from the shared disks...
  14599   Thu May 9 19:50:04 2019   gautam   Update   PSL   PSL turned off again

This time, it stayed on for ~24 hours. I am not going to turn it on again today as the crane inspection is tomorrow and we plan to keep the VEA a laser safe area for speedy crane inspection.

But what is the next step? If these diode temps maximize the power output of the NPRO, then it isn't a good idea to raise the TEC setpoints further, so should I just turn it on again with the same settings?

I did not turn the HEPA down on the PSL enclosure. I also turned off the NPROs at EX and EY so now all the four 1064nm lasers in the VEA are turned OFF (for crane inspection).

Quote:

locked PMC at 1900 PT; let's see how long it lasts.

My hunch is that the TECs are working too hard and can't offload the heat onto the heat sinks. As the diodes degrade, more of the electrical power is converted to heat in the diodes rather than 808 nm photons. So hopefully the increased airflow will help

 
Attachment 1: Screenshot_from_2019-05-09_19-49-29.png
  14600   Thu May 9 22:26:39 2019   Jon   Omnistructure   PSL   Second ADC added to PSL Acromag crate

This evening I added a second ADC module to the prototype Acromag chassis. This chassis can now read out all the PSL diagnostic channels.

I configured the second ADC identically to the first ("ADC0"), and assigned it IP address 192.168.113.122. I confirmed it is visible on the martian network.

There was an existing but unused DB-15 feedthrough which I used for ADC1 channels 0-6. The eighth channel (7) I left unwired, but there are slots available in the neighboring DB-25 feedthrough if that channel is needed in the future. The channel wiring assignments are as follows.

ADC1 channel    DB-15 feedthrough pin
0+              1
0-              9
1+              2
1-              10
2+              3
2-              11
3+              4
3-              12
4+              5
4-              13
5+              6
5-              14
6+              7
6-              15
7+              not connected
7-              not connected

I tested all seven of these channels by applying a calibrated voltage source and measuring the response via the Windows Acromag software. All work and are correctly calibrated to better than 0.1%.

Attachment 1: IMG_3291.jpg
  14601   Fri May 10 13:00:25 2019   Chub   Update   General   crane inspection complete

The 40M jib cranes all passed inspection!

Attachment 1: 20190510_110245.jpg
  14602   Fri May 10 15:18:04 2019   gautam   Update   PSL   Some work on/around PSL table
  1. In anticipation of installing the new fan on the PSL laser controller, I disconnected the old fan and finally removed the bench power supply from the top shelf.
  2. Moved said bench supply to under the south-west corner of the PSL table.
  3. Installed the temporary Acromag crate, now with two ADC cards, under the PSL table and hooked it up to the bench supply (+15 VDC). Also ran an ethernet cable from 1X3 to the box over the overhead cable tray and connected it.
  4. Brought other end of 25-pin D-sub cable used to monitor the NPRO diagnostics channels from 1X4/1X5 to the PSL table. Rolled the excess length up and cable tied it, the excess is sitting on top of the PSL enclosure. Key parts of the setup are shown in Attachments #1-3. This is not an ideal setup and is only meant to get us through to the install of the new c1psl/c1ioo Acromag crate.
  5. Edited the modbus config file at /cvs/cds/caltech/target/c1psl2/npro_config.cmd to add Jon's new ADC card to the list.
  6. Edited EPICS database file at /cvs/cds/caltech/target/c1psl2/psl.db to add entries for the C1:PSL-FSS_RMTEMP and C1:PSL-PMC_PMCTRANSPD channels.
  7. Hooked up said channels to the physical ADC inputs via a DB15 cable and breakout board on the PSL table.
    CH0 --- FSS_RMTEMP (Pins 5/18 of the DB25 connector on the interface box to pins 1/9 of the Acromag DB15 connector)
    CH1 --- PMC TRANS (BNC cable from PD to pomona minigrabber to pins 2/10 of the Acromag DB15 connector)
    CH2-6 are unused currently and are available via the DB15 breakout board shown in Attachment #3. CH7 is not connected at the time of writing.
    The pin-out for the temperature sensor interface box may be found here. Restarted the modbus process. The channels are now being recorded, see Attachment #4, although checking the status of the modbus process, I get some error message, not sure what that's about.

So now we can monitor both the temperature of the enclosure (as reported by the sensor on the PSL table) and the NPRO diagnostics channels. The new fan for the controller has not been installed yet, due to us not having a good mounting solution for the new fans, all of which have a bigger footprint than the installed fan. But since the laser isn't running right now, this is probably okay.

modbusPSL.service - ModbusIOC Service via procServ
   Loaded: loaded (/etc/systemd/system/modbusPSL.service; disabled)
   Active: active (running) since Fri 2019-05-10 13:17:54 PDT; 2h 13min ago
  Process: 8824 ExecStop=/bin/kill -9 ` cat /run/modbusPSL.pid` (code=exited, status=1/FAILURE)
 Main PID: 8841 (procServ)
   CGroup: /system.slice/modbusPSL.service
           ├─8841 /usr/bin/procServ -f -L /home/controls/modbusPSL.log -p /run/modbusPSL.pid 8009 /cvs/cds/rtapps/epics-3.14.12.2_long/module...
           ├─8842 /cvs/cds/rtapps/epics-3.14.12.2_long/modules/modbus/bin/linux-x86_64/modbusApp /cvs/cds/caltech/target/c1psl2/npro_config.c...
           └─8870 caRepeater

May 10 13:17:54 c1auxex systemd[1]: Started ModbusIOC Service via procServ.
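As a quick sanity check that the two newly added channels are actually being served by the modbus IOC (a minimal sketch using pyepics; nothing beyond the two channel names above is taken from the actual setup):

# Sketch: confirm the two newly added PSL channels are alive and reachable.
from epics import caget

for ch in ("C1:PSL-FSS_RMTEMP", "C1:PSL-PMC_PMCTRANSPD"):
    val = caget(ch, timeout=2.0)
    print(f"{ch} = {val}" if val is not None else f"{ch}: no connection")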

Attachment 1: IMG_7427.JPG
Attachment 2: IMG_7428.JPG
Attachment 3: IMG_7429.JPG
Attachment 4: newPSLAcro.png
  14603   Fri May 10 18:24:29 2019   gautam   Update   NoiseBudget   aligoNB

I pulled the aligoNB git repo to /ligo/GIT/aligoNB/aligoNB. There isn't a reqs.txt file in the repo, so installing the dependencies on individual workstations to get this running is a bit of a pain. I found the easiest thing to do was to set up a virtual environment for the python3 stuff; this way we can run python2 for the cdsutils package (hopefully that gets updated soon). I'm setting up a C1 directory in there; the plan is to budget some subsystems like Oplev and ALS for now, and develop the code for the eventual IFO locking. As a test, I ran the H1 noise budget (./aligonb H1), and it works, so it looks like I got all the dependencies...

  14604   Sat May 11 11:48:54 2019   Jon   Update   PSL   Some work on/around PSL table

I took a look at the error being encountered by the modbusPSL service. The problem is that the /run/modbusPSL.pid file is not being generated by procServ, even though the -p flag controlling this is correctly set. I don't know the reason for this, but it was also a problem on c1vac and c1susaux. The solution is to remove the custom kill command (ExecStop=...) and just allow systemd to stop it via its default internal kill method.

modbusPSL.service - ModbusIOC Service via procServ
   Loaded: loaded (/etc/systemd/system/modbusPSL.service; disabled)
   Active: active (running) since Fri 2019-05-10 13:17:54 PDT; 2h 13min ago
  Process: 8824 ExecStop=/bin/kill -9 ` cat /run/modbusPSL.pid` (code=exited, status=1/FAILURE)
 Main PID: 8841 (procServ)
   CGroup: /system.slice/modbusPSL.service
           ├─8841 /usr/bin/procServ -f -L /home/controls/modbusPSL.log -p /run/modbusPSL.pid 8009 /cvs/cds/rtapps/epics-3.14.12.2_long/module...
           ├─8842 /cvs/cds/rtapps/epics-3.14.12.2_long/modules/modbus/bin/linux-x86_64/modbusApp /cvs/cds/caltech/target/c1psl2/npro_config.c...
           └─8870 caRepeater

May 10 13:17:54 c1auxex systemd[1]: Started ModbusIOC Service via procServ.

  14605   Mon May 13 10:45:38 2019   gautam   Update   PSL   PSL turned ON again

I used some double-sided tape to attach a San Ace 60 9S0612H4011 to the Innolight controller (Attachment #1). This particular fan is rated to run with up to 13.8V, but I'm using a +15V Sorensen output - at best, this shortens the lifespan of the fan, but I don't have a better solution for now. Then I turned the laser on again (~1040 local time), using the same settings Rana configured earlier in this thread. PMC was locked, and the IMC also could be locked but I closed the shutter for now while the laser frequency/intensity stabilizes after startup. The purpose is to facilitate completion of the pre-vent alignment checklist in prep for the planned vent tomorrow. PMC Trans reports 0.63 after alignment was optimized, which is ~15% lower than in Oct 2016.

Attachment 1: IMG_7431.JPG
  14606   Mon May 13 18:48:32 2019   gautam   Update   General   Vent prep
  1. c1auxey and c1aux VME crates were keyed.
  2. EX and EY NPROs were turned on.
  3. Y arm was aligned to the IR - best effort TRY ~0.75.
  4. EY green was aligned to the Y arm cavity. The spot is on the lower right quadrant on the CCD monitor, but GTRY ~0.35.
  5. #3 and #4 were repeated for XARM.
  6. All beams were centered on the Oplev and IP POS QPDs with this reference alignment - see Attachment #1. SOS optic and TT DC bias positions were saved to burt snap files.
  7. I've never really used it but I updated all the SUS "driftmon" values - Attachment #2.
  8. Power going into the IMC was cut from 945 mW to 100 mW (both numbers measured with FieldMate power meter) by rotating the HWP installed last time for this purpose from 244 degrees (OLD) to 208 degrees (NEW). There was no beam dump for the reflected port of the PBS used to cut power, so I installed one, see Attachment #4.
  9. The T=90% BeamSplitter in the MC REFL path was replaced with a 2" HR mirror as is the norm for the low power IMC locking. Alignment of the MC REFL beam onto the MC REFL PD was tweaked.
  10. init.d file was edited and MCautolocker initctl process was restarted on Megatron to adopt the low power settings. It was locked, MCT ~1350 counts, see Attachment #3. Also adjusted the threshold level above which to have the slow PID offloading of FSS PZT voltage from 10000 to 1000.

I believe this completes the non-Chub portions of the pre-vent checklist, we will start letting air into the main volume ASAP tomorrow morning after crossing off the remaining items.

Main goal of this vent is to investigate the oddness of the YARM suspensions. I leave the PSL NPRO on overnight in the interest of data gathering, it's been running ~10 hrs now - I suspect it'll turn itself off before we are ready to vent in the AM.

Attachment 1: ventPrep_20190514.png
Attachment 2: driftMon_20190514.png
Attachment 3: lowPowIMC.png
Attachment 4: IMG_7434.JPG
  14607   Tue May 14 10:35:58 2019   gautam   Update   General   Vent underway
  1. PSL had stayed on overnight. There was an EQ (M 4.6 near Costa Rica) which showed up on the Seis BLRMS, and I noticed that several optics were reporting Oplev spots off their QPDs (I had just centered these yesterday). So I did a quick alignment check:
    • IMC was readily locked
    • After moving test mass bias sliders to bring Oplev spots back to the center, the EX and EY green beams were readily locked to a TEM00 mode
    • IR flashes could be seen in TRX and TRY (though their levels are low, since we are operating with 1/10th the nominal input power)
    • The IP-POS QPD channels were reporting a "segmentation fault" so I keyed the c1iscaux crate and they came back. Still the QPD was reporting a low SUM value, but this too is because of the lower power. Conveniently, there was an ND2.0 filter in the beam path on a flip mount which I just flipped out of the way for the low-power tracking.
    • Then, PSL and green shutters were closed and Oplev loops were disengaged.
  2. Checked that we have an RGA scan from today
  3. During the walkthrough to check the jam nuts, Chub noticed that the outer nuts on the bellows between the OMC chamber and the IMC chamber were loose to the finger! He is tightening them now and checking the remaining jam nuts. AFAIK, Steve made it sound like this was always a formality. Should we be concerned? The other jam nuts are fine according to Chub.
  4. We valved off the pumpspool from the main volume and annuli, and started letting Nitrogen into the main volume at ~1045am.
  5. Started letting instrument grade air into the main volume at ~1130am. We are aiming for a pressure increase of 3 torr/min.
  6. 4 cylinders of dry air were exhausted by ~330pm. It actually looks like we over-pressured the main volume by ~20 torr - this is bad; we should've stopped the air inletting at ~700 torr and then let the main volume equilibrate to lab air pressure (see the back-of-the-envelope sketch after this list).
  7. At some point during the vent, the main volume pressure exceeded the working range of the cold cathode gauge CC1. It reports "Current Fail" on its LED display, which I'm assuming means it automatically shut off its HV to protect itself; Jon tells me the vacuum code isn't responsible for initiating any shutoff.
  8. A new vacuum state was added to reflect these conditions (pumpspool under vacuum, main volume at atmosphere).
  9. The annuli remain under vacuum for now. Tomorrow, when we remove the EY door, we will vent the EY annulus.
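
Back-of-the-envelope check of the vent timeline referred to in items 5 and 6 (a sketch; only the 3 torr/min ramp rate and the ~20 torr overshoot are from this log, the start/target pressures are assumed):

P_start  = 1.0     # torr, main volume pressure at the start of the vent (assumed)
P_target = 760.0   # torr, nominal lab air pressure
rate     = 3.0     # torr/min, intended ramp rate

t_min = (P_target - P_start) / rate
print("expected vent duration: %.0f min (~%.1f hr)" % (t_min, t_min / 60))

This gives ~250 min, i.e. ~4 hr, consistent with starting the dry air at ~1130am and finishing around ~330pm. Stopping the inlet at ~700 torr and letting the volume equilibrate up to lab pressure would have avoided the overshoot.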

IMC was locked, MC2T ~1200 cts after some alignment touch ups. The test mass oplevs indicate some drift, ~100 urad. I didn't realign them.

The EY door removal will only be done tomorrow. I will take some free-swinging ETMY data today (suspension was kicked at 1241919438) to see if anything has changed (it shouldn't have). I need to think up a systematic debugging plan in the meantime.

Attachment 1: vent.png
vent.png
Attachment 2: Screenshot_from_2019-05-14_16-35-16.png
Screenshot_from_2019-05-14_16-35-16.png
  14608   Wed May 15 00:40:19 2019 gautamUpdateSUSETMY diagnosis plan

I collected some free-swinging data from earlier this evening. There are still only 3 peaks visible in the ASDs; see Attachment #1.
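
For anyone who wants to reproduce these sensor ASDs, something like the following works (a minimal sketch - the NDS server name/port and DQ channel names are written from memory and should be checked; the kick GPS time is the one quoted in the previous entry):

import numpy as np
import nds2                      # LIGO NDS2 client python bindings
from scipy.signal import welch

chans = ['C1:SUS-ETMY_%sSEN_OUT_DQ' % c for c in ('UL', 'UR', 'LR', 'LL', 'SD')]
t0    = 1241919438 + 600         # GPS: skip the first ~10 min after the kick
dur   = 4096                     # seconds of free-swinging data to use

conn = nds2.connection('nds40.ligo.caltech.edu', 31200)
for buf in conn.fetch(t0, t0 + dur, chans):
    fs = buf.channel.sample_rate
    f, pxx = welch(buf.data, fs=fs, nperseg=int(256 * fs))   # ~4 mHz resolution
    band = (f > 0.5) & (f < 1.2)
    print(buf.channel.name, 'strongest peak at %.3f Hz' % f[band][np.argmax(pxx[band])])

Counting the well-resolved peaks in the 0.5-1.2 Hz band of each sensor's spectrum is what the "3 peaks" statement above refers to.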

Plan for tomorrow:

TBH, I don't have any clear ideas as to what we are supposed to do to fix the problem (or even what the problem is). So here is my plan for now:

  1. Take pictures of relative position of magnet and OSEM coil for all five coils
  2. Inspect positions of all EQ stops - back them well out if any look suspiciously close
  3. Inspect suspension wire for any kinks
  4. Inspect position of suspension wire in standoff

I anticipate that these will throw up some more clues.

Attachment 1: ETMY_sensorSpectra.pdf
ETMY_sensorSpectra.pdf
  14609   Wed May 15 10:56:47 2019 gautamUpdatePSLPSL turned ON again

To test the hypothesis that the fan replacement had any effect on the NPRO shutoff phenomenon, I turned the HEPA on the PSL table down to the nominal 30% setting at ~10am.

Tomorrow I will revert the laser crystal temperature to whatever the nominal value was. If the NPRO runs in that configuration (i.e. the only changes from March 2019 are the diode TEC setpoints and the new fan on the back of the controller), then hurray.

  14610   Wed May 15 10:57:57 2019 gautamUpdateSUSEY chamber opened

[chub, gautam]

  1. Vented the EY annulus.
  2. Took the heavy door off, put it on the wooden rack, put a light door on at ~11am.
  14611   Wed May 15 17:46:24 2019 gautamUpdateSUSETMY inspection

I set up the usual mini-cleanroom around the ETMY chamber. Then I carried out the investigative plan outlined here.

Main finding: I saw a fiber of what looks like first contact on the bottom left (as viewed from the HR side) of ETMY, connecting the optic to the cage. See Attachment #1. I don't know whether this can explain the problem of the missing eigenmode, since it's not a hard constraint. It seems like something that should be addressed in any case. How do we want to remove it? Just grab it with tweezers and pull it off, or apply a larger FC patch and then peel it off? I'm pretty sure it's first contact and not a piece of PEEK mesh because I can see it is adhered to the HR side of the optic, but I couldn't capture that detail in a photo.

There weren't any obvious problems with the magnet positioning inside the OSEMs or with the suspension wire. All the EQ stop tips were >3mm away from the optic.

I also backed out the bottom EQ stops on the far (south) side of the optic by ~2 full turns of the screw. Taking another free-swinging dataset now to see if anything has changed. I will upload all the photos I took, with annotations, to the gPhotos later today eve. Light doors back on at ~1730.

Update 10pm: the photos have been uploaded. I've added a "description" to each photo which should convey the message of that particular shot; it shows up in my browser on the bottom left of the photo, but can also be accessed by clicking the "info" icon. Please have a look and comment if something sticks out as odd / requires correction.

Update 1045pm: I looked at the freeswinging data from earlier today. Still only 3 peaks around 1 Hz.

The following optics were kicked:
ETMY
Wed May 15 17:45:51 PDT 2019
1242002769
Attachment 1: firstContactFiber.JPG
firstContactFiber.JPG
Attachment 2: ETMY_sensorSpectra.pdf
ETMY_sensorSpectra.pdf
  14612   Wed May 15 19:36:29 2019 KojiUpdateSUSETMY inspection

A pair of tweezers is OK as long as there are no magnets around. You need to (somewhat) constrain the mirror with the EQ stops so that you can pull the fiber off without dragging the mirror.

  14613   Thu May 16 13:07:14 2019 gautamUpdateSUSFirst contact residue removal

I used a pair of tweezers to remove the stray fiber of first contact. As Koji predicted, it was rather dry and didn't have the usual elasticity, so while I was able to pull most of it off, there is a small spot remaining on the HR surface of the ETM. We will remove this with a fresh application of a small patch of FC.

In the meantime, I'm curious whether this has actually fixed the suspension woes, so yet another round of free-swinging data collection is ongoing. From the first 5 mins it looks positive: I see 4 peaks around 1 Hz. Cool!

The following optics were kicked:
ETMY
Thu May 16 13:06:39 PDT 2019
1242072418

Update 730pm: There are now four well-defined peaks around 1 Hz. Together with the bounce and roll modes, that makes six. The peak at 0.92 Hz, which I believe corresponds to the Yaw eigenmode, is significantly lower than the other three. I wanted to get some info about the input matrix, but there was some NDS dropout and large segments of data aren't available using the python nds fetch method, so I am trying again; ETMY was kicked at 1828 PDT. We may benefit from some adjustment of the OSEM positions, since the coupling of the bounce mode to LL is high, and the SIDE/POS resonances aren't clearly separated either. The remaining stray first contact has to be removed too. But overall I think it was a successful removal, and the suspension characteristics are more in line with what is "expected". 

Attachment 1: etmy_sensors.pdf
etmy_sensors.pdf
Attachment 2: etmy_BRmode.pdf
etmy_BRmode.pdf
  14614   Thu May 16 22:58:25 2019 gautamUpdateASSIn air ASS test with green?

We were wondering yesterday if we can somehow test the ASS system in air. Though the arm cavity can be locked with the low power IMC transmission, I think the dither would render the POY lock unstable. But I wonder if we can use the green beam for a test. The steering PZTs installed by Yuki can serve the role of TT1/TT2, and we can dither the arm cavity mirrors while the green TEM00 mode is locked to the arm, no problem. This would at least give us confidence that the actuation of ETMY/ITMY is okay (in addition to the other suspension tests). Then on the sensing side, the only things we'd be foiled by after pumping down are in-vacuum clipping or some major gunk on ETMY - everything else should be debuggable even under vacuum?

I think most of the CDS infrastructure for this is already in place.
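
As an illustration of the sensing side of such a test, here is a toy lock-in demodulation of the green transmission at a dither frequency (purely illustrative - the dither frequency, coupling, and noise level are made-up numbers, not measurements):

import numpy as np

fs, T  = 2048.0, 60.0                          # sample rate [Hz], averaging time [s]
f_dith = 11.3                                  # Hz, assumed ETMY PIT dither frequency
t      = np.arange(0, T, 1.0 / fs)

coupling = 3e-3                                # fractional GTRY modulation (toy value)
gtry = 1.0 + coupling * np.sin(2*np.pi*f_dith*t) + 1e-3*np.random.randn(t.size)

# demodulate against in-phase / quadrature local oscillators; the recovered
# amplitude is proportional to (spot decentering) x (dither amplitude)
ac    = gtry - gtry.mean()
err_i = 2 * np.mean(ac * np.sin(2*np.pi*f_dith*t))
err_q = 2 * np.mean(ac * np.cos(2*np.pi*f_dith*t))
print("I = %.2e, Q = %.2e" % (err_i, err_q))   # I should recover ~3e-3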

  14615   Thu May 16 23:31:55 2019 gautamUpdateSUSETMY suspension characterization

Here is my analysis. I think there are still some problems with this suspension.

Attachment #1: Time domain plots of the ringdown. The LL coil has a peak response ~half that of the other face OSEMs. I checked that the signal isn't railing; the lowest level is > 100 cts.

Attachment #2: Complex TF from UL to the other coils. While there are four peaks now, looking at the phase information, it isn't possible to clearly disentangle PIT or YAW motion - in fact, for all peaks, there are at least three face shadow sensors which report the same phase. The gains are also pretty poorly balanced - e.g. for the 0.77 Hz peak, the magnitude of UR->UL is ~0.3, while LR->UL is ~3. Is it reasonable that there is a factor of 10 imbalance?
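
For anyone repeating this, a complex TF between two sensor channels can be estimated as the cross-spectrum of the coil signal against UL divided by the UL power spectrum, then read off at each peak. A minimal sketch (the helper name is made up; ul/lr would be the free-swinging sensor arrays and fs their sample rate):

import numpy as np
from scipy.signal import csd, welch

def cplx_tf(ref, sig, fs, seg_s=256):
    """H1 transfer function estimate ref -> sig: CSD(ref, sig) / PSD(ref)."""
    n = int(seg_s * fs)
    f, Pxy = csd(ref, sig, fs=fs, nperseg=n)
    _, Pxx = welch(ref, fs=fs, nperseg=n)
    return f, Pxy / Pxx

# usage:
# f, tf = cplx_tf(ul, lr, fs)
# i = np.argmin(abs(f - 0.77))                    # e.g. the 0.77 Hz peak
# print(abs(tf[i]), np.degrees(np.angle(tf[i])))  # magnitude and phase there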

Attachment #3: Nevertheless, I assumed the following mapping of the peaks (the quoted f0 is from a Lorentzian fit) and attempted to find the input matrix that best converts the Sensor basis into the Euler basis.

DoF    f0 [Hz]
POS    1.004
PIT    0.771
YAW    0.920
SIDE   0.967

Unsurprisingly, the elements of this matrix are very different from unity (I have to fix the normalization of the rows).
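
One way to construct such an input matrix (a sketch with an ideal, placeholder sensing matrix rather than the measured responses): take the matrix whose columns are the relative responses of the five OSEMs at each eigenfrequency, and use a normalized pseudo-inverse of it.

import numpy as np

#               POS    PIT    YAW    SIDE
A = np.array([[ 1.0,   1.0,   1.0,   0.0],   # UL
              [ 1.0,   1.0,  -1.0,   0.0],   # UR
              [ 1.0,  -1.0,  -1.0,   0.0],   # LR
              [ 1.0,  -1.0,   1.0,   0.0],   # LL
              [ 0.0,   0.0,   0.0,   1.0]])  # SD   (ideal values; the measured ones differ)

M = np.linalg.pinv(A)                              # rows map sensors -> POS/PIT/YAW/SIDE
M = M / np.max(np.abs(M), axis=1, keepdims=True)   # one possible row normalization
print(np.round(M, 2))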

Attachment #4: Pre and post diagonalization spectra. The null stream certainly looks cleaner, but then again, this is by design so I'm not sure if this matrix is useful to implement.

Next steps:

  1. Repeat the actuator diagonality test detailed here.
  2. ???

In case anyone wants to repeat the analysis, the suspension was kicked at 1828 PDT today and this analysis uses 15000 seconds of data from then onwards.
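
The quoted f0 values come from Lorentzian fits; something like the following reproduces that step (a minimal sketch using a damped-oscillator magnitude model and toy data in place of the measured ASD slice):

import numpy as np
from scipy.optimize import curve_fit

def lorentzian(f, a, f0, Q, c):
    # magnitude of a damped harmonic oscillator response, plus a flat background
    return a / np.sqrt((f0**2 - f**2)**2 + (f0 * f / Q)**2) + c

# toy data around the 0.77 Hz peak, just to show the call pattern
f   = np.linspace(0.70, 0.85, 300)
asd = lorentzian(f, 1e-3, 0.771, 300.0, 1e-4) + 1e-5 * np.random.randn(f.size)

popt, pcov = curve_fit(lorentzian, f, asd, p0=[1e-3, 0.77, 100.0, 0.0])
print("f0 = %.3f Hz, Q = %.0f" % (popt[1], popt[2]))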

Update 18 May 3pm: Attachment #5 is a better presentation of the data shown in Attachment #2; the remark about the odd phasing of the coils is seen more clearly in this zoomed-in view. Attachment #6 shows Lorentzian fits to the peaks - the Qs are comparable to those seen for the other optics, although the Q of the 0.77 Hz peak is rather low.

Attachment 1: ETMY_sensors_timeDomain.pdf
ETMY_sensors_timeDomain.pdf
Attachment 2: ETMY_cplxTF.pdf
ETMY_cplxTF.pdf
Attachment 3: matrixDiag.png
matrixDiag.png
Attachment 4: ETMY_diagComp.pdf
ETMY_diagComp.pdf
Attachment 5: ETMY_cplxTF.pdf
ETMY_cplxTF.pdf
Attachment 6: ETMY_pkFitNaive.pdf
ETMY_pkFitNaive.pdf