40m Log, Page 46 of 335
ID Date Author Type Category Subject
14595   Mon May 6 10:51:43 2019   gautam   Update   PSL   PSL turned off again

As we have seen in the last few weeks, the laser turned itself off after a few hours of running. So bypassing the lab interlock system / reverting laser crystal temperature to the value from Innolight's test datasheet did not fix the problem.

I do not understand why the "Interlock" and "TGUARD" channels revert to their lasing-state values a few minutes after the shutoff. Is this just an artefact of the way the diagnostics are set up, or is it telling us something about what is causing the shutoff?

Attachment 1: NPROshutoff.png
14594   Fri May 3 15:40:33 2019   gautam   Update   General   CVI 2" beamsplitters delivered

Four new 2" CVI 50/50 beamsplitters (2 for p-pol and 2 for s-pol) were delivered. They have been stored in the optics cabinet, along with the "Test Data" sheets from CVI.

14593   Fri May 3 12:51:58 2019   gautam   Update   PSL   PSL turned on again

Per instructions from Coherent, I made some changes to the NPRO settings. The values we were operating at are in the column labelled "Operating value", while those from the Innolight test datasheet are in the rightmost column. I changed the Xtal temp and pump current to the values Innolight tested them at (but not the diode temps, as they were close and require a screwdriver to adjust), and turned the laser on again at ~1245pm local time. The Acromag channels are recording the diagnostic information.

update 2:30pm - looking at the trend, I saw that the D2 TGuard channel was reporting 0V. This wasn't the case before. Suspecting a loose contact, I tightened the DSub connectors at the controller and Acromag box ends. Now it too reports ~10V, which according to the manual signals normal operation. So if one sees an abrupt change in this channel in the long trend since 1245pm, that's me re-seating the connector. According to the manual, an error state would be signalled by a negative voltage at this pin, down to -12V. Also, the Innolight manual says pin 13 of the diagnostics connector indicates the "Interlock" state, but doesn't say what the "expected" voltage should be. The newer manual Coherent sent me has pin 13 listed as "Do not use".

Setting            Operating value   Value Innolight tested at
Diode 1 temp [C]   20.74             21.98
Diode 2 temp [C]   21.31             23.01
Xtal temp [C]      29.39             25.00
Pump current [A]   2.05              2.10
Chub and I crossed off some of these items this morning. The last bullet was addressed by Jon yesterday. I added a couple of new bullets.

The new power connectors will arrive next week, at which point we will install them. Note that there is no 24V Sorensen available, only 20V.

I am running a test on the 2W Mephisto for which I wanted the diagnostics connector plugged in again and Acromag channels recording it. So we set up the highly non-ideal but temporary setup shown in Attachment #1. This will be cleaned up by Monday evening at the latest.

update 1630 Monday 5/6: the sketchy PSL acromag setup has been disassembled.

 Quote:
 • Take photos of the new setup, cabling.
 • Remove the old c1susaux crate from the rack to free up space, possibly put the PSL monitoring acromag chassis there.
 • Test that the OSEM PD whitening switching is working for all 8 vertex optics. (verified as of 5/3/19 5pm)
 • New 15V and 24V power cables with standard LIGO connectors need to be run from the Sorensen supplies in 1X5. The chassis is currently powered by bench supplies sitting on a cart behind the rack.
 • All 24 new DB-37 signal cables need to be labeled.
 • New 96-pin DIN connectors need to be put on two ribbon cables (1Y5_80 B, 1Y5_81) in the 1X4 rack. We had to break these connectors to remove them from the back of the eurocrates.
 • General cleanup of any cables, etc. left around the rack. We cleaned up most things this evening.
 • Rename the host computer c1susaux2 --> c1susaux, and update the DNS lookup tables on chiara.
Attachment 1: D38CC485-1EB6-4B34-9EB1-2CB1E809A21A.jpeg
14591   Fri May 3 09:12:31 2019   gautam   Update   SUS   All vertex SUS watchdogs were tripped

I found the 8 vertex watchdogs tripped this morning. The ETMs were fine, suggesting this was not an actual earthquake. I suspect it was connected to this remote work? Was there a reason why they were left tripped?

On a side note - I don't think we log the watchdog state explicitly. We can infer whether the optic is damped by looking at the OSEM sensor time series, but do we want to record the watchdog state to frames?

Attachment 1: SUSwatchdogs.png

For future reference:

• The updated list of c1susaux channel wiring (includes the "coil enable" --> "coil test" digital outputs change)
• Step-by-step instructions on how to set up an Acromag system from scratch
14589   Thu May 2 15:15:15 2019   Jon   Omnistructure   Computers   susaux machine renamed

Now that the replacement susaux machine is installed and fully tested, I renamed it from c1susaux2 to c1susaux and updated the DNS lookup tables on chiara accordingly.

14588   Thu May 2 10:59:58 2019   Jon   Update   SUS   c1susaux in situ wiring testing completed

## Summary

Yesterday Gautam and I ran final tests of the eight suspensions controlled by c1susaux, using PyIFOTest. All of the optics pass a set of basic signal-routing tests, which are described in more detail below. The only issue found was with ITMX having an apparent DC bias polarity reversal (all four front coils) relative to the other seven susaux optics. However, further investigation found that ETMX and ETMY have the same reversal, and there is documentation pointing to the magnets being oppositely-oriented on these two optics. It seems likely that this is the case for ITMX as well.

I conclude that all the new c1susaux wiring/EPICS interfacing works correctly. There are of course other tests that can still be scripted, but at this point I'm satisfied that the new Acromag machine itself is correctly installed. PyIFOTest has been morphed into a powerful general framework for automating IFO tests. Anything involving fast/slow IO can now be easily scripted. I highly encourage others to think of more applications this may have at the 40m.
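The pass/fail logic for the signal-routing tests summarized above (described in more detail in the Usage section) amounts to checking a matrix of VMon responses for diagonal dominance. A minimal sketch of such a check — a hypothetical reimplementation for illustration, not PyIFOTest's actual code, with placeholder thresholds:

```python
import numpy as np

def vmon_matrix_passes(dvmon, min_diag=0.1, max_offdiag_ratio=0.2):
    """Check a change-in-VMon response matrix for diagonal dominance.

    dvmon[i, j] = change in VMon j (volts) when coil i is actuated.
    Passes if every diagonal response is large enough and every
    off-diagonal response is small relative to its row's diagonal.
    Thresholds here are illustrative placeholders.
    """
    dvmon = np.abs(np.asarray(dvmon, dtype=float))
    diag = np.diag(dvmon)
    if np.any(diag < min_diag):
        return False  # an actuated coil did not register on its own VMon
    off = dvmon - np.diagflat(diag)
    # each off-diagonal element must be << its row's diagonal response
    return bool(np.all(off <= max_offdiag_ratio * diag[:, None]))
```

A wiring swap (e.g. two channels exchanged) shows up as a permuted matrix with zeros on the diagonal, which this check rejects.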

## Usage and Design

The code is currently located in /users/jon/pyifotest although we should find a permanent location for it. From the root level it is executed as

./IFOTest <PARAMETER_FILE>

where PARAMETER_FILE is the filepath to a YAML config file containing the test parameters. I've created a config file for each of the suspended optics. They are located in the root-level directory and follow the naming convention SUS-<OPTIC>.yaml.

The code climbs a hierarchical "ladder" of actuation/readback-paired tests, with the test at each level depending on signals validated in the preceding level. At the base is the fast data system, which provides an independent reference against which the slow channels are tested. There are currently three scripted tests for the slow SUS channels, listed in order of execution:

1. VMon test: Validates the low-frequency sensing of SUS actuation (VMon channels). A DC offset is applied in the final filter module of the fast coil outputs, one coil at a time. The test confirms that the VMon of the actuated coil, and only this VMon, senses the displacement, and that the response has the correct polarity. The screen output is a matrix showing the change in VMon responses with actuation of each coil. A passing test, roughly, is diagonal values >> 0 and off-diagonal values << diagonal.

2. Coil Enable test: Validates the slow watchdog control of the fast coil outputs (Coil-Enable channels). Analogously to (1), this test also applies a DC offset via the fast system and analyzes the VMon responses. However, in this case, the offset is enabled to all five coils simultaneously and only one coil output is enabled at a time. The screen output is again a ΔVMon matrix interpreted in the same way as above.

3. PDMon/DC Bias test: Validates slow alignment control and readback (BiasAdj and PDMon channels). A DC misalignment is introduced first in pitch, then in yaw, with the OSEM PDMon responses measured in both cases.
Using the gains from the PIT/YAW --> COIL output coupling matrix, the script verifies that each coil moves in the correct direction and by a sufficiently large magnitude for the applied DC bias. The screen output shows the change in PDMon responses with a pure pitch actuation, and with a pure yaw actuation. The output filter matrix coefficients have already been divided out, so a passing test is a sufficiently large, positive change under both pitch and yaw actuations.

14587   Thu May 2 10:41:50 2019   gautam   Update   SUS   SOS Magnet polarity

A concern was raised about the two ETMs and ITMX having the opposite response (relative to the other 7 SOS optics) in the OSEM PDmon channel in response to a given polarity of PIT/YAW offset applied to the coils. Jon has factored in all the digital gains in the actuation part of the CDS system in making this conclusion. I raised the possibility of the OSEM coil winding direction being opposite on the 15 OSEMs of the ETMs and ITMX, but I think it is more likely that the magnets are just glued on opposite to what they are "supposed" to be. See Attachment #6 of this elog (you'll have to rotate the photo either in your head or in your viewer) and note that it is opposite to what is specified in the assembly procedure, page 8. The net magnetic quadrupole moment is still 0, but the direction of actuation in response to current in the coil in a given direction would be opposite. I can't find magnet polarities for all 10 SOS optics, but this hypothesis fits all the evidence so far.

14586   Tue Apr 30 17:27:35 2019   Anjali   Update   Frequency noise measurement   Frequency noise measurement of 1 micron source

We repeated the homodyne measurement to check whether we are measuring the actual frequency noise of the laser.
The idea was to repeat the experiment when the laser is not locked and when the laser is locked to the IMC. The frequency noise of the laser is expected to be reduced at higher frequencies (the expected value is about 0.1 Hz/rtHz at 100 Hz) when it is locked to the IMC. In this measurement, the fiber beam splitter used is non-PM. Following are the observations:

1. Time domain output_laser unlocked.pdf: Time domain output when the laser is not locked. The frequency noise is estimated from data corresponding to the linear regime. The following time intervals were considered to calculate the frequency noise: (a) 104-116 s (b) 164-167 s (c) 285-289 s

2. Frequency_noise_laser_unlocked.pdf: Frequency noise when the laser is not locked. The model used has the functional form of 5x10^4/f as we did before. Compared to our previous results, the agreement between the experimental results and the model is worse in this measurement. In both cases, we have uncertainty because of the fiber length fluctuation. Moreover, this measurement could be affected by polarisation fluctuation as well.

3. Time domain output_laser locked.pdf: Time domain output when the laser is locked. The following time intervals were considered to calculate the frequency noise: (a) 70-73 s (b) 142-145 s (c) 266-269 s

4. Frequency_noise_laser_locked.pdf: Frequency noise when the laser is locked.

5. Frequency noise_comparison.pdf: Comparison of frequency noise in the two cases. The two values are not significantly different above 10 Hz. We would expect a reduction in frequency noise at higher frequencies once the laser is locked to the IMC, so this result may indicate that we are not really measuring the actual frequency noise of the laser.

Attachment 1: Homodyne_repeated_measurement.zip

14585   Mon Apr 29 19:23:49 2019   Jon   Update   Computer Scripts / Programs   Scripted tests of suspension VMons using fast system

I've added a scripted VMon/coil-enable test to PyIFOTest following the suggestion in #15542.
Basically, a DC offset is added to one fast coil output at a time, and all the VMon responses are checked. After resolving the swapped ITMX/ITMY eurocrate slots described in #14584, I ran the new scripted VMon test on all eight optics managed by c1susaux. All of them passed: SRM, BS, MC1, MC2, MC3, PRM, ITMX, ITMY. This is not the final suspension test we plan to do, but it gives me reasonably good confidence that all channels are connected correctly.

14584   Mon Apr 29 16:34:27 2019   gautam   Update   Electronics   ITMX/ITMY mis-labelling fixed at 1X4 and 1X5

After the X and Y arm naming conventions were changed, the labelling of the electronics in the eurocrates was not changed 😞. This meant that when we hooked up the new Acromag crate, all the slow ITMX channels were in fact connected to the physical ITMY optic. I fixed the labelling now - Attachments #1 and #2 show the coil driver boards and SUS PD whitening boards correctly labelled. Our electronics racks are in desperate need of new photographs. The "Y" arm runs in the EW direction, while the "X" arm runs in the NW direction as of April 29 2018. ITMX was freed. ITMY is also free.

Attachment 1: IMG_7400.JPG
Attachment 2: IMG_7401.JPG

14583   Mon Apr 29 16:25:22 2019   gautam   Update   PSL   PSL turned on again

I turned the 2W NPRO back on again at ~4pm local time, dialing the injection current up from 0-2A in ~2 mins. I noticed today that the lasing only started at 1A, whereas just last week, it started lasing at 0.5A. After ~5 minutes of it being on, I measured 950 mW after the 11/55 MHz EOM on the PSL table. The power here was 1.06 W in January, so ~100 mW lower now. 😮

I found out today that the way the python FSS SLOW PID loop is scripted, if it runs into an EZCA error (due to the c1psl slow machine being dead), it doesn't handle this gracefully (it just gets stuck). I rebooted the crate for now and the MC autolocker is running fine again.
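A minimal sketch of the kind of graceful error handling the FSS SLOW script currently lacks. Here `getter` stands in for whatever EZCA/channel-access call the script actually uses (a hypothetical interface, not the real script); the point is that a dead slow machine should make the loop skip a cycle, not hang forever:

```python
import time

def robust_read(getter, channel, retries=3, wait=1.0):
    """Read an EPICS channel, tolerating transient channel-access errors.

    `getter` is whatever channel-access call the script uses (e.g. an
    ezca/caget wrapper); this sketch only assumes it raises on failure.
    Returns None if the channel stays unreachable, so the calling PID
    loop can skip the cycle instead of getting stuck.
    """
    for _ in range(retries):
        try:
            return getter(channel)
        except Exception:
            time.sleep(wait)  # back off before retrying
    return None
```

The PID loop would then treat a None reading as "hold the last output and try again next cycle".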
NPRO turned off again at ~8pm local time after Anjali was done with her data taking. I measured the power again; it was still 950 mW, so at least the output power isn't degrading by an appreciable amount over 4 hours...

14582   Sun Apr 28 16:00:17 2019   gautam   Update   Computer Scripts / Programs   List of suspension tests

Here are some tests we should script.

1. Checking Coil Vmons, OSEM PDmons, and watchdog enable wiring
• Disable input to all the coil output filter modules using C1:SUS-<OPTIC>_<COIL>_SWSTAT (this is to prevent the damping loop control signals from being sent to the suspension).
• Set ramptimes for these filter modules to 0 seconds.
• Apply a step of 100 cts (~15 mV) using the offset field of this filter module (so the test signal is being generated by the fast CDS system).
• Confirm that the step shows up in the correct coil Vmon channel with the appropriate size (in volts), and that other Vmons don't show any change (need to check the sign as well, based on the overall gain in this filter module).
• Confirm that the largest response in the PDmon signals is for the same OSEM. There will be some cross-coupling, but I think we always expect the largest response to be in the OSEM whose magnet we pushed, provided the transimpedances are the same across all 5 coils (which is true except for PRM side), so this should be a robust criterion.
• Take the step off using the watchdog enable field, C1:SUS-<OPTIC>_<COIL>_COMM. This allows us to confirm the watchdog signal wiring as well.
• Reset ramptimes, watchdogs, input states to filter modules, and offsets to their pre-test values.
• This test should tell us that the wiring assignments are correct, and that the Acromag ADC inputs are behaving as expected and are calibrated to volts.
• This test should be done one channel at a time to check the wiring assignments are correct.

2.
Checking the SUS PD whitening
• Measure the spectrum of individual PD input (fast CDS) channels above 30 Hz with the whitening in a particular state.
• Toggle the whitening state.
• Confirm that the whitened sensor noise above 30 Hz is below the unwhitened case (which is presumably ADC noise).
• This test should be done one channel at a time to check the wiring assignments are correct.

Checking the Acromag DAC calibration is more complicated, I think. There are measurements of the actuator calibration in units of nm/ct for the fast DACs. But these are only valid above the pendulum resonance frequency, and I'm not sure we can synchronously drive a 10 Hz sine wave using the EPICS channels. The current test, which drives the PIT/YAW DoFs with a DC misalignment and measures the response in the PDmon channels, is a bit ad hoc in the way we set the "expected" response which is the PASS/FAIL criterion for the test. Moreover, the cross-coupling between the PDmon channels may be quite high. Needs some thought...

14581   Fri Apr 26 19:35:16 2019   Jon   Update   SUS   New c1susaux installed, passed first round of scripted testing

[Jon, Gautam]

Today we installed the c1susaux Acromag chassis and controller computer in the 1X4 rack. As noted in 14580, the prototype Acromag chassis had to first be removed to make room in the rack. The signal feedthroughs were connected to the eurocrates by 10' DB-37 cables via adapters to 96-pin DIN. Once installed, we ran a scripted set of suspension actuation tests using PyIFOTest. BS, PRM, SRM, MC1, MC2, and MC3 all passed these tests. We were unable to test ITMX and ITMY because both appear to be stuck. Gautam will shake them loose on Monday.

Although the new c1susaux is now mounted in the rack, there is more that needs to be done to make the installation permanent:

• New 15V and 24V power cables with standard LIGO connectors need to be run from the Sorensen supplies in 1X5. The chassis is currently powered by bench supplies sitting on a cart behind the rack.
• All 24 new DB-37 signal cables need to be labeled.
• New 96-pin DIN connectors need to be put on two ribbon cables (1Y5_80 B, 1Y5_81) in the 1X4 rack. We had to break these connectors to remove them from the back of the eurocrates.
• General cleanup of any cables, etc. left around the rack. We cleaned up most things this evening.
• Rename the host computer c1susaux2 --> c1susaux, and update the DNS lookup tables on chiara.

On Monday we plan to continue with additional scripted tests of the suspensions.

gautam - some more notes:

• Backplane connectors for the SUS PD whitening boards, which now only serve the purpose of carrying the fast BIO signals used for switching the whitening, were moved from the P1 connector to the P2 connector for MC1, MC2, MC3, ITMX, ITMY, BS and PRM.
• In the process, the connectors for BS and PRM were detached from the ribbon cable (there wasn't any good way to unseat the connector from the shell that I know of). These will have to be repaired by Chub, and the signal integrity will have to be checked (as it has to be for the connectors that are allegedly intact).
• While we were doing the wiring, I disconnected the outputs of the coil driver board going to the satellite box (front panel DB15 connector on D010001). These were restored after our work for the testing phase.
• The backplane cables to the eurocrate housing the coil driver boards were also disconnected. They are currently just dangling, but we will have to clean them up if the new crate is performing alright.
• In general the cable routing cleanliness has to be checked and approved by Chub or someone else qualified. In particular, the power leads to the eurocrate are in the way of the DIN96-DB37 adaptor board of Johannes' design, particularly on the SUS PD eurocrate.
• Tapping new power rails for the Acromag chassis will have to be done carefully. Ideally we shouldn't have to turn off the Sorensens.
• There are some software issues we encountered today connected with the networking that have to be understood and addressed in a permanent way.
• Sooner rather than later, we want to reconnect the Acromag crate that was monitoring the PSL channels, particularly given the NPRO's recent flakiness.
• The NPRO was turned back on (following the same procedure of slowly dialing up the injection current). The primary motivation was to see if the mode cleaner cavity could be locked with the new SUS electronics. Looks like it could. I'm leaving it on over the weekend...

Attachment 1: IMG_3254.jpg
Attachment 2: IMG_3256.jpg

14580   Fri Apr 26 12:32:35 2019   Jon   Update   PSL   modbusPSL service shut down

Gautam and I are removing the prototype Acromag chassis from the 1X4 rack to make room for the new c1susaux hardware. I shut down and disabled the modbusPSL service running on c1auxex, which serves the PSL diagnostic channels hosted by this chassis. The service will need to be restarted and re-enabled once the chassis has been reinstalled elsewhere.

14579   Fri Apr 26 12:10:08 2019   Anjali   Update   Frequency noise measurement   Frequency noise measurement of 1 micron source

From the earlier results with the homodyne measurement, the Vmax and Vmin values observed were comparable with the expected results. So in the time interval between these two points, the MZI is assumed to be in the linear region, and I tried to find the frequency noise based on the data available in this region. This result is not significantly different from what we got before when we took the complete time series to calculate the frequency noise. Attachment #1 shows the time domain data considered and Attachment #2 shows the frequency noise extracted from that. As discussed, we will be trying the heterodyne method next. Initially, we will be trying to save the data with a two channel ADC at 16 kHz sampling rate. With this setup, we can get information only up to 8 kHz.
Attachment 1: Time_domain_data.pdf
Attachment 2: Frequency_noise_from_data_in_linear_region.pdf

14578   Thu Apr 25 18:14:42 2019   Anjali   Update   PSL   Door broken

It was noticed that one of the doors (door #2) of the PSL table is broken. Attachment #1 shows the image.

Attachment 1: IMG_6069.JPG

14577   Thu Apr 25 17:31:56 2019   gautam   Update   PSL   Innolight NPRO shutoff

The NPRO shut off at ~1517 local time this afternoon. Again, not many clues from the NPRO diagnostics channels, but to my eye, the D1_POW channel shows the first variation from the "steady state", followed by the other channels. This is ~0.1 sec before the other channels register some change, so I don't know how much we can trust the synchronization of the EPICS data streams. I won't turn it on again for now. I did check that the little fan on the back of the NPRO controller is still rotating.

gautam 10am 4/29: I also added a longer term trend of these diagnostic channels; no clear trends suggesting a fault are visible. The y-axis units for all plots are in Volts, and the data is sampled at 16 Hz.

 Quote: Now we wait and watch I guess.

Attachment 1: EdwinShutoff20190425.png
Attachment 2: EdwinShutdown_zoomOut.png

14576   Thu Apr 25 15:47:54 2019   Anjali   Update   Frequency noise measurement   Homodyne v Heterodyne

My understanding is that the main advantage in going to the heterodyne scheme is that we can extract the frequency noise information without worrying about locking to the linear region of the MZI. The arctan of the ratio of the in-phase and quadrature components gives us phase as a function of time, with a frequency offset. We need to correct for this frequency offset; then the frequency noise can be deduced. But the extracted frequency noise would still have contributions from both the frequency noise of the laser and the fiber length fluctuation. I have not understood the method of giving temperature feedback to the NPRO. I would like to discuss the same.
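The arctan-of-I/Q phase extraction described above can be sketched as follows. This assumes raw I and Q time series in equal (arbitrary) units; a linear fit removes the constant heterodyne (AOM) frequency offset, and calibration and fiber-length-fluctuation contributions are ignored:

```python
import numpy as np

def freq_noise_from_iq(i_data, q_data, fs):
    """Extract instantaneous frequency fluctuations from heterodyne I/Q data.

    Phase = unwrap(arctan2(Q, I)); a linear fit removes the constant
    heterodyne frequency offset; the derivative of the residual phase
    gives frequency fluctuations in Hz. A sketch only: calibration and
    fiber-length-fluctuation contributions are not separated out.
    """
    phase = np.unwrap(np.arctan2(q_data, i_data))
    t = np.arange(len(phase)) / fs
    coeffs = np.polyfit(t, phase, 1)  # slope/(2*pi) is the offset frequency
    residual = phase - np.polyval(coeffs, t)
    return np.gradient(residual, 1.0 / fs) / (2.0 * np.pi)
```

The frequency-noise spectrum in Hz/rtHz would then follow from a PSD estimate (e.g. Welch's method) of the returned time series.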
The functional form used for the curve labelled "Theory" is 5x10^4/f. The power spectral density (V^2/Hz) of the data in Attachment #1 is found using the pwelch function in MATLAB, and the square root of the same gives the y axis in V/rtHz. From the experimental data, we get the values of Vmax and Vmin. To go from Vmax to Vmin, the corresponding phase change is pi; from this, V/rad can be calculated. This value is then multiplied by 2*pi*(time delay) to get the quantity in V/Hz. Dividing the V/rtHz value by this V/Hz value gives the y axis in Hz/rtHz. The calculated values of shot noise and dark current noise are way below (of the order of 10^-4 Hz/rtHz) in this frequency range. I forgot to take a picture of the setup at that time, and now Andrew has taken the fiber beam splitter back for his experiment; Attachment #1 shows the current view of the setup. The data from the previous trial is saved in /users/anjali/MZ/MZdata_20190417.hdf5

 Quote: If I understand correctly, the Mach-Zehnder readout port power is only a function of the differential phase accumulated between the two interfering light beams. In the homodyne setup, this phase difference can come about because of either fiber length change OR laser frequency change. We cannot directly separate the two effects. Can you help me understand what advantage, if any, the heterodyne setup offers in this regard? Or is the point of going to heterodyne mainly for the feedback control, as there is presumably some easy way to combine the I and Q outputs of the heterodyne measurement to always produce an error signal that is a linear function of the differential phase, as opposed to the sin^2 in the free-running homodyne setup? What is the scheme for doing this operation in a high bandwidth way (i.e. what is supposed to happen to the demodulated outputs in Attachment #3 of your elog)?
What is the advantage of the heterodyne scheme over applying temperature feedback to the NPRO with 0.5 Hz tracking bandwidth so that we always stay in the linear regime of the homodyne readout? Also, what is the functional form of the curve labelled "Theory" in Attachment #2? How did you convert from voltage units in Attachment #1 to frequency units in Attachment #2? Does it make sense that you're apparently measuring laser frequency noise above 10 Hz? i.e. where do the "Dark Current Noise" and "Shot Noise" traces for the experiment lie relative to the blue curve in Attachment #2? Can you point to where the data is stored, and also add a photo of the setup?

Attachment 1: Experimental_setup.JPG

14575   Thu Apr 25 11:27:11 2019   gautam   Update   VAC   PSL shutter re-opened

This activity seems to have closed the PSL shutter (actually I'm not sure why that happened - the interlock should only trip if P1a exceeds 3 mtorr, and looking at the time series for the last 2 hours, it did not ever exceed this threshold). I saw no reason for it to remain closed, so I re-opened it just now. I vote for not remotely rebooting any of the vacuum / PSL subsystems. In the event of something going catastrophically wrong, someone should be on hand in the lab to take action.

14574   Thu Apr 25 10:32:39 2019   Jon   Update   VAC   Vac interlocks updated

I slightly cleaned up Gautam's disabling of the UPS-predicated vac interlock and restarted the interlock service. This interlock is intended to protect the turbo pumps after a power outage, but it has proven disruptive to normal operations with too many false triggers. It will be re-enabled once a new UPS has been installed. For now, as it has been since 2001, the vac pumps are unprotected against an extended power outage.
14573   Thu Apr 25 10:25:19 2019   gautam   Update   Frequency noise measurement   Homodyne v Heterodyne

If I understand correctly, the Mach-Zehnder readout port power is only a function of the differential phase accumulated between the two interfering light beams. In the homodyne setup, this phase difference can come about because of either fiber length change OR laser frequency change. We cannot directly separate the two effects. Can you help me understand what advantage, if any, the heterodyne setup offers in this regard? Or is the point of going to heterodyne mainly for the feedback control, as there is presumably some easy way to combine the I and Q outputs of the heterodyne measurement to always produce an error signal that is a linear function of the differential phase, as opposed to the sin^2 in the free-running homodyne setup? What is the scheme for doing this operation in a high bandwidth way (i.e. what is supposed to happen to the demodulated outputs in Attachment #3 of your elog)? What is the advantage of the heterodyne scheme over applying temperature feedback to the NPRO with 0.5 Hz tracking bandwidth so that we always stay in the linear regime of the homodyne readout? Also, what is the functional form of the curve labelled "Theory" in Attachment #2? How did you convert from voltage units in Attachment #1 to frequency units in Attachment #2? Does it make sense that you're apparently measuring laser frequency noise above 10 Hz? i.e. where do the "Dark Current Noise" and "Shot Noise" traces for the experiment lie relative to the blue curve in Attachment #2? Can you point to where the data is stored, and also add a photo of the setup?

14572   Thu Apr 25 10:13:15 2019   Chub   Update   General   Air Handler Out of Commission

The air handler on the roof of the 40M that supplies the electronics shop and computer room is out of operation until next week.
Adding insult to injury, there is a strong odor of Liquid Wrench oil (a creeping oil for loosening stuck bolts that has a solvent additive) in the building. If you don't truly need to be in the 40M, you may want to wait until the environment is back to being cool and "unscented". On a positive note, we should have a quieter environment soon!

14571   Thu Apr 25 03:32:25 2019   Anjali   Update   Frequency noise measurement   MZ interferometer ---> DAQ

• Attachment #1 shows the time domain output from this measurement. The contrast between the maximum and minimum is better in this case compared to the previous trials.
• We also tried to extract the frequency noise of the laser from this measurement. Attachment #2 shows the frequency noise spectrum. The experimental result is compared with the theoretical value of frequency noise. Above 10 Hz, the trend is comparable to the expected 1/f characteristic, but other peaks also appear. Similarly, below 10 Hz, the experimentally observed value is higher compared to the theory.
• One of the uncertainties in this result is the length fluctuation of the fiber. The phase fluctuation in the system could be either because of the frequency noise of the laser or because of the length fluctuation of the fiber. So one of the reasons for the discrepancy between the experimental result and theory could be fiber length fluctuation. Also, no locking method was applied to keep the MZI in the linear range.
• The next step would be to do a heterodyne measurement. Attachment #3 shows the schematic for the heterodyne measurement. A free space AOM can be inserted in one of the arms to do the frequency shift. At the output of the photodiode, an RF heterodyne method as shown in Attachment #3 can be applied to separate the in-phase and quadrature components. These components need to be saved with a deep memory system. Then the phase, and thus the frequency noise, can be extracted.
• Attachment #4 shows the noise budget prepared for the heterodyne setup. The length of the fiber considered is 60 m and the photodiode is a PDA255. I still have to add the frequency noise of the RF driver and the intensity noise of the laser to the noise budget.

 Quote:
 Delay fiber was replaced with 5m (~30 nsec delay). The fringing of the MZ was way too large even with the free running NPRO (~3 fringes / sec). Since the V/Hz is proportional to the delay, I borrowed a 5m patch cable from Andrew/ATF lab, wrapped it around a spool, and hooked it up to the setup. A much more satisfactory fringing rate (~1 wrap every 20 sec) was observed with no control to the NPRO.
 MZ readout PDs hooked up to ALS channels. To facilitate further quantitative study, I hooked up the two PDs monitoring the two ports of the MZ to the channels normally used for ALS X. The ZHL3-A amps' inputs were disconnected and they were turned off. Then the cables to their outputs were hijacked to pipe the DC PD signals to the 1Y3 rack. Unfortunately there isn't a DQ-ed fast version of this data (would require a model restart of c1lsc which can be tricky), but we can already infer the low freq fringing rate from overnight EPICS data and also use short segments of 16k data downloaded "live" for the frequency noise measurement. Channels are C1:ALS-BEATX_FINE_I_IN1 and C1:ALS-BEATX_FINE_Q_IN1 for 16k data, and C1:ALS-BEATX_FINE_I_INMON and C1:ALS-BEATX_FINE_Q_INMON for 16 Hz. At some point I'd like to reclaim this setup for ALS, but meantime, Anjali can work on characterization/noise budgeting. Since we have some CDS signals, we can even think of temperature control of the NPRO using pythonPID to keep the fringe in the linear regime for an extended period of time.
Attachment 1: Time_domain_output.pdf
Attachment 2: Frequency_noise.pdf
Attachment 3: schematic_heterodyne_setup.png
Attachment 4: Noise_budget_1_micron_in_Hz_per_rtHz.pdf

14570   Thu Apr 25 01:03:29 2019   gautam   Update   PSL   MC trans is ~1000 cts (~7%) lower than usual

When dialing up the current, I went up to 2.01 A on the front panel display, which is what I remember it being. The label on the controller is from when the laser was still putting out 2 W, and says the pump current should be 2.1 A. Anyhow, the MC transmission is ~7% lower now (14500 cts compared to the usual 15000-15500 cts), even after tweaking the PMC alignment to minimize PMC REFL. Possibly there is less power coming out of the NPRO; I will measure it at the window tomorrow with a power meter.

14569   Thu Apr 25 00:30:45 2019   gautam   Update   SUS   ETMY BR mode

We briefly talked about the bounce and roll modes of the SOS optic at the meeting today.

Attachment #1: BR modes for ETMY from my free-swinging run on 17 April. The LL coil has a very different behavior from the others.
Attachment #2: BR modes for ETMY from my free-swinging run on 18 April, which had a macroscopically different bias voltage for the PIT/YAW sliders. Here too, the LL coil has a very different behavior from the others.
Attachment #3: BR modes for ETMX from my free-swinging run on 27 Feb. Compared to ITMY, there are many peaks in addition to the prominent ones visible here. The OSEM PD noise floor for UR and SIDE is mysteriously x2 lower than for the other 3 OSEMs???

In all three cases, a bounce mode around 16.4 Hz and a roll mode around 24.0 Hz are visible. The ratio between these is not sqrt(2) but ~1.46, which is ~3% larger. Looking at the database, however, the bounce and roll modes have in fact been close to these frequencies in the past.

In conclusion:
1. The evidence thus far says that ETMY has 5 resonant modes in the free-swinging data between 0.5 Hz and 25 Hz.
2. Either two modes are exactly degenerate, or there is a constraint in the system which removes one degree of freedom.
3. How likely is the latter? Any mechanical constraint that removes one degree of freedom would presumably also damp the Qs of the other modes more than what we are seeing.
4. Can some large piece of debris on the barrel change the PIT/YAW eigenvectors such that the eigenvalues become exactly degenerate?
5. Furthermore, the AC actuation vectors for PIT and YAW are not close to orthogonal, but are rotated ~45 degrees relative to each other.

Because of my negligence and rushing the closeout procedure, I don't have a great close-out picture of the magnet positions in the face OSEMs; the best I can find is Attachment #4. We tried to replicate the OSEM arrangement (orientation of leads from the OSEM body) from July 2018 as closely as possible. I will investigate the side coil actuation strength tomorrow, but if anyone can think of more in-air tests we should do, please post your thoughts/poetry here.

Attachment 1: ETMY_sensorSpectra_BRmode.pdf
Attachment 2: ETMY_sensorSpectra_BRmode.pdf
Attachment 3: ETMX_sensorSpectra_BRmode.pdf
Attachment 4: IMG_5993.JPG

14568   Wed Apr 24 17:39:15 2019   Yehonathan   Summary   Loss Measurement   Basic analysis of loss measurement

Motivation
• Getting myself familiar with Python.
• Characterize statistical errors in the loss measurement.

Summary
The precision of the measurement is excellent. We should move on to look for systematic errors.

In Detail
According to Johannes and Gautam (see T1700117_ReflectionLoss.pdf in Attachment 1), the loss in the cavity mirrors is obtained by measuring the light reflected from the cavity when it is locked and when it is misaligned. From these two measurements, and by using the known transmissions of the cavity mirrors, the roundtrip loss is extracted.
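The statistical side of such a locked/misaligned reflection measurement can be sketched as below. This is only a sketch: the array names and numbers are made up, and the exact mapping from the reflectivity ratio to a roundtrip loss (which involves the known mirror transmissions, per T1700117) is stood in for by a placeholder `cal` function.

```python
import numpy as np

def loss_statistics(p_locked, p_misaligned, cal):
    """Statistical summary of a reflection-based loss measurement.

    p_locked, p_misaligned : arrays of PD readings (V) with the cavity
    locked / misaligned. cal maps the reflectivity ratio to a roundtrip
    loss in ppm (the real function is given in T1700117).
    """
    ratio = np.mean(p_locked) / np.mean(p_misaligned)
    loss = cal(ratio)
    # Propagate only the statistical PD fluctuations (standard error of
    # the mean); mirror-transmission uncertainties would add to this.
    rel_err = np.sqrt(
        (np.std(p_locked) / np.sqrt(p_locked.size) / np.mean(p_locked)) ** 2
        + (np.std(p_misaligned) / np.sqrt(p_misaligned.size) / np.mean(p_misaligned)) ** 2
    )
    return loss, rel_err

rng = np.random.default_rng(0)
p_lock = rng.normal(1.000, 0.002, 10000)   # toy data, not the real PD series
p_mis = rng.normal(1.010, 0.002, 10000)
loss, rel_err = loss_statistics(p_lock, p_mis, cal=lambda r: 1e6 * (1 - r))
```

With ~10^4 samples the standard error of the mean is tiny, which is consistent with the sub-ppm statistical error quoted in this entry dominating far less than the systematic (transmission) uncertainties.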
I wrote a Python notebook (AnalyzeLossData.ipynb in Attachment 1) that extracts the raw data from the measurement file (data20190216.hdf5 in Attachment 1) and analyzes the statistics of the measurement and its PSD.

Attachment 2 shows the raw data. Attachment 3 shows the histogram of the measurement; the distribution is very close to Gaussian. The loss in the cavity per roundtrip is measured to be 73.7 +/- 0.2 parts per million. This error accounts only for the scatter in the PD measurement; including the uncertainty of the cavity mirror transmissions would give a much bigger error. Attachment 4 shows the noise PSD of the PD readings. The noise spectrum is quite flat, so there would be no big improvement from chopping the signal. The situation might be different when the measurement is taken from the cavity lock PD, where the signal is much weaker.

Attachment 1: LossMeasurementAnalysis.zip
Attachment 2: LossMeasurement_RawData.pdf
Attachment 3: LossMeasurement_Hist.pdf
Attachment 4: LossMeasurement_PSD.pdf

14567   Wed Apr 24 17:07:39 2019   gautam   Update   SUS   c1susaux in-situ testing [and future of IFOtest]

[jon, gautam]

For the in-situ test, I decided that we will use the physical SRM to test the c1susaux Acromag replacement crate functionality for all 8 optics (PRM, BS, ITMX, ITMY, SRM, MC1, MC2, MC3). To facilitate this, I moved the backplane connector of the SRM SUS PD whitening board from the P1 connector to P2, per Koji's mods, at ~5:10 PM local time. The watchdog was shut down, and the backplane connector for the SRM coil driver board was also disconnected (this is now interfaced to the Acromag chassis). I had to remove the backplane connector for the BS coil driver board in order to have access to the SRM backplane connector. Room in the back of these eurocrate boxes is tight in the existing config...
At ~6 pm, I manually powered down c1susaux (as I did not know of a software way to turn off the EPICS server run by the old VME crate). The point was to be able to easily interface with the MEDM screens. So the slow channels prefixed C1:SUS-* are now being served by the Supermicro called c1susaux2.

A critical wiring error was found. The channel mapping prepared by Johannes lists the watchdog enable BIO channels as "C1:SUS-<OPTIC>_<COIL>_ENABLE", which go to pins 23A-27A on the P1 connector, with returns on the corresponding C pins. However, we use the "TEST" inputs of the coil driver boards for sending in the FAST actuation signals. The correct BIO channels for switching this input are actually "C1:SUS-<OPTIC>_<COIL>_TEST", which go to pins 28A-32A on the P1 connector. For today's tests, I voted to fix this inside the Acromag crate for the SRM channels, and do our tests. Chub will unfortunately have to fix the remaining 7 optics; see Attachment #1 for the corrections required. I apportion 70% of the blame to Johannes for the wrong channel assignment, and accept 30% for not checking it myself.

The good news: the tests for the SRM channels all passed!

• Attachment #2: Output of Jon's testing code. My contribution is the colored logs courtesy of python's coloredlogs package, but this needs a bit more work - mainly the PASS message needs to be green. This test applies bias voltages to PIT/YAW, and looks for the response in the PDmon channels. It backs out the correct signs for the four PDs based on the PIT/YAW actuation matrix, and checks that the optic has moved "sufficiently" for the applied bias. You can also see that the PD signals move with consistent signs when PIT/YAW misalignment is applied. Additionally, the DC values of the PDMon channels reported by the Acromag system are close to what they were using the VME system. I propose calling the next iteration of IFOtest "Sherlock".
• Attachment #3: Confirmation (via spectra) that the SRM OSEM PD whitening can still be switched even after my move of the signals from the P1 connector to the P2 connector. I don't have an explanation right now for the shape of the SIDE coil spectrum.
• Attachment #4: Applied a 100 ct offset (~100*10/2**15/2 ~ 15 mV at the monitor point) at the bias input of the coil output filters on SRM (this is a fast channel), and looked for the response in the coil Vmon channels (these are SLOW channels). The correct coil showed a consistent response across all 5 channels. Additionally, I confirmed that the watchdog tripped when the RMS OSEM PD voltage exceeded 200 counts.

Ideally we'd have liked to test the stability of the EPICS server, but we have shut it down and brought the crate back out to the electronics bench for Chub to work on tomorrow. I restarted the old VME c1susaux at 9:15 pm local time as I didn't want to leave the watchdogs in an undefined state. Unsurprisingly, ITMY is stuck. Also, the BS (cable #22) and SRM (cable #40) coil drivers are physically disconnected at the front DB15 output because of the undefined backplane inputs. I also re-opened the PSL shutter.

Attachment 1: 2019-04-24_20-29.pdf
Attachment 2: Screenshot_from_2019-04-24_20-05-54.png
Attachment 3: SRM_OSEMPD_WHT_ACROMAG.pdf
Attachment 4: DCVmon.png

14566   Wed Apr 24 16:06:44 2019   gautam   Update   PSL   Innolight NPRO shutoff

After discussing with Koji, I turned the NPRO back on again at ~4 PM local time. I first dialled the injection current down to 0 A, then switched the control unit state to "ON", then ramped up the power by turning the front panel dial. Lasing started at 0.5 A, and I saw no abrupt swings in the power (I used PMC REFL as a monitor; there were some mode flashes, which are the dips seen in the power, and the x-axis is in units of time, not pump current). The PMC was relocked and the IMC autolocker locked the IMC almost immediately. Now we wait and watch I guess.

Attachment 1: PMCrefl.png

14565   Wed Apr 24 11:22:59 2019   awade   Bureaucracy   Equipment loan   Borrowed Zurich HF2LI Lock-in Amplifier to QIL

Borrowed Zurich HF2LI Lock-in Amplifier to QIL lab Wed Apr 24 11:25:11 2019.

14564   Tue Apr 23 19:31:45 2019   Jon   Update   SUS   Watchdog channels separated from autoBurt.req

For the new c1susaux, Gautam and I moved the watchdog channels from autoBurt.req to a new file named autoBurt_watchdogs.req. When the new modbus service starts, it loads the state contained in autoBurt.snap. We thought it best for the watchdogs not to be automatically enabled at this stage, but for an operator to have to do this manually. By moving the watchdog channels to a separate snap file, the entire SUS state can be loaded while leaving just the watchdogs disabled. This same modification should be made to the ETMX and ETMY machines.

14563   Tue Apr 23 18:48:25 2019   Jon   Update   SUS   c1susaux bench testing completed

Today I tested the remaining Acromag channels and retested the non-functioning channels found yesterday, which Chub repaired this morning. We're still not quite ready for an in-situ test. Here are the issues that remain.

## Analog Input Channels

Channel               Issue
C1:SUS-MC2_URPDMon    No response
C1:SUS-MC2_LRPDMon    No response

I further diagnosed these channels by connecting a calibrated DC voltage source directly to the ADC terminals. The EPICS channels do sense this voltage, so the problem is isolated to the wiring between the ADC and DB37 feedthrough.
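A possible helper for this kind of bench check is sketched below. It is hypothetical and not part of the actual test scripts; in practice `read_volts` would come from an EPICS caget on the channel under test while the calibrated source drives the terminal.

```python
def channel_responds(applied_volts, read_volts, tol=0.05):
    """Bench-test check: does a channel readback track an applied voltage?

    applied_volts : voltage driven into the ADC terminal by the source
    read_volts    : value reported by the EPICS channel
    tol           : allowed fractional (or absolute, near zero) error
    """
    if applied_volts == 0:
        return abs(read_volts) < tol
    return abs(read_volts - applied_volts) / abs(applied_volts) < tol

# A dead channel fails the check regardless of drive level
ok = channel_responds(2.0, 1.98)       # healthy readback
dead = channel_responds(2.0, 0.0)      # "No response" case
```

Sweeping a few drive levels (including zero) through such a check catches dead channels; catching crossed or polarity-reversed channels additionally requires comparing against the *other* channels' readbacks, which is what the in-situ test does.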
## Analog Output Channels

Channel                  Issue
C1:SUS-ITMX_ULBiasAdj    No output signal
C1:SUS-ITMX_LLBiasAdj    No output signal
C1:SUS-ITMX_URBiasAdj    No output signal
C1:SUS-ITMX_LRBiasAdj    No output signal
C1:SUS-ITMY_ULBiasAdj    No output signal
C1:SUS-ITMY_LLBiasAdj    No output signal
C1:SUS-ITMY_URBiasAdj    No output signal
C1:SUS-ITMY_LRBiasAdj    No output signal
C1:SUS-MC1_ULBiasAdj     No output signal
C1:SUS-MC1_LLBiasAdj     No output signal
C1:SUS-MC1_URBiasAdj     No output signal
C1:SUS-MC1_LRBiasAdj     No output signal

To further diagnose these channels, I connected a voltmeter directly to the DAC terminals and toggled each channel output. The DACs are outputting the correct voltage, so these problems are also isolated to the wiring between DAC and feedthrough.

In testing the DC bias channels, I did not check the sign of the output signal, only that the output had the correct magnitude. As a result, my bench test is insensitive to situations where two degrees of freedom are crossed or there is a polarity reversal. However, my susPython scripting tests for exactly this, fetching and applying all the relevant signal gains between pitch/yaw input and coil bias output. It would be very time-consuming to propagate all these gains by hand, so I've elected to wait for the automated in-situ test.

## Digital Output Channels

Everything works.

14562   Mon Apr 22 22:43:15 2019   gautam   Update   SUS   ETMY sensor diagnosis

Here are the results from this test. The data for 17 April is with the DC bias for ETMY set to the nominal values (which give good Y arm cavity alignment), while on 18 April, I changed the bias values until all four shadow sensors reported values at least 100 cts different from 17 April. The times are indicated in the plot titles in case anyone wants to pull the data (I'll point to the directory where they are downloaded and stored later). There are 3 visible peaks.

There was negligible shift in peak frequency (<5 mHz) or change in Q for any of these with the applied bias voltage. I didn't attempt any fitting, as it was not possible to determine which peak corresponds to which DoF by looking at the complex TFs between coils (at each peak, different combinations of 3 OSEMs have the same phase, while the fourth has ~180 deg phase lead/lag). FTR, the wiki leads me to expect the following locations for the various DoFs; the closest peak in the current measured data is given in parentheses:

DoF     Frequency [Hz]
POS     0.982 (0.947)
PIT     0.86  (0.886)
YAW     0.894 (0.886)
SIDE    1.016 (0.996)

However, this particular SOS was re-suspended in 2016, and this elog reports substantially different peak positions, in particular for the YAW DoF (there were still 4). The Qs of the peaks from last week's measurements are in the range 250-350.

Quote:
Repeat the free-swinging ringdown with the ETMY bias voltage adjusted such that all the OSEM PDmons report ~100 um different position from the "nominal" position (i.e. when the Y arm cavity is aligned). Investigate whether the resulting eigenmode frequencies / Qs are radically different. I'm setting the optic free-swinging on my way out tonight. Optic kicked at 1239690286.

Attachment 1: ETMY_sensorSpectra_consolidated.pdf

14561   Mon Apr 22 21:33:17 2019   Jon   Update   SUS   Bench testing of c1susaux replacement

Today I bench-tested most of the Acromag channels in the replacement c1susaux. I connected a DB37 breakout board to each chassis feedthrough connector in turn and tested channels using a multimeter and calibrated voltage source. Today I got through all the digital output channels and analog input channels. Still remaining are the analog output channels, which I will finish tomorrow. There have been a few wiring issues found so far, which are noted below.
Channel                  Type            Issue
C1:SUS2-PRM_URVMon       Analog input    No response
C1:SUS2-PRM_LRVMon       Analog input    No response
C1:SUS2-BS_UL_ENABLE     Digital output  Crossed with LR
C1:SUS2-BS_LL_ENABLE     Digital output  Crossed with UR
C1:SUS2-BS_UR_ENABLE     Digital output  Crossed with LL
C1:SUS2-BS_LR_ENABLE     Digital output  Crossed with UL
C1:SUS2-ITMY_SideVMon    Analog input    Polarity reversed
C1:SUS2-MC2_UR_ENABLE    Digital output  Crossed with LR
C1:SUS2-MC2_LR_ENABLE    Digital output  Crossed with UR

14560   Fri Apr 19 20:21:52 2019   gautam   Update   PSL   Innolight NPRO shutoff

Happened again at ~7:30 pm. The NPRO diag channels don't really tell me what happened in a causal way, but the interlock channel seems suspicious. Why is its nominal value 0.04 V? From the manual, it looks like TGUARD is an indication of deviations between the set temperature and the actual diode laser temperature. Is it normal for it to be putting out 11 V? I'm not going to turn it on again right now while I ponder which of my hands I need to chop off.

Quote:
I'm restoring it now in the hope we can get some more info on what exactly happened if this is a recurring event.

Attachment 1: Screenshot_from_2019-04-19_20-27-04.png

14559   Fri Apr 19 19:22:15 2019   rana   Update   SUS   Actuation matrix still not orthogonal

If thy left hand troubles thee then let the mirror show the right for if it troubles enough to cut it off it would not offend thy sight

14558   Fri Apr 19 16:19:42 2019   gautam   Update   SUS   Actuation matrix still not orthogonal

I repeated the exercise from yesterday, this time driving the butterfly mode [+1 -1 -1 +1] and adding the tuned PIT and YAW vectors from yesterday to it to minimize its appearance in the Oplev error signals. The measured output matrix is $\begin{bmatrix} 0.98 & 0.64 & 1.5 & 1.037 \\ 0.96 & 1.12 & -0.5 & -0.998 \\ 1.04 & -1.12 & 0.5 & -1.002 \\ 1.02 & -0.64 & -1.5 & 0.963 \end{bmatrix}$, where rows are the coils in the order [UL,UR,LL,LR] and columns are the DOFs in the order [POS,PIT,YAW,Butterfly].
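The adjustment matrix and its condition number can be backed out along these lines (a sketch assuming numpy; the normalization/convention here may differ from the one used to obtain the number quoted in this entry).

```python
import numpy as np

# Measured output matrix from this entry: rows = coils [UL, UR, LL, LR],
# columns = DOFs [POS, PIT, YAW, Butterfly]
M_meas = np.array([[0.98,  0.64,  1.5,  1.037],
                   [0.96,  1.12, -0.5, -0.998],
                   [1.04, -1.12,  0.5, -1.002],
                   [1.02, -0.64, -1.5,  0.963]])

# "Ideal" output matrix for the same coil/DOF ordering
M_ideal = np.array([[1,  1,  1,  1],
                    [1,  1, -1, -1],
                    [1, -1,  1, -1],
                    [1, -1, -1,  1]])

# Adjustment matrix A such that M_ideal @ A = M_meas, and its condition
# number; cond(A) = 1 would mean A is (a scaled) identity / rotation,
# i.e. the measured actuation is just a benign re-scaling of the ideal one
A = np.linalg.solve(M_ideal, M_meas)
cond = np.linalg.cond(A)
```

Since M_ideal is orthogonal up to a scale (its rows are mutually orthogonal), the solve is well posed and A isolates whatever mixing the real plant introduces.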
The conclusions from my previous elog still hold though - the orthogonality between PIT and YAW is poor, so this output matrix cannot be realized by a simple scaling of the coil output gains. The "adjustment matrix", i.e. the 4x4 matrix that we must multiply the "ideal" output matrix by to get the measured output matrix, has a condition number of 134 (1 is a good condition number, signifying closeness to the identity matrix).

Quote:
let us have 3 by 4, nevermore so that the number of columns is no less and no more than the number of rows so that forevermore we live as 4 by 4

14557   Fri Apr 19 15:13:38 2019   rana   Update   SUS   No consistent solution for output matrix

let us have 3 by 4, nevermore so that the number of columns is no less and no more than the number of rows so that forevermore we live as 4 by 4

Quote:
I'm struggling to think

14556   Fri Apr 19 14:06:36 2019   gautam   Update   PSL   Innolight NPRO shutoff

When I got back from lunch just now, I noticed that the PMC TRANS and REFL cameras were showing no spots. I went onto the PSL table and saw that the NPRO was in fact turned off. I turned it back on. The laser was definitely ON when I left for lunch around 1:30 pm, and this happened around 1:40 pm. Anjali says no one was in the lab in between. None of the FEs are dead, suggesting there wasn't a lab-wide power outage, and the EX and EY NPROs were not affected. I had pulled out the diagnostics connector logged by Acromag; I'm restoring it now in the hope we can get some more info on what exactly happened if this is a recurring event. So FSS_RMTEMP isn't working from now on. The sooner we get the PSL Acromag crate together, the better...

Attachment 1: Screenshot_from_2019-04-19_14-06-11.png

14555   Fri Apr 19 12:06:31 2019   awade   Bureaucracy   Electronics   Borrowed Busby Box

May 19th 2019: I've borrowed the Busby Box for a day or so. Location: QIL lab at Bridge West.

Edit Sat Apr 20 21:16:46 2019 (awade): returned.
14554   Fri Apr 19 11:36:23 2019   gautam   Update   SUS   No consistent solution for output matrix

There isn't a consistent set of OSEM coil gains that explains the best actuation vectors we determined yesterday. Here are the explicit vectors (coil order UL, UR, LL, LR in each case):

1. POS (tuned to minimize excitation at ~13.5 Hz in the Oplev PIT and YAW error signals): $\begin{bmatrix} 0.98 \\ 0.96 \\ 1.04 \\ 1.02 \end{bmatrix}$

2. PIT (tuned to minimize the cross-coupled peak in the Oplev YAW error signal at ~10.5 Hz): $\begin{bmatrix} 0.64 \\ 1.12 \\ -1.12 \\ -0.64 \end{bmatrix}$

3. YAW (tuned to minimize the cross-coupled peak in the Oplev PIT error signal at ~13.5 Hz): $\begin{bmatrix} 1.5 \\ -0.5 \\ 0.5 \\ -1.5 \end{bmatrix}$

There is no set of per-coil gains $\alpha_i$ solving the matrix equation $\begin{bmatrix} \alpha_1 & 0 & 0 & 0 \\ 0 & \alpha_2 & 0 & 0 \\ 0 & 0 & \alpha_3 & 0 \\ 0 & 0 & 0 & \alpha_4 \end{bmatrix} \begin{bmatrix} 1 & 1 & 1 \\ 1 & 1 & -1 \\ 1 & -1 & 1 \\ 1 & -1 & -1 \end{bmatrix} =\begin{bmatrix} 0.98 & 0.64 & 1.5 \\ 0.96 & 1.12 & -0.5 \\ 1.04 & -1.12 & 0.5 \\ 1.02 & -0.64 & -1.5 \end{bmatrix}$, i.e. we cannot simply redistribute the actuation vectors we found as gains to the coils and preserve the naive actuation matrix.

What this means is that, in the OSEM coil basis, the actuation eigenvectors aren't the naive ones we would expect for PIT, YAW and POS. Instead, we can put these custom vectors into the output matrix, but I'm struggling to think of the physical implication. I.e. what does it mean for the actuation vectors for PIT, YAW and POS to be not only scaled, but also non-orthogonal (though still linearly independent) at ~10 Hz, which is well above the resonant frequencies of the pendulum? The PIT and YAW vectors are the least orthogonal, with the angle between them ~40 degrees rather than the expected 90 degrees.
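The non-existence of a per-coil gain solution can be checked numerically as follows (a sketch assuming numpy, not the analysis actually used): if a single gain per coil explained the data, each row of the elementwise ratio between the measured and naive matrices would be constant.

```python
import numpy as np

# Tuned actuation vectors from this entry (coil order UL, UR, LL, LR;
# columns POS, PIT, YAW)
measured = np.array([[0.98,  0.64,  1.5],
                     [0.96,  1.12, -0.5],
                     [1.04, -1.12,  0.5],
                     [1.02, -0.64, -1.5]])
naive = np.array([[1,  1,  1],
                  [1,  1, -1],
                  [1, -1,  1],
                  [1, -1, -1]])

# Row-wise ratios: constant rows would mean a per-coil gain exists
ratios = measured / naive
row_spread = ratios.max(axis=1) - ratios.min(axis=1)

# Best per-coil gains in the least-squares sense, and the residual
# actuation they fail to explain
alpha = np.sum(measured * naive, axis=1) / np.sum(naive * naive, axis=1)
residual = measured - alpha[:, None] * naive
```

The UL row ratio, for instance, runs from 0.64 to 1.5 across the three DoFs, so no single UL gain can reproduce all three tuned vectors, which is the statement made above.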
Quote:
So we now have matrices that minimize the cross coupling between these DoFs - the idea is to back out the actuation coefficients for the 4 OSEM coils that give us the most diagonal actuation, at least at AC.

14553   Fri Apr 19 09:42:18 2019   Koji   Bureaucracy   General   Item borrowing (40m->OMC)

Apr 16, 2019: Borrowed two laser goggles from the 40m. (Returned Apr 29, 2019)
Apr 19, 2019: Borrowed from the 40m:
- Universal camera mount
- 50mm CCD lens
- zoom CCD lens (Returned Apr 29, 2019)
- Olympus SP-570UZ (Returned Apr 29, 2019)
- Special Olympus USB Cable (Returned Apr 29, 2019)

14552   Thu Apr 18 23:10:12 2019   gautam   Update   Loss Measurement   X arm misaligned

Yehonathan wanted to take some measurements for loss determination. I misaligned the X arm completely and we installed a PD on the AS table, so there is no light reaching the AS55 and AS110 PDs. Yehonathan will post the detailed elog.

14551   Thu Apr 18 22:35:23 2019   gautam   Update   SUS   ETMY actuator diagnosis

[rana, gautam]

Rana did a checkout of my story about the oddness of the ETMY suspension. Today we focused on the actuators - the goal was to find the correct coefficients on the 4 face coils that would result in diagonal actuation (i.e. if we actuate on PIT, it only truly moves the PIT DoF, as witnessed by the Oplev, and so on for the other DoFs). Here are the details:

1. Ramp times for filter modules:
• All the filter modules in the output matrix did not have ramp times set.
• We used python, cdsutils and ezca to script the writing of a 3 second ramp to all the elements of the 5x6 output matrix.
• The script lives at /opt/rtcds/caltech/c1/scripts/cds/addRampTimes.py, and can be adapted to initialize large numbers of channels (limiters, ramp times etc).
2. Bounce mode checkout:
• The motivation here was to check whether there is anomalously large coupling of the bounce mode to any of the other DoFs for ETMY relative to the other optics.
• The ITMs have a different (~15.9 Hz) bounce mode frequency compared to the ETMs (~16.2 Hz).
• I hypothesize that this is because the ETMs were re-suspended in 2016 using new suspension wire.
• We should check the specs of the wires, looking for either thickness differences or alloying composition variation (Steve has already documented some of this in the elog linked above). Possibly also check the bounce mode for a 250 g load on the table top.
3. Step responses for PIT and YAW:
• With the Oplevs disabled (but other local damping loops engaged), we applied a step of 100 DAC counts to the PIT and YAW DoFs from the realtime system (one at a time).
• We saw significant cross-coupling of the YAW step into PIT, at the level of 50%.
4. OSEM coil coefficient balancing:
• I had done this a couple of months ago looking at the DC gain of the 1/f^2 pendulum response. Rana suggested an alternate methodology:
• We used the lock-in amplifier infrastructure on the SUS screens to drive a sine wave. Frequencies were chosen to be ~10.5 Hz and ~13.5 Hz, outside the Oplev loop bandwidth.
• Tests were done with the Oplev loops engaged. The Oplev error signal was used as a diagnostic to investigate the PIT/YAW cross coupling.
• In the initial tests, we saw coupling at the 20% level. If the Oplev head were rotated by 0.05 rad relative to the "true" horizontal-vertical coordinate system, we'd expect 5% cross coupling, so this was already a red flag (i.e. it is hard to believe that Oplev QPD shenanigans are responsible for our observations). We decided to re-diagonalize the actuation.
• The output matrix elements for the lock-in-amplifier oscillator signals were adjusted by adding some amount of YAW to the PIT elements (script lives at /opt/rtcds/caltech/c1/scripts/SUS/stepOutMat.py), and vice versa, and we tried to reduce the height of the cross-coupled peaks (viewed on DTT using exponential weighting, 4 avgs, 0.1 Hz BW - note that the DTT cursor menu has a peak-find option!). DTT template saved at /users/Templates/SUS/ETMY-actDiag.xml.
• This worked really well for minimizing the PIT response while driving YAW, not as well for minimizing YAW in PIT.
• Next, we added some YAW to a POS drive to minimize any signal at this drive frequency in the Oplev YAW error signal. Once that was done, we minimized the peak in the Oplev PIT error signal by adding some amount of PIT actuation.
• So we now have matrices that minimize the cross coupling between these DoFs - the idea is to back out the actuation coefficients for the 4 OSEM coils that give us the most diagonal actuation, at least at AC.
5. Next steps:
• All of our tests tonight were at AC - once the coil balancing has been done at AC, we have to check the cross coupling at DC. If everything is working correctly, the response should also be fairly well decoupled at DC; if not, we have to come up with a hypothesis as to why the AC and DC responses differ.
• Can we gain any additional info from driving the pringle mode and minimizing it in the Oplev error signals? Or is the problem overconstrained?
• After the output matrix diagonalization is done, drive the optic in POS, PIT and YAW, and construct the input matrix this way (i.e. via transfer functions), as an alternative to the usual free-swinging ringdown method. Look at what kind of an input matrix we get.
• Repeat the free-swinging ringdown with the ETMY bias voltage adjusted such that all the OSEM PDmons report ~100 um different position from the "nominal" position (i.e. when the Y arm cavity is aligned). Investigate whether the resulting eigenmode frequencies / Qs are radically different.

I'm setting the optic free-swinging on my way out tonight. Optic kicked at 1239690286.

14550   Wed Apr 17 18:12:06 2019   gautam   Update   VAC   Vac interlock tripped again

After getting the go-ahead from Chub and Jon, I restored the vacuum state to "Vacuum normal", see Attachment #1. Steps:

1. Interlock code modifications:
• Backed up /opt/target/python/interlocks/interlock_conditions.yaml to /opt/target/python/interlocks/interlock_conditions_UPS.yaml
• The "power_loss" condition was removed for every valve and pump inside /opt/target/python/interlocks/interlock_conditions.yaml
• The interlock service was restarted using sudo systemctl restart interlock.service
• Looking at the status of the service, I saw that it was dying ~every 1 second.
• Traced this down to the way the "pump_managers" are initialized from /opt/target/python/interlocks/interlock_conditions.yaml - the code doesn't play nice if there are no conditions specified in the yaml file for a pump. For now, I just commented this part out. The git diff is below.
2. Restoring vacuum normal:
• Spun up TP1, TP2 and TP3
• Opened the foreline of TP1 to TP2, and then opened the main volume to TP1
• Opened the annulus foreline to TP3, and then opened the individual annular volumes to TP3.

controls@c1vac:/opt/target/python/interlocks$ git diff interlock.py
diff --git a/python/interlocks/interlock.py b/python/interlocks/interlock.py
index 28d3366..46a39fc 100755
--- a/python/interlocks/interlock.py
+++ b/python/interlocks/interlock.py
@@ -52,8 +52,8 @@ class Interlock(object):
         self.pumps = []
         for pump in interlocks['pumps']:
             pm = PumpManager(pump['name'])
-            for condition in pump['conditions']:
-                pm.register_condition(*condition)
+            #for condition in pump['conditions']:
+            #    pm.register_condition(*condition)
             self.pumps.append(pm)

So far the pressure is coming down smoothly, see Attachment #2. I'll keep an eye on it.

PSL shutter was opened at 645pm local time. IMC locked almost immediately.

Update 11pm: The pressure has reached 8.5e-6 torr without hiccup.

Attachment 1: Screenshot_from_2019-04-17_18-11-45.png
Attachment 2: Screenshot_from_2019-04-17_18-21-30.png
14549   Wed Apr 17 11:01:49 2019   gautam   Update   ALS   Large 2kHz peak (and harmonics) in ALS X no more

EX green stayed locked to the XARM length overnight without a problem. The spectrogram doesn't show any alarming time-varying features around 2 kHz (or at any other frequency).
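A spectrogram check of this kind can be sketched with scipy on toy data; the signal below is synthetic (noise plus an intermittent 2 kHz tone), standing in for the actual PDH error channel, and the sample rate is an assumption.

```python
import numpy as np
from scipy.signal import spectrogram

# Toy stand-in for the error signal: white noise, with a 2 kHz tone
# switched on during the second half of the record
fs = 16384
t = np.arange(8 * fs) / fs
x = np.random.default_rng(1).normal(size=t.size)
x[t.size // 2:] += 5 * np.sin(2 * np.pi * 2000 * t[t.size // 2:])

f, seg_t, Sxx = spectrogram(x, fs=fs, nperseg=4096)

# A time-varying feature shows up as a 2 kHz bin that is strong only in
# the later time segments
bin_2k = int(np.argmin(np.abs(f - 2000)))
```

Comparing a given frequency bin across time segments (rather than eyeballing a single averaged spectrum) is what distinguishes an intermittent line from a stationary one.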

Attachment 1: EX_PDH_specGram.pdf
14548   Wed Apr 17 00:50:17 2019   gautam   Update   ALS   Large 2kHz peak (and harmonics) in ALS X no more

I looked into this issue today. Initially, my thinking was that I'd somehow caused clipping in the beampath somewhere, which was causing this 2 kHz excitation. However, on looking at the spectrum of the in-loop error signal today (Attachment #1), I found no evidence of the peak anymore!

Since the vacuum system is in a non-nominal state, and also because my IR ALS beat setup has been hijacked for the MZ interferometer, I don't have an ALS spectrum, but the next step is to try single arm locking using the ALS error signal. To investigate whether the 2kHz peak is a time-dependent feature, I left the EX green locked to the arm (with the SLOW temperature offloading servo ON), hopefully it stays locked overnight...

 Quote: These weren't present last week. The peaks are present in the EX PDH error monitor signal, and so are presumably connected with the green locking system. My goal tonight was to see if the arm length control could be done using the ALS error signal as opposed to POX, but I was not successful.
Attachment 1: EX_PDHnoise.pdf
14547   Wed Apr 17 00:43:38 2019   gautam   Update   Frequency noise measurement   MZ interferometer ---> DAQ
1. Delay fiber was replaced with 5m (~30 nsec delay)
• The fringing of the MZ was way too large even with the free running NPRO (~3 fringes / sec)
• Since the V/Hz is proportional to the delay, I borrowed a 5m patch cable from Andrew/ATF lab, wrapped it around a spool, and hooked it up to the setup
• Much more satisfactory fringing rate (~1 wrap every 20 sec) was observed with no control to the NPRO
2. MZ readout PDs hooked up to ALS channels
• To facilitate further quantitative study, I hooked up the two PDs monitoring the two ports of the MZ to the channels normally used for ALS X.
• The ZHL3-A amps' inputs were disconnected and the amps were turned off. The cables to their outputs were then hijacked to pipe the DC PD signals to the 1Y3 rack.
• Unfortunately there isn't a DQ-ed fast version of this data (would require a model restart of c1lsc which can be tricky), but we can already infer the low freq fringing rate from overnight EPICS data and also use short segments of 16k data downloaded "live" for the frequency noise measurement.
• Channels are C1:ALS-BEATX_FINE_I_IN1 and C1:ALS-BEATX_FINE_Q_IN1 for 16k data, and C1:ALS-BEATX_FINE_I_INMON and C1:ALS-BEATX_FINE_Q_INMON for 16 Hz.

At some point I'd like to reclaim this setup for ALS, but in the meantime, Anjali can work on characterization/noise budgeting. Since we have some CDS signals, we can even think of temperature control of the NPRO using pythonPID to keep the fringe in the linear regime for an extended period of time.
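A minimal sketch of the sort of loop pythonPID would run is below. All gains and the plant dynamics are placeholders; the real loop would read the MZ PD via EPICS (cdsutils/ezca) and write to the NPRO slow temperature control, neither of which is done here.

```python
class PID:
    """Minimal discrete PID controller (textbook form, placeholder gains)."""

    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_err = 0.0

    def step(self, err):
        self.integral += err * self.dt
        deriv = (err - self.prev_err) / self.dt
        self.prev_err = err
        return self.kp * err + self.ki * self.integral + self.kd * deriv

# Toy plant: first-order lag standing in for the fringe signal's response
# to the temperature knob. Start one fringe-half away from mid-fringe and
# servo the "PD signal" y back to the mid-fringe setpoint.
pid = PID(kp=0.5, ki=0.8, kd=0.0, dt=0.1)
setpoint, y = 0.0, 1.0
for _ in range(300):
    u = pid.step(setpoint - y)
    y += (u - y) * 0.1   # simple lag dynamics
```

In the real application each iteration would caget the PD channel, compute the error against the mid-fringe value, and caput the correction to the laser temperature channel, with the loop period set well below the fringing rate.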

14546   Tue Apr 16 22:06:51 2019   gautam   Update   VAC   Vac interlock tripped again

This happened again, about 30,000 seconds ago (~2:06 pm local time, according to the logfile). The cited error was the same -

2019-04-16 14:06:05,538 - C1:Vac-error_status => VA6 closed. AC power loss.

Hard to believe there was any real power loss; nothing else in the lab seems to have been affected, so I am inclined to suspect a buggy UPS communication channel. The PSL shutter was not closed - I believe the condition is for P1a to exceed 3 mtorr (it is at 1 mtorr right now), but perhaps this should be modified to close the PSL shutter in the event of any interlock tripping. Also, it's probably not a bad idea to send an email alert to the lab mailing list in the event of a vac interlock failure.
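A possible shape for such an alert is sketched below. This is not part of the current interlock code: the SMTP host and all addresses are placeholders, and something like `send_alert` would be called from the interlock service's error path alongside closing the PSL shutter.

```python
import smtplib
from email.message import EmailMessage

def build_alert(reason):
    """Construct an alert email for a vac interlock trip."""
    msg = EmailMessage()
    msg["Subject"] = "40m vacuum interlock tripped"
    msg["From"] = "c1vac@example.org"    # placeholder sender
    msg["To"] = "40m@example.org"        # placeholder list address
    msg.set_content(reason)
    return msg

def send_alert(reason, smtp_host="localhost"):
    """Send the alert via a (placeholder) local SMTP relay."""
    with smtplib.SMTP(smtp_host) as s:
        s.send_message(build_alert(reason))

# The logged error string would be passed straight through, e.g.:
alert = build_alert("C1:Vac-error_status => VA6 closed. AC power loss.")
```

Separating message construction from sending keeps the alert testable without a mail relay on the bench.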

For tonight, I only plan to work with the EX ALS system anyway, so I'm closing the PSL shutter. I'll work with Chub to restore the vacuum tomorrow if he deems it okay.

Attachment 1: Screenshot_from_2019-04-16_22-05-47.png
Attachment 2: Screenshot_from_2019-04-16_22-06-02.png