I've added a scripted VMon/coil-enable test to PyIFOTest following the suggestion in #15542. Basically, a DC offset is added to one fast coil output at a time, and all the VMon responses are checked.
After resolving the swapped ITMX/ITMY eurocrate slots described in #14584, I ran the new scripted VMon test on all eight optics managed by c1susaux. All of them passed: SRM, BS, MC1, MC2, MC3, PRM, ITMX, ITMY. This is not the final suspension test we plan to do, but it gives me reasonably good confidence that all channels are connected correctly.
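For reference, here is a minimal sketch of what the scripted test does, written with pyepics rather than the actual PyIFOTest machinery. The channel names, offset size, and pass/fail threshold are assumptions for illustration, not the values used in the real test (the fast filter-module offset switch is assumed to be enabled):

#!/usr/bin/env python
# Sketch of the VMon/coil-enable test idea: apply a DC offset to one fast coil
# output at a time and check that only the matching slow VMon channel responds.
import time
import numpy as np
from epics import caget, caput

OPTIC = 'SRM'                    # optic under test
COILS = ['UL', 'UR', 'LL', 'LR', 'SD']
OFFSET_CTS = 3000                # DC offset applied to the fast coil output (counts)
SETTLE_S = 3                     # wait for the slow readbacks to update (s)
THRESH_V = 0.1                   # VMon change considered "significant" (V)

def vmons():
    # read all coil voltage monitors (channel names are assumptions)
    return np.array([caget('C1:SUS-%s_%sVMon' % (OPTIC, c)) for c in COILS])

for i, coil in enumerate(COILS):
    chan = 'C1:SUS-%s_%sCOIL_OFFSET' % (OPTIC, coil)   # fast filter-module offset
    baseline = vmons()
    old = caget(chan)
    try:
        caput(chan, old + OFFSET_CTS)
        time.sleep(SETTLE_S)
        delta = vmons() - baseline
    finally:
        caput(chan, old)                               # always restore the offset
    moved = np.abs(delta) > THRESH_V
    ok = moved[i] and not np.any(np.delete(moved, i))
    print('%s %s: dVMon = %s -> %s' % (OPTIC, coil, np.round(delta, 3),
                                       'PASS' if ok else 'FAIL'))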
After the X and Y arm naming conventions were changed, the labelling of the electronics in the eurocrates was not changed 😞. This meant that when we hooked up the new Acromag crate, all the slow ITMX channels were in fact connected to the physical ITMY optic. I have fixed the labelling now - Attachments #1 and #2 show the coil driver boards and SUS PD whitening boards correctly labelled. Our electronics racks are in desperate need of new photographs.
The "Y" arm runs in the EW direction, while the "X" arm runs in the NW direction as of April 29 2018.
ITMX was freed. ITMY, which was being worked on, is also free now.
I turned the 2W NPRO back on again at ~4pm local time, dialing the injection current up from 0 to 2 A over ~2 minutes. I noticed today that lasing only started at 1 A, whereas just last week it started lasing at 0.5 A. After ~5 minutes of it being on, I measured 950 mW after the 11/55 MHz EOM on the PSL table. The power here was 1.06 W in January, so it is ~100 mW lower now.
I found out today that the way the Python FSS SLOW PID loop is scripted, if it runs into an EZCA error (due to the c1psl slow machine being dead), it doesn't handle this gracefully (it just gets stuck). I rebooted the crate for now and the MC autolocker is running fine again.
NPRO turned off again at ~8pm local time, after Anjali was done with her data taking. I measured the power again; it was still 950 mW, so at least the output power isn't degrading by an appreciable amount over 4 hours...
Here are some tests we should script.
Checking the Acromag DAC calibration is more complicated I think. There are measurements of the actuator calibration in units of nm/ct for the fast DACs, but these are only valid above the pendulum resonance frequency, and I'm not sure we can synchronously drive a 10 Hz sine wave using the EPICS channels. The current test, which drives the PIT/YAW DoFs with a DC misalignment and measures the response in the PDmon channels, is a bit ad hoc in the way we set the "expected" response that serves as the PASS/FAIL criterion for the test. Moreover, the cross-coupling between the PDmon channels may be quite high. Needs some thought...
Today we installed the c1susaux Acromag chassis and controller computer in the 1X4 rack. As noted in 14580 the prototype Acromag chassis had to first be removed to make room in the rack. The signal feedthroughs were connected to the eurocrates by 10' DB-37 cables via adapters to 96-pin DIN.
Once installed, we ran a scripted set of suspension actuation tests using PyIFOTest. BS, PRM, SRM, MC1, MC2, and MC3 all passed these tests. We were unable to test ITMX and ITMY because both appear to be stuck. Gautam will shake them loose on Monday.
Although the new c1susaux is now mounted in the rack, there is more that needs to be done to make the installation permanent:
On Monday we plan to continue with additional scripted tests of the suspensions.
gautam - some more notes:
Gautam and I are removing the prototype Acromag chassis from the 1X4 rack to make room for the new c1susaux hardware. I shut down and disabled the modbusPSL service running on c1auxex, which serves the PSL diagnostic channels hosted by this chassis. The service will need to be restarted and re-enabled once the chassis has been reinstalled elsewhere.
From the earlier results with the homodyne measurement, the Vmax and Vmin values observed were comparable with the expected results. So in the time interval between these two points, the MZI is assumed to be in the linear region, and I tried to find the frequency noise based on the data available in this region. This result is not significantly different from what we got before, when we took the complete time series to calculate the frequency noise. Attachment #1 shows the time domain data considered and Attachment #2 shows the frequency noise extracted from it.
As discussed, we will be trying the heterodyne method next. Initially, we will try to save the data with a two-channel ADC at a 16 kHz sampling rate. With this setup, we can get information only up to 8 kHz.
It was noticed that one of the doors (door #2) of the PSL table is broken. Attachment #1 shows the image.
NPRO shutoff at ~1517 local time this afternoon. Again, not many clues from the NPRO diagnostics channels, but to my eye, the D1_POW channel shows the first variation from the "steady state", followed by the other channels. This is ~0.1 sec before the other channels register some change, so I don't know how much we can trust the synchronization of the EPICS data streams. I won't turn it on again for now. I did check that the little fan on the back of the NPRO controller is still rotating.
gautam 10am 4/29: I also added a longer term trend of these diagnostic channels, no clear trends suggesting a fault are visible. The y-axis units for all plots are in Volts, and the data is sampled at 16 Hz.
Now we wait and watch I guess.
My understanding is that the main advantage in going to the heterodyne scheme is that we can extract the frequency noise information without worrying about locking to the linear region of the MZI. The arctan of the ratio of the in-phase and quadrature components will give us the phase as a function of time, with a frequency offset. We need to correct for this frequency offset, and then the frequency noise can be deduced. But the frequency noise value extracted would still have contributions from both the frequency noise of the laser and the fiber length fluctuation. I have not understood the method of giving temperature feedback to the NPRO. I would like to discuss this.
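To make the arctan recipe above concrete, here is a rough sketch of the phase/frequency extraction, assuming the demodulated I and Q time series are already available as arrays. The sampling rate, the fiber delay, and the synthesized placeholder data are all assumptions:

import numpy as np
from scipy.signal import welch

fs = 16384.0                            # assumed ADC sampling rate [Hz]
tau = 90.0 * 1.468 / 3.0e8              # assumed differential delay for ~90 m of fiber [s]

# placeholder I/Q data; replace with the two demodulated ADC channels
t = np.arange(0, 10, 1 / fs)
true_phase = 2 * np.pi * 5.0 * t + 0.01 * np.cumsum(np.random.randn(len(t)))
I_t, Q_t = np.cos(true_phase), np.sin(true_phase)

phase = np.unwrap(np.arctan2(Q_t, I_t))          # phase as a function of time [rad]
# remove the linear trend corresponding to the constant frequency offset
poly = np.polyfit(t, phase, 1)
phase_res = phase - np.polyval(poly, t)

# residual phase -> frequency deviation via dphi = 2*pi*tau*dnu
# (contains both laser frequency noise and fiber length fluctuations)
dnu = phase_res / (2 * np.pi * tau)
f, Pnn = welch(dnu, fs=fs, nperseg=2**14)
asd_Hz = np.sqrt(Pnn)                            # frequency noise ASD [Hz/rtHz]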
The functional form used for the curve labeled as theory is 5x10^4/f. The power spectral density (V^2/Hz) of the data in Attachment #1 is found using the pwelch function in Matlab, and the square root of the same gives the y axis in V/rtHz. From the experimental data, we get the values of Vmax and Vmin. To go from Vmax to Vmin, the corresponding phase change is pi. From this information, V/rad can be calculated. This value is then multiplied by 2*pi*(time delay) to get the quantity in V/Hz. Dividing the V/rtHz value by the V/Hz value gives the y axis in Hz/rtHz. The calculated values of the shot noise and dark current noise are way below (of the order of 10^-4 Hz/rtHz) in this frequency range.
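The same calibration chain, written out as a short sketch (scipy instead of Matlab's pwelch; the sampling rate, the Vmax/Vmin numbers, the delay, and the placeholder data are assumptions):

import numpy as np
from scipy.signal import welch

fs = 16384.0                                 # assumed sampling rate [Hz]
v = 1e-3 * np.random.randn(int(10 * fs))     # placeholder for the PD voltage in the linear region
Vmax, Vmin = 2.1, 0.1                        # fringe extrema from the time series [V] (placeholders)
tau = 90.0 * 1.468 / 3.0e8                   # assumed differential delay of the MZ [s]

f, Pvv = welch(v, fs=fs, nperseg=2**14)
asd_V = np.sqrt(Pvv)                         # V/rtHz
V_per_rad = (Vmax - Vmin) / np.pi            # V/rad, from the Vmax -> Vmin = pi phase change
V_per_Hz = V_per_rad * 2 * np.pi * tau       # V/Hz
asd_Hz = asd_V / V_per_Hz                    # Hz/rtHz, as plotted in Attachment #2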
I forgot to take a picture of the setup at that time. Now Andrew has taken the fiber beam splitter back for his experiment. Attachment #1 shows the current view of the setup. The data from the previous trial is saved in /users/anjali/MZ/MZdata_20190417.hdf5
If I understand correctly, the Mach-Zehnder readout port power is only a function of the differential phase accumulated between the two interfering light beams. In the homodyne setup, this phase difference can come about because of either fiber length change OR laser frequency change. We cannot directly separate the two effects. Can you help me understand what advantage, if any, the heterodyne setup offers in this regard? Or is the point of going to heterodyne mainly for the feedback control, as there is presumably some easy way to combine the I and Q outputs of the heterodyne measurement to always produce an error signal that is a linear function of the differential phase, as opposed to the sin^2 in the free-running homodyne setup? What is the scheme for doing this operation in a high bandwidth way (i.e. what is supposed to happen to the demodulated outputs in Attachment #3 of your elog)? What is the advantage of the heterodyne scheme over applying temperature feedback to the NPRO with 0.5 Hz tracking bandwidth so that we always stay in the linear regime of the homodyne readout?
Also, what is the functional form of the curve labelled "Theory" in Attachment #2? How did you convert from voltage units in Attachment #1 to frequency units in Attachment #2? Does it make sense that you're apparently measuring laser frequency noise above 10 Hz? i.e. where do the "Dark Current Noise" and "Shot Noise" traces for the experiment lie relative to the blue curve in Attachment #2? Can you point to where the data is stored, and also add a photo of the setup?
This activity seems to have closed the PSL shutter (actually I'm not sure why that happened - the interlock should only trip if P1a exceeds 3 mtorr, and looking at the time series for the last 2 hours, it did not ever exceed this threshold). I saw no reason for it to remain closed so I re-opened it just now.
I vote for not remotely rebooting any of the vacuum / PSL subsystems. In the event of something going catastrophically wrong, someone should be on hand to take action in the lab.
I slightly cleaned up Gautam's disabling of the UPS-predicated vac interlock and restarted the interlock service. This interlock is intended to protect the turbo pumps after a power outage, but it has proven disruptive to normal operations with too many false triggers. It will be reenabled once a new UPS has been installed. For now, as it has been since 2001, the vac pumps are unprotected against an extended power outage.
The air handler on the roof of the 40M that supplies the electronics shop and computer room is out of operation until next week. Adding insult to injury, there is a strong odor of Liquid Wrench oil (a creeping oil for loosening stuck bolts that has a solvent additive) in the building. If you don't truly need to be in the 40M, you may want to wait until the environment is back to being cool and "unscented". On a positive note, we should have a quieter environment soon!
At some point I'd like to reclaim this setup for ALS, but meantime, Anjali can work on characterization/noise budgeting. Since we have some CDS signals, we can even think of temperature control of the NPRO using pythonPID to keep the fringe in the linear regime for an extended period of time.
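Something like the following (in the spirit of the existing pythonPID scripts) is what I have in mind for the temperature control - the channel names, setpoint, and gain below are placeholders, not a tested configuration:

import time
from epics import caget, caput

PD_CHAN   = 'C1:ALS-MZ_OUT16'          # hypothetical: slow readback of the MZ output PD
TEMP_CHAN = 'C1:PSL-NPRO_TEMP_OFFSET'  # hypothetical: NPRO crystal temperature offset
SETPOINT  = 1.0                        # mid-fringe voltage [V], middle of the linear region
KI        = 1e-3                       # integral gain; integral-only for a sub-Hz tracking bandwidth
DT        = 1.0                        # loop period [s]

err_int = 0.0
while True:
    err = caget(PD_CHAN) - SETPOINT
    err_int += err * DT
    caput(TEMP_CHAN, -KI * err_int)    # sign to be determined empirically
    time.sleep(DT)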
When dialing up the current, I went up to 2.01 A on the front panel display, which is what I remember it being. The label on the controller is from when the laser was still putting out 2W, and says the pump current should be 2.1 A. Anyhow, the MC transmission is ~7% lower now (14500 cts compared to the usual 15000-15500 cts), even after tweaking the PMC alignment to minimize PMC REFL. Potentially there is less power coming out of the NPRO. I will measure it at the window tomorrow with a power meter.
We briefly talked about the bounce and roll modes of the SOS optic at the meeting today.
Attachment #1: BR modes for ETMY from my free-swinging run on 17 April. The LL coil has a very different behavior from the others.
Attachment #2: BR modes for ETMY from my free-swinging run on 18 April, which had a macroscopically different bias voltage for the PIT/YAW sliders. Here too, the LL coil has a very different behavior from the others.
Attachment #3: BR modes for ETMX from my free-swinging run on 27 Feb. There are many peaks in addition to the prominent ones visible here, compared to ITMY. The OSEM PD noise floor for UR and SIDE is mysteriously x2 lower than for the other 3 OSEMs???
In all three cases, a bounce mode around 16.4 Hz and a roll mode around 24.0 Hz are visible. The ratio between these is not sqrt(2), but is ~1.46, which is ~3% larger. But when I look at the database, I see that in the past, the bounce and roll modes were in fact at close to these frequencies.
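Quick numerical check of the ratio quoted above:

import numpy as np
f_bounce, f_roll = 16.4, 24.0                  # Hz, from the spectra in Attachments #1-#3
ratio = f_roll / f_bounce                      # ~1.46
print(ratio, 100 * (ratio / np.sqrt(2) - 1))   # a few percent above sqrt(2)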
Because of my negligence and rushing the closeout procedure, I don't have a great close-out picture of the magnet positions in the face OSEMs, the best I can find is Attachment #4. We tried to replicate the OSEM arrangement (orientation of leads from the OSEM body) from July 2018 as closely as possible.
I will investigate the side coil actuation strength tomorrow, but if anyone can think of more in-air tests we should do, please post your thoughts/poetry here.
The precision of the measurement is excellent. We should move on to look for systematic errors.
According to Johannes and Gautam (see T1700117_ReflectionLoss.pdf in Attachment 1), the loss of the cavity mirrors is obtained by measuring the light reflected from the cavity when it is locked and when it is misaligned. From these two measurements, and by using the known transmissions of the cavity mirrors, the roundtrip loss is extracted.
I wrote a Python notebook (AnalyzeLossData.ipynb in Attachment 1) that extracts the raw data from the measurement file (data20190216.hdf5 in Attachment 1) and analyzes the statistics of the measurement and its PSD.
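A rough sketch of the kind of analysis the notebook does - the dataset names inside the hdf5 file and the sampling rate are assumptions here; see the actual notebook in Attachment 1 for the real thing:

import h5py
import numpy as np
from scipy.signal import welch

with h5py.File('data20190216.hdf5', 'r') as f:
    locked     = np.array(f['locked'])        # assumed dataset names
    misaligned = np.array(f['misaligned'])

# statistics of the reflected-power measurement
ratio = locked.mean() / misaligned.mean()
sem = locked.std() / (locked.mean() * np.sqrt(len(locked)))
print('reflected power ratio (locked/misaligned): %.6f +/- %.2e' % (ratio, ratio * sem))

# noise spectrum of the PD readings, to judge whether chopping would help
fs = 1e3                                      # assumed sampling rate [Hz]
f_psd, P = welch(locked, fs=fs, nperseg=2**12)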
Attachment 2 shows the raw data.
Attachment 3 shows the histogram of the measurement. It can be seen that the distribution is very close to being Gaussian.
The loss in the cavity per roundtrip is measured to be 73.7 +/- 0.2 parts per million. The quoted error is only due to the scatter in the PD measurement; considering the uncertainty in the transmissions of the cavity mirrors would give a much bigger error.
Attachment 4 shows the noise PSD of the PD readings. It can be seen that the noise spectrum is quite flat, so there would be no big improvement from chopping the signal.
The situation might be different when the measurement is taken from the cavity lock PD where the signal is much weaker.
For the in-situ test, I decided that we will use the physical SRM to test the c1susaux Acromag replacement crate functionality for all 8 optics (PRM, BS, ITMX, ITMY, SRM, MC1, MC2, MC3). To facilitate this, I moved the backplane connector of the SRM SUS PD whitening board from the P1 connector to P2, per Koji's mods, at ~5:10PM local time. The watchdog was shut down, and the backplane connector for the SRM coil driver board was also disconnected (it is now interfaced to the Acromag chassis).
I had to remove the backplane connector for the BS coil driver board in order to have access to the SRM backplane connector. Room in the back of these eurocrate boxes is tight in the existing config...
At ~6pm, I manually powered down c1susaux (as I did not know of any way to turn off the EPICS server run by the old VME crate in a software way). The point was to be able to easily interface with the MEDM screens. So the slow channels prefixed C1:SUS-* are now being served by the Supermicro called c1susaux2.
A critical wiring error was found. The channel mapping prepared by Johannes lists the watchdog enable BIO channels as "C1:SUS-<OPTIC>_<COIL>_ENABLE", which go to pins 23A-27A on the P1 connector, with returns on the corresponding C pins. However, we use the "TEST" inputs of the coil driver boards for sending in the FAST actuation signals. The correct BIO channels for switching this input are actually "C1:SUS-<OPTIC>_<COIL>_TEST", which go to pins 28A-32A on the P1 connector. For today's tests, I voted to fix this inside the Acromag crate for the SRM channels, and do our tests. Chub will unfortunately have to fix the remaining 7 optics; see Attachment #1 for the corrections required. I apportion 70% of the blame to Johannes for the wrong channel assignment, and accept 30% for not checking it myself.
The good news: the tests for the SRM channels all passed!
Additionally, I confirmed that the watchdog tripped when the RMS OSEM PD voltage exceeded 200 counts. Ideally we'd have liked to test the stability of the EPICS server, but we have shut it down and brought the crate back out to the electronics bench for Chub to work on tomorrow.
I restarted the old VME c1susaux at 915pm local time as I didn't want to leave the watchdogs in an undefined state. Unsurprisingly, ITMY is stuck. Also, the BS (cable #22) and SRM (cable #40) coil drivers are physically disconnected at the front DB15 output because of the undefined backplane inputs. I also re-opened the PSL shutter.
After discussing with Koji, I turned the NPRO back on again at ~4PM local time. I first dialled the injection current down to 0 A, then switched the control unit to "ON", and then ramped the current back up by turning the front panel dial. Lasing started at 0.5 A, and I saw no abrupt swings in the power (I used PMC REFL as a monitor; there were some mode flashes, which are the dips seen in the power, and the x-axis is in units of time, not pump current). The PMC was relocked and the IMC autolocker locked the IMC almost immediately.
Borrowed Zurich HF2LI Lock-in Amplifier to the QIL lab Wed Apr 24 11:25:11 2019.
For the new c1susaux, Gautam and I moved the watchdog channels from autoBurt.req to a new file named autoBurt_watchdogs.req. When the new modbus service starts, it loads the state contained in autoBurt.snap. We thought it best for the watchdogs to not be automatically enabled at this stage, but for an operator to manually have to do this. By moving the watchdog channels to a separate snap file, the entire SUS state can be loaded while leaving just the watchdogs disabled.
This same modification should be made to the ETMX and ETMY machines.
Today I tested the remaining Acromag channels and retested the non-functioning channels found yesterday, which Chub repaired this morning. We're still not quite ready for an in situ test. Here are the issues that remain.
I further diagnosed these channels by connecting a calibrated DC voltage source directly to the ADC terminals. The EPICS channels do sense this voltage, so the problem is isolated to the wiring between the ADC and DB37 feedthrough.
No output signal
To further diagnose these channels, I connected a voltmeter directly to the DAC terminals and toggled each channel output. The DACs are outputting the correct voltage, so these problems are also isolated to the wiring between DAC and feedthrough.
In testing the DC bias channels, I did not check the sign of the output signal, but only that the output had the correct magnitude. As a result my bench test is insensitive to situations where either two degrees of freedom are crossed or there is a polarity reversal. However, my susPython scripting tests for exactly this, fetching and applying all the relevant signal gains between pitch/yaw input and coil bias output. It would be very time consuming to propagate all these gains by hand, so I've elected to wait for the automated in situ test.
Here are the results from this test. The data for 17 April is with the DC bias for ETMY set to the nominal values (which gives good Y arm cavity alignment), while on 18 April, I changed the bias values until all four shadow sensors reported values that were at least 100 cts different from 17 April. The times are indicated in the plot titles in case anyone wants to pull the data (I'll point to the directory where they are downloaded and stored later).
There are 3 visible peaks. There was negligible shift in position (<5 mHz) / change in Q of any of these with the applied Bias voltage. I didn't attempt to do any fitting as it was not possible to determine which peak corresponds to which DoF by looking at the complex TFs between coils (at each peak, different combinations of 3 OSEMs have the same phase, while the fourth has ~180 deg phase lead/lag). FTR, the wiki leads me to expect the following locations for the various DoFs, and I've included the closest peak in the current measured data in parentheses:
However, this particular SOS was re-suspended in 2016, and this elog reports substantially different peak positions, in particular, for the YAW DoF (there were still 4). The Qs of the peaks from last week's measurements are in the range 250-350.
Repeat the free-swinging ringdown with the ETMY bias voltage adjusted such that all the OSEM PDmons report ~100 um different position from the "nominal" position (i.e. when the Y arm cavity is aligned). Investigate whether the resulting eigenmode frequencies / Qs are radically different. I'm setting the optic free-swinging on my way out tonight. Optic kicked at 1239690286.
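For the analysis, something like the following should pull out the peak frequencies and rough Qs from the free-swinging OSEM sensor spectra. The data-fetching step is omitted; the placeholder array, sampling rate, and thresholds are assumptions:

import numpy as np
from scipy.signal import welch, find_peaks

fs = 2048.0                                # assumed sampling rate of the OSEM sensor channel [Hz]
sensor = np.random.randn(int(600 * fs))    # placeholder for e.g. the ETMY UL sensor time series

f, P = welch(sensor, fs=fs, nperseg=int(64 * fs))
band = (f > 0.3) & (f < 30)                # pendulum / bounce / roll band
fb, Pb = f[band], P[band]
peaks, _ = find_peaks(Pb, prominence=10 * np.median(Pb))
for pk in peaks:
    f0 = fb[pk]
    # crude Q from the full width at half maximum of the power spectrum peak
    win = slice(max(pk - 50, 0), pk + 50)
    above = fb[win][Pb[win] > Pb[pk] / 2]
    fwhm = max(above.max() - above.min(), fb[1] - fb[0])
    print('peak at %.3f Hz, Q ~ %.0f' % (f0, f0 / fwhm))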
Today I bench-tested most of the Acromag channels in the replacement c1susaux. I connected a DB37 breakout board to each chassis feedthrough connector in turn and tested channels using a multimeter and calibrated voltage source. Today I got through all the digital output channels and analog input channels. Still remaining are the analog output channels, which I will finish tomorrow.
There have been a few wiring issues found so far, which are noted below.
Crossed with LR
Crossed with UR
Crossed with LL
Crossed with UL
Happened again at ~730pm.
The NPRO diag channels don't really tell me what happened in a causal way, but the interlock channel seems suspicious. Why is the nominal value 0.04 V? From the manual, it looks like the TGUARD is an indication of deviations between the set temperature and actual diode laser temperature. Is it normal for it to be putting out 11V?
I'm not going to turn it on again right now while I ponder which of my hands I need to chop off.
I'm restoring it now in the hope we can get some more info on what exactly happened if this is a recurring event.
If thy left hand troubles thee
then let the mirror show the right
for if it troubles enough to cut it off
it would not offend thy sight
I repeated the exercise from yesterday, this time driving the butterfly mode [+1 -1 -1 +1] and adding the tuned PIT and YAW vectors from yesterday to it to minimize appearance in the Oplev error signals.
The measured output matrix is , where rows are the coils in the order [UL,UR,LL,LR] and columns are the DOFs in the order [POS,PIT,YAW,Butterfly]. The conclusions from my previous elog still hold though - the orthogonality between PIT and YAW is poor, so this output matrix cannot be realized by a simple gain scaling of the coil output gains. The "adjustment matrix", i.e. the 4x4 matrix that we must multiply the "ideal" output matrix by to get the measured output matrix, has a condition number of 134 (a condition number of 1 would be ideal, signifying closeness to the identity matrix).
let us have 3 by 4, nevermore
so that the number of columns is no less
and no more
than the number of rows
so that forevermore we live as 4 by 4
so that forevermore we live as 4 by 4
I'm struggling to think
When I got back from lunch just now, I noticed that the PMC TRANS and REFL cameras were showing no spots. I went onto the PSL table, and saw that the NPRO was in fact turned off. I turned it back on.
The laser was definitely ON when I left for lunch around 1:30pm, and this happened around 1:40pm. Anjali says no one was in the lab in between. None of the FEs are dead, suggesting there wasn't a labwide power outage, and the EX and EY NPROs were not affected. I had pulled out the diagnostics connector logged by Acromag; I'm restoring it now in the hope we can get some more info on what exactly happened if this is a recurring event. So FSS_RMTEMP isn't working from now on. The sooner we get the PSL Acromag crate together, the better...
I've borrowed the Busby Box for a day or so. Location: QIL lab at Bridge West.
Edit Sat Apr 20 21:16:46 2019 (awade): returned.
There isn't a consistent set of OSEM coil gains that explains the best actuation vectors we determined yesterday. Here are the explicit matrices:
There isn't a solution to the matrix equation, i.e. we cannot simply redistribute the actuation vectors we found as gains to the coils and preserve the naive actuation matrix. What this means is that in the OSEM coil basis, the actuation eigenvectors aren't the naive ones we would expect for PIT, YAW and POS. Instead, we can put these custom eigenvectors into the output matrix, but I'm struggling to think of what the physical implication is. I.e. what does it mean for the actuation vectors for PIT, YAW and POS to be not only scaled, but also non-orthogonal (though still linearly independent) at ~10 Hz, which is well above the resonant frequencies of the pendulum? The PIT and YAW eigenvectors are the least orthogonal, with the angle between them ~40 degrees rather than the expected 90 degrees.
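A small numerical illustration of that statement - the "measured" vectors below are made up for illustration (the real ones are in the output matrix above), but they show why a diagonal coil-gain matrix cannot reproduce non-orthogonal PIT/YAW actuation vectors:

import numpy as np

# naive actuation matrix: columns are POS, PIT, YAW; rows are UL, UR, LL, LR
naive = np.array([[1.,  1.,  1.],
                  [1.,  1., -1.],
                  [1., -1.,  1.],
                  [1., -1., -1.]])

# hypothetical measured actuation vectors with non-orthogonal PIT/YAW
meas = np.array([[1.0,  1.0,  0.9],
                 [1.0,  0.8, -1.1],
                 [1.0, -0.6,  1.2],
                 [1.0, -1.2, -0.8]])

def angle(u, v):
    return np.degrees(np.arccos(u @ v / (np.linalg.norm(u) * np.linalg.norm(v))))

print('PIT/YAW angle: %.1f deg (90 deg would be orthogonal)' % angle(meas[:, 1], meas[:, 2]))

# If meas = diag(g) @ naive had a solution, the element-wise ratio meas/naive
# would be the same number g_i across all three columns of each row (coil).
print('per-row gain ratios:\n', meas / naive)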
So we now have matrices that minimize the cross coupling between these DoFs - the idea is to back out the actuation coefficients for the 4 OSEM coils that gives us the most diagonal actuation, at least at AC.
Apr 16, 2019
Borrowed two laser goggles from the 40m. (Returned Apr 29, 2019)
Apr 19, 2019
Borrowed from the 40m:
- Universal camera mount
- 50mm CCD lens
- zoom CCD lens (Returned Apr 29, 2019)
- Olympus SP-570UZ (Returned Apr 29, 2019)
- Special Olympus USB Cable (Returned Apr 29, 2019)
Yehonathan wanted to take some measurements for loss determination. I misaligned the X arm completely and we installed a PD on the AS table so there is no light reaching the AS55 and AS110 PDs. Yehonathan will post the detailed elog.
Rana did a checkout of my story about oddness of the ETMY suspension. Today, we focused on the actuators - the goal was to find the correct coefficients on the 4 face coils that would result in diagonal actuation (i.e. if we actuate on PIT, it only truly moves the PIT DoF, as witnessed by the Oplev, and so on for the other DoFs). Here are the details:
After getting the go ahead from Chub and Jon, I restored the Vacuum state to "Vacuum normal", see Attachment #1. Steps:
controls@c1vac:/opt/target/python/interlocks$ git diff interlock.py
diff --git a/python/interlocks/interlock.py b/python/interlocks/interlock.py
index 28d3366..46a39fc 100755
@@ -52,8 +52,8 @@ class Interlock(object):
         self.pumps = 
         for pump in interlocks['pumps']:
             pm = PumpManager(pump['name'])
-            for condition in pump['conditions']:
-                pm.register_condition(*condition)
+            #for condition in pump['conditions']:
+            #    pm.register_condition(*condition)
So far the pressure is coming down smoothly, see Attachment #2. I'll keep an eye on it.
PSL shutter was opened at 645pm local time. IMC locked almost immediately.
Update 11pm: The pressure has reached 8.5e-6 torr without hiccup.
EX green stayed locked to XARM length overnight without a problem. The spectrogram doesn't show any alarming time varying features around 2 kHz (or at any other frequency).
I looked into this issue today. Initially, my thinking was that I'd somehow caused clipping in the beampath somewhere which was causing this 2kHz excitation. However, on looking at the spectrum of the in-loop error signal today (Attachment #1), I found no evidence of the peak anymore!
Since the vacuum system is in a non-nominal state, and also because my IR ALS beat setup has been hijacked for the MZ interferometer, I don't have an ALS spectrum, but the next step is to try single arm locking using the ALS error signal. To investigate whether the 2kHz peak is a time-dependent feature, I left the EX green locked to the arm (with the SLOW temperature offloading servo ON), hopefully it stays locked overnight...
These weren't present last week. The peaks are present in the EX PDH error monitor signal, and so are presumably connected with the green locking system. My goal tonight was to see if the arm length control could be done using the ALS error signal as opposed to POX, but I was not successful.
This happened again, about 30,000 seconds (~2:06pm local time according to the logfile) ago. The cited error was the same -
Hard to believe there was any real power loss, nothing else in the lab seems to have been affected so I am inclined to suspect a buggy UPS communication channel. The PSL shutter was not closed - I believe the condition is for P1a to exceed 3 mtorr (it is at 1 mtorr right now), but perhaps this should be modified to close the PSL shutter in the event of any interlock tripping. Also, probably not a bad idea to send an email alert to the lab mailing list in the event of a vac interlock failure.
For tonight, I only plan to work with the EX ALS system anyways so I'm closing the PSL shutter, I'll work with Chub to restore the vacuum if he deems it okay tomorrow.
It is feeling cold in the office area. According to the digital wall clock near the coffee machine, it is 19C. Rana bumped the thermostat setpoint up by 2F (from 75F to 77F). We need to set up long-term monitoring.
Just the main points; Anjali is going to fill out the details.
To rule out mode-matching as the reason for non-ideal output from the MZ, I suggested using the setup I have on the NW side of the PSL enclosure for the measurement. This uses two identical fiber collimators, and the distance between collimator and recombination BS is approximately the same, so the spatial modes should be pretty well matched.
The spooled fiber we found was not suitable for use as it had a wide key connector and I couldn't find any wide-key FC/PC to narrow-key FC/APC adaptors. So we decided to give the fiber going to the Y end and back (~90m estimated length) a shot. We connected the two fibers at the EY table using a fiber mating sleeve (so the fiber usually bringing the IR pickoff from EY to the PSL table was disconnected from its collimator).
In summary, we cannot explain why the contrast of the MZ is <5%. Spatial mode-overlap is definitely not to blame. Power asymmetry in the two arms of the MZ is one possible explanation, could also be unstable polarization, even though we think the entire fiber chain is PM. Anjali is investigating.
We saw today that the Thorlabs PM beam splitters (borrowed from Andrew until our AFW components arrive) do not treat the two special axes (fast and slow) of the fiber on equal footing. When we coupled light into the fast axis, we saw huge asymmetry between the two split arms of the beamsplitter (3:1 ratio in power instead of the expected 1:1 for a 50/50 BS). Looking at the patch cord with an IR viewer, we also saw light leaking through the core along it. Turns out this part is meant to be used with light coupled to the slow axis only.
Had trouble using YUM to update. This turned out to be a config problem with our Martian router, not the new laptop. Since I changed the WiFi password a while ago for martian access for the CDS laptops, you'll have to enter that in order to use the laptops.
Turned out to be some Access Control nonsense inside the router. Even logging in as admin with a cable gave some of the fields the greyed out color (had to hover over the link and then type the URL directly in the browser window). ASIA is now able to connect and use YUM + the usual connections. Gautam and I have also moved the router a little to get an easier view of its LED lights and not block its WiFi signal with the cable tray. We'll get a little shelf so that we can mount it ~1 foot off of the wall.
still, this seems like a bad laptop choice: the Lenovo Ideapad 330 will not have its touchpad supported by SL7 without compiling a new version of the kernel
if this is gonna be general for everybody, maybe it oughta be called IFOtest (like the last incarnation that was tried in Livingston)?
then the SUS tests could just be some smaller set of measurements. Using the same code base, but different params.
The AS spot on the camera was oscillating at ~3 Hz. Looking at the Oplevs, the culprit was the BS PIT DoF. Started about 12 hours ago, not sure what triggered it. I disabled Oplev damping, and waited for the angular motion to settle down a bit, and then re-enabled the servo - damps fine now...
The alignment was disturbed after the replacement of the beam splitter. We tried to recover the alignment, but have not yet succeeded in getting a good interference pattern. This is mainly because of poor mode matching of the two beams. We will also try with the spooled fiber.
In anticipation of needing to test hundreds of suspension signals after the c1susaux upgrade, I've started developing a Python package to automate these tests: susPython
The core of this package is not any particular test, but a general framework within which any scripted test can be "nested." Built into this framework is extensive signal trapping and exception handling, allowing actuation tests to be performed safely. Namely it protects against crashes of the test scripts that would otherwise leave the suspensions in an arbitrary state (e.g., might destroy alignment).
The package is designed to be used standalone from the command line. From within the root directory, it is executed with a single positional argument specifying the suspension to test:
$ python -m suspython ITMY
Currently the package requires Python 2 due to its dependence on the cdsutils package, which does not yet exist for Python 3.
So far I've implemented a cross-consistency test between the DC-bias outputs to the coils and the shadow sensor readbacks. The suspension is actuated in pitch, then in yaw, and the changes in the PDMon signals are measured. The expected sign of the change in each coil's PDMon is inferred from the output filter matrix coefficients. I believe this test is sensitive to two types of signal-routing errors: no change in the PDMon response (actuator is not connected), and an incorrect sign in the pitch response, the yaw response, or both (two actuators are cross-wired).
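For concreteness, here is a stripped-down sketch of this pitch/yaw consistency check (not the actual susPython code - the channel suffixes, bias step, and expected-sign convention are assumptions here):

import time
import numpy as np
from epics import caget, caput

OPTIC, STEP, SETTLE = 'ITMY', 50, 5        # bias step (cts) and settle time (s)
COILS = ['UL', 'UR', 'LL', 'LR']
# expected sign of each face OSEM's PDMon response to a positive step
# in PIT and YAW (assumed output-matrix convention)
EXPECT = {'PIT': [+1, +1, -1, -1], 'YAW': [+1, -1, +1, -1]}

def pdmons():
    return np.array([caget('C1:SUS-%s_%sPDMon' % (OPTIC, c)) for c in COILS])

for dof, signs in EXPECT.items():
    chan = 'C1:SUS-%s_%s_BIAS' % (OPTIC, dof)   # assumed slow bias channel name
    ref, old = pdmons(), caget(chan)
    try:
        caput(chan, old + STEP)
        time.sleep(SETTLE)
        delta = pdmons() - ref
    finally:
        caput(chan, old)                        # restore the bias even if something fails
    ok = np.all(np.sign(delta) == np.array(signs))
    print('%s %s: dPDMon = %s -> %s' % (OPTIC, dof, np.round(delta, 2),
                                        'PASS' if ok else 'FAIL'))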
The next test I plan to implement is a test of the slow system using the fast system. My idea is to inject a 3-8 Hz excitation into the coil output filter modules (either bandlimited noise or a sine wave), with all coil outputs initially disabled. One coil at a time will be enabled and the change in all VMon signals monitored, to verify the correct coil readback senses the excitation. In this way, a signal injected from the independent and unchanged fast system provides an absolute reference for the slow system.
I'm also aware of ideas for more advanced tests, which go beyond testing the basic signal routing. These too can be added over time within the susPython framework.
Testing is finished.
Will advise when I'm finished, will be by 1 pm for ALS work to begin.
I could probably install the new fan if we have one. Can you do without the laser for a while?
This thread: ELOG 10295
My interpretation of these ELOGs is that we did not have the replacement, and then I brought an unknown fan from WB. At the same time, Steve ordered replacement fans, which we found in the blue tower yesterday.
The next action is to replace the internal fan, I believe.