I assume this QPD set is a D1600079/D1600273 combo.
How much was the SUM output during the measurement? Also, what were the beam radii of this beam (from the error-function fits)?
Then the calibration [V/m] will be linear in the incident power and inverse-linear in the beam radius.
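A minimal sketch of that scaling, assuming an ideal Gaussian beam on a split detector (none of this comes from the D1600079/D1600273 drawings): the X difference signal is V_diff(x) = V_sum * erf(sqrt(2) x / w), so the slope at the center, i.e. the calibration, is V_sum * 2 * sqrt(2/pi) / w.

import numpy as np

# Hedged sketch: small-displacement QPD calibration for a Gaussian beam on an
# ideal split detector. v_sum is the SUM output [V]; w is the 1/e^2 radius [m].
def qpd_calibration(v_sum, w):
    """Calibration in V/m: linear in power (via v_sum), inverse-linear in w."""
    return v_sum * 2.0 * np.sqrt(2.0 / np.pi) / w

# placeholder numbers: 2 V of SUM on a 1 mm beam gives ~3.2e3 V/m
print(qpd_calibration(2.0, 1e-3))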
You mean the linear range is +/-50mV (for a given beam), I guess.
Gautam managed to lock the PRFPMI a little before 22:00 local time. The ALS-to-RF handoff logic was found to be repeatable, which enabled us to lock a total of 4 times this evening. In this nominal state we can work on the PRFPMI to narrow down the less-understood issues and carry out systematic optimization. The second time we achieved lock, we ran sensing lines before entering the ASC stage (which we knew would destroy the lock); offline analysis of the sensing matrix is pending (gpstime = 1310792709 + 5 min).
Things to note:
(a) There is an unexpected offset suggesting that ALS and RF disagreed on what the lock setpoint should be; it is still unclear where this offset comes from.
(b) The first time lock was reached, the ASC-up stage destroyed it, suggesting these loops need some care. We were able to engage the ASC loops at low gains (0.2 instead of 1), but as soon as we enabled some integrators, the lock was consistently destroyed.
(c) Gautam had burt-restored the settings from back in March, when the PRFPMI was last locked; this suggests there was a small but somehow significant difference in the IFO state that helped today relative to last week.
Take-home message --> The mere fact that we were able to lock the PRFPMI rules out the more serious potential problems with the signal-chain electronics or processing. This is also a good starting point for further debugging and optimization.
gautam: the circulating power, when the ASC was tweaked, hit 400 (normalized to a single arm locked with the PRM misaligned), suggesting a recycling gain of 22.5 and an average arm loss of ~30 ppm round trip (assuming 2% loss in the PRC).
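A rough consistency check on those numbers (a sketch; the PRM transmission below is an assumed value, not something measured in this entry): with the PRM misaligned, the single-arm reference beam is attenuated by T_PRM, so the normalized buildup and the recycling gain differ by exactly that factor.

T_PRM = 0.056            # assumed PRM power transmission
P_norm = 400             # arm power normalized to single-arm lock, PRM misaligned
G_rec = P_norm * T_PRM   # the misaligned-PRM reference is attenuated by T_PRM
print(G_rec)             # ~22, consistent with the quoted recycling gain of 22.5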
[ian, anchal, paco]
We hooked up the infrasensing unit to power and changed its IP address from the factory default (192.168.11.160) to 192.168.113.240 on the martian network. The sensor is now online at that IP address; its user controls take the usual password used on most workstations.
Attachment 1 shows a five-and-a-half-day minute-trend of the three temperature sensors. Logging started last Thursday ~2 pm, when all sensors were finally deployed. While it appears that there is a 7 degree gradient along the XARM, it seems the "vertex" (really more like ITMX) sensor was placed right on top of a network switch (which feels lukewarm to the touch), so this needs to be fixed. A similar situation is observed for the ETMY sensor. I shall fix both later today.
Done. The temperature readings should now be more independent of nearby instruments.
Wed Aug 11 09:34:10 2021: I updated the plot with the full trend before and after rearranging the sensors.
This morning we noted that most optics were tripped, probably as a result of the recent M>5 earthquake in the area (on Sun 08/20). Most optics were restored and damped nicely, except for ITMY.
PMC locked to HOM --> realigned and locked
We aligned the PMC to maximize its transmission to ~0.670; after this, the IMC was locked and we engaged the WFS to recover the alignment.
ETMY oplev laser --> replaced, aligned, and locked
Most suspended optics were restored, but we noticed the OpLev SUMs on ETMY and ITMY were too low, so we checked the lasers on both optics. The ITMY HeNe laser is on, but the ETMY one is off. JC tested with a new laser head, and the controller was determined to be good. Then we tried resetting the previous head (labeled Oct 25 2020) but had no luck, so yet another HeNe laser has died. We removed the old one, and luckily our spare had the same form factor, so it wasn't hard to recover the nominal alignment. After this we verified that the OPLEV loops on ETMY were working.
ITMY local damping --> still "stuck" or worse
The local damping on ITMY is not working properly. This puts it in a weird alignment state, which is why we also don't see a large OpLev SUM count on the QPD. The shadow sensor (OSEM) signals are all small, the available rms monitors read ~0.0-0.1 mV, and kicking the optic around doesn't produce a corresponding OSEM signal, even when undamped. Therefore, we believe ITMY is either stuck (UR/LR) or worse. We tried the usual "shake" technique but didn't see any sensors restored.
[JC, Koji-remote, paco]
ITMY stuck --> Shaken remotely and restored, ARMS aligned
With Koji's assistance we restored ITMY (it was indeed stuck) and finished aligning both arms. Then JC centered the OpLevs for the ETMs, ITMs, and BS.
ITMY camera blinking --> Replaced camera
JC checked the situation with our ITMYF (face) camera, as the image seemed faulty and was blinking. The issue this time was not in the power supply, as it has been before, but in the CCD itself. After replacing the unit and realigning the arm cavity, we redrew the marker "guides" on the control room screen for quick reference.
While continuing our efforts to lock, we noticed the procedure failed at a point it had gotten past last night: turning on the bounce/roll filters in MICH, PRC, and SRC. We checked the MICH transfer function and noticed that the unity-gain frequency was ~10 Hz, well below the bounce modes. We tried increasing the gain but found saturation; Rob suggested there could be misalignment on the AP table, which Steve worked on today. We went out and found two of the PDs (ASDD133 and AS166) badly misaligned, probably due to a bumped optic upstream. We re-aligned them.
We ran cables from the suspension rack to the IOO rack to record the signals on DAQ channels.
The test channels:
UL coil: C1:IOO-MC_DRUM1 (Caryn was using this; we will restore it when we are done)
UL input: C1:IOO-MC_TMP1 (Caryn was using this; we will restore it when we are done)
LR coil: C1:PEM-OSA_SPTEMP
LR input: C1:PEM-OSA_APTEMP
We will leave these overnight; we intend to remove them tomorrow or Monday.
We closed the PSL shutter and killed the MC autolocker.
Last night we put the IFO in FP-Michelson configuration. We took transfer functions of CARM and DARM, first using common-mode excitations directly on the ETMs, and then modulating the laser frequency via an MC excitation. We found basically no coupling into DARM with the MC excitation, but there was coherence in DARM with the ETM excitation. Therefore, I tuned the ETM common mode in the output matrix. I did this by taking transfer functions of PD1_Q against PD2_I (see attached plot). I changed the drdown_bang script to set C1:LSC-BTMTRX_14 to 0.98 and C1:LSC-BTMTRX_24 to 1.02.
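The balancing logic, as a hedged sketch (an illustrative helper, not the actual drdown_bang code): if the two ETM actuators have slightly different gains, scale the output-matrix elements by the inverse of those gains so that a common drive moves both arms equally.

# Illustrative only: output-matrix entries inversely proportional to the
# relative actuator gains, normalized so the two elements average to 1.
def balanced_elements(g_x, g_y):
    mean = 0.5 * (g_x + g_y)
    return mean / g_x, mean / g_y

# a 2% gain imbalance yields elements close to the 0.98 / 1.02 set above
print(balanced_elements(1.02, 0.98))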
We installed the watchLockLoss script in scripts/AutoDTT/. This script monitors the arm power and uses command-line DTT to save a 5 s snapshot of the interferometer when it senses a loss of lock. We ran it on linux and it seemed to save an xml file about half the time; we'll try it on solaris.
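For reference, the logic is roughly the following (a minimal sketch, not the installed script; the channel name, threshold, and snapshot wrapper are all assumptions):

import subprocess, time
from epics import caget  # pyepics, assumed available on the workstation

THRESH = 0.5  # arm power below this counts as unlocked (placeholder value)

locked = False
while True:
    trx = caget("C1:LSC-TRX_OUT16")  # assumed arm-power channel name
    if trx is not None and trx > THRESH:
        locked = True
    elif locked:  # falling edge: we just lost lock
        locked = False
        # hypothetical wrapper around command-line DTT that saves a 5 s
        # snapshot of the interferometer channels to an xml file
        subprocess.run(["./save_dtt_snapshot.sh"])
    time.sleep(1)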
I managed to get up to an arm power of about 20 a couple of times. The IFO lost lock a couple of times after turning off the moving zero. MC2 would often get tripped by a lock loss and need resetting. Maybe we will try to stiffen the
At about 1 am, Yoichi and I opened VC1. CC1 had fallen to about 5e-5 torr.
I've plotted TRX, TRY, PD12I and PD11Q. Arm powers after locking increase for a few tens of minutes, peak out, and then decrease before lock is lost.
I should have mentioned that the AS port camera image seems to get progressively uglier over the course of these locks. Maybe we can use the JoeCam to make a movie of it.
Locks last for about an hour; this was true last night as well (see the "arm power curve" entries). The second lock shown here evolves differently for unknown reasons. The jumps in the arm powers of the first lock are due to turning on DC readout. Length-to-angle needs tuning.
The align script was run after the third lock here. It would have been interesting to see the arm powers in a fourth lock.
After looking at some oplev noise spectra in DTT, we discovered that the ETMY quad (serial number 115) was noisy. In particular, in the XX_OUT and XX_IN1 channels, quadrants 2 and 4 were noisy (by a bit more and a bit less than an order of magnitude over the ETMX reference, respectively). We went out and looked at the signals coming out of the oplev interface board; again, channels 2 and 4 were noisy compared to 1 and 3 by about these same amounts. I popped in the ETMX quad and everything looked fine. I put the ETMX quad back at ETMX and popped in Steve's scatterometer quad (serial number 121, or possibly 151; it's not terribly legible), and it looks fine. We zeroed via the offsets in the control room, and I went out and centered both the ETMX and ETMY quads.
Attached is a plot. The reference curves are with the faulty quad (115). The others are with the 121.
I adjusted the ETMY quad gains up by a factor of 10 so that the SUM is similar to what it was before.
It seems that the MC3 problem is intermittent (one-day trend attached). I tried to take advantage of a "clean MC3" night, but the watch script would usually fail at the transition to DC CARM and DARM. It got past this twice and then failed later, during powering up. I need to check the handoff.
I checked the four rear coils on ETMX by exciting the XXCOIL_EXC channels in DTT with amplitude 1000 @ 500 Hz and observing the oplev PERROR and YERROR channels. Each coil showed a clear signal in PERROR, about 2e-6 cts. Anyway, the coils passed this test.
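Pulling the line height out of the error signal is just a single-bin demodulation; a generic sketch (DTT does this internally, and the sample rate here is an assumption):

import numpy as np

def line_amplitude(x, fs, f0=500.0):
    """Amplitude of a sinusoid at f0 [Hz] in time series x sampled at fs [Hz]."""
    t = np.arange(len(x)) / fs
    z = np.mean(x * np.exp(-2j * np.pi * f0 * t))
    return 2.0 * np.abs(z)

# self-test with a synthetic 2e-6 ct line buried in noise
fs = 16384
t = np.arange(10 * fs) / fs
x = 2e-6 * np.sin(2 * np.pi * 500.0 * t) + 1e-7 * np.random.randn(t.size)
print(line_amplitude(x, fs))  # ~2e-6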
I found some neat signal analysis software for my mac (http://www.faberacoustical.com/products/), and took a spectrum of the ambient noise coming from the cryopump. The two main noise peaks from that bad boy were nowhere near 3.7 kHz.
I also made transfer functions of the 4 piston coils on ETMY and ETMX with OL_PIT. (I looked at all 4, even though the attached plot only shows three.) So it looks like the coils are OK.
Looks like something went nuts in late April. We have yet to try a hard reboot.
We worked on tuning the DD handoff tonight. We checked the DD PD alignments and they looked fine. First I tuned the 3 demod phases to minimize offsets. Then I noticed that the post-handoff MICH transfer function needed an increase in gain to look like the pre-handoff one (which has a UGF of about 25 Hz). I increased the MICH PD9_Q gain from 2 to 7 in the input matrix. But the handoff to PRC still failed, so tomorrow we will try to find out why.
In the plot, ref0 is before the MICH handoff, and ref1 is after the MICH handoff. There is also a PRC trace (before the PRC handoff).
rob, alberto, rana, pete
we reset this computer, which was out of sync (16384 in the FE_SYNC field instead of 0)
Rana, Alberto, Pete
We have the DD handoff nominally working. Sometimes, increasing the SRC gain at the end makes MICH go unstable. This could be due to an off-diagonal term in the matrix, or possibly because the DRM sometimes locks in a funky mode.
To get the DD handoff working, we first tuned the demod phases to zero the offsets in the signals being handed off to. Based on transfer function measurements, I set the PRC PD6_I element to 0.1 and set the PD8_I element to 0, since it didn't seem to be contributing much. We also commented out the MICH gain increase at the end of the DD_handoff script.
It could still be more stable, but it seems to work most of the time.
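The demod-phase tuning itself reduces to a small rotation (a sketch of the arithmetic, not of any particular tool we used): with a rotated in-phase signal I' = I*cos(phi) + Q*sin(phi), the static offset in I' is nulled by phi = atan2(-I_off, Q_off).

import numpy as np

# Sketch: demod phase (degrees) that zeros the static offset in the rotated
# in-phase signal, given the measured I/Q offsets.
def nulling_phase(i_off, q_off):
    return np.degrees(np.arctan2(-i_off, q_off))

print(nulling_phase(0.5, 0.5))  # -45 deg rotates the offset entirely into Q'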
I played with the DD handoff during the day. The DRM dark port was flickering like a candle flame in Dracula's castle. The demod offsets for the handoff signals looked fine. After the MICH handoff, MICH_CTRL started to go unstable at some low frequency, maybe 3 Hz (I didn't measure). So I increased the MICH gain from 0.1 to 0.17 and it settled down. PRC and SRC went fine. Then the DD_handoff script raised the MICH gain to 0.7, and an instability started to grow in MICH_CTRL (at some higher frequency). I decreased the MICH gain from 0.7 to 0.5, and it settled down and stayed stable.
Looks like yesterday was particularly noisy. It's unclear to me why the diurnal variation is so much more visible in MC1_Y, or why the floor wanders.
The first plot shows 5 days. The second plot shows 20 days.
After fixing the testpoint problem, I tried locking again. Grabbing and DD handoff: no problem. It died earlier than last night, while handing off CARM to REFL_DC, around an arm power of 4 or so. This seems to happen after turning off the moving zero; Rob says it might be touchy in the daytime.
Last night Rob ran senseDRM and loadDRMImatrixData and came up with the following for the input matrix:
tdswrite C1:LSC-ITMTRX_b2 0.065778 \
C1:LSC-ITMTRX_d2 2.2709 \
C1:LSC-ITMTRX_f2 2.9361 \
C1:LSC-ITMTRX_122 0.42826 \
C1:LSC-ITMTRX_b3 -0.064839 \
C1:LSC-ITMTRX_d3 -0.016913 \
C1:LSC-ITMTRX_f3 -0.021576 \
C1:LSC-ITMTRX_123 -0.0025243 \
C1:LSC-ITMTRX_b5 0.3719 \
C1:LSC-ITMTRX_d5 1.3109 \
C1:LSC-ITMTRX_f5 -0.16412 \
C1:LSC-ITMTRX_125 0.39574 \
C1:LSC-ITMTRX_33 0 \
C1:LSC-ITMTRX_42 0
Today, I reran these and got the following, and DD_handoff remained happy:
tdswrite C1:LSC-ITMTRX_b2 -0.10329 \
C1:LSC-ITMTRX_d2 2.0344 \
C1:LSC-ITMTRX_f2 3.2804 \
C1:LSC-ITMTRX_122 0.22516 \
C1:LSC-ITMTRX_b3 -0.076292 \
C1:LSC-ITMTRX_d3 -0.014603 \
C1:LSC-ITMTRX_f3 -0.12101 \
C1:LSC-ITMTRX_123 0.0054128 \
C1:LSC-ITMTRX_b5 0.33521 \
C1:LSC-ITMTRX_d5 1.1425 \
C1:LSC-ITMTRX_f5 -0.32759 \
C1:LSC-ITMTRX_125 0.25877 \
C1:LSC-ITMTRX_33 0 \
C1:LSC-ITMTRX_42 0
I wanted to remeasure with the canonical output matrix (-0.7 from MICH to PRM and 0.7 from MICH to SRM), but the DRM freaked out when MICH to PRM went below -0.3.
I added a temporary channel to input 9 on the PEM ADCU. Beware inputs 30, 31, and 32: I tried 32 and it only gave noise.
I compiled and ran a simple (i.e., empty) front-end controller on scipe12 at Wilson House. I hooked a signal into the ADC and watched it in the auto-generated medm screens.
There were a couple of gotchas:
1. Add an entry for SYS to the /etc/setup_shmem.rtl line in /etc/rc.local, where SYS.mdl is the model file.
2. If necessary, do a BURT restore; or, in the case of a mockup, set the BURT Restore bit (in SYS_GDS_TP.adl) to 1.
Yesterday, Jay brought over the IO box for megatron, and got it working. We plan to firewall megatron this afternoon, with the help of Jay and Alex, so we can set up GDS there and play without worrying about breaking things. In the meantime, we went to Wilson House to get some breakout boards so we can take transfer functions with the 785, for an ETMX controller. We put in a sine wave, and all looks good on the auto-generated epics screens, with an "empty" system (no filters on). Next we'll load in filters and take transfer functions.
Unfortunately we promised to return the breakout boards by 1pm today. This is because, according to denizens of Wilson House, Osamu "borrowed" all their breakout boards and these were the last two! If we can't locate Osamu's cache, they expect to have more in a day or two.
Here is the transfer function of the through filter running at 16 kHz sampling. It looks fine except that the dc gain is ~0.8. Koji is going to characterize the digital downsampling filter in order to compare with the generated code and the filter coefficients.
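One quick way to check what dc gain a set of coefficients implies is to evaluate the response at z = 1. A generic sketch (the cheby1 design below is a stand-in, not the actual RCG decimation filter):

import numpy as np
from scipy import signal

# stand-in low-pass in second-order sections; swap in the real coefficients
sos = signal.cheby1(8, 0.05, 0.4, output="sos")
_, h = signal.sosfreqz(sos, worN=[0.0])  # frequency response at dc (z = 1)
print("dc gain:", np.abs(h[0]))  # ~1 for this design; a value of ~0.8 would
                                 # flag a normalization issue in the coefficients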
Alex has firewalled megatron. We have started a framebuilder there and added testpoints. Now it is possible to take transfer functions with the shared memory MDC+MDP sandbox system. I have also copied filters into MDC (the controller) and made a really ugly medm master screen for the system, which I will show to no one.
Yesterday we found that the channel C1:MDP-POS_EXC looked distorted in dataviewer, with what appeared to be doubled frequency components. This was because the dcu_rate in /caltech/target/fb/daqdrc was set to 16K while the adl file was set to 32K. When daqdrc was corrected, the problem was fixed. I am going to recompile and run all these models at 16K. Once the 40m moves over to the new front-end system, we may find it advantageous to take advantage of the faster speeds, but it's probably a good idea to get everything working at 16K first.
We put a simple pendulum into the MDP model, and everything communicates. We're still having some kind of TP or daq problem, so we're still in debugging mode. We went back to 32K in the .adl files, and when driving MDP, the MDC-ETMX_POS_OUT signal looks nasty: it follows the sine-wave envelope but goes to zero 16 times per second.
The breakout boards have arrived. The plan is to fix this daq problem, then demonstrate the model MDC/MDP system. Then we'll switch to the "external" system (called SAM) and match control TF to the model. Then we'd like to hook up ETMX, and run the system isolated from the rest of the IFO. Finally we'd like to tie it into the IFO using reflective memory.
The daq on megatron was nuts. Alex and I discovered that there was no gds installation for site_letter=C (i.e., Caltech), so the default M (for MIT) was being used. Apparently we are the first Caltech installation. We added the appropriate line to the RCG Makefile, then recompiled and reinstalled (at 16K). Now dataviewer looks good on MDP and MDC, and I made a transfer function that replicates the bounce-roll filter, so DTT works too.