PRM, SRM and the ENDs are kicking up. Computers are down. PMC slider is stuck at low voltage.
Still not able to resolve the issue.
Except for c1lsc, the models are not running on any of the FE machines. I can ssh into all the machines, but could not restart the models on the FEs using the usual rtcds restart <modelname>
Something happened around 4AM (inferring from the Striptool on the wall) and the models have not been running since then.
Diego Bersanetti received 40m specific safety training today.
Everything seems to be back up and running.
The computers weren't such a big problem (or at least didn't seem to be). I turned off the watchdogs, and remotely rebooted all of the computers (except for c1lsc, which Manasa already had gotten working). After this, I also ssh-ed to c1lsc and restarted all of the models, since half of them froze or something while the other computers were being power cycled.
However, this power cycling somehow completely screwed up the vertex suspensions. The MC suspensions were fine, and SRM was fine, but the ITMs, BS and PRM were not damping. To get them to kind of damp rather than ring up, we had to flip the signs on the pos and pit gains. Also, we were a little suspicious of potential channel-hopping, since touching one optic was occasionally time-coincident with another optic ringing up. So, no hard evidence on the channel hopping, but suspicions.
Anyhow, at some point I was concerned about the suspension slow computer, since the watchdogs weren't tripping even though the OSEM sensor RMSes were well over the thresholds, so I keyed that crate. After this, the watchdogs tripped as expected when we enabled damping while the RMS was higher than the threshold.
I eventually remotely rebooted c1sus again. This totally fixed everything. We put all of the local damping gains back to the values at which we found them (in particular, undoing our sign flips), and everything seems good again. I don't know what happened, but we're back online now.
Q notes that the bounce mode for at least ITMX (haven't checked the others) is rung up. We should check if it is starting to go down in a few hours.
Also, the FSS slow servo was not running, we restarted it on op340m.
T-240 has a different convention than we use. It assumes that North is aligned with the Y-axis. Since this is the new guy, and we've been using the Guralps with North = X for many years, Diego and I rotated the T-240, and put a label on it that N/S is Y, and E/W is X. Obviously Vert is still Z.
We noticed last night that the yarm couldn't handle the old nominal gain for the ASS servos. We were able to run the ASS using about 0.3 in the overall gain. So, I have reduced the gain in each of the individual servos by a factor of 3, so that the scripts work, and can set the overall gain to 1.
I took a quick look at single arm RIN. Actuating on MC2 vs. the ETM, or using AS55 instead of POY11 made no noticeable difference in the arm cavity RIN. Not too surprising, but there it is.
In order to do high quality huddle subtraction, we need to align the seismometer axes to high precision. We would need 1000x subtraction to see the instrument noise floor, but are likely to only get 100x. For that we need to align the axes to 0.5 deg (or do a Wiener coordinate transform with the data). To do this, we need to use a high quality bubble level and eventually iterate after trying out.
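As a sanity check on those numbers (a back-of-envelope model of my own, not from the elog): a misalignment theta between seismometer axes leaks a fraction ~sin(theta) of the orthogonal channel into the signal, limiting coherent subtraction to a factor of ~1/sin(theta).

```python
import math

# Rough cross-coupling model (my assumption): a misalignment theta between
# seismometer axes leaks ~sin(theta) of the orthogonal channel into the
# signal, so subtraction is limited to a factor of ~1/sin(theta).
def max_misalignment_deg(subtraction_factor):
    return math.degrees(math.asin(1.0 / subtraction_factor))

print(max_misalignment_deg(100.0))   # ~0.57 deg
print(max_misalignment_deg(1000.0))  # ~0.057 deg
```

So 100x subtraction tolerates ~0.57 deg of misalignment, consistent with the 0.5 deg target above, while 1000x would require ~0.057 deg, which is why 100x is the more realistic goal.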
We should strain relieve the seismometer cables on the slab. It should be a tight clamp so that acoustic vibrations on the cables are terminated at the clamp and don't get to the seismometers. The clamp can be attached to the slab using some strong epoxy.
I modified the /cvs/cds/caltech/target/c1psl/psl.db file to adjust the records for the FSS-FAST signal (to make it go yellow / red at the correct voltages). This was needed to match 5V offset which Koji added to the output of the FSS board back in August.
I also manually adjusted the alarm levels with caput so that we don't have to reboot c1psl. Beware of potential timebomb / boot issues if I made a typo! The psl.db update is in the SVN (also, there were ~12 uncommitted changes in that directory... please be responsible and commit ALL changes you make in the scripts directory, even if it's just a little thing and you are too cool for SVN)
We've known for years that the IMC WFS sensing chain is pointlessly bad, but until recently, we haven't thought it was worth it to fix.
There are problems in all parts of the chain:
I aligned the beam into the PMC, mostly in yaw. Don't know why it drifted, but it was annoying me, so I fixed it.
Elog from ~5am last night:
Tonight was just several trials of PRFPMI locking, while trying to pay more attention to the lockloss plots each time.
I tried once to acquire DRMI on 1f while the arms were held off resonance. I wasn't catching lock, so I went back to PRMI+arms. I aligned the PMC, which I noted in a separate elog.
I was able to hold the PRMI on REFL33I&Q, and have ALS CARM and DARM at zero CARM offset. The arm would "buzz" through the resonance regularly. I use the word buzz because that's kind of what it sounded like. This is the noise of the ALS system.
I think we want to add the transmission QPD angular signals to the frames. Right now, we just keep the sums. It would have been handy to glance at them, and see if they were wiggling in the same way that some other signal was waggling.
All the data files are in /opt/rtcds/caltech/c1/scripts/LSC/LocklossData. Each folder is one lockloss. It includes text files for each trace, as well as any plots that I've made, and any notes taken. The text files are several MB each, so I'm not going to bog the elog down with them. There are a few folders that end in "_notInteresting". These ones are, as one might guess, not interesting. 2 were MC locklosses (I'm not actuating on MC2, so I declared these independent from my work) and one was when I knew that my ALS was bad - the beatnotes weren't in good places, and so the ALS noise was high.
Working notes: Lost lock because POP22 went too low. PRCL and MICH triggered off. After this, changed PRCL and MICH "down" thresholds to 0.5, from 10.
Conclusion: Easy fix. Changed the down thresholds for MICH and PRCL to be lower, but still high enough that they will trigger off for a true lockloss. Why though do we lose so much sideband power when the arm transmission goes high? POP22 dipped below 10 when TRX went above 29. Does this happen on both sides of the CARM offset? Quick simulation needed.
Working notes: PRFPMI, reducing CARM offset to arm powers of 7. CARM on sqrtInv, DARM on DCtrans. PRMI on REFL33 I&Q. Don't know why I lost lock. Maybe angular stuff in PRC? I think POP spot was moving in yaw as it started to go bad.
Note, later: regathered data to also get POP angular stuff. Don't think it's POP angular. Not sure what it is.
Conclusion: I'm not sure what this lockloss was caused by, although it is not something that I can see in the POP QPD (which was my initial suspicion). It is, like many of the rest of the cases, one where I see significant bounce and roll mode oscillations (error and control signals oscillating at 16 and 24 Hz). I don't think those are causing the locklosses though.
Working notes: PRFPMI, carm_up script finished, sitting at arm powers of 8. CARM, DARM on DC trans. PRMI on REFL33. Don't know why lost lock.
[Don't have any? - I'll make some]
Conclusion: Again, I see 16 and 24 Hz oscillations, but I don't think those are causing the lockloss.
Working notes: PRFPMI, arms about 8. CARM, DARM on DC trans. PRMI on REFL33. Don't know why I lost lock.
Conclusion: Don't have an answer.
Working notes: Lockloss while going to arm powers of 7ish from 6ish. Not POP angular, POP22 didn't go low.
Conclusions: This one wasn't from POP22 going too low, but again, I don't see anything other than 16 and 24Hz stuff.
I am still staring at / trying to figure out the latter 4 locklosses posted earlier. But, I have just included the transmission QPD angular output signals to the frames, so we should be able to look at that with locklosses tonight.
To get the lockloss plots: in ..../scripts/LSC/LocklossData/ , first run ./FindLockloss.sh <gps time>. This just pulls the TRX and TRY data and doesn't save it, so it is pretty quick. Adjust the gps time until you capture the lockloss in your plot window. Then run ./LockLossAutoPlot.sh <gps time> to download and save the data. Since there are now so many channels, it first makes a plot with all of the error and control signals, and then a plot with the power levels and angular signals. The data folder is just called <gps time>. I have also started including a text file called notes inside the folder, with things that I notice in the moment when I lose lock. Don't use .txt for the suffix of the notes file, since the ./PlotLockloss.py <folder name> script that plots the data after the fact tries to plot all .txt files. I have also been appending the folder names with keywords, particularly _notInteresting or _unknown for either obvious lockloss causes or mysterious lockloss cases.
./FindLockloss.sh <gps time>
./LockLossAutoPlot.sh <gps time>
./PlotLockloss.py <folder name>
Here is a plot of when the arm powers went pretty high from last night. CARM and DARM were on ALS comm and diff, PRMI was on REFL33 I&Q. I set the CARM offset so that I was getting some full arm resonances, and it goes back and forth over the resonance.
The Y axes aren't perfect when I zoom, but the maximum TRX value was 98 in this plot, while the max TRY value was 107.
MICH_OUT was hitting its digital rails sometimes, and also it looks like PRCL and MICH occasionally lost lock for very brief periods of time.
Glitch-like events in PRCL_OUT are at the edges of these mini-locklosses. I don't know why POPDC has glitch-y things, but we should see if that's real.
Okay, I've zoomed in a bit, and have found that, interestingly, I see that both POP22 and POP110 decrease, then increase, then decrease again as we pass through full resonance. This happens in enough places that I'm pretty sure we're not just going back and forth on one side of the resonance, but that we're actually passing through it. Q pointed out that maybe our demod phase angle is rotating, so I've made some zoom-in plots to see that that's not a significant effect. I plot the I and Q phases individually, as well as the total=sqrt(I**2 + Q**2), along with TRY (since the increases and decreases are common to both arms, as seen in the plot above).
For POP 22:
For POP 110:
I also plot the MICH and PRCL error signals along with TRY and POP22 total. You can see that both MICH and PRCL were triggered off about 0.1msec after POP22 total hit its first super low point. Then, as soon as POP22 becomes large enough, they're triggered back on, which happens about 1.5msec later. (The triggering was actually on POP22I, not POP22tot, but the shapes are the same, and I didn't want to overcrowd my plots).
I am not sure if we consistently lose sideband signal in the PRC more on one side of the CARM resonance than the other, but at least POP22 and POP110 are both lower on the farther side of this particular peak. I want to think about this more in relation to the simulations that we've done. One of the more recent things that I see from Q is from September: elog 10502, although this is looking specifically at the REFL signals at 3f, not the 2f circulating PRCL power as a function of CARM offset.
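A side note on why plotting total = sqrt(I**2 + Q**2) was the right diagnostic for the demod-phase question above: the total is invariant under a demod phase rotation, so a slowly rotating phase cannot explain dips in the total. A trivial check of my own:

```python
import math

def rotate_iq(i, q, phi):
    """Rotate the I/Q pair by a demodulation phase phi (radians)."""
    return (i * math.cos(phi) + q * math.sin(phi),
            -i * math.sin(phi) + q * math.cos(phi))

i0, q0 = 3.0, 4.0
i1, q1 = rotate_iq(i0, q0, math.radians(30.0))
# I and Q individually change, but total = sqrt(I**2 + Q**2) does not:
assert abs(math.hypot(i1, q1) - math.hypot(i0, q0)) < 1e-12
```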
From the measured OLTF, the dynamics of the damped suspension were inferred by calculating H_damped = H_pend / (1+OLTF).
Here H_pend is a pendulum transfer function; for simplicity, a DC gain of unity is used. The resonant frequency of each mode
is estimated from the OLTF measurement. Because the resonant frequency of each mode is imprecise, the calculated damped pendulum response
has glitches at the resonant frequencies. In fact, the measurement of the OLTF at the resonant frequencies was not precise (of course). We can
just ignore this glitchiness (numerically I don't know how to remove it, particularly when the residual Q is high).
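The H_damped = H_pend / (1 + OLTF) inference can be sketched like this (a minimal illustration; the unity-DC-gain pendulum is from the text, the example numbers are made up):

```python
def pendulum_tf(f, f0, q):
    """Pendulum transfer function with unity DC gain (as in the text)."""
    x = f / f0
    return 1.0 / (1.0 - x * x + 1j * x / q)

def damped_tf(f, f0, q, oltf):
    """Damped suspension response inferred from the measured OLTF at f."""
    return pendulum_tf(f, f0, q) / (1.0 + oltf)

# At resonance the free pendulum magnitude is Q; a loop gain g there
# suppresses the peak by ~(1+g) (example numbers are assumptions):
f0, q = 0.98, 200.0
free = abs(pendulum_tf(f0, f0, q))        # 200
damped = abs(damped_tf(f0, f0, q, 39.0))  # 200/40 = 5, i.e. residual Q ~ 5
```

This also shows why a mis-estimated f0 produces the glitches mentioned above: near resonance both numerator and denominator vary rapidly, so small frequency errors give large errors in the ratio.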
Here are my recommended values to get a residual Q of 3~5 for each mode.
MC1 SUS POS current 75 -> x3 = 225
MC1 SUS PIT current 7.5 -> x2 = 22.5
MC1 SUS YAW current 11 -> x2 = 22
MC1 SUS SD current 300 -> x2 = 600
MC2 SUS POS current 75 -> x3 = 225
MC2 SUS PIT current 20 -> x0.5 = 10
MC2 SUS YAW current 8 -> x1.5 = 12
MC2 SUS SD current 300 -> x2 = 600
MC3 SUS POS current 95 -> x3 = 300
MC3 SUS PIT current 9 -> x1.5 = 13.5
MC3 SUS YAW current 6 -> x1.5 = 9
MC3 SUS SD current 250 -> x3 = 750
These are the settings currently in place:
MC1 SUS POS 150
MC1 SUS PIT 15
MC1 SUS YAW 15
MC1 SUS SD 450
MC2 SUS POS 150
MC2 SUS PIT 10
MC2 SUS YAW 10
MC2 SUS SD 450
MC3 SUS POS 200
MC3 SUS PIT 12
MC3 SUS YAW 8
MC3 SUS SD 500
Similar to what Jenne did the other night, I kept the PRFPMI arm DoFs locked on ALS, in hopes of checking out the RF error signals.
I was able to stably sit at nominally zero offset in both CARM and DARM, tens of minutes at a time, and the PRMI could reacquire without a fuss. Arm powers would rest between 10 and 20, intermittently exhibiting the "buzzing" behavior that Jenne mentioned when passing through resonance. 100pm CARM offset means arm powers of around 10, so since our ALS RMS is on this order, this seems ok. I saw TRX get as high as 212 counts, which is just about the same that I've simulated as the maximum power in our IFO.
To get this stable, I turned off all boosts on MICH and PRCL except PRCL FM6, and added matrix elements of 0.25 for TRX and TRY in the trigger line for the PRMI DoFs. The logic for this is that if the arm powers are higher than 1, power recycling is happening, so we want to keep things above the trigger down value of 0.5, even if POP22 momentarily drops.
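The triggering described above is effectively a hysteresis comparator acting on a weighted sum of signals; a rough sketch (function names and the exact comparator behavior are my assumptions, not the actual RCG trigger code):

```python
def trigger_value(pop22i, trx, try_):
    """Weighted trigger input (matrix elements from the text): the arm
    transmission terms keep the PRMI loops on even if POP22 dips."""
    return 1.0 * pop22i + 0.25 * trx + 0.25 * try_

def update_trigger(value, engaged, up=50.0, down=0.5):
    """Hysteretic trigger: engage above `up`, release below `down`
    (a sketch of the behavior, not the real implementation)."""
    if not engaged and value >= up:
        return True
    if engaged and value < down:
        return False
    return engaged

# POP22 drops to zero, but power-recycled arms hold the trigger on:
assert update_trigger(trigger_value(100.0, 5.0, 5.0), engaged=False)
assert update_trigger(trigger_value(0.0, 5.0, 5.0), engaged=True)
assert not update_trigger(trigger_value(0.0, 0.2, 0.2), engaged=True)
```

With arm powers above 1, the 0.25*TRX + 0.25*TRY terms alone keep the value above the 0.5 down-threshold, which is exactly the stated intent.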
I also played around a bit with DARM offsets. We know from experience that the ALS IR resonance finding is not super precise, and thus zero in the CARM FM is not zero CARM offset when on ALS. The same obviously holds for DARM, so I moved the DARM offset around, and could see the relative strengths of flashes change between the arms as expected.
I've written down some GPS times that I'm going to go back and look at, to try to back out some information about our error signals.
Lastly, there may be something undesirable happening with the TRX QPD; during some buzzing, the signal would fluctuate into negative values and did not resemble the TRY signal as it nominally would. Perhaps the whitening filters are acting up...
[Steve, Diego, Manasa]
Since the beatnotes have disappeared, I am taking this as a chance to put the FOL setup together hoping it might help us find them.
Two 70m long fibers now run along the length of the Y arm and reach the PSL table.
The fibers are running through armaflex insulating tubes on the cable racks. The excess length ~6m sits in its spool on the top of the PSL table enclosure.
Both fibers tested OK using the fiber fault locator. In the process, we had to remove the coupled end of the fiber from its mount and put it back. So there is only 8mW of end laser power at the PSL table after this activity, as opposed to ~13mW. This will be recovered with some alignment tweaking.
After the activity I found that the ETMY wouldn't damp. I traced the problem to the ETMY SUS model not running in c1iscey. Restarting the models in c1iscey solved the problem.
AP Armaflex tube 7/8" ID X 1" wall insulation for the long fiber in wall mounted cable trays installed yesterday.
The 6 ft long sections are not glued. They are cable-tied into the tray and pressed against one another, so they are air tight. This will allow us to add more fibers later.
Atm2: Fiber PSL ends protection added on Friday.
Two 70m long fibers are now running through armaflex insulating tubes along the X arm on the cable racks. The excess length of the fiber sits in its spool on top of the PSL enclosure.
Fibers were checked after this with the fiber fault locator (red laser) and found OK.
Steve had me measure the RIN of a JDSU HeNe laser. I used a PDA520, and measured the RIN after the laser had been running for about an hour, which let the laser "settle" (I saw the low frequency RIN fall after this period).
Here's the plot and zipped data.
Steve: brand new laser with JDSU 1201 PS
EDIT: some images look bad on the elog, and the notebook is parsed, which is bad. Almost everything posted here is in the compressed file attachment.
As we've been discussing, we want to reduce the laser's jitter effect on the QPDs of the OpLevs, without losing sensitivity to angular motion of the mirror; the current setup is roughly described in this picture:
The idea is to place an additional lens (or lenses) between the mirror and the QPD, as shown in the proposed setup in this picture:
I did some ray tracing calculations to find out how the system would change with the addition of the lens. The step-by-step calculations are done at the several points shown in the pictures, but here I will just summarize. I chose to put the telescope at a variable relative distance x from the QPD, such that x=0 at the QPD, and x=1 at the mirror.
Here are the components that I used in the calculations:
I used a 3x3 matrix formalism in order to simplify the calculations and reduce everything to matrix multiplications; that is because the tilted mirror has an annoying additive term, which I could thereby get rid of:
Therefore, in the results, the third line is a dummy line and has no meaning.
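A minimal numeric sketch of this augmented 3x3 trick (the helper names and example numbers are my own illustration; the ray vector is (r, theta, 1), with the constant third component absorbing the mirror's additive 2*alpha kick):

```python
def apply(m, v):
    """Apply a 3x3 augmented-ray matrix to a ray vector (r, theta, 1)."""
    return [sum(m[i][j] * v[j] for j in range(3)) for i in range(3)]

def free_space(d):
    return [[1.0, d, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]

def tilted_mirror(alpha):
    # The additive 2*alpha angle kick lives in the third column
    return [[1.0, 0.0, 0.0], [0.0, 1.0, 2.0 * alpha], [0.0, 0.0, 1.0]]

def thin_lens(f):
    return [[1.0, 0.0, 0.0], [-1.0 / f, 1.0, 0.0], [0.0, 0.0, 1.0]]

alpha = 200e-6  # illustrative mirror tilt [rad]

# No telescope: mirror tilt shows up at the QPD as r' = 2*d*alpha
ray = apply(free_space(2.0), apply(tilted_mirror(alpha), [0.0, 0.0, 1.0]))
# ray[2] is the dummy third line and stays 1

# A 2f-2f relay imaging the mirror onto the QPD nulls the tilt signal
ray_imaged = apply(free_space(1.0), apply(thin_lens(0.5),
             apply(free_space(1.0), apply(tilted_mirror(alpha),
             [0.0, 0.0, 1.0]))))
```

As a check of the formalism: the imaged case gives zero displacement at the QPD from mirror tilt, which is the Gamma zero-crossing that the plots below are meant to avoid.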
For the first case (first schematic), we have, for the final r and Theta seen at the QPD:
In the second case, we have a quite heavy output, which depends also on x and f:
Now, some plots to help understand the situation.
What we want is to reduce the effect of the laser jitter, without sacrificing the sensitivity to the mirror signal. I defined two quantities:
Beta is the laser jitter we want to reduce, while Gamma is the mirror signal we don't want to lose. I plotted both of them as a function of the position x of the new lens, for a range of focal lengths f. I used d1 = d2 = 2m, which should be a realistic value for the 40m's OpLevs.
Plot of Beta
Plot of Gamma
Even if it is a bit cluttered, it is useful to see both on the same plot:
Plot of Beta & Gamma
Apart from any horrific mistakes that I may have made in my calculations, it seems that for converging lenses our signal Gamma is always reduced more than the jitter we want to suppress. For diverging lenses, the opposite happens, but we would have to put the lens very near the mirror, which is somehow not what I would expect. Negative values of Beta and Gamma should mean that the final values at the QPD are on the opposite side of the axis/center of symmetry of the QPD with respect to their initial position.
I will stare at the plots and calculations a bit more, and try to figure out if I missed something obvious. The Mathematica notebook is attached.
I stared a bit longer at the plots and, thanks to Eric's feedback, I noticed I had paid too much attention to the comparison between Beta and Gamma and not enough to the fact that Beta has some zero-crossings...
I made new plots, focusing on this fact and using some real values for the focal lengths; some of them are still a bit extreme, but I wanted to plot also the zero-crossings for high values of x, to see if they make sense.
If we are not interested in the sign of our signals/noises (apart from knowing what it is), it is maybe more clear to see regions of interest by plotting Beta and Gamma in absolute value:
I don't know if putting the telescope far from the QPD and near the mirror has some disadvantage, but that is the region with the most benefit, according to these plots.
The plots shown so far only consider the coefficients of the various terms; this makes sense if we want to exploit the zero-crossing of Beta's coefficient and see how things work, but the real noise and signal values also depend on Alpha and Theta themselves. Therefore I made another kind of plot, where I plot the ratio r'(Alpha)/r'(Theta) and called it Tau. This may be, in a very rough way, an estimate of our "S/N" ratio, as Alpha is the tilt of the mirror and Theta is the laser jitter; in order to plot this quantity, I had to introduce the laser parameters r and Theta (taken from the Edmund Optics 1103P datasheet), and also estimate a mean value for Alpha; I used Alpha = 200 urad. In these plots, the contribution of r'(r) is not considered because it doesn't change when adding the telescope, and it is overall small.
In these plots the dashed line is the No Telescope case (as there is no variable quantity), and after the general plot I made two zoomed subplots for positive and negative focal lengths.
If these plots can be trusted as meaningful, they show that for negative focal lengths our tentative "S/N" ratio is always decreasing, which, given the plots shown before, makes little sense: although for these negative f Gamma never crosses zero, Beta surely does, so I would expect one singular value for each.
Take-away for the night: We need to do some more fine-tuning of the PRCL and MICH loops when we have arm resonance.
Koji sat with me for the first part of the night, and we looked back at the data from last week (elog 10727), as well as some fresh data from tonight. Looking at the spectra, we noticed that last week, and early in the evening today, I had a fairly broad peak centered around ~51Hz. We are not at all sure where this is coming from. The PRMI was locked on REFL 33 I&Q, and CARM and DARM were both on ALS comm and diff. This peak would repeatably come and go when I changed the CARM offset. At high arm powers (above a few tens? I don't know where exactly), the peak would show up. Move off resonance, and the peak goes away. However, later in the night, after an IFO realignment, I wasn't able to reproduce this effect. So. We aren't sure where it comes from, but it is visible only in the CARM spectra, so there's some definite feedback funny business going on.
Anyhow, after that, since I couldn't reproduce it, I went on to trying to hold the PRMI at high arm powers, but wasn't so successful. I would reduce the CARM offset, and instead of a 50Hz peak, I would get broadband noise in the PRMI error signals, that would eventually also couple in to the CARM (but not DARM) error signal, and I would lose PRMI lock. I measured the PRCL and MICH transfer functions while the arms were at some few units of power, and found that while MICH was fine, PRCL was losing too much phase at 100Hz, so I took away the FM3 boost. This helped, but not enough. I had 1's in the triggering matrix for TRX and TRY to both PRCL and MICH, so that even if POP22 went low, if the arms were still locked then the PRMI wouldn't lose lock unnecessarily, but I was still having trouble. In an effort to get around this, I transitioned PRMI over to REFL 165 I&Q.
While the arms were held around powers of 2ish, I readjusted the REFL 165 demod phase. I found it set to 150 deg, but 75 deg is better for PRMI locking with the arms. For either acquiring or transitioning from REFL33, I would use REFL165I * -1.5 for PRCL, and REFL 165Q * 0.75 for MICH. (Actually, I was using -2 for REFL165I->PRCL, and +0.9 for REFL165Q->MICH, but I had to lower the servo gains, so doing some a posteriori math gives me -1.5 and +0.75 for what my matrix elements should have been, if I wanted to leave my servo gains at 2.4 for MICH and -0.02 for PRCL.) I don't always acquire on REFL165, and if it's taking a while I'll go back to putting 1's in the REFL33 I&Q matrix elements and then make the transition.
With PRMI on REFL 165 I&Q, I no longer had any trouble keeping the PRMI locked at arbitrarily high arm powers. I was still using 1*POP22I + 1*TRX + 1*TRY for triggering PRCL and MICH. My thresholds were 50 up, 0.1 down. The idea is that even if POP goes low (which we've seen about halfway up the CARM resonance), if we're getting some power recycling and the arms are above 1ish, then that means that the PRMI is still locked and we shouldn't un-trigger anything. I didn't try switching over to POP110 for triggering, because POP22 was working fine.
Earlier in the night, Koji and I had seen brief linear regions in POX and POY, as well as some of the REFL signals when we passed quickly through the CARM resonance. I don't have plots of these, but they should be easy to reproduce tomorrow night. Koji tried a few times to blend in some POY to the CARM error signal, but we were not ever successful with that. But, since we can see the PDH-y looking regions, there may be some hope, especially if Q tells us about his super secret new CESAR plan.
Okay, I'm clearly too tired to be writing, but here are some plots. The message from these is that the PRMI loops are causing us to fluctuate wildly in arm transmission power. We should fix this, since it won't go away by getting off of ALS. The plots are from a time when I had the PRMI locked on REFL165, and CARM and DARM were still on ALS comm and diff. All 3 of these colored plots have the same x-axis. They should really be one giant stacked plot.
Also, bonus plot of a time when the arm powers went almost to 200:
At Rana's request, I've made an in-situ measurement of the RIN of all of our OpLevs. PSL shutter closed, 10mHz BW. The OpLevs are not necessarily centered, but the counts on the darkest quadrant of each QPD are not more than a factor of a few lower than on the brightest quadrant; i.e. I'm confident that the beam is not falling off.
I have not attached the raw data, as it is ~90MB. Instead, the DTT template can be found in /users/Templates/OL/ALL-SUM_141125.xml
Here are the mean and std of the channels as reported by z avg 30 -s (in parentheses, I've added std/mean to estimate the RMS RIN):
SUS-BS_OLSUM_IN1 1957.02440999 1.09957708641 (5.62e-4)
SUS-ETMX_OLSUM_IN1 16226.5940104 2.25084766713 (1.39e-4)
SUS-ETMY_OLSUM_IN1 6755.87203776 8.07100449176 (1.19e-3)
SUS-ITMX_OLSUM_IN1 6920.07502441 1.4903816992 (2.15e-4)
SUS-ITMY_OLSUM_IN1 13680.9810547 4.71903560692 (3.45e-4)
SUS-PRM_OLSUM_IN1 2333.40523682 1.28749988092 (5.52e-4)
SUS-SRM_OLSUM_IN1 26436.5919596 4.26549117459 (1.61e-4)
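The parenthesized numbers are just std/mean; for example (values copied from the list above):

```python
# RMS RIN estimate = std / mean for each OL SUM channel (values from above)
olsum = {
    "BS":   (1957.02440999, 1.09957708641),
    "ETMX": (16226.5940104, 2.25084766713),
    "ETMY": (6755.87203776, 8.07100449176),
}
rin = {name: std / mean for name, (mean, std) in olsum.items()}
# BS ~ 5.6e-4, ETMX ~ 1.4e-4, ETMY ~ 1.2e-3 (the worst of these three)
```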
Dividing each spectrum from DTT by these mean values gives me this plot:
ETMY is the worst offender here...
Just to get our day started right, we tweaked up the alignment of the Ygreen to the Yarm (after IR alignment), and also touched up the X beatnote alignment on the PSL table. Ran the LSC offsets script, and then started locking.
All of the locking tonight has been based on CARM and DARM held on ALS comm/diff, and PRMI held on REFL165. Today, CARM was actuated using MC2. No special reason for the switch from ETMs. The AS port is noticeably darker when using REFL165 instead of REFL33.
Around 12:33am(ish), we were able to hold the arms at powers of about 100, for almost a minute. The fluctuations were at least 50% of that value, but the average was pretty high. Exciting.
Q and I tried a few times to engage the AO path while the arms were held at these high powers. Q hopefully remembers what the gain and sign values were where we lost lock. We didn't pursue this very far, since I was seeing the 50Hz oscillation that Koji and I saw the other day. I increased the CARM gain from 6 to 10, and that seemed to help significantly. Also, messing with the PRMI loops a bit helped. Q increased the pole frequency in FM 5 for both MICH and PRCL from 2k to 3k. While he had Foton open, he made sure that all of the LSC DoF filters use the z:p notation.
I then did a few trials of trying to transition CARM over to normalized REFL11I. Now that I'm typing, it occurs to me that I should have checked REFL11's demod phase. Ooops. Anyhow, using the phase that was in there, I turned on a cal line pushing on ETMs CARM, and found that using -0.002*REFL11I / (TRX + TRY) was the right set of elements. I also put an offset of 0.05 into the CARM CESAR RF place, and started moving. I tried several times, but never got past about 30% normalized REFL11 and 70% ALS comm.
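Schematically, the transition is a ramp between the ALS error signal and the normalized RF signal (a sketch using the -0.002 matrix element from the text; the function names and example inputs are mine):

```python
def normalized_rf_carm(refl11_i, trx, try_):
    """CARM error from REFL11I, normalized by the summed arm transmission
    (the -0.002 element is from the text)."""
    return -0.002 * refl11_i / (trx + try_)

def blended_carm(als_comm, refl11_i, trx, try_, rf_fraction):
    """Ramp CARM control from ALS (rf_fraction=0) to RF (rf_fraction=1)."""
    rf = normalized_rf_carm(refl11_i, trx, try_)
    return (1.0 - rf_fraction) * als_comm + rf_fraction * rf

# The attempts stalled around 30% RF / 70% ALS (example inputs are mine):
err = blended_carm(als_comm=0.1, refl11_i=-50.0, trx=4.0, try_=4.0,
                   rf_fraction=0.3)
```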
During these trials, Q and I worked also on tweaking up the PRMI lock. As mentioned last night, PRCL FM3 eats too much phase (~30deg at 100Hz!), so I don't turn that on ever. But, I do turn on FM1 (which is new tonight), FM2, 6, 8 and 9. FM8 is a flat gain of 0.6 that I use so I can have higher gain to make acquisition faster, but immediately turn the gain down to keep the loop in the center of the phase bubble. MICH needed a lowpass, so in addition to FM2, I am now also triggering FM 8, which is a 400Hz lowpass that was already in there.
Now, my MICH gain is 2.4, with +0.75*REFL165Q, and PRCL gain is -0.02 with -3*REFL165I. Triggering for both MICH and PRCL is 1*POP22I + 5*TRX with 50 up, 0.1 down.
In my latest set of locks, I have been losing lock semi-regularly due to a 100Hz oscillation in either the PRCL or MICH loops. If I watch the spectra, most times I take a step in CARM offset reduction, I get a broad peak in both the MICH and PRCL error signals. Most of the time, I stay locked, and the oscillation dies away. Sometimes though it is large enough to put me out of lock. I'm not sure yet where this is coming from, but I think it's the next thing that needs fixing.
Here is a shot of the spectra just as one of these 100Hz oscillations shows up. The dashed traces are the nominal error signals when I'm sitting at some CARM offset, and the solid traces are just after a step has been made. The glitch is only happening in the PRMI, not CARM and DARM.
We have done several things this evening, which have incrementally helped the lock stability. We are still locking CARM and DARM on ALS, and PRMI on REFL165.
Something that has been bothering me the last few days is that early in the evening, I would be able to get to very high arm powers, but later on I couldn't. I think this has to do with setting the contrast at the AS port separately for the sideband versus the carrier. I had been minimizing the AS port power with the arms held off resonance, PRMI locked. But, this is mostly sideband. If instead I optimize the Michelson fringes when the arms are held with ALS at arm powers of 1, and PRM is still misaligned, I end up with much higher arm powers later. Some notes about this though: most of this alignment was done with the arm cavity mirrors, specifically the ETMs, to get the nice Michelson fringes. When the PRM is restored and the PRMI locked, the AS port contrast doesn't look very good. However, when I leave the alignment alone at this point, I get up to arm powers above 100, whereas if I touch the BS, I have trouble getting above 50.
Around GPS time 1101094920, I moved the DARM offset after optimizing the CARM offset. We were able to see a pretty nice zero crossing in AS55, although that wasn't at the same place as the ALS diff zero offset (close though). At this time, the arm powers got above 250, and TRY claimed almost 200. These are the plots below, first as a wide-view, then zoomed in. During this time, PRCL still has a broadband increase in noise when the arm powers are high, and CARM is seeing a resonance at a few tens of Hz. But, we can nicely see the zerocrossing in AS55, so I think there's hope of being able to transition DARM over.
Now, the same data, but zoomed in more.
During the 40m meeting, we had a few ideas of directions to pursue for locking:
ETMX sus damping restored and PMC locked manually.
X-arm AP Armaflex tube insulation is cable-tied into the cable tray. Only the turning 6 ft sections are taped together.
Remaining things to do: install end-protection tubing
After its several days of rest, it is time to wake up the IFO.
With that, it's time for a new week of locking, and trying to catch up with the big kids at the sites.
Our first RGA scan since May 27, 2014 elog10585
The RGA is still warming up. It was turned on 3 days ago as we recovered from the second power outage.
Tweaked up the input alignment to the PMC. Now we're at 0.785.
After Koji and I reset the transmission normalizations last Friday, he did some alignment work that increased the Yarm power. So, I had set the transmission normalization when we weren't really at full Yarm power. Today I reset the normalization so that instead of ~1.2, the Y transmission PDs read ~1.0.
I've uploaded a note at T1400735 about a new implementation of CESAR ESCOBAR ideas I've been working on. Please send me any and all feedback, comments, criticisms!
Using the things I talk about in there, I was able to have a time domain simulation of a 40m arm cavity transition through three error signals, without hardcoding the gains, offsets, or thresholds for using the signals. Some results look like this:
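This is not lifted from T1400735 -- just a hypothetical illustration of the flavor of not hardcoding gains and offsets: calibrate one error signal against another on the fly, over a stretch where both are valid, instead of hand-tuning the scaling.

```python
import numpy as np

def calibrate_against(ref, sig):
    """Fit sig ~ a*ref + b over a window where both signals are valid,
    then return sig rescaled into ref's units -- no hand-tuned gain/offset.
    (Hypothetical sketch, not the actual CESAR algorithm.)"""
    a, b = np.polyfit(ref, sig, 1)
    return (sig - b) / a

# Example: a second error signal with unknown gain 3 and offset 0.5
ref = np.linspace(-1, 1, 101)
sig = 3.0 * ref + 0.5
recovered = calibrate_against(ref, sig)
print(np.allclose(recovered, ref))   # True
```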
I'm going to be trying this out on the real IFO soon...
I updated the medm C1ASS page for the Arm scripts:
ON : same as before
FREEZE OUTPUTS: calls new FREEZE_DITHER.py script, which sets Common Gain and LO Amplitudes to 0, therefore freezing the current output values
START FROM FROZEN OUTPUTS: calls new UNFREEZE_DITHER.py script, which sets Common Gain and LO Amplitudes as in the DITHER_ASS_ON.py script, but no burt restore is performed
OFFLOAD OFFSETS: it's the old "SAVE OFFSETS", calls the WRITE_ASS_OFFSET.py script
OFF: same as before
StripTool: same as before
First, random notes:
Koji suggested last week that we put a cavity pole filter into the ALS error signals, and then compensate for that in the CARM and DARM servos. The idea is that any RF signals we want to transfer to will have some kind of frequency dependence, and at the final zero CARM offset that will be a simple cavity pole.
I put a pole at 200 Hz, with a zero at 6 kHz, into the LSC-ALS[X,Y] filter banks in FM1, and then also put a zero at 200 Hz with a pole at 6 kHz into both the CARM and DARM servos at FM7. Ideally I wouldn't have the 6 kHz in there, but the compensation filter in the CARM/DARM servos needs a pole somewhere, so I put the zero in the ALS signals so that they match. Foton thinks that multiplying the two filters should give a flat response, to within 1e-6 dB and 1e-6 deg.
We can lock CARM and DARM on ALS with the new filters, but it seems to be not very stable. We've measured transfer functions in both configurations, and between 50-500Hz, there is no difference (i.e., our matching filters are matching, and cancelling each other out). We sometimes spontaneously lose lock when we're just sitting somewhere with the new configuration, and we cannot run any find IR resonance scripts and stay locked. We've tried the regular old script, as well as Diego's new gentler script. We always fail with the regular script during the coarse scan. With Diego's script, we made it through the coarse scan, but spontaneously lost lock while the script was calculating the location of the peak. So, we determine that there is something unstable about the new configuration that we don't understand. Turning off all the new filters and going back to the old configuration is just as robust as always. Confusing.
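The filter matching can also be sanity-checked outside of Foton by multiplying the pole/zero pair and its inverse numerically (a scipy sketch using the 200 Hz / 6 kHz values quoted above):

```python
import numpy as np
from scipy import signal

f = np.logspace(1, 4, 500)   # 10 Hz to 10 kHz
w = 2 * np.pi * f

# ALS cavity-pole filter: pole at 200 Hz, zero at 6 kHz, unity DC gain
als = signal.ZerosPolesGain([-2*np.pi*6000], [-2*np.pi*200], 200.0/6000.0)
# Compensation in the CARM/DARM servo: zero at 200 Hz, pole at 6 kHz
comp = signal.ZerosPolesGain([-2*np.pi*200], [-2*np.pi*6000], 6000.0/200.0)

_, h_als = signal.freqresp(als, w)
_, h_comp = signal.freqresp(comp, w)
product = h_als * h_comp

# The product should be flat (unity) at all frequencies
print(np.max(np.abs(20*np.log10(np.abs(product)))))   # ~0 dB everywhere
```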
Tonight I started testing a new method for the fine scan:
Where did these 200Hz, 6kHz come from?
I wonder what are the correct filters to be incorporated in the filter banks for the cav pole compensarion.
1. ALS Common and Diff have the cavity pole for the green (fcav_GR)
2. IR DARM has the cavity pole of the arms for IR (fcav_IR_DARM)
3. IR CARM (REFL, POP, POX, or POY) has the double cavity pole (fcav_IR_CARM)
1. T(ITM_GR) = 1.094%, T(ETM_GR) = 4.579% => F=108.6 (cf. https://wiki-40m.ligo.caltech.edu/Core_Optics)
L = 37.8 m (cf. http://nodus.ligo.caltech.edu:8080/40m/9804)
=> fcav_GR = c /( 4 L F) = 18.3 kHz ... ignore
2. T(ITM_IR) = 1.384%, T(ETM_IR) = 13.7ppm => F=450.4
=> fcav_IR_DARM = 4.40 kHz
3. The common cavity pole is lower than fcav_IR by a factor of the power recycling gain.
=> fcav_IR_CARM = fcav_IR / (P_TR * T_PRM)
P_TR is normalized for the locked arm cavity with the PRM misaligned.
T_PRM is 5.637%
e.g. for the TR of 100, fcav_IR_CARM = 4.40/(100*0.05637) = 780Hz
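Koji's numbers above can be reproduced in a few lines (lossless mirrors assumed; finesse computed from the quoted transmissions):

```python
import numpy as np

c = 299792458.0    # speed of light [m/s]
L = 37.8           # arm length [m], from elog 9804

def finesse(T1, T2):
    # Lossless two-mirror finesse, r = sqrt(1 - T) for each mirror
    r1r2 = np.sqrt((1 - T1) * (1 - T2))
    return np.pi * np.sqrt(r1r2) / (1 - r1r2)

def fcav(F):
    # Cavity pole frequency: fcav = c / (4 L F)
    return c / (4 * L * F)

F_gr = finesse(0.01094, 0.04579)      # green: F ~ 108.6
F_ir = finesse(0.01384, 13.7e-6)      # IR:    F ~ 450.4

f_gr = fcav(F_gr)                     # ~18.3 kHz
f_darm = fcav(F_ir)                   # ~4.40 kHz
f_carm = f_darm / (100 * 0.05637)     # ~780 Hz for TR = 100

print(F_gr, F_ir, f_gr, f_darm, f_carm)
```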
(IR CARM) o--|
+--[CARM 780Hz zero / ??? pole]
(ALSX) o--| |-[ALS C 780Hz pole]----|
| M |
(ALSY) o--| |-[ALS D 4.40kHz pole]--|
+--[DARM 4.40kHz zero / ??? pole]
(IR DARM) o--|
The ??? Hz pole is to ensure the servo filters do not have infinite gain as f goes to infinity, but in practice we can probably ignore it as long as it is provided by a roll-off filter.
The other night (before the holidays), I tried ALS offset tuning with the IR POX/POY signals and it worked pretty well.
I didn't need to tune the offset after the scanning script stopped.
Once we are at the foot hill of the main resonance, I ran something like
ezcaservo -r C1:LSC-POX11_I_MON C1:LSC-ALSX_OFFSET -g -0.003 &
ezcaservo -r C1:LSC-POY11_I_MON C1:LSC-ALSY_OFFSET -g -0.003 &
(... I am writing this with my memory. I could be wrong but conceptually the commands looked like these)
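Conceptually, each of those ezcaservo calls is just a slow integrator pushing the ALS offset until the IR PDH readback averages to zero. A toy simulation of that behavior (the plant slope and target here are made-up numbers, not measurements):

```python
# Toy model: near resonance, POX11_I responds linearly to the ALS offset.
slope = 50.0      # [POX counts per ALS offset count] -- made up for illustration
target = 1.23     # offset where POX11_I crosses zero  -- made up

offset = 0.0
gain = -0.003     # stable only because gain*slope pulls toward the zero crossing
for _ in range(500):
    readback = slope * (offset - target)   # stands in for C1:LSC-POX11_I_MON
    offset += gain * readback              # the integrator step ezcaservo performs
print(offset)     # settles at the zero crossing, ~1.23
```

Note the sign convention: the servo converges only if gain times the plant slope is negative, which is presumably why the -0.003 gains were used for both arms.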
It seems that the old Matlab license servers went down a week or so ago, so I have updated the Matlab license information in
per the instructions on https://www.imss.caltech.edu/content/updating-matlab-license-file
EDIT: Q did this also for the control room iMac
We would like the option of feeding back the POP beam position fluctuations to the PRM to help stabilize the PRC since we don't have oplevs for PR2 and PR3. However, we cannot just use the DC QPD because that beam spot will be dominated by carrier light as we start to get power recycling.
The solution that we are trying as of today is to look at the yaw information of just the RF sidebands. (Yaw is worse than pitch, although it would be nice to also control pitch.) I have placed a razor blade occluding about half of the POP beam in front of the POP PD (which serves POPDC, POP22 and POP110). I also changed the ASS model so that I could use this signal to feed back to the PRM. The loop has been measured, and the in-loop spectrum shows some improvement versus no control.
Optical table work:
The POP beam comes out of the vacuum system and is steered around a little bit, then about 50% goes to the DC QPD. Of the remaining, some goes to the Thorlabs PD (10CF I think) and the rest goes to the POP camera. For the bit that goes to the Thorlabs PD, there is a lens to get the beam to fit on the tiny diode.
There was very little space between the steering mirror that picks off the light for this PD and the lens - not enough to put the razor blade in. The beam after the lens is so small that it's much easier to occlude only half of the beam in the region before the lens. (Since we don't know what Gouy phase we're at, we don't know where the ideal spot for the razor is; I claim that this is a reasonable place to start.)
I swapped out the old 50mm lens and put in a 35mm lens a little closer to the PD, which gave me just enough room to squeeze in the razor blade. This change meant that I had to realign the beam onto the PD, and also that the demod phase angles for POP22 and POP110 needed to be checked. To align the beam, before placing the razor blade, I got the beam close enough that I was seeing flashes in POPDC large enough to use for a PRMI carrier trigger. The PRMI carrier was a little annoying to lock. After some effort, I could only get it to hold for several seconds at a time. Rather than going down a deep hole, I just used that to roughly set the POP22 demod phase (I-phase maximally negative when locked on carrier, Q-phase close to zero). Then I was able to lock the PRMI sideband by drastically reducing the trigger threshold levels. With the nice stable sideband-locked PRMI I was able to center the beam on the PD.
After that, I introduced the razor blade until both POPDC and POP22 power levels decreased by about half.
Now, the POP22 threshold levels are set to up=10, down=1 for both MICH and PRCL, for both the DoF triggers and the FM triggers.
ASS model work:
POP22 I and POP110 I were already going to the ASS model (where ASC lives) for the PRCL ASS dither readbacks. So, I just had to include them in the ASC block, and increase the size of the ASC input matrix. Now you can select any of POP QPD pit, POP QPD yaw, POP22 I or POP110 I to go to any of PRCL yaw, PRCL pit, CARM yaw or CARM pit.
Compiled, installed and restarted the ASS model.
Engaging the servo:
I took reference spectra of POP QPD yaw and POP22, before any control was applied. The shapes looked quite similar, but the overall level of POP22 was smaller by a factor of ~200. I also took a reference spectrum of the POP QPD in-loop signal using the old ASC loop configuration.
Q looked at Foton for me, and said that with the boost on, the UGF needed to be around 9 or 10 Hz, which ended up meaning a servo gain of +2.5 (the old POP QPD yaw gain was -0.063). We determined that we didn't know why there was a high-Q 50Hz notch in the servo, and why there is not a high frequency rolloff, so right now the servo only uses FM1 (0:2000), FM6 (boost at 1Hz and 3Hz) and FM7 (BLP40).
The in-loop residual isn't quite as good with POP22 as for the QPD, but it's not bad.
Here's the loop:
And here's the error spectra. Pink solid and light blue solid are the reference traces without control. Pink dashed is the QPD in-loop. Red and blue solid are the QPD and POP22 when POP22 is used as the error signal. You can definitely see that the boosts in FM6 have a region of low gain around 1.5Hz. I'm not so sure why that wasn't a problem with the QPD, but we should consider making it a total 1-3Hz bandpass rather than a series of low-Q bumps. Also, even though the POP22 UGF was set to 9 Hz, we're not seeing any suppression above about 4Hz, and in fact we're injecting a bit of noise between 4-20Hz, which needs to be fixed still.
Earlier this afternoon, while locking the PRMI, I saw a big peak at 1883.48 Hz. This comes closest to 3x the PRM's 627.75 Hz mode, so I infer that it is the 3rd order harmonic of the PRM violin mode.
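The identification is just arithmetic: find which integer multiple of a known fundamental lands closest to the observed peak (only the PRM mode from the table is shown here):

```python
peak = 1883.48        # Hz, observed peak
f0 = 627.75           # Hz, PRM violin fundamental
n = round(peak / f0)  # nearest harmonic order
print(n, n * f0, peak - n * f0)   # the 3rd harmonic, about 0.23 Hz away
```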
While putting this in, I noticed that my addition of ETM filters the other day (elog 10746) had gotten deleted. Koji pointed out that Foton can do this - it allows you to create and save filters that are higher than 20th order, but secretly it deletes them. I went into the filter archive and recovered the old ETM filters, and split things up. I have now totally reorganized the filters, and made the arrangement the same for every single optic (ETMs, ITMs, PRM, SRM, BS, MC2).
FM1 is the BS 1st and 2nd harmonics, and FM6 directly below it is a generic 3rd order notch that is wide enough that it encompasses 3*BS.
FM2 is the PRM 1st and 2nd order, and FM7 below it is the PRM 3rd order.
FM3 is the SRM 1st order, FM4 is the ETMs' 1st order, and FM5 is the MC2 1st and 2nd order filters.
All of these filters are triggered on if any degree of freedom is triggered. They all have a ramp time of 3 sec. We may want to consider having separate trigger options for each optic, so that we're not including the PRM notch on the ETMs, for example, and vice versa.
When all of these filters are on, according to Foton we lose 5.6 degrees of phase at 100 Hz.
After some housekeeping (ASS is wonky, alignment of the X green beat was bad, tuning of demod angles, FM gains for REFL165), we were able to bring the PRFPMI up to arm powers of 8 very stably.
We were keeping an eye on the DARM OLG, to make sure the gain was correct. We then saw a bump around 120Hz. Here is the bump.
Changing CARM offset changes its amplitude. Maybe it's a DARM optical spring. It didn't occur to me until after we lost lock that we could have tweaked the DARM offset to move it around if this was the case.
Unfortunately, due to some unexplained locklosses, we weren't able to get back into a state to measure this more... which is annoying. During that stable lock, Jenne stated that PRCL and DARM noises were looking particularly good.
We may want to tweak the way we handle the transmission PD handoff; maybe we want to force the switch at a certain place in the carm_up script, so that we're not flipping back and forth during an IR handoff; I think this may have been responsible for a lock loss or two.
We were sitting around arm powers of 6, and that loop measurement had finished. I was about to go down to arm powers of 5ish, but we lost lock. I'm not sure why. There's some slow stuff going on in some of the servos, but nothing jumps out at me as a loop oscillation. It does however kind of look like the PRMI lost lock just before the arm powers went down? Perhaps this somehow triggered a lockloss?
The time is 1101721675.
Wide view plots:
We're stopping for tonight because ETMX is back to its lame-o jumping around. I went in and squished the cables, but it's still happening.
Also, the FSS PC drive has been high the last few minutes (only starting after we quit for the night). When the MC re-locks, it sounds like an ocean wave dying out as the noise goes down a little bit. But, after a few minutes, it'll get mad again and unlock the MC.
Also, also, I noticed this on Monday with Diego, but the LSC-ALS[x,y] filter module gains sometimes mysteriously get set to zero. WTF? Eric and I have both independently checked, and we cannot find a single script in the scripts directory with the string "LSC-ALS", so we aren't deliberately changing those. Does anyone know what might be going on here?
[Jenne, Q, Diego]
I don't know why, but everything in EPICS-land froze for a few minutes just now. It happened yesterday too, as far as I saw, but I was bad and didn't elog it.
Anyhow, the arms stayed locked (on IR) for the whole time it was frozen, so the fast things must have still been working. We didn't see anything funny going on on the frame builder, although that shouldn't have much to do with the EPICS service. The seismic rainbow on the wall went to zeros during the freeze, although the MC and PSL strip charts are still fine.
After a few minutes, while we were still trying to think of things to check, things went back to normal. We're going to just keep locking for now....
We looked at the spectra of POX and POY during IR lock, and Q saw a peak at 1285 Hz in POX only. We're actuating on the ETMs, so it must be an ETMX violin mode, although it doesn't match the others that are in the table.
Anyhow, I added it to FM9. While I was doing that, I realized that yesterday I had forgotten to put back the 3rd order ETM violin notch, so that is also in FM9.
OMG, today sucked alignment-wise. Like, wow.
I think that the problem with the ASS is with the input pointing part of the system. I found that if I disable the TTs for the Yarm (in practice, the outputs are held at zero), I could run the Yarm ASS at the full gain of 1, and it would do an awesome job. The first time I did this, I by-hand optimized the TTs by running the test mass loops to make them follow the input pointing. After that, I haven't touched the TT pointing at all, and we've just been running the test mass loops for the Yarm ASS. The Xarm seems to not have this problem (or at least not as drastically), so I left it as it was, touching the BS as well as ITMX and ITMY, although the gain still needs to be about 0.3.
I feel pretty good about the IFO alignment now, although it is slightly different than it has been. The transmitted arm powers are higher than they were before I changed the ASS procedure, and there seems to be a little less power fluctuation with alignment. Q points out that I don't have concrete evidence that this is a good alignment, but it feels right.
It was a significant enough change that I had to go down to the Y end to realign the green to the new arm axis. The X green we did with the remote PZTs. I also realigned both of the beatnotes on the PSL table.
While I was on the PSL table, I quickly touched up the PMC alignment.
The biggest problem, the one that sucked up more hours and energy than I'd like to admit, is ETMX's jumping. So frustrating. Sometimes it is time-coincident with engaging the LSC, sometimes not. I thought that it might be because there are too many violin filters, but even if I turn off all violin filters to ETMX, it jumped a few times while the cavity was locked. Sometimes it moves when the cavity is just locked and seems happy, sometimes it moves when nothing is resonating except for the green. It takes a few minutes to recover the alignment enough to lock, and then it'll jump again a few minutes later. I haven't gone down to squish the cables today, although I did it yesterday and that didn't seem to do anything.
We had a few hours of time when it wasn't jumping, so we tried a few times to lock the IFO. The last several times we have lost lock because the PRC loop rang up. We measured the loop at low-ish arm powers, but it kept losing lock at higher powers before we could measure. At least 3 times, the PRC lockloss took out CARM and DARM too.
Anyhow, it has been a long day of not accomplishing anything interesting, but hopefully the IFO will feel better tomorrow.
Apparently, some time ago Larry Wallace installed a new, fast ethernet switch in the old nodus rack. Q and I have just now moved nodus' GC ethernet cable over to the new switch. Dan Kozak is going to use this faster connection to make the data flow over to the cluster not so lag-y.
Attached is the timeline for Frequency Offset Locking related activities. All activities will be done mostly in the morning and early afternoon hours.
AC maintenance is scheduled from 8am till 11am tomorrow morning.
We looked into the configuration and settings that the frequency counters (FC) and Domenica (the Raspberry Pi that the FCs talk to) were left in. After poking around for a few hours, we were able to read out the FC output and see it on StripTool as well.
We have made a list of modifications that should be done on Domenica and to the readout scripts to make the FC module automated and user-friendly.
I will prepare a user manual that will go on the wiki once these changes are made.
I started working on the scripts/FOL directory (I made a backup before tinkering around!):
As a result, as soon as the Raspberry Pi completes its boot process, the two beatnote channels are immediately available.