The PSL HEPA stopped working while it was running at 80%. I have closed the PSL enclosure.
Steve is working to fix this.
The Variac had burned out and was replaced. Each unit was checked out individually; the north HEPA is still noisy at full speed.
Earlier today I ran some simulations suggesting that a PRC length on the order of a cm away from our current estimate could result in degenerate PRCL and MICH signals in REFL165 at 3nm CARM offset. I attempted more demod-angle-derived PRC length measurements with REFL11 and REFL55, but they weren't consistent with each other...
In any case, adding dual recycling, even with a SRC length off by 1cm in either direction, doesn't seem to exhibit the same possibility, so I spent some time tonight seeing if I could make any progress towards DRMI locking.
I was able to lock SRY using AS55 in a very similar manner to PRY, after adjusting the AS55 demod angle to get the error signal entirely in I. I used this configuration to align the SRM to the previously aligned BS and ITMY. Oddly, I was not able to do anything with SRX as I had hoped; the error signal looks very strange, looking more like abs(error signal).
I then was able to lock the SRMI on AS55 I & Q; the settings have been saved in the IFO configure screen. I've used AS55Q for PRMI locking with a gain of -0.2, so I started with that; the final gain ended up being -0.6. The PRMI/PRY gain ratio for PRCL is something like 0.01, so since I used a gain of 2 for locking SRX, I started the SRCL gain around 0.02; the final gain ended up being -0.03. I basically just guessed a sign for AS110 triggering. Once I lucked upon a rough lock, I excited the PRM to tune the AS55 angle a few degrees; it was luckily quite close already from the SRY adjustment. AS110 needed a bigger adjustment to get the power into I. (AS55: -40.25 -> -82.25, AS110: 145 -> 58, but I put AS55 back for PRMI)
I briefly tried locking the DRMI, but I was really just shooting in the dark. I went back and measured various sensing amplitudes/angles in SRMI and PRMI configurations; I'm hoping that I may be able to simulate the right gains/angles for eventual DRMI locking.
I just installed cdsutils r351 at /ligo/apps/linux-x86_64/cdsutils. It should be available on all workstations.
It includes a bunch of bug fixes and feature improvements, including the step stuff that Rana was complaining about.
Cdsutils r361 installed, for the "avg" updates. aLOG
The script is moving forward and we feel we are close; however, we still have a couple of issues:
1) some Python misbehaviour between the system environment and the Anaconda one; currently we call bash commands from within the Python script in order to avoid using the ezca library, which is the one complaining;
2) the fine scan is not yet very robust and needs more investigation; the main suspects are the wavelet parameters given to the algorithm and the Offset and Ramp parameters used to perform the scan.
Here is an example of a best case scenario, with 20s ramp and 500 points:
Today we took some measurements of transfer functions and power spectra of suspensions of the MC* mirrors (open loop), for all the DOFs (PIT, POS, SIDE, YAW); the purpose is to evaluate the Q factor of the resonances and then improve the local damping system.
[Koji, Jenne, Diego]
Summary: We really don't have any MICH signal in REFL165. Why remains a mystery.
We made several transfer function measurements while PRMI was locked on REFL33 with the arms held off resonance, and compared those to the case where the ETMs are misaligned. We fine-tuned the REFL165 demod phase looking at the transfer function between 10-300 Hz (using bandpassed white noise injected in the MICH FF filter bank and looking at REFL165Q), rather than just a single line. We did that at CARM offset of 3 counts (ALS locked), and then saw that as we reduced the CARM offset, the coherence between MICH injection and REFL165Q just goes down. Any signal that is there seems to be dominated by PRCL.
So, we're not sure why having the arms eats the MICH 165 signal, but it does. Everyone should dream tonight about how this could happen.
Koji suggested that if the signal is just lost in the noise, perhaps we could increase our modulation depth for 55MHz (currently at 0.26, a pretty beefy number already). Alternatively, if instead the problem is that the MICH signal has rotated to be in line with the PRCL signal, there may be no hope (also, why would this happen?).
Anyhow, we'd like to understand why we don't have any MICH signal in REFL165 when the arm cavities are involved, but until we come up with a solution we'll stick with REFL33 and see how far that gets us.
The only really worthwhile plot that I've got saved is the difference in these transfer functions when PRMI-only locked and PRMI+arms locked. Green is PRMI-only, with the demod phase optimized by actuating on PRM and minimizing the peak in the Q signal. Blue is PRMI with the arms held off resonance using the ALS signals, with the demod phase set again, in the same way. We were expecting (at least, hoping) that the blue transfer function would have the same shape as the green, but clearly it doesn't. The dip that is around 45 Hz can be moved by rotating the demod phase, which changes how much PRCL couples into the Q phase. Weird. At ~3nm we had somewhat reasonable coherence to REFL165Q, and were able to pick -102deg as the demod phase where the dip just disappears. However, as the CARM offset is reduced, we lost coherence in the transfer functions.
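The demod-phase tuning described above amounts to rotating the I/Q plane until the unwanted degree of freedom stops leaking into Q. Here is a minimal sketch of that idea; the 30-degree angle, signal lengths, and the pure-PRCL toy signal are all made up for illustration, not taken from the measurement.

```python
import numpy as np

def rotate_iq(i_sig, q_sig, phase_deg):
    """Rotate demodulated I/Q signals by a demod phase (degrees)."""
    th = np.deg2rad(phase_deg)
    return (i_sig * np.cos(th) + q_sig * np.sin(th),
            -i_sig * np.sin(th) + q_sig * np.cos(th))

# Toy example: a PRCL-driven line sits at 30 deg in the I/Q plane;
# scan the demod phase and pick the one minimizing its leakage into Q.
rng = np.random.default_rng(0)
prcl = rng.standard_normal(4096)        # stand-in for the injected noise
i_sig = prcl * np.cos(np.deg2rad(30))
q_sig = prcl * np.sin(np.deg2rad(30))

phases = np.linspace(-90, 90, 361)
leakage = [np.std(rotate_iq(i_sig, q_sig, p)[1]) for p in phases]
best = phases[np.argmin(leakage)]       # should recover ~30 deg
```

In practice the "leakage" metric is the coherent transfer function from the MICH/PRCL injection to the rotated Q signal over 10-300 Hz, rather than a raw standard deviation.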
I ran 3 BNC cables from the SP table to the 1X7 rack so that we can have 16-bit channels for the Ontrak PD that will be used to test oplev lasers. The BNC cables are plugged into Ch 29, 30 & 31, which were already created for this purpose (elog 10488).
Taking into account the ellipticity of the input beam, the available lenses and the space restrictions (lens can be placed only between z= 8 to 28cm), I calculated the best possible coupling efficiency (using 'a la mode').
The maximum possible mode overlap that can be obtained is 58.6% (matlab code and plot attached)
modematching = 0.58632
Optimized Path Component List:
label z (m) type parameters
----- ----- ---- ----------
L1 0.0923 lens focalLength: 0.0750
I used the above configuration and was able to obtain ~52% coupling.
Input power = 250mW
Output power with absorptive ND 1.0 = 13 mW
I used the absorptive ND filter before the lens to keep the coupled output power within the range of fiber power meter and also avoid scattering of enormous amount of uncoupled light all over the table.
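As a rough cross-check on mode-matching numbers like the one above, the standard power-coupling formula for two fundamental Gaussian modes can be sketched as below. This is a circular-beam approximation, so it cannot capture the ellipticity that the 'a la mode' calculation accounts for; the waist values are illustrative.

```python
import numpy as np

def gaussian_overlap(w1, w2, d, lam):
    """Power coupling between two fundamental Gaussian modes with waist
    radii w1, w2 (m) whose waist positions are separated by d (m)."""
    return 4.0 / ((w1 / w2 + w2 / w1) ** 2
                  + (lam * d / (np.pi * w1 * w2)) ** 2)

# Perfectly matched beams couple fully:
eta_matched = gaussian_overlap(205e-6, 205e-6, 0.0, 1064e-9)
# A strongly mismatched waist couples poorly even with waists co-located:
eta_mismatch = gaussian_overlap(40.2e-6, 205e-6, 0.0, 1064e-9)
```

The lens optimization then amounts to transforming the input waist so that this overlap is maximized within the allowed z range.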
I have attached the screenshot of the out of loop ALS noise before opening the table (BLUE) and after closing down (MAGENTA). The beat note frequency and amplitude before and after were (14.4MHz/-9.3dBm) and (20.9MHz/-10 dBm).
Short report: Further frustrated by 165 tonight. The weird thing is, the procedure I'm trying with the arms held off on ALS (i.e. excitation line in MICH and PRCL, adjust relative gains to make the signs and magnitudes match, ezcastep over) works flawlessly with the ETMs misaligned. One can even acquire SB PRMI lock on 165 I&Q, with 80-90 degrees of demod angle between MICH and PRCL. The only real difference in REFL55 settings for misaligned vs. ALS-offset arms is an extra factor of two in the FM gains to maintain the same UGF, so I hoped that the matrix elements for 165 with misaligned arms would hold for ALS-offset arms.
Alas, no such fortune. I still have no clear explanation for why we can't get MICH on 165Q with the arms held off on ALS.
I also gave a quick try to measuring the PRCL->REFL55 demod phase difference between carrier and sideband lock (with arms misaligned), and got something on the order of 55 degrees, which really just makes me think I wasn't well set up / aligned, rather than actually conveying information about the PRC length...
Today I started looking into the WFS problem and improvement, after being briefed by Koji and Nicholas. I started taking some measurements of open loop transfer functions for both PIT and YAW for WFS1, WFS2 and MC2_TRANS. For both WFS1 and 2 there is a peak in close proximity of the region with gain>1, and the phase margin is not very high. Tomorrow I will make measurements of the local damping open loop transfer functions, then we'll think how to improve the sensors' behaviour.
I took some spectra of the error signals and MC2 Trans RIN with the loops off (blue) and on (red) during the current conditions of daytime seismic noise.
Last night the sensing matrix for the IMC WFS & QPD was measured.
C1:IOO-MC(1, 2, 3)_(ASCPIT, ASCYAW)_EXC were excited at 5.01 Hz with 100 counts.
The outputs of WFS1/WFS2/QPD were measured. They all responded as expected,
i.e. pitch motion shows up in the pitch error signals and yaw motion in the yaw error signals.
Below are the transfer functions from each suspension to the error signals:
          MC1P        MC2P        MC3P
WFS1P    -3.16e-4     1.14e-2     4.62e-3
WFS2P     5.43e-3     8.22e-3    -2.79e-3
QPDP     -4.03e-5    -3.98e-5    -3.94e-5

          MC1Y        MC2Y        MC3Y
WFS1Y    -6.17e-4     6.03e-4     1.45e-4
WFS2Y    -2.43e-4     4.57e-3    -2.16e-3
QPDY      7.08e-7     2.40e-6     1.32e-6
These matrices were inverted, and the overall scale was adjusted to normalize the DC response.
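The inversion step can be sketched with the measured pitch numbers; the column-wise normalization at the end is an assumption about what "scale was adjusted" means, not a record of the actual procedure.

```python
import numpy as np

# Measured pitch sensing matrix from above
# (rows: WFS1P, WFS2P, QPDP; columns: MC1P, MC2P, MC3P)
sens_pit = np.array([
    [-3.16e-4,  1.14e-2,  4.62e-3],   # -> WFS1P
    [ 5.43e-3,  8.22e-3, -2.79e-3],   # -> WFS2P
    [-4.03e-5, -3.98e-5, -3.94e-5],   # -> QPDP
])

# Inverting gives the output matrix mapping error signals back onto the
# MC suspensions; a rescaling would then set the desired DC loop gain.
inv_pit = np.linalg.inv(sens_pit)
```

The same operation applies to the yaw matrix.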
I looked at which situations make ETMX lose alignment.
This did not occur all that often this morning; fewer than 10 times in maybe the last 4 hours of poking the X arm. I found that the bad behavior of ETMX also exists in certain other cases apart from when we enable LSC.
(I) Even the MISALIGN and RESTORE scripts for the suspensions make the suspension behave badly. The RESTORE script, while bringing the suspension back to where it was, sometimes kicks it somewhere else (even with LSC disabled).
(II) The suspension also gets kicked while realigning ETMX manually using sliders at 10^-3 (pace of 2-3 steps at a time).
I am suspecting something wrong right at the coil inputs and gains of the suspension.
Also, I recollect that we haven't checked the X arm LSC limiter and filter ramp times as was done for the Y arm (Elog 9877). We should do this check to be sure that we are not seeing a mixed puddle of problems from 2 sources.
Okay, now ETMX's badness is a show-stopper. I'm not sure why, but after this last lockloss, ETMX won't stay put. Right now (as opposed to earlier tonight) it seems to only be happening when I enable LSC pushing on the SUS. ETMX is happy to sit and stay locked on TEM00 green while I write this entry, but if I go and try to turn on the LSC it'll be wacky again. Daytime work.
Anyhow, this is too bad, since I was feelin' pretty good about transitioning DARM over to AS55.
I had a line on (50 counts at 503.1 Hz pushing differentially on the ETMs), and could clearly see the sign flip happen in normalized AS55Q between arm powers of 4 and 6. The line also told me that I needed a matrix element of negative a few x10^-4 in the AS55Q -> DARM spot. Unfortunately, I was missing a zero (so I was making my matrix element too big by a factor of 10) in my ezcastep line, so both times I tried to transition I lost lock.
So. I think that we should put values of 0.5 into the power normalization for our test case (I was using SRCL_IN1 as my tester) since that's the approximate value that the DCtrans uses, and see what size AS55Q matrix element DARM wants tomorrow (tonight was 1.6-3 x 10^-4, but with 1's in the normalization matrix). I feel positive about us getting over to AS55.
Also, Q is (I assume) going to work some more tomorrow on PRMI->REFL165, and Diego is going to re-test his new IR resonance finding script. Manasa, if you're not swamped with other stuff, can you please see if you can have a look at ETMX? Maybe don't change any settings, but see what things being turned on makes ETMX crazy (if it's still happening in the morning).
ETMX is misbehaving again. I went to go squish his cable at the rack and at the satellite box, but it still happened at least once.
Anecdotally and without science, it seems to happen when ETMX is being asked to move a "big" amount. If I move the sliders too quickly (steps of 1e-3, but holding down the arrow key for about 1 second) or if I offload the ASS outputs when they're too large (above 10ish?), ETMX jumps so that it's about 50 urad off in yaw according to the oplev (sometimes right, more often left), and either 0 or 50urad off in pitch (up if right in yaw, down if left in yaw).
So far, by-hand slowly offloading the ASS outputs using the sliders seems to keep it happy.
I would ask if this is some DAC bit flipping or something, but it's happening for outputs through both the fast front ends (ASS offloading) and the slow computers (sliders moved too fast). So. I don't know what it could be, except the usual cable jiggling out issue.
Anyhow, annoying, but not a show stopper.
I'm not sure why, but c1iscex did not want to do an mxstream restart. It would complain at me that "* ERROR: mx_stream is already stopping."
Koji suggested that I reboot the machine, so I did. I turned off the ETMX watchdog, and then did a remote reboot. Everything came back nicely, and the mx_stream process seems to be running.
I spent some time trying to debug our inability to get MICH onto REFL165Q while the arms are held off with ALS, to no real success.
I set up our usual repeatable situation of PRMI on 33 I&Q, arms held off with ALS. I figured that it may help to first sideband lock on REFL55, since 165 is looking for the f2 sidebands and maybe there is some odd offset between the locking points for f1 and f2 or other weirdness.
REFL 55 settings:
Demod angle 98->126 (was previously set for PRY locking)
PRCL = 0.5 * REFL55 I (UGF of ~200 Hz) (FM gain unchanged from REFL33 situation of -0.02)
MICH = 0.125 * REFL55 Q (UGF of ~60Hz) (same FM gain as 33)
Some REFL55 offset adjusting had to be done in order to not disturb the 33-initiated lock when handing off.
I also adjusted POP110 phase to zero the Q when locked, and switched the triggering over to 110I
The PRMI can acquire lock like this with arms held off with ALS, no problem.
Here, I tried to hop over to 165. PRCL was no problem, needing a +1 on 165I. However, I had no success in handing off MICH. I twiddled many knobs, but none that provably helped.
I saw indications that the sensing angle in 165 is small (~20deg), which is not consistent with current knowledge of the cavity lengths. We last interferometrically measured the PRC length by letting the PRMI swing and looking at sideband splitting in POP110. At LLO, they did a length measurement by looking at demod angle differences in PRMI carrier vs. sideband locking. (alog8562) This might be worth checking out...
Since I obtained a poor coupling efficiency from the earlier setup, I went back to calculate the coupling efficiency of the current setup.
For the current setup, I took the average of the x and y waists of the input beam and calculated the distance at which the input beam diameter would match the (fiber + collimator) beam diameter.
Average waist = 40.2um @-3.3mm from face of doubling crystal
(Fiber PM980 + Collimator f=2.0mm) beam waist = 205um
Distance(z) at which the input beam waist is 205um = 11.9cm
The closest available lens was f = 12.5cm. So I used it to couple the input beam by placing it at z ~12.5cm on a micrometer stage.
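The distance estimate above follows from simple Gaussian-beam propagation. A minimal sketch, using the circular-beam formula with the averaged waist; since the real beam is elliptical, this need not reproduce the logged 11.9 cm exactly.

```python
import numpy as np

def waist_at(z, w0, lam):
    """Gaussian beam radius a distance z from a waist of radius w0."""
    zr = np.pi * w0**2 / lam                 # Rayleigh range
    return w0 * np.sqrt(1.0 + (z / zr) ** 2)

def distance_to_radius(w_target, w0, lam):
    """Distance from the waist at which the beam radius grows to w_target."""
    zr = np.pi * w0**2 / lam
    return zr * np.sqrt((w_target / w0) ** 2 - 1.0)

# Numbers from this entry: 40.2 um averaged waist, 205 um target, 1064 nm
z_est = distance_to_radius(205e-6, 40.2e-6, 1064e-9)
```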
Since this gave only 10% coupling, I went back to calculate (using 'a la mode') the best possible coupling that can be obtained taking into consideration the ellipticity of the beam.
The maximum obtainable coupling (mode overlap) is 14.5% which is still poor.
Optimized Path Component List:
10% seems like a pretty bad coupling efficiency, even for a single lens. I know that the NPRO itself isn't so elliptical as that. Where is the other 230 mW going? random scattering?
Given that this is such an invasive process and, since its so painful to lose a whole night of locking due to end table business, I suggest that you always measure the out-of-loop ALS noise at the end of the end table work. Just checking that the green laser is locked to the arm is not sufficient to prove that the end table work won't prevent us from locking the interferometer.
We should insist on this anytime someone works on the optics or electronics at EX or EY. Don't have enough time to do an out-of-loop ALS spectrum? Then don't work at the end tables at all that day. We've got PZT alignment and mode matching work to do, as well as the rebuild of the EX table enclosure, so this is a good discipline to pick up now.
The Y end aux laser light leaking after the doubling crystal has been coupled into the 70m long PM fiber.
Input power = 250mW; Output after 70m = 20mW
The poor efficiency is partially due to the ellipticity of the beam itself and partially due to the compromise I had to make using a single lens to couple the light into the fiber (given the limitations in space). But 20mW should be more than sufficient for a beat note setup.
Light propagates as follows after the doubling crystal:
Doubler ---> Harmonic Separator (45deg) ---> Lens (f=12.5cm) --> Steering mirror (Y1) --> Fiber collimator ( Thor labs CFC-2X-C) --> FIber end
I will update photos of the setup shortly.
I have left the 70m fiber in its spool sitting at the Y end and blocked the light before the last Y1 steering mirror in the above setup. So it should be safe.
Through the course of the work, I disabled the ETMY oplev and enabled it before closing the enclosure. I also reduced the AUX laser power and brought it back up after the work.
I checked that the arms lock in both IR and green, and they do.
We spent the afternoon working on the new scan for IR resonance script. It is getting much closer, although we need to work on a plan for the fine scanning at the end - so far, the result from the wavelet thing mis-estimates the true peak phase, and so if we jump to where it recommends, we are only at about half of the arm resonance. So, in progress, but moving forward.
Tonight we repeated the process of reducing the CARM offset and measuring the DARM loop gain as we went. I'm not sure if I just had the wrong numbers yesterday, or if the gains are changing day-by-day. The gains that it wanted at given arm buildups were constant throughout this evening, but they are about a factor of 2 higher than yesterday. If they really do change, we may need to implement a UGF servo for DARM. New gains are in the carm_cm_up script.
We also actually saved our DARM loop measurements as a function of CARM offset (as indicated by arm buildups). The loop stays the same through arm powers of 4. However, once we get to arm powers of 6, the magnitude around 100 Hz starts to flatten out, and we get some weird features in the phase. It's almost like the phase bubble has a peak growing out of it. I saw these yesterday, and they just keep getting more pronounced as we go up to arm powers of 7, 8 and 9 (where we lost lock during the measurement). The very last point in the power=9 trace was just before/during the lockloss, so I don't know if we trust it, or if it is real and telling us something important. But, I think that it's time to see about getting both CARM and DARM onto a different set of error signals now that we're at about 100pm.
Not sure why, but Pianosa was frozen. Also couldn't ssh or ping. So, I hard power cycled it.
After the second of the two recent power outages, the outlet powering Chiara's external drive for local backups didn't come back. The modification to the backup script I made correctly identified that the drive wasn't mounted, and happily logged its absence and didn't try to stuff the internal drive with a copy of itself. However, I hadn't checked the logs to see if the backups were proceeding until today... maybe I should set up an email alert for these, too.
I plugged the external drive into a live outlet, and mounted the 40mBackup drive with: sudo udisks --mount /dev/sdc1, which is a helpful command that puts the drive at /media/40mBackup as it should be, based on the drive label.
The /cvs/cds backup is now proceeding, to make up for lost time.
I changed the carm_cm_up.sh script so that it requires fewer human interventions. Rather than stopping and asking for things like "Press enter to confirm PRMI is locked", it checks for itself. The sequence that we have in the up script works very reliably, so we don't need to babysit the first several steps anymore.
Another innovation tonight that Q helped put in was servoing the CARM offset to get a certain arm power. A failing of the script had been that depending on what the arm power was during transition over to sqrtInvTrans, the arm power was always different even if the digital offset value was the same. So, now the script will servo (slowly!!) the offset such that the arm power goes to a preset value.
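The offset servo described above is essentially a slow integrator on the arm-power error. A minimal sketch, with the plant and all gains made up for illustration; the real script talks to EPICS channels (e.g. via ezca) and sleeps between steps to keep the ramp slow.

```python
def servo_offset(read_power, write_offset, target, offset0,
                 gain=0.01, tol=0.05, max_steps=500):
    """Slowly walk an offset until read_power() reaches target.

    read_power / write_offset stand in for EPICS channel access;
    names and gains here are illustrative, not the script's values.
    """
    offset = offset0
    for _ in range(max_steps):
        err = target - read_power()
        if abs(err) < tol:
            break
        offset += gain * err          # small integrator step
        write_offset(offset)
    return offset

# Toy plant: arm power responds linearly to the offset (sign and slope
# are invented, not the real CARM response).
state = {"offset": 0.0}
read = lambda: 2.0 * state["offset"]
write = lambda x: state.update(offset=x)
final = servo_offset(read, write, target=9.0, offset0=0.0)
```

The point of servoing on power rather than jumping to a fixed digital offset is exactly the failure mode described above: the same offset value lands at different arm powers depending on where the transition happened.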
The biggest real IFO progress tonight was that I was able to actually measure the CARM and DARM loops (thanks ChrisW!), and so I discovered that even though we are using (TRX-TRY)/(TRX+TRY) for our IR DARM error signal, we needed to increase the digital gain for DARM as the CARM offset was reduced. For ALS lock and DC trans diff up to arm powers of 3, we use the same ol' gain of 6. However, between 3 - 6, we need a gain of 7. Then, when we go to arm powers above 6 we need a gain of 7.5. I was also measuring the CARM loop at each of these arm powers (4, 6, 7, 8, 9), but the gain of 4 that we use for sqrtInvTrans was still fine.
So, the carm_cm_up script will do everything that it used to without any help (unless it fails to find IR resonance for ALS, or can't lock the PRMI, in which case it will ask for help), and then once it gets to these servo lines to slowly increase the arm power and increase the DARM gain, it will ask you to confirm before each step is taken. The script should get you all the way to arm powers of 9, which is pretty much exactly 100pm according to Q's Mist plot that is posted.
The CARM and DARM loops (around the UGFs) don't seem to be appreciably changing shape as I increase the arm powers up to 9 (as long as I increase the DARM loop gain appropriately). So, we may be able to go a little bit farther, but since we're at about 100pm, it might be time to look at whether we think REFL11 or REFLDC is going to be more promising in terms of loop stability for the rest of the way to resonance.
Here are some plots from this evening.
First, the time I was able to get to and hold at arm powers of 9. I have a striptool to show the long time trends, and then zooms of the lockloss. I do not see any particular oscillations or anything that strikes me as the cause for the lockloss. If anyone sees something, that would be helpful.
This next lockloss was interesting because the DARM started oscillating as soon as the normalization matrix elements were turned on for DARM on DC transmissions. The script should be measuring values and putting in matrix elements that don't change the gain when they are turned on, but perhaps something didn't work as expected. Anyhow, the arm powers were only 1ish at the time of lockloss. There was some kind of glitch in the DARM_OUT (see 2nd plot below, and zoom in 3rd plot), but it doesn't seem to have caused the lockloss.
Merging of threads.
ChrisW figured out that it looks like the problem with the frame builder is that it's having to wait for disk access. He has tweaked some things, and life has been soooo much better for Q and me this evening! See Chris' elog at elog 10632.
In the last few hours we've had 2 or maybe 3 times that I've had to reconnect Dataviewer to the framebuilder, which is a significant improvement over having to do it every few minutes.
Also, Rossa is having trouble with DTT today, starting sometime around dinnertime. Ottavia and Pianosa can do DTT things, but Rossa keeps getting "test timed out".
Dan Kozak is rsync-transferring /frames from NODUS over to the LDAS grid. He's doing this without a BW limit, but even so it's going to take a couple of weeks. If nodus seems pokey or the net connection to the outside world is too tight, then please let me and him know so that he can throttle the pipe a little.
The recently observed daqd flakiness looks related to this transfer. It appears to still be ongoing:
nodus:~>ps -ef | grep rsync
controls 29089 382 5 13:39:20 pts/1 13:55 rsync -a --inplace --delete --exclude lost+found --exclude .*.gwf /frames/trend
controls 29100 382 2 13:39:43 pts/1 9:15 rsync -a --delete --exclude lost+found --exclude .*.gwf /frames/full/10975 131.
controls 29109 382 3 13:39:43 pts/1 9:10 rsync -a --delete --exclude lost+found --exclude .*.gwf /frames/full/10978 131.
controls 29103 382 3 13:39:43 pts/1 9:14 rsync -a --delete --exclude lost+found --exclude .*.gwf /frames/full/10976 131.
controls 29112 382 3 13:39:43 pts/1 9:18 rsync -a --delete --exclude lost+found --exclude .*.gwf /frames/full/10979 131.
controls 29099 382 2 13:39:43 pts/1 9:14 rsync -a --delete --exclude lost+found --exclude .*.gwf /frames/full/10974 131.
controls 29106 382 3 13:39:43 pts/1 9:13 rsync -a --delete --exclude lost+found --exclude .*.gwf /frames/full/10977 131.
controls 29620 29603 0 20:40:48 pts/3 0:00 grep rsync
Diagnosing the problem:
I logged into fb and ran "top". It said that fb was waiting for disk I/O ~60% of the time (according to the "%wa" number in the header). There were 8 nfsd (network file server) processes running with several of them listed in status "D" (waiting for disk). The daqd logs were ending with errors like the following suggesting that it couldn't keep up with the flow of data:
[Wed Oct 22 18:58:35 2014] main profiler warning: 1 empty blocks in the buffer
[Wed Oct 22 18:58:36 2014] main profiler warning: 0 empty blocks in the buffer
GPS time jumped from 1098064730 to 1098064731
This all pointed to the possibility that the file transfer load was too heavy.
Reducing the load:
The following configuration changes were applied on fb.
Edited /etc/conf.d/nfs to reduce the number of nfsd processes from 8 to 1:
Ran "ionice" to raise the priority of the framebuilder process (daqd):
controls@fb /opt/rtcds/rtscore/trunk/src/daqd 0$ sudo ionice -c 1 -p 10964
And to reduce the priority of the nfsd process:
controls@fb /opt/rtcds/rtscore/trunk/src/daqd 0$ sudo ionice -c 2 -p 11198
I also tried punishing nfsd with an even lower priority ("-c 3"), but that was causing the workstations to lag noticeably.
After these changes the %wa value went from ~60% to ~20%, and daqd seems to die less often, but some further throttling may still be in order.
This is looking very useful. It will be useful if you can upload some python code somewhere so that I can muck with it.
I would guess that the right way to determine the trans RMS is just to use the single arm lock RIN and then apply that as RIN (not pure TR RMS) to the TR signals before doing the sqrt operation.
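To first order, applying RIN to TR before the sqrt halves it: for S = 1/sqrt(T), dS/S = -dT/(2T). A quick numerical check of that propagation, with an illustrative RIN value:

```python
import numpy as np

rng = np.random.default_rng(1)
rin = 0.01                               # single-arm-lock RIN (illustrative)
tr = 1.0 * (1 + rin * rng.standard_normal(100_000))   # TR with RIN applied

s = 1.0 / np.sqrt(tr)                    # sqrtInv-style signal
rel_rms = np.std(s) / np.mean(s)
# First-order propagation predicts rel_rms of about rin / 2
```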
The first half of our evening was spent working on CARM and DARM in PRFPMI, and then we moved on to the PRMI part.
I moved the DARM ALSdiff -> TransDiff transition to be after the CARM ALScomm -> SqrtInvTrans transition in the carm_cm_up script. After I did that, I succeeded every time (at least 10? We did it many times) to get both CARM and DARM off of the ALS signals.
We tried for a little while looking at transitioning to REFL11 normalized by the sum of the transmissions, but we kept losing lock. We also several times lost lock at arm powers of a few, when we thought we weren't touching the IFO for any transitions. Looking at the lockloss time series did not show any obvious oscillations in any of the _IN1 or _OUT channels for the length degrees of freedom, so we don't know why we lost lock, but it doesn't seem to be loop oscillations caused by changing optical gain. Also, one time, I tried engaging Rana's "Lead 350" filter in FM7 of the CARM filter bank when we were on sqrtInvTrans for CARM, and the arm powers were around a few, but that caused the transmission signals to start to oscillate, and after one or two seconds we lost lock. We haven't tried the phase lead filter again, nor have we tried the Boost2 that is in FM8.
We increased the REFL11 analog gain from 0dB to 12dB, and then reset the dark offsets, but still weren't able to move CARM to normalized REFL11. Also, I changed the POP22 demod phase from 159 degrees to 139 degrees. This seems to be where the signal is maximized in the I-phase, while the arms are held off resonance, and also partway up the resonance peak.
We then decided that we should go back to the PRMI situation before trying to reduce the CARM offset further. We can robustly and quickly lock the PRMI on REFL33 while the arms are held off resonance with ALS. So, we have been trying to acquire on REFL33 I&Q, and then look at switching to REFL 165 I&Q. It seems pretty easy to get PRCL over to REFL165 I (while leaving MICH on REFL33 I). For REFL33, both matrix elements are +1. For PRCL on REFL165, the matrix element is -0.08. We have not successfully gotten MICH over to REFL 165 ever this evening.
We went back and set the REFL165 I&Q offsets so that the outputs after the demod phase were both fluctuating around 0. I don't know if they were around +/-100 because our dark offsets were bad or what, but we thought this would help. We were still able to get PRCL transitioned no problem, but even after remeasuring the MICH REFL33 vs. REFL165 relative gains, we still can't transition MICH. It seems like it's failing when the REFL33Q matrix element finally gets zeroed out, so we're not really getting enough signal in REFL165Q, or something like that, and throughout the rest of the transition we were depending entirely on REFL33Q.
This is our first time touching tables for Frequency Offset Locking.
The goal was to couple the 1064nm that leaks after the SHG crystal and couple it into the fiber before we run it along the length of the arm.
The fiber has been mounted at the end but there is no light coupled into the fiber as yet.
In the process, the following were done:
1. ETMY oplev servo disabled. This was enabled after the work.
2. NPRO laser power was reduced so that nothing gets burnt accidentally while putting things on the table. This was also reset after the work.
The arms could be locked to green and IR after the work. So I am hoping today's work will not affect locking.
BUT, what we really need (instead of just the DC sweeps) is the DC sweep with the uncertainty/noise displayed as a shaded area on the plot, as Nic did for us in the pre-CESAR modelling.
I've taken a first stab at this. Through various means, I've made an estimation of the total noise RMS of each error signal, and plotted a shaded region that shows the range of values the error signal is likely to take, when the IFO is statically sitting at one CARM offset.
I have not included any effects that would change the RMS of these signals in a CARM-offset dependent way. Since this is just a rough first pass, I didn't want to get carried away just yet.
For the transmission PDs, I measured the RMS on single arm lock. I also measured the incident power on the QPDs and thorlabs PDs for an estimate of shot noise, but this was ridiculously smaller than the in-loop RIN. I had originally thought of just plotting sensing noise for the traces (i.e. dark+shot), because the amount of seismic and frequency noise in the in-loop signal obviously depends on the loop, but this gives a very misleading, tiny value. In reality we have RIN from the PRC due to seismic noise, angular motion of the optics, etc., which I have not quantified at this time.
So: for this first, rough, pass, I am simply multiplying the single-arm transmission noise RMSs by a factor of 10 for the coupled RMS. If nothing else, this makes the SqrtInv signal look about as plausible as we practically find it to be.
For the REFL PDs, I misaligned the ITMs for a prompt PRM reflection for a worst-case shot noise situation, and took the RMS of the spectra. (Also wrote down the dark RMSs, which are about a factor of 2 lower). I then also multiplied these by ten, to be consistent with the transmission PDs. In reality, the shot noise component will go down as we approach zero CARM offset, but if other effects dominate, that won't matter.
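The shaded-region construction is just the DC sweep plus and minus the estimated RMS at each offset. A sketch with a toy error-signal shape (the dispersion curve, offsets, and RMS value are all invented, not the modeled IFO response):

```python
import numpy as np

# Illustrative DC sweep: error signal vs. CARM offset
offsets = np.linspace(-2e-9, 2e-9, 201)            # CARM offset (m)
signal = offsets / (offsets**2 + (0.5e-9) ** 2)    # toy dispersion shape
signal = signal / np.max(np.abs(signal))           # normalize

noise_rms = 0.1   # stand-in for the total RMS estimate (the entry scales
                  # the measured single-arm RMS by 10 for the coupled case)
upper = signal + noise_rms
lower = signal - noise_rms
# With matplotlib the band would be drawn as:
#   plt.fill_between(offsets, lower, upper, alpha=0.3)
```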
Enough blathering, here's the plot:
Now, in addition to the region of linearity/validity of the different signals, we can hopefully see the amount of error relative to the desired CARM offset. (Or, at least, how that error qualitatively changes over the range of offsets)
This suggests that we MAY be able to hop over to a normalized RF signal; but this is a pretty big maybe. This signal has the response of the quotient of two nontrivial optical plants, which I have not yet given much thought to; it is probably the right time to do so...
I realized today that I had been plotting the wrong thing for all of my transfer functions for the last few weeks!
The "CARM offsets" were correct, in that I was moving both ETMs, so all of the calculations were correct (which is good, since those took forever). In the plots, however, I was just plotting the transfer function between driving ETMX and the given photodiode. Since driving a single ETM is an admixture of CARM and DARM, those plots don't make any sense. Oops.
In these revised plots (and the .mat file attached to this elog), for each PD I extract from sigAC the transfer function between driving ETMX and the photodiode. I also extract the TF between driving ETMY and the PD. I then sum those two transfer functions and divide by 2. I multiply by the simple pendulum, which is my actuator transfer function to get to W/N, and plot.
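The per-PD recipe above can be sketched as follows, done in Python rather than the actual Optickle workflow; the two single-mirror transfer functions and the pendulum parameters are stand-ins for the real sigAC output and actuator model:

```python
import numpy as np

f = np.logspace(0, 3, 200)            # Hz
# Hypothetical sigAC-style transfer functions (placeholders):
# drive ETMX -> PD, and drive ETMY -> PD
tf_etmx = 1.0 / (1 + 1j * f / 200.0)
tf_etmy = 1.0 / (1 + 1j * f / 210.0)

# CARM = (ETMX + ETMY)/2, so sum the two single-mirror TFs and divide by 2
tf_carm = 0.5 * (tf_etmx + tf_etmy)   # W/m

# Simple pendulum actuator (f0 = 1 Hz, Q = 5 assumed, constants absorbed)
f0, Q = 1.0, 5.0
pend = 1.0 / (f0**2 - f**2 + 1j * f * f0 / Q)   # m/N, up to a scale factor

tf_WperN = tf_carm * pend             # what gets plotted, in W/N
```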
The antispring plots don't change in shape, but the spring side plots do. I think that this means that Rana's plots from last week are still true, so we can use the antispring side of TRX to get down to about 100 pm.
Here are the revised plots:
Last night I measured our RAM offsets and looked at how those affect the PRMI situation. It seems like the RAM is not creating significant offsets that we need to worry about.
Words here about data gathering, calibration and calculations.
Step 1: Lock PRMI on sideband, drive PRM at 675.13Hz with 100 counts (675Hz notches on in both MICH and PRCL). Find peak heights for I-phases in DTT to get calibration number.
Step 2: Same lock, drive ITMs differentially at 675.13Hz with 2,000 counts. Find peak heights for Q-phases in DTT to get calibration number.
Step 3: Look up actuator calibrations. PRM = 19.6e-9/f^2 meters/count and ITMs = 4.68e-9/f^2 meters/count. So, I was driving PRM about 4pm, and the ITMs about 20pm.
Step 4: Unlock PRMI, allow flashes, collect time series data of REFL RF signals.
Step 5: Significantly misalign ITMs, collect RAM offset time series data.
Step 6: Close PSL shutter, collect dark offset time series data.
Step 7: Apply calibration to each PD time series. For each I-phase of PDs, calibration is (PRM actuator / peak height from step 1). For each Q-phase of PDs, calibration is (ITM actuator / peak height from step 2).
Step 8: Look at DC difference between RAM offset and dark offset of each PD. This is the first 4 rows of data in the summary table below.
Step 9: Look at what peak-to-peak values of signals mean. For PRCL, I used the largest pk-pk values in the plots below. For MICH I used a calculation of what a half of a fringe is - bright to dark. (Whole fringe distance) = (lambda/2), so I estimate that a half fringe is (lambda/4), which is 266nm for IR. This is the next 4 rows of data in the table.
Step 10: Divide. This ratio (RAM offset / pk-pk value) is my estimate of how important the RAM offset is to each length degree of freedom.
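The arithmetic in Steps 3, 7, and 10 can be checked with a short script. Only the actuator calibrations, drive amplitudes, and drive frequency come from the steps above; the DTT peak heights and RAM offset counts are hypothetical placeholders:

```python
# Worked numbers for the calibration steps (drive at 675.13 Hz)
f_drive = 675.13  # Hz

# Step 3: actuator calibrations, meters/count
prm_act = 19.6e-9 / f_drive**2   # PRM
itm_act = 4.68e-9 / f_drive**2   # each ITM

prm_drive_m = 100 * prm_act      # 100 counts on PRM   -> ~4 pm
itm_drive_m = 2000 * itm_act     # 2000 counts on ITMs -> ~20 pm

# Step 7: counts -> meters calibration for each PD quadrature
peak_I, peak_Q = 50.0, 30.0      # DTT peak heights (hypothetical)
cal_I = prm_drive_m / peak_I     # m/count for I-phase (PRCL drive)
cal_Q = itm_drive_m / peak_Q     # m/count for Q-phase (MICH drive)

# Steps 9-10: importance ratio = (RAM offset) / (pk-pk value)
ram_offset_m = cal_I * 10.0      # hypothetical 10-count DC offset
half_fringe = 1064e-9 / 4        # MICH bright-to-dark, 266 nm
mich_ratio = ram_offset_m / half_fringe
```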
Plots (Left side is several PRMI flashes, right side is a zoom to see the RAM offset more clearly):
I very tentatively declare that this particular daqd crapfest is "resolved" after Jenne rebooted fb and daqd has been running for about 40 minutes now without crapping itself. Wee hoo.
I spent a while yesterday trying to figure out what could have been going on, but couldn't find anything. I did find an elog reporting that a previous daqd crapfest was only resolved by rebooting fb after a similar situation: there had been an issue that was fixed, daqd was still crapping itself, we couldn't figure out why, so we rebooted, and daqd started working again.
So, in summary, totally unclear what the issue was, or why a reboot solved it, but there you go.
Looks like I spoke too soon. daqd seems to be crapping itself again:
controls@fb /opt/rtcds/caltech/c1/target/fb 0$ ls -ltr logs/old/ | tail -n 20
-rw-r--r-- 1 4294967294 4294967294 11244 Oct 17 11:34 daqd.log.1413570846
-rw-r--r-- 1 4294967294 4294967294 11086 Oct 17 11:36 daqd.log.1413570988
-rw-r--r-- 1 4294967294 4294967294 11244 Oct 17 11:38 daqd.log.1413571087
-rw-r--r-- 1 4294967294 4294967294 13377 Oct 17 11:43 daqd.log.1413571386
-rw-r--r-- 1 4294967294 4294967294 11481 Oct 17 11:45 daqd.log.1413571519
-rw-r--r-- 1 4294967294 4294967294 11985 Oct 17 11:47 daqd.log.1413571655
-rw-r--r-- 1 4294967294 4294967294 13219 Oct 17 13:00 daqd.log.1413576037
-rw-r--r-- 1 4294967294 4294967294 11150 Oct 17 14:00 daqd.log.1413579614
-rw-r--r-- 1 4294967294 4294967294 5127 Oct 17 14:07 daqd.log.1413580231
-rw-r--r-- 1 4294967294 4294967294 11165 Oct 17 14:13 daqd.log.1413580397
-rw-r--r-- 1 4294967294 4294967294 5440 Oct 17 14:20 daqd.log.1413580845
-rw-r--r-- 1 4294967294 4294967294 11352 Oct 17 14:25 daqd.log.1413581103
-rw-r--r-- 1 4294967294 4294967294 11359 Oct 17 14:28 daqd.log.1413581311
-rw-r--r-- 1 4294967294 4294967294 11195 Oct 17 14:31 daqd.log.1413581470
-rw-r--r-- 1 4294967294 4294967294 10852 Oct 17 15:45 daqd.log.1413585932
-rw-r--r-- 1 4294967294 4294967294 12696 Oct 17 16:00 daqd.log.1413586831
-rw-r--r-- 1 4294967294 4294967294 11086 Oct 17 16:02 daqd.log.1413586924
-rw-r--r-- 1 4294967294 4294967294 11165 Oct 17 16:05 daqd.log.1413587101
-rw-r--r-- 1 4294967294 4294967294 11086 Oct 17 16:21 daqd.log.1413588108
-rw-r--r-- 1 4294967294 4294967294 11097 Oct 17 16:25 daqd.log.1413588301
controls@fb /opt/rtcds/caltech/c1/target/fb 0$
The times all indicate when the daqd log was rotated, which happens every time the process restarts. It doesn't seem to be happening so consistently, though; it's been 30 minutes since the last one. I wonder if it is somehow correlated with actual interaction with the NDS process. Does some sort of data request cause it to crash?
We've seen this before, but we need to figure out why POP22 decreases with decreased CARM offset. If it's just a demod phase issue, we can perhaps track this by changing the demod phase as we go, but if we are actually losing control of the PRMI, that is something that we need to look into.
In other news, nice work Q!
I've been able to repeatedly get off of ALS and onto (TRY-TRX)/(TRY+TRX). Nevertheless, lock is lost between arm powers of 10 and 20.
I do the transition at the same place as the CARM->SqrtInv transition, i.e. arm powers of about 1.0. Jenne started a script for the transition; I've modified it with settings that I found to work, and integrated it into the carm_cm_up script. I've also modified carm_cm_down to zero the DARM normalization elements.
I was thwarted repeatedly by the frequent crashing of daqd, so I was not able to take OLTFs of CARM or DARM, which would've been nice. As it was, I tuned the DARM gain by looking for gain peaking in the error signal spectrum. I also couldn't really get a good look at the lock loss events. Once the FB is behaving properly, we can learn more.
Turning over to difference in transmission as an error signal naturally squashes the difference in arm transmissions:
I was able to grab spectra of the error and control signals, though I did not take the time to calibrate them... We can see the high frequency sensing noise for the transmission derived signals fall as the arm power increases. The low frequency mirror motion stays about the same.
So, it seems that DARM was not the main culprit in breaking lock, but it is still gratifying to get off of ALS completely, given the strong dependence of its out-of-loop noise on PSL alignment.
In my last CARM loop modelling, all of the plots are phony, so don't trust them. The invbilinear function inside of StefanB's onlinefilter.m was making bogus s-domain representations of the digital filter coefficients.
So now I've just plotted the frequency response directly from the z-domain SOS coeffs using MattE's readFilterFile.m and FotonFilter.m.
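A minimal sketch of the same idea, done here in Python rather than the MATLAB tools named above: evaluate the response directly in the z-domain from the SOS coefficients, with no inverse-bilinear step (the biquad below is a made-up example, not an actual Foton filter):

```python
import numpy as np

fs = 16384.0  # CDS model rate, assumed

# Hypothetical second-order section (b0,b1,b2,a0,a1,a2), standing in for
# what readFilterFile.m / FotonFilter.m would return for one filter module
sos = np.array([
    [0.0201, 0.0402, 0.0201, 1.0, -1.5610, 0.6414],  # example low-pass biquad
])

def sos_response(sos, f, fs):
    """Evaluate the cascaded biquad response at frequencies f [Hz],
    directly in the z-domain (no s-domain representation needed)."""
    z = np.exp(1j * 2 * np.pi * f / fs)
    h = np.ones_like(z)
    for b0, b1, b2, a0, a1, a2 in sos:
        h *= (b0 + b1 / z + b2 / z**2) / (a0 + a1 / z + a2 / z**2)
    return h

f = np.logspace(0, np.log10(0.45 * fs), 500)  # stay below Nyquist
h = sos_response(sos, f, fs)
mag_db = 20 * np.log10(np.abs(h))
```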
Conclusions are less rosy. The anti-spring side is still easier to compensate than the spring side, but it starts to get hopeless below ~130 pm of offset, so there we really need to try to get to REFL_11/(TRX+TRY), pending some noise analysis.
** In order to get the 80 and 40 pm loops to be more stable I've put in a tweak filter called Boost2 (FM8). As you can see, it kind of helps for 80 pm, but its pretty hopeless after that.
I think these are all very helpful and interesting plots. We should see some better performance using either of the DC DARM signals.
Otherwise, the DC sweeps mistakenly indicate that many channels are good, whereas they really have an RMS noise larger than 100 pm due to low power levels or normalization by a noisy signal.
I've added (TRX-TRY)/(TRX+TRY) to the DC DARM sweep plots, and it looks like an even better candidate. The slope is closer to linear, and it has a zero crossing within ~10pm of the true DARM zero across the different CARM offsets, so we might not even need to use an intentional DARM offset.
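Why the normalized combination behaves so well can be seen in a toy time-series model (all numbers invented): a common buildup fluctuation multiplies both transmissions and divides out exactly, leaving only the differential part.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000

# Hypothetical transmissions: a shared power/buildup fluctuation (RIN)
# times a small differential (DARM-like) component
common = 1.0 + 0.1 * rng.standard_normal(n)   # common arm buildup
darm = 0.01 * rng.standard_normal(n)          # differential signal
trx = common * (1 + darm)
try_ = common * (1 - darm)                    # trailing _ avoids keyword

raw_diff = trx - try_                         # still carries the common RIN
normalized = (trx - try_) / (trx + try_)      # common factor cancels exactly
```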
I've been trying to figure out why daqd keeps crashing, but nothing is fixed yet.
I commented out the line in /etc/inittab that runs daqd automatically, so I could run it manually. Each time I run it (with ./daqd -c ./daqdrc while in c1/target/fb), it churns along fine for a little while, but eventually spits out something like:
[Thu Oct 16 11:43:54 2014] main profiler warning: 1 empty blocks in the buffer
[Thu Oct 16 11:43:55 2014] main profiler warning: 0 empty blocks in the buffer
[Thu Oct 16 11:43:56 2014] main profiler warning: 0 empty blocks in the buffer
[Thu Oct 16 11:43:57 2014] main profiler warning: 0 empty blocks in the buffer
[Thu Oct 16 11:43:58 2014] main profiler warning: 0 empty blocks in the buffer
[Thu Oct 16 11:43:59 2014] main profiler warning: 0 empty blocks in the buffer
[Thu Oct 16 11:44:00 2014] main profiler warning: 0 empty blocks in the buffer
[Thu Oct 16 11:44:01 2014] main profiler warning: 0 empty blocks in the buffer
[Thu Oct 16 11:44:02 2014] main profiler warning: 0 empty blocks in the buffer
GPS time jumped from 1097520250 to 1097520257
FATAL: exception not rethrown
I looked for time disagreements between the FB and the frontends, but they all seem fine. Running ntpdate only corrected things by 5ms. However, looking through /var/log/messages on FB, I found that ntp claims to have corrected the FB's time by ~111600 seconds (~31 hours) when I rebooted it on Monday.
Maybe this has something to do with the timing that the FB is getting? The FE IOPs seem happy with their sync status, but I'm not personally currently aware of how the FB timing is set up.
On Monday, Jamie suggested checking out the situation with FB's RAID. Searching the elog for "empty blocks in the buffer" also brought up posts that mentioned problems with the RAID.
I went to the JetStor RAID web interface at http://192.168.113.119, and it reports everything as healthy; no major errors in the log. Looking at the SMART status of a few of the drives shows nothing out of the ordinary. The RAID is not mounted in read-only mode either, as was the problem mentioned in previous elogs.
The daqd process on the frame builder looks like it is segfaulting again. It restarts itself every few minutes.
The symptoms remind me of elog 9530, but /frames is only 93% full, so the cause must be different.
Did anyone do anything to the fb today? If you did, please post an elog to help point us in a direction for diagnostics.
Q!!!! Can you please help? I looked at the log files, but they are kind of mysterious to me - I can't really tell the difference between a current (bad) log file and an old (presumably fine) log file. (I looked at 3 or 4 random, old log files, and they're all different in some ways, so I don't know which errors and warnings are real, and which are to be ignored).
The first thing I looked at tonight was locking the PRMI on REFL 165.
I locked the PRMI (no arms), and checked the REFL 165 demod phase. I also found the input matrix configuration that allowed me to acquire PRMI lock directly on REFL165. After locking the arms on ALS, I tried to lock the PRMI with REFL 165 and failed. So, I rechecked the demod phase and the relative transfer functions between REFL 165 and REFL 33. The end of the story is that, even with the re-tuned demod phase for CARM offset of a few nanometers, I cannot acquire PRMI lock on REFL 165, nor can I transition from REFL 33 to REFL 165. We need to revisit this tomorrow.
For the PRMI-only case, I ended up using 0.1's in the input matrix, and I added an FM 1 to the MICH filter bank that is a flat gain of 2.2, and then I had it trigger along with FM2.
I turned this FM1 off (and no triggering) while trying to transition from REFL33 to REFL165 in the PRFPMI case, but that didn't help. I think that maybe I need to remeasure my transfer functions or something, because I could put values into the REFL165 columns of the input matrix while REFL33 was still 1's, but I couldn't remove (even if done slowly) the REFL33 matrix elements without losing lock of the PRMI. So, we need to get the input matrix elements correct.
I also recorded some time series for a quick RAM investigation that I will work on tomorrow.
I left the PRM aligned, but significantly misaligned both ITMs to get data at the REFL port of the RAM that we see. I also aligned the PRMI (no arms) and let it flash so that I can see the pk-pk size of our PDH signals. I need to remember to calibrate these from counts to meters.
Raw data is in /users/jenne/RAM/ .
I have not tried any new DARM signals, since PRMI wasn't working with 3f2.
We should get to that as soon as we fix the PRMI-3f2 situation.
We're summarizing the discussions of the last few days as to the game plan for locking.
I've done some preliminary modeling to see if there is a good candidate for an IR DARM control signal that is available before the AS55 sign flip. From a DC sweep point of view, ASDC/(TRX+TRY) may be a candidate for further exploration.
As a reminder, both Finesse and MIST predict a sign flip in the AS55 Q control signal for DARM in the PRFPMI configuration, at a CARM offset of around 118pm.
The CARM offset where this sign flip occurs isn't too far off of where we're currently losing lock, so we have not had the opportunity to switch DARM control off of ALS and over to the quieter IR RF signal of AS55.
Here are simulated DC DARM sweep plots of our current PRFPMI configuration, with a whole bunch of potential signals that struck me.
Although the units of most traces are arbitrary in each plot (to fit on the same scale), each plot uses the same arbitrary units (if that makes any sense) so slopes and ratios of values can be read off.
In the 300 and 120pm plot, you can see that the zero crossing of AS55 is at some considerable DARM offset, and normalizing by TRX doesn't change much about that. "Hold on a second," I hear you say. "Your first plots said that the sign flip happens at around 120pm, so why does the AS55 profile still look bad at 50pm?!" My guess is that, probably due to a combination of Schnupp and arm length asymmetry, CARM offsets move where the peak power is in the DARM coordinate. This picture makes what I mean more clear, perhaps:
Thus, once we're on the other side of the sign flip, I'm confident that we can use AS55 Q without much problem.
Now, back to thoughts about an interim signal:
ASDC by itself does not really have the kind of behavior we want; but the power out of AS as a fraction of the ARM power (i.e. ASDC/TRX in the plot) seems to have a rational shape, that is not too unlike what the REFLDC CARM profile looks like.
Why not use POPDC or REFLDC? Well, at the CARM offsets we're currently at, POPDC is dominated by the PRC resonating sidebands, and REFLDC has barely begun to decline, and at lower CARM offsets, they each flatten out before the peak of the little ASDC hill, and so don't do much to improve the shape. Meanwhile, ASDC/TRX has a smooth response to points within some fraction of the DARM line width in all of the plots.
Thus, as was discussed at today's meeting, I feel it may be possible to lock DARM on ASDC/(TRX+TRY) with some offset, until AS55 becomes feasible.
(In practice, I figure we would divide by the sum of the powers, to reduce the influence of the DARM component of just TRX; we don't want to have DARM/DARM in the error signal for DARM)
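That choice of denominator can be checked to first order with a toy expansion (the coefficients below are illustrative, not from the simulation): the sum TRX+TRY is DARM-free to first order, while dividing by TRX alone injects a spurious DARM slope into the error signal.

```python
import numpy as np

# First-order expansion of the transmitted powers around a DARM offset d
d = np.linspace(-1e-3, 1e-3, 201)   # fractional DARM perturbation
P0 = 1.0
trx = P0 * (1 + 2 * d)              # TRX picks up a linear DARM term
try_ = P0 * (1 - 2 * d)             # TRY loses the same term
asdc = 0.1 * P0 * np.ones_like(d)   # ASDC taken flat over this tiny range

sig_single = asdc / trx             # DARM/DARM: spurious slope in d
sig_sum = asdc / (trx + try_)       # sum cancels DARM to first order

slope_single = np.polyfit(d, sig_single, 1)[0]   # ~ -0.2
slope_sum = np.polyfit(d, sig_sum, 1)[0]         # ~ 0
```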
Two caveats are:
[Code and plots live in /svn/trunk/modeling/PRFPMI_radpressure]
I have plotted measured data from last night (elog 10607) with a version of the result from Rana's simulink CARM loop model (elog 10593).
The measured data that was taken last night (open circles in plots) is with an injection into MC2 position, and I'm reading out TRX. This is for the negative side of the digital CARM offset, which is the one that we can only get to arm powers of 5ish.
The modeled data (solid lines in plots) is derived from what Rana has been plotting the last few days, but it's not quite identical. I added another excitation point to the simulink model at the same place as the "CARM OUT" measurement point. This is to match the fact that the measured transfer functions were taken by driving MC2. I then asked matlab to give me the transfer function between this new excitation point (CARM CTRL point) and the IN1 point of the loop, which should be equivalent to our TRX_OUT. So, I believe that what I'm plotting is equivalent to TRX/MC2. The difference between the 2 plots is just that one uses the modeled spring-side optical response, and the other uses the modeled antispring-side response.
I have zoomed the X-axis of these plots to be between 30 Hz - 3 kHz, which is the range that we had coherence of better than 0.8ish last night in the measurements. The modeled data is all given the same scale factor (even between plots), and is set so that the lowest arm power traces (pink) line up around 150 Hz.
I conclude from these plots that we still don't know what side of the CARM resonance we are on.
I have not plotted the measurements from the positive side of the digital CARM offset, because those transfer functions were to sqrtInvTRX, not plain TRX, whereas the model only is for plain TRX. There should only be an overall gain difference between them though, no phase difference. If you look at last night's data, you'll see that the positive side of the CARM offset measured phase has similar characteristics to the negative offset, i.e. the phase is not flat, but it is roughly flat in both modeled cases, so even with that data, I still say that we don't know what side of the CARM resonance we are on.
I have modified the Dataviewer launcher (which runs when you either click the icon or type "dataviewer" in the terminal).
A semi-old problem was that it would open in the folder /users/Templates, but our dataviewer templates live in /users/Templates/Dataviewer_Templates. Now this is the folder that dataviewer opens into. This was not related to the upgrade to Ubuntu 12, but will be overwritten any time someone does a checkout of the /ligo/apps/launchers folder.
A problem that is related to the Ubuntu 12 situation, which we had been seeing on Ottavia and Pianosa for a few weeks, was that the variable NDSSERVER was set to fb:8088, which is required for cdsutils to work. However, dataviewer wants this variable to be set to just fb. So, locally in the dataviewer launcher script, I set NDSSERVER=fb. NB: I do not export this variable, because I don't want to screw up the cdsutils. This may need to be undone if we ever upgrade our Dataviewer.
The He-Ne laser oplev setup was swapped with a fiber-coupled diode laser from W Bridge. The laser module and its power supply are sitting on a bench in the east side of the SP table.
Here are the same plots, but the legend also includes the arm power that we expect at that CARM offset.
Here is what the arm powers look like as a function of CARM offset according to Optickle. Note that the cyan trace's maximum matches what Q has simulated in MIST with the same high losses. For illustration I've plotted the single arm power, so that you can see it's normalized to 1. Then, the other traces are the full PRFPMI buildup, with various amounts of arm loss. The "no loss" case is with 0ppm loss per ETM. The "150 ppm loss" case is with 150 ppm of loss per ETM. The "high loss" case is representative of what Q has measured, so I have put 500 ppm loss for ETMX and 150 ppm loss for ETMY.
And, the transfer functions (all these, as with all TFs in the last week, use the "high loss" situation with 500ppm for ETMX and 150ppm for ETMY).