I tried to restart c1ioo because I can't live without him.
I couldn't ssh or ping c1ioo, so I did a hardware reboot.
c1ioo came back, but now the ADC/DAC status indicators are all red.
c1ioo was OK until 3am last night when I left the control room. I don't know what happened, but StripTool on zita tells me that the MC lock went off at around 4pm.
c1ioo was still all red on the CDS status screen, so I tried a couple of things.
mxstreamrestart (which, on the front ends, aliases to sudo /etc/init.d/mx_stream restart) didn't help.
sudo shutdown -r now didn't change anything either....c1ioo came back with red everywhere and 0x2bad on the IOP
Eventually, doing as Jamie did for c1sus in elog 6742 (rtcds stop all, then rtcds start all) fixed everything. Interestingly, when I tried rtcds start iop, I got the error "Cannot start/stop model 'iop' on host c1ioo", so I just tried rtcds start all, and that worked fine: it started c1x03, then c1ioo, then c1gcv.
The LSCoffsets script, and any others depending on tdsavg, will not work until this is fixed.
LSCoffsets is working again.
tdsavg now needs "LIGONDSIP=fb" to be specified (it didn't used to). Jamie just put this in the global environment, so tdsavg should work like normal again.
Also, the rest of the LSCoffsets script (really the subcommand offset2) was in tcsh syntax, so I created offset3, which is in bash syntax.
Now we can use LSCoffsets again.
We will just open the BS chamber.
- PRM flipping
- PR2, PR3 flipping
- PRC suspensions
- Clipping check in PRC
What do you mean by PR2, PR3 flipping? They are (supposed to be) flat mirrors, so obviously they should be installed correctly, but they won't change the mode matching in a huge way if they're backwards, right?
For the PRM, I recommend checking (a) the arrow inscribed on the thinner side of the optic and (b) that the arrow *actually* points to the HR side. I'm pretty sure I installed all the optics with the arrow pointing away from the OSEMs, but I never did a thorough check that the arrow always actually pointed to the HR coated side. I don't remember any optics where I said "hmmm, that's funny, the arrow is pointing backwards", but nor did I write down that I had checked.
Also, hopefully the PRM is correct. If however it's not, that means that all of the magnets are glued onto the HR side, and we'll have to redo all of the magnet gluing. The guiderods should be fine, but all 6 magnets would need redoing. If we were very, very careful and didn't break any of the magnets off of the dumbbells, it's a 24 hour turnaround due to drying time. Since inevitably we break magnets away from dumbbells, conservatively we should think about a 48 hour turnaround.
No significant change was found inside the vacuum. We still see clipping at the Faraday, and we also saw clipping at the BS coil holder. Using PZT1, we could make it better, but this might be causing the PRC problem -- the BS is inside the PRC, too.
Yuta just told Jamie and me that when he and Koji were looking at things yesterday, they saw that the beam spot was roughly at the center of the PRM, but was clipping on the lower OSEM holder plate on the BS. This indicates that the beam spot on the BS is much too low. The easiest way I can see this happening is poor pitch pointing with the tip tilts, which we unfortunately don't have active control over.
Recall elog 3425, where I mentioned some pretty bad pitch pointing after a TT was moved from the cleanroom, to the chamber, back to the cleanroom. I think that we may need to check the pitch pointing at the chamber before installing tip tilts in the future.
Many photos were taken by many different people....most of the fuzzy ones are by yours truly (doing a reach-around to get to hard-to-reach places), so sorry about that.
I put all the photos from yesterday and today into 6 new albums on Picasa: https://picasaweb.google.com/foteee
The album titles are generally descriptive, and I threw in a few comments where it seemed prudent.
Big note: The tip tilt on the ITMX table does, in fact, have the arrow pointing in the correct direction. Photo is in the TT album from today.
All the front-ends are showing 0x4000 status and have lost communication with the frame builder. It looks like the timing skew is back again. The fb is ahead of real time by one second, and strangely nodus is ahead of real time by something like 5 seconds! I'm looking into it now.
I was bad and didn't read the elog before touching things, so I did a daqd restart, and mxstream restart on all the front ends, but neither of those things helped. Then I saw the elog that Jamie's working on figuring it out.
I was trying to use a new BLRMS c-code block that the seismic people developed, instead of Mirko's clunkier version, but putting it in crashed c1sus.
I reverted to a known good c1pem.mdl, and Jamie and I did a reboot, but c1sus is still funny - none of the models are actually running.
rtcds restart all - all the models are happy again, c1sus is fine.
But, we still need to figure out what was wrong with the c-code block.
Also, the BLRMS channels are listed in a DAQ Channels block inside of the (new) library part, so they're all saved with the new CDS system which became effective as of the upgrade. (I had made Mirko's copy-paste BLRMS into a library part, including a DAQ Channels block, before trying the c-code. This is the known-working version to which I reverted, and which we are currently running.)
The reason I started looking at BLRMS and c1sus today was that the BLRMS StripTool was totally wacky. I finally figured out that the pemepics hadn't been BURT restored, so none of the channels were being filtered. It's all better now, and will be even better soon when Masha finishes updating the filters (she'll make her own elog later).
The c1oaf model hasn't been running for a few days (since the leap second problems we were having last week). I had looked into it, but finally figured it out (with Jamie's help) today.
The BURT restore has to be given to the model during startup, but for whatever reason it wasn't BURT restoring until *after* the model had already failed to start. The symptoms were: no 'heartbeat' for the oaf model, no connection to the fb, NO SYNC on the GDS screen, 0x4000. The BURT restore button was green, which threw me off the scent, but that's just because it did, in fact, get set, just way too late.
I ended up looking in the dmesg of the lsc computer, and the last set of stuff was several lines of "[3354303.626446] c1oaf: Epics burt restore is 0". Nothing else was written after that. Jamie pointed out that this meant the BURT restore wasn't getting sent before the model unloaded itself and decided not to run.
The solution: restart the model, and manually click the BURT restore button as soon as you're able (after everything comes back from being white). We used to have to do this, but then there was a "fix", which apparently isn't super robust and failed for the oaf (even though it used to work just fine). Bugzilla report submitted.
Masha is moving the seismometers, so they are all off right now. Were they on, they would see a bunch of noise from the guy outside the 40m front door who is installing a safety shower.
Masha and Yaakov - this is an excellent opportunity for you guys to test out your triangulation stuff! Also, it might give a lot of good data times for the learning algorithms.
Maybe you should also put out the 3 accelerometers that Yaakov isn't using (take them off their cube, so they can be placed separately), then you'll have 6 sensors for vertical motion. Or you can leave the accelerometers as a cube, and have 4 3-axis sensors (3 seismometers + accelerometer set).
Yuta and I bought some new BS mounts so that we could use the 4th port of the beamsplitters which are combining the PSL green and the arm transmitted beam, just before the Beat PD for each arm. I just placed the Yarm one, and have aligned the light onto both the Beat PD and the Trans DC PD.
I'll do the Xarm after lunch.
Somehow we got an excitation going on the BS OpLevs, and even though Yuta thought he might have accidentally started it, he couldn't find where, so we couldn't stop it.
Since we don't have anything written on the wiki, I ended up calling Joe to remind me how to clear test points:
controls@allegra:~ 0$ diag -l -z
diag> tp clear * *
test point cleared
diag> awg clear 21.*
The tp clear clears all test points everywhere. The awg clear wouldn't let me do a universal clear, so I just did #21, which is the SUS model. So if you want to kill excitations on a specific model, you must find its DCU ID (it's in the top right corner of the GDS status screen for each model).
I assume it's the rock tumbler, although it could be something else, but the MC has had trouble staying locked yesterday and today (yesterday Yaakov and I went over there and they were doing stuff almost constantly - it's super loud over there), and today even the PMC has fallen out of lock twice. I just relocked it again, since it went out of lock just after Journal club started.
Anyhow, I think this will be good data for Masha, and then also for the Masha+Yaakov triangulation project.
The PMC was unlocked earlier this morning, for ~20min, presumably from the rocks next door. I relocked it.
Then, a few minutes ago, the PMC suddenly decided that it wouldn't lock with a transmission greater than ~0.7. I found that the laser temp adjust on the FSS screen was at -1.9ish. I put it back to zero, and now the PMC locks happily again. I think we got into a PSL mode-hopping temperature region by accident.
The names of the DoF filters in the ASS loop were wrong. The filters themselves were correct (low pass filters at super low freq, for the Lock-in), but the names were backward.
Our convention is to name filters "poles:zeros", but they had been "z:p". The names of FM1 in all the DoF filter banks are now fixed.
I was trying to lock and look at the ASS for the Yarm, but the transmitted power was oscillating very near 1Hz. Eventually I looked at the mode cleaner, and it was also moving around at a similar frequency. I took spectra of the ETMY SUS damping feedback signals, and they (POS, PIT, YAW) saw this 1Hz motion too (see attached plots...same data, one is a zoom around 1Hz).
As a first place to start, I turned off the WFS, which immediately stopped the MC oscillation. Turning the WFS back on, the oscillation didn't come back. I'm not sure what happened to make the WFS bad, but I perhaps had the ASS dither lines on (I've had them on and off, so I'm not sure), although turning off the dither lines didn't make the WFS any better.
As an aside, the MC refl with the WFS off was ~1.5, rather than the ~0.5 with the WFS on, which means that the PSL beam and the MC axis are not well matched.
The script ....../scripts/ASS/MC/mcassMCdecenter takes ~17 minutes to run. The biggest time sink is measuring a no-offset-added-to-coil-gains set, in between each measurement set with the coil gain offsets. This is useful to confirm that the nominal place hasn't changed over the time of the measurement, but maybe we don't need it. I'm leaving it for now, but if we want to make this faster, that's one of the first things to look at.
spot positions in mm (MC1, MC2, MC3 pitch; MC1, MC2, MC3 yaw):
[3.5716862287669224, 3.937869278443594, 2.9038058599576595, -6.511822708584913, -0.90364583591421999, 4.8221820002404279]
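For readability, the six numbers can be paired with their labels (a trivial sketch; the ordering is exactly as stated above):

```python
# Pair the measured MC spot positions (mm) with their optic/DoF labels.
# Ordering follows the measurement output: MC1,2,3 pitch, then MC1,2,3 yaw.
spots = [3.5716862287669224, 3.937869278443594, 2.9038058599576595,
         -6.511822708584913, -0.90364583591421999, 4.8221820002404279]
labels = ["MC1 pit", "MC2 pit", "MC3 pit", "MC1 yaw", "MC2 yaw", "MC3 yaw"]
spot_table = dict(zip(labels, spots))

for name, mm in spot_table.items():
    print(f"{name}: {mm:+.3f} mm")
```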
There doesn't seem to be any spot measurement stuff for any other optics, so I'm going to try to replicate the MC spot measuring script for the Michelson to start.
I learned a little bit of python scripting while looking at the videoswitch script, and I made a video medm screen.
Each monitor has a dropdown menu for all the common cameras we use (medm only lets you put a limited # of lines on a dropdown menu...when we want to add things like OMCR or RCT, we'll need to add another dropdown set)
Each monitor also has a readback to tell you what is on the TV. So far, the quads only say "QUAD#", not what the 4 components are.
I put a set of epics inputs in the PEM model, under a subsystem with top-names VID to represent the different monitors. The readbacks on the video screen look at these, with the names corresponding to the numbers listed in the videoswitch script. The videoswitch script now does an ezcawrite to these epics inputs so that even if you change the monitors via command line, the screen stays updated.
For example, since MC2F's camera is plugged in to Input #1 of the video matrix, if you type "./videoswitch MON1 MC2F", the script will write a "1" to the channel "C1:VID-MON1", and the screen will show "MC2F" in the Mon1 cartoon.
This required a quick recompile of the PEM model, but not the framebuilder since these were just epics channels.
There is also a dropdown menu for "Presets", which right now only include my 2 arm locking settings.
All of the dropdowns just call an iteration of the videoswitch script.
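As a sketch of that bookkeeping (the function and dict here are hypothetical; the real videoswitch script is what actually does the ezcawrite), the readback update amounts to mapping a camera name to its matrix input number and writing that number to the monitor's EPICS channel:

```python
# Hypothetical sketch of the videoswitch readback logic described above.
# Camera -> video matrix input numbers; MC2F on input #1 is from the elog,
# other cameras would be added as they are wired up.
CAMERA_INPUTS = {"MC2F": 1}

def videoswitch_update(monitor, camera):
    """Return the EPICS channel and value that would be written
    (the real script uses ezcawrite to do the actual write)."""
    channel = f"C1:VID-{monitor}"
    value = CAMERA_INPUTS[camera]
    return channel, value

# e.g. "./videoswitch MON1 MC2F" writes a 1 to C1:VID-MON1
print(videoswitch_update("MON1", "MC2F"))
```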
The MC unlocked ~20 min ago, correlated with 2 consecutive earthquakes in Mexico. The MC came back fine after a few minutes, but the WFS never engaged. I turned them on by hand. I think that Yuta mentioned once that he also had to turn the WFS on by hand. There may be a problem in the unlock/relock catching that needs to be looked at, to make sure the WFS come back on automatically.
Also, someone (Masha and I) should look at the seismic BLRMS. I have suspected for a few days that they're not telling us everything that we want to know. Usually, if there's an earthquake close enough / big enough that it pops the MC out of lock, it is clear from the BLRMS that that's what happened, but right now it doesn't look like much of anything....just kind of flat for hours.
Jamie and I were doing some locking, and we found that the Yarm green wasn't locking. It would flash, but not really stay locked for more than a few seconds, and sometimes the green light would totally disappear. If the end shutter is open, you can always see some green light on the arm transmission cameras. So if the shutter is open but there is nothing on the camera, that means something is wrong.
I went down to the end, and indeed, sometimes the green light completely disappears from the end table. At those times, the LED on the front of the laser goes off, then it comes back on, and the green light is back. This also corresponds to the POWER display on the lcd on the laser driver going to ~0 (usually it reads ~680mW, but then it goes to ~40mW). The laser stays off for 1-2 seconds, then comes back and stays on for 1-2 minutes, before turning off for a few seconds again.
Koji suggested turning the laser off for an hour or so to see if letting it cool down helps (I just turned it off ~10min ago), otherwise we may have to ship it somewhere for repairs :(
We looked at the different outputs of the MC servo board to make sure they make some kind of sense. As per my elog 6625, the names of the channels were wrong, but we wanted to confirm that we have something sensible.
Currently, OUT1 of the servo board is called "MC_F" and the SERVO out is called "MC_SERVO". We looked at the spectrum of each, and the transfer function between them.
You can see that in addition to a 2kHz pole, MC_L also seems to have a zero-pole pair between 10 and 100 Hz.
Also, while cleaning things up in the models, I fixed the names of these MCL/MCF channels. OUT1 is now called MC_L, and is connected to ADC0_0, and SERVO is called MC_F and is connected to ADC0_6. Both MC_L and MC_F go to the RFM, and thence on to the OAF. MC_L (which used to be mis-named MC_F) still goes both to the MCS model for actuation on MC2, and to the OAF for MC-OAF-ing. Right now MC_F is unused in the OAF model, but we can change that later if we want.
I turned the Yend laser back on....it hasn't turned itself off yet, but I'm watching it. As long as we leave the shutter open, we can watch the C1:ALS-Y_REFL_DC value to see if there's light on the diode.
I added a subblock to the IOO model, and gave it a top_names of PSL, so the channels show up as C1:PSL-......
So far, there are just 2 channels acquired, C1:PSL-FSS_MIXER and C1:PSL-FSS_FAST, since those were already connected to the ADC. Those signals are both on the DAQ OUT of the FSS board in the rack. They are DQ channels now too.
So there was a problem with the channel name C1:PSL-FSS_FAST, which conflicts with an existing slow channel. This was causing daqd to fail to start (shockingly, with an appropriate error message!). I renamed the channel to be C1:PSL-FSS_NPRO until we come up with something better.
After the change, everything worked and the fb came back.
I wanted to check that the calibration of the MC ASS lockins was sensible, before trusting them forevermore.
To measure the calibration, I took a 30sec average of C1:IOO-MC_ASS_LOCKIN(1-6)_I_OUT with no misalignment.
Then I stepped MC1 pitch by 10% (adding 0.1 to the coil output gains) and remeasured the lockin outputs.
calibration = 2.63 / (Lockin1_noStep - Lockin1_withStep)
Repeat for the others, with Lockin2 = MC2 pit, Lockin3 = MC3 pit, and Lockins 4-6 = MC1-3 yaw.
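As a minimal sketch (the two 30 s averages are passed in by hand here, rather than read from the channels), the per-lockin calibration is just:

```python
# Minimal sketch of the lockin calibration described above: 2.63 divided by
# the change in the 30s-averaged lockin I output when a 10% pitch step
# (0.1 added to the coil output gains) is applied.
MAGNET_HALF_SIDE = 2.63  # half the side of the square between the 4 magnets

def lockin_calibration(avg_no_step, avg_with_step):
    """Calibration for one C1:IOO-MC_ASS_LOCKIN*_I_OUT channel."""
    return MAGNET_HALF_SIDE / (avg_no_step - avg_with_step)
```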
The number 2.63 comes from: half the side of the square between all 4 magnets. Since our offsets are in pitch and yaw, we want the distance between the line connecting the lower magnets and the center line of the optic, and similar for yaw. Presumably if all of the magnets are in the correct place, this number is the same for all magnets. The optics are 3 inches in diameter. I assume that the center of each magnet is 0.9mm from the edge of the optic, since the magnets and dumbbells are 1.9mm in diameter. Actually, I should probably assume that they're farther than that from the edge of the optic, since the edge of the dumbbell ~touches the edge of the flat surface, but there's the bevel which is ~1mm wide, looking normal to the surface of the optic. Anyhow, what I haven't done yet (planned for tomorrow...) is to figure out how well we need to know all of these numbers.
We shouldn't care more than ~100um, since the spots on the optics move by about that much anyway.
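The geometry argument above can be checked numerically; this sketch assumes the 2.63 figure is in cm, which is what the 3-inch-optic numbers give:

```python
import math

# Geometry check for the 2.63 figure: the optics are 3" in diameter, and
# the magnet centers sit ~0.9 mm in from the edge (magnets are 1.9 mm dia.).
optic_radius = 3 * 25.4 / 2              # mm, for a 3-inch optic
magnet_offset = 0.9                      # mm from the edge to a magnet center
r_magnet = optic_radius - magnet_offset  # radius of the circle of magnets

# The four face magnets form a square inscribed in that circle; half the side
# of the square is the distance from the optic center line to a magnet row.
half_side_mm = r_magnet / math.sqrt(2)
print(round(half_side_mm / 10, 2))  # in cm
```

With these numbers the half-side comes out to ~26.3 mm, i.e. 2.63 in cm, matching the calibration constant above.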
For now, I get the following #'s for the calibration:
Lockin1 = 7.83
Lockin2 = 9.29
Lockin3 = 8.06
Lockin4 = 8.21
Lockin5 = 10.15
Lockin6 = 6.39
The old values were:
C1:IOO-MC_ASS_LOCKIN1_SIG_GAIN = 7
C1:IOO-MC_ASS_LOCKIN2_SIG_GAIN = 9.6
C1:IOO-MC_ASS_LOCKIN3_SIG_GAIN = 8.3
C1:IOO-MC_ASS_LOCKIN4_SIG_GAIN = 7.8
C1:IOO-MC_ASS_LOCKIN5_SIG_GAIN = 9.5
C1:IOO-MC_ASS_LOCKIN6_SIG_GAIN = 8.5
The new values measured tonight are pretty far from the old values, so perhaps it is in fact useful to re-calibrate the lockins every time we try to measure the spot positions?
... it was not possible to get the cavity locked in green. So we decided to do the first measurement with infrared locked only.
When we sat down to align the Yarm to the green, the green light was happy to flash in the cavity, but wouldn't lock, even after Jan had tweaked the mirrors such that we were flashing the TEM00 mode. When we went down to the end to investigate, the Universal PDH box was saturating both negative and positive. Turning down the gain knob all the way to zero didn't change anything, so I put it back to 52.5. Curiously, when we unplugged the Servo OUT monitor cable (which was presumably going to the rack to be acquired), the saturation happened much less frequently. I think (but I need to look at the PDH box schematic) that that's just a monitor, so I don't know why that had to do with anything, but it was repeatable - plug cable in, almost constant saturation....unplug cable, almost no saturation.
Also, even with the cable unplugged, the light wouldn't flash in the cavity. When I blocked the beam going to the green REFL PD (used for the PDH signal), the light would flash.
Moral of the story - I'm confused. I'm going to look up the PDH box schematic before going back down there to investigate.
The short version:
Rana and Koji pointed out to us that the MCR camera view was still not good. There were 2 problems:
(1) Diagonal stripes through the beam spot. Yuta and I saw this a week or 2 before he left, but we were bad and didn't elog it, and didn't investigate. Bad grad students.
(2) Clipping of the left side of the beam (as seen on the monitors). This wasn't noticed until Yaakov's earlier camera work, since the clipped part of the beam wasn't on the monitor.
The fix for #1 was to partially close the iris which is the first "optic" the beam sees on the AP table after leaving the vacuum.
The "fix" for #2 was that the wrong beam has been going to the camera for an unknown length of time. We picked the correct beam, and all is well again.
We moved the 10% BS that splits the main beam into the (MC REFL PD) path and the (MCR camera + WFS) path. It looked like the transmission through there was close to the edge of the BS. We didn't think that this was causing the clipping that we saw on the camera (since when we stepped MC1 in Yaw, the beam spot had to move a lot before we saw any clipping), but it seemed like a good idea to make the beam not near the edge of the optic, especially since, being a 2" optic, there was plenty of room, and we were only using ~half of the optic. We didn't touch anything else in the WFS path, since that looks at the transmission through this BS, but we had to realign the beam onto MC REFL.
The long version:
(1) The fix for #1 was to partially close the iris which is the first "optic" the beam sees on the AP table after leaving the vacuum. It looks like that's why the iris was there in the first place. When we found it this evening, the iris was totally open, so my current theory is that someone was on the AP table doing something, and accidentally bumped the handle for the iris, then left it completely open when they realized that they had touched it. I think Steve had been doing something on the AP table around then, but since Yuta and I didn't elog our observation (bad grad students!), I can't correlate it with any of Steve's elogs. We were not able to find exactly where this "glow" that the iris is used to obscure comes from, but we traced it as far as the viewport, so there's something going on inside.
(2) The "fix" for #2 was that the wrong beam has been going to the camera for an unknown length of time. We picked the correct beam, and all is well again.
We spent a long time trying to figure out what was going on here. Eventually, we moved the camera around to different places (i.e. right before the MC REFL PD, with some ND filters, and then we used a window to pick off a piece of the beam right as it comes out of the vacuum before going through the iris, put in some ND filters, then the camera). We thought that the beam right before the MC REFL PD was also being clipped, indicating that the clipping was happening in the vacuum (since the only common things between the MC REFL PD path and the camera path are the iris, which we had removed completely, and a 2" 10% beam splitter). However, when we looked at a pickoff of the main beam before any beamsplitters, we didn't see any evidence of clipping. I think that when we had the camera by MC REFL, we could have been clipping on the ND filters that I had placed there. I didn't think to check that at the time, and it was kind of a pain to mush the camera into the space, so we didn't repeat that.

Then we went back to the nominal MCR camera place to look around. We discovered that the Y1 which splits the camera path from the WFS path has a ghost beam which is clipping on the top right side of the optic (as you look at its face), and this is the beam that was going to the camera (it's a Y1 since we only want a teensy bit of light to go to the camera; the rest goes to the WFS). There is another beam, the main beam, going through the center of the optic, which is the one which also reflects and heads to the WFS. This is the beam which we should have on the camera. Yaakov put the camera back in its usual place, and put the correct beam onto the center of the camera. We did a check to make sure that the main beam isn't clipping: when I step MC1 yaw, the beam must move ~1.5mm before we start to see any clipping on the very edge of the beam.
To see / measure this, we removed the optic which directs the beam to the camera, and taped an IR card to the inside of the black box. This is about the same distance as to the nominal camera position, which means that the beam would have to move by 1.5mm on the camera for us to see any clipping. The MC REFL PD is even farther from MC1 than our IR card, so the beam has to fall off the PD before we see the clipping. Thus, I'm not worried about any clipping for this main beam. Moral of the story, if you made it this far: there wasn't any clipping on any beams going to either the WFS or the MC REFL PD, only on the beam going to the camera.
The PMC was locking right away, but its transmission would not go up. Finally I got it back up by moving the "sticky" DC Gain slider up and down a few times.
The FSS laser temp adjust was at -2.9, and the PMC won't lock happily unless you bring this back to 0. The symptom that this is happening is that the PMC reflection camera is totally saturated, but the PMC still looks like it's locked on 00.
Jamie went out to look at IP POS, and the beam was *way* off. Even though our alignment is still rough, we're kind of close right now, so Jamie put the beam back on the QPD, but we need to recenter IPPOS after we get good alignment.
I was looking into why we don't have any light on the PSL pointing QPDs, and it turns out that it has been this way since ~June 29th 2012. I need to look back in the elog to see what was going on on the PSL table that day, but I suspect it has something to do with Yuta and me working on the beat setup, since this is all very near that area.
Attached is a plot of the loss of signal on the QPDs.
We lost IP POS on the same day as we lost the PSL pointing. See 2nd attachment. The _S_Calc is the sum, and it almost looks like the light got near the edge of the diode and just kept falling off until it was gone. The sum started getting lower on May 16th, and then was gone on June 29th.
So far I've gone back as far as Jan 2012, but I still haven't found any data where we *did* have light on IP ANG. Sad.
UPDATE, UPDATE (like P.P.S.): June 29th was the day of the vent...see elog 6895.
There was one NOT MARKED SOS with two broken magnets on its face. This is labeled ???
While I'm not sure what specific optic this is, I think it's an older optic. (a) All of the new optics we got from Ramin were inscribed with their number. (b) This optic appears to have a short arrow scribe line (about the length of the guiderod), and then no scribe line (that I could see through the glass dish) on the other side. The new optics all have a long arrow scribe line, ~1/2 the full width of the optic, and have clear scribe lines on the opposite side.
As part of trying to figure out what is going on with the ASS, I wanted to figure out what filters are installed on which lockins.
Each "DoF"(1-6) has a zpk(0.1,0.0001,1)gain(1000), which is a lowpass with 60dB of gain at DC, and unity gain at high frequencies.
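Reading that design string as a zero at 0.1 Hz and a pole at 0.0001 Hz (the overall-gain normalization convention is my assumption here), the stated shape checks out numerically:

```python
import math

# Check that a zero at 0.1 Hz over a pole at 0.0001 Hz gives ~60 dB of gain
# at DC and unity gain at high frequency (the gain is assumed normalized so
# that the high-frequency magnitude is 1).
f_zero, f_pole = 0.1, 0.0001  # Hz

def mag(f):
    """Magnitude of (s + 2*pi*f_zero)/(s + 2*pi*f_pole) at frequency f."""
    s = 2j * math.pi * f
    return abs((s + 2 * math.pi * f_zero) / (s + 2 * math.pi * f_pole))

dc_gain = mag(0)     # = f_zero / f_pole = 1000, i.e. 60 dB
hf_gain = mag(1e4)   # ~1 well above the zero
print(20 * math.log10(dc_gain), hf_gain)
```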
For the lockins, since there are so many, I made a spreadsheet to keep track of them (attached).
So, what's the point? The point is, I think that all of the LOCKIN_I filter modules should be the same, with a single low pass filter. The Q filter banks don't matter, since we don't use those signals, and the signals are grounded inside the model. The phase of each lockin was / should be tuned such that all of the interesting signal goes to I, and nothing goes to Q. The SIG filter modules seem okay, in that they're all the same, except for their band pass frequency. I just need to check to see what frequency the ASS scripts are trying to actuate at, to make sure we're bandpassing the correct things.
We are trying to figure out what the story is with the ASS, and in order to make it more human parse-able, we cleaned up the c1ass.mdl.
So far, we have made no changes to how the signals are routed. The local oscillators from each lockin still get summed, and go directly to the individual optics, and the demodulated signals from each lockin go through the sensing matrix, the DoF filters, then the control output matrix, and then on to the individual optics. So far, so good.
Much of the cleanup work involved making a big library part, which is then called once for PIT and once for YAW in the ass top level, rather than having 2 code-copies, which gives Jamie conniptions. Inside the library part, GoTo and From tags were used, rather than having all the lines cross over one another in a big spaghetti mess.
One of the big actual changes to the ass was some name changes. Rather than having mysterious "ASS-LOCKIN(1-30)", they are now named something like "ASS-PIT_LOCKIN_ETMY_TRY", indicating that this is in the pitch set, actuating on ETMY, and looking at TRY for the demodulated signal. The "DOF" channels are similar to what they were, although we would like to change them in the future.....more on this potential change later. Previously they were "ASS-DOF(1-10)", but now they are "ASS-PIT_DOF(1-5)" and "ASS-YAW_DOF(1-5)". This channel naming, while it makes things make more sense, and is easier to understand, means that all of the ASS scripts need to be fixed. However, they all needed updating / upgrading anyway, so this isn't the end of the world.
This channel name fixing also included updating names of IPC (shmem/dolphin/rfm things) blocks, which required matching changes in the SUS, RFM and LSC models. All 4 models (ASS, SUS, RFM, LSC) were recompiled and installed. They all seem fine, except there appears to be a dolphin naming mismatch between OAF and SUS that showed up when the SUS was recompiled, which presumably it hadn't been in a while. We need to figure this out, but maybe not tonight. Den, if you have time, it would be cool if you could take a look at the OAF and SUS models to make sure the names match when sending signals back and forth.
We also had a long chat about the deeper meaning of the ASS.
What should we be actuating on, and what should we be sensing? A potential thought is to rename our DOF channels to actual DoF names: input axis translation, input axis angle, cavity axis translation, cavity axis angle. Then actuate the dither lines on a cavity degree of freedom, sense the influence on TRX, TRY and an LSC PD (as is currently done), then actuate on the cavity degree of freedom.
Right now, it looks like the actuation is for individual optics, the sensing is the influence on TRX, TRY and an LSC PD, then actuate on a cavity degree of freedom. So the only change with the new idea is that we actuate in the DoF basis, not the optics basis. So the Lockin local oscillators would go through the control output matrix. This makes more sense in my head, but Jamie and I wanted to involve more people in the conversation before we commit.
The next question would be: how do we populate the control output matrices? Valera (or someone) put something in there, but I couldn't find anything on the elog about how those values were measured/calculated/came-to-be. Any ideas? If we want to dither and then push on isolated degrees of freedom, we need to know how much moving of which optics affects each DoF. Is this something we should do using only geometry, and our knowledge of optic placements and relative distances, or is this measurable?
Jamie re-redid the ASS model a few hours ago.
I have just compiled it and restarted c1ass. (The model from last night is currently called c1ass3.mdl.) I had to delete and re-insert the goto and from tags for the LSC signal coming in from the shmem. For some reason, it kept claiming that the inputs using the from tags were not connected, even when I redid the connections. Finally, deleting and dragging in new goto and from tags made the model happy enough to compile. Whatever. I'm going to let Jamie do the svn-ing, since he's the one who made the changes. Before I had figured out that it was the tags, I was concerned that the shmem was unhappy (so there was no signal connecting to the input of the goto tag, and that was somehow bad); anyhow, I recompiled the LSC model to re-create the shmem sender, but that had no effect, since that wasn't the problem.
The change from last night is that now the library parts are by DoF. There is only one matrix in each library part, before the servo filters. Now we can DC-actuate on a single mode (ETM or ITM, pitch or yaw), and see how it affects all 4 sensors (the demodulated signals from the lockins). We need to measure the sensing matrix to go from the several sensors to the servo input.
I wanted to try out the ASS tonight, but I wanted some kind of screens thrown together first so I would know what I was doing. Turns out screens take longer than I thought. Am I surprised? Not really.
They're probably at the ~85% mark now, but I should be able to try out the ASS tomorrow I think.
As we do not have legs for the Trillium, I was advised to use shims to adjust the leveling. However, they produce extra resonances at ~30 Hz plus harmonics, and coherence is lost at these frequencies.
Brian Lantz / Dan Clark are looking around their lab to see if they forgot to ship the feet with the T-240. They had taken the feet off to put it in a pod.
I was trying to load some filters into the ASS so that I can try it out, but for some reason the filter banks aren't working - clicking the on/off buttons doesn't do anything, filters (which exist in the .txt file generated by foton) don't load.
I've emailed cds-announce to see if anyone has any ideas.
The PRM was pointing totally the wrong way, so there was no light on the oplev PD. I restored the PRM, turned the gains back to (0.15, -0.3) as per Yuta's elog 6952, and it seems just fine to me.
I want to check the data from last night / the weekend to see when the mispointing happened, but dataviewer can't connect to the fb, since Jamie is still working his magic. I'm pretty sure I restored all of the optics after Eric finished playing with MICH Friday night, but it's possible that I forgot one, I suppose. If it wasn't me, then I'm curious when it happened.
When the network / fb went bad this afternoon, I had been working on the ASS model, shortening the names of the filter banks to fix the problem from elog 7092. I wanted to finish working on that, so the ASS model is now rebuilt with slightly shorter names in the filterbanks (which fixes the problem of the filter banks not working).
I mentioned this to Jamie the other day, but here's the error that you get when the GoTo/From tags aren't working:
>>rtcds make c1ass
### building c1ass...
Parsing the model c1ass...
IPC 9 8 is C1:LSC-ASS_LSC
IPC 9 8 is ISHME
IPC 10 9 is C1:RFM-LSC_TRX
IPC 10 9 is IPCIE
IPC 11 10 is C1:RFM-LSC_TRY
IPC 11 10 is IPCIE
INPUT XARM_LSC_in is NOT connected
INPUT YARM_LSC_in is NOT connected
***ERROR: Found total of ** 2 ** INPUT parts not connected
make: *** [c1ass] Error 255
make: *** [c1ass] Error 1
I don't know why these tags weren't working, but there was a GoTo tag on the output of the LSC shmem block, and then Froms on each of the XARM_LSC_in and YARM_LSC_in. The other day I played around with a bunch of different things (grounding inputs, terminating outputs, whatever), but finally replacing the tags with identical ones freshly taken from CDS_PARTS made it happy.
I wrote new setup, on and off scripts for the arm ass. They take the arm as an argument, so it's the same script for both arms. Scripts are in ...../scripts/ASS/ , and have been checked in to the 40m svn.
So far the on script doesn't really do anything, since I haven't chosen values for the CLKGAINs of the lockins. The old values were 30 for lockins 12, 14, 27, 29 and 250 for lockins 7, 9, 22, 24. Unfortunately, I have no memory of which lockin means what in the old numbered system. I'll have to look that up somehow. Or, just dither the optics using some value and look at the spectrum to see the resulting SNR and just pick something that gives me reasonable SNR.
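As a sanity check on the "dither and look at the spectrum" plan, here's a rough sketch of estimating the SNR of a dither line from a spectrum. All the numbers (sample rate, line frequency, amplitude, noise level) are made up; the point is just that the line height over the noise floor tells you whether a candidate CLKGAIN is big enough.

```python
import numpy as np

fs, T = 2048.0, 8.0                # sample rate [Hz] and duration [s], made up
t = np.arange(0, T, 1/fs)
f_dither = 103.0                   # hypothetical dither line frequency [Hz]
amp = 30.0                         # candidate dither amplitude, arbitrary units

rng = np.random.default_rng(0)
signal = amp*np.sin(2*np.pi*f_dither*t) + rng.normal(0, 5.0, t.size)

# One-sided amplitude spectrum, normalized so a sine of amplitude A shows
# up as a peak of height ~A (for an on-bin frequency)
spec = np.abs(np.fft.rfft(signal)) / (t.size/2)
freqs = np.fft.rfftfreq(t.size, 1/fs)

peak = spec[np.argmin(np.abs(freqs - f_dither))]
noise_floor = np.median(spec)      # crude noise floor estimate
snr = peak / noise_floor
print(f"line amplitude ~{peak:.1f}, SNR ~{snr:.0f}")
```

With real data you'd use the DTT spectrum instead, of course, and turn the gain up until the line clears the floor by whatever factor you're comfortable with.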
I modified the ASS model slightly:
* Added an overall gain to the ASS_DOF2 library part, between the matrix and the servo inputs so we can do soft startups. Self - remember that the main ASS screen needs to be modified to reflect this!
* Rearranged the order that the demodulated signals go into the matrix. I hadn't paid attention, and the old ordering had the transmission (TRX/TRY) demod signals interleaved with the LSC demod signals. I've changed it to be all the TR signals, then all the LSC signals. I think this makes more sense, since we will use these inputs separately, so now they're on different halves of the matrix.
Previously, medmrun didn't accept arguments to pass along to the script it was going to run. Jamie has graciously taken a moment from fixing the computer disaster to help me update the medmrun script.
Now the ASS scripts are call-able from the screen.
Jan and Manasa are going to elog about their work later, but it involved putting a BS/window/some kind of pick off in front of the MC Trans QPD, so the total light on the MC Trans QPD is now ~16000 rather than ~26000 counts. I changed the threshold in the MC autolocker to 5000, so now the MC Trans PD must see at least 5000 counts before the autolocker will engage the boosts, WFS, etc. Actually, I believe this threshold should have been several thousand counts all along, but when I went in there, it was set to 500 counts, for low-power MC mode during a vent. It had never been put back to a higher, nominal value after the vent.
I turned on the ASS, without closing the loops, to try to measure the sensing matrix.
The Yarm was locked (Eric did a nice job earlier - he'll ELOG ABOUT IT before he goes home!), and I used an LO CLKGAIN of 300 on all of the TRY Lockins. Then I put on and took away a 10% offset in pitch, but it's almost impossible to see the difference.
The attached is a truly awful screenshot, but you can kind of see what's going on. The big steps are me increasing the LO gain, but around "0" on the x-axis I changed the pitch offset from 10% away to nominal. Since there are such big oscillations, the change is basically non-existent. Grrrr. I'll look at it again tomorrow, since I have an exciting bike ride home ahead of me....
From the log, I couldn't understand what was done.
The procedure we should perform is:
Then you can start measuring the sensing matrix. At which part did the attempt fail?
Cavity started out aligned pretty well, but not 100%. Transmission was ~0.8. Perhaps this was part of the problem.
I realize now that you mention it, it was totally amateur hour of me to only look at the lockin outputs on StripTool (plus POY and TRY on Dataviewer), and not look at TRY on DTT...or any spectra at all. Not so intelligent. I could see some fluctuation of TRY on Dataviewer that corresponded to me turning on the oscillators, as well as the spot wiggling on the camera view of ETMYT a teeny bit.
When applying a 10% misalignment to ETMY Pit (by adding 0.1 to the Pit components of the output matrix, as is done in the MC spot position calibration), I could see that there was a small jump in the StripTool trace, but it was much smaller than the ambient fluctuations of the output.
I just looked back and realized that I must have forgotten to attach my screenshot, but it's saved on a desktop on Rossa. It would be better if I had attached the data, but from the screenshot you can see that the average of the lockin output signal didn't change very much over the last several minutes of the measurement, while the fluctuations (with no misalignment offsets) are pretty big, maybe ~10% or 15% the size of the signal. When I added the misalignment to one mirror (ETMY PIT), there was a very small jump in the lockin signal, but it was much, much smaller than the ambient fluctuations. Perhaps a long average would yield a "real" value, but by eye I can't see a discernible difference in the average value of the lockin outputs.
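For reference, here's a quick sketch (with made-up numbers) of why a long average should pull the small jump out of the fluctuations: the error on the mean shrinks like 1/sqrt(N), so even a step that's ~10x smaller than the RMS noise becomes resolvable after enough samples.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 20000          # number of samples averaged (hypothetical)
step = 0.1         # small jump from the misalignment, arbitrary units
noise = 1.0        # ambient fluctuation RMS, ~10x the step

before = rng.normal(0.0, noise, n)    # lockin output, nominal alignment
after  = rng.normal(step, noise, n)   # lockin output, with misalignment

# Single samples can't distinguish the two states, but the error on the
# mean is only noise/sqrt(n), much smaller than the step.
err = noise/np.sqrt(n)
print(before.mean(), after.mean(), err)
```

This is just the usual statistics argument, not a claim about what the real lockin data will do, but it suggests averaging for a few minutes rather than eyeballing StripTool.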
My plan is to do as you say, dithering all 4 optics, and misaligning a single optic's single DoF (Pit or Yaw), and seeing how that misalignment affected each of the sensors (the lockin outputs). Then put that DoF back to nominal, and misalign a different DoF, rinse and repeat.
Okay, so this is a little stream-of-consciousness-y, and you're going to think I'm really dumb, but I just realized that I haven't set the phase of the lockin demodulators yet. So I think I need to dither the optics, and go through each of the sensors, and adjust the phase until the peak in TRY in DTT is maximized for the I phase, and minimized for the Q phase (since we use the I-output). Bah. Bad Jenne.
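Just to write down the phase-setting arithmetic: if the line shows up split between I and Q, the demod phase that rotates it all into I is atan2(Q, I). A minimal sketch with made-up demod values:

```python
import numpy as np

# Hypothetical demodulator outputs with the signal split between I and Q
# because the demod phase hasn't been set yet. True signal magnitude = 1.0.
I_raw, Q_raw = 0.6, 0.8

# The phase rotation that puts the full signal into I:
phi = np.arctan2(Q_raw, I_raw)

# Rotated outputs -- I is maximized, Q is nulled
I_new =  I_raw*np.cos(phi) + Q_raw*np.sin(phi)
Q_new = -I_raw*np.sin(phi) + Q_raw*np.cos(phi)
print(I_new, Q_new)   # ~1.0, ~0.0
```

In practice you'd do this per-sensor by tweaking the phase while watching the peak in DTT, as described above, rather than computing it in one shot, but the target is the same: all signal in I, none in Q.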
Once you've got C1:LSC-TRY_OUT as large as possible, you've locked the cavity.
Both the transfer function and the coherence look good above roughly 30 Hz, but do not look correct at low frequencies. There's also a roll-off in the measured transfer function around 200 Hz, while in the model the magnitude of the transfer function drops only after the corner frequency of the cavity, around several kHz. I have attached a plot of the roughly analogous transfer function from the DARM control loop model (the gains are very large due to the large arm cavity gain and the ADC conversion factor of 2^16/(20 V) ). The measured and the modeled transfer functions are slightly different in that the model does not include the individual mirrors, while the excitation was imposed on ITMY for the measurement.
The next steps are to figure out what's happening in DTT with the transfer function and coherence at low frequencies, and to understand the differences between the model and the measurement.
The cavity is actually "locked" as soon as the feedback loop is successfully closed. One easy-to-spot symptom of this is that, as you mentioned elsewhere in your post, TRY is a ~constant non-zero, rather than spikey (or just zero). Once you've maximized TRY, you've got the cavity locked, and the alignment optimized.
We didn't get to this part of "The Talk" about the birds, the bees, and the DTTs, but we'll probably need to look into increasing the amplitude of the excitation by a little bit at low frequency. DTT has this capability, if you know where to look for it.
It would be great to see the model and your measurement overlayed on the same plot - they're easier to compare that way. You can export the data from DTT to a text file pretty easily, then import it into Matlab and plot away. Can you check and maybe repost your measured plots? I think they might have gotten attached as text files rather than images. At least I can't open them.
Koji pointed out that I was being silly, and rather than actually misaligning the optics (by, say, changing their IFO Align sliders) I was changing the location of the actuation node by changing the coil output gains. Now I see nice signals at the I_OUT of each of the demodulators (so far I've only looked at the YARM).
I've measured and inverted the matrix by taking the nominal values of the demodulator outputs when the optics are all by-hand optimally aligned, then one-by-one misaligning an optic's angle (pitch or yaw), and looking at the demod outputs that result. Repeat with each misalignment DoF for each of the 4 rows of the matrix. Then I set the pit/yaw coupling elements of the matrix to zero. Then invert the matrix, put it in, and see what happens. So far, the yaw DoFs converged to zero, but the pitch ones didn't. I'll play with it more and think some more tomorrow.
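A sketch of the matrix procedure, with entirely made-up numbers (the real values are whatever the demod outputs do when each DoF is misaligned relative to the aligned state):

```python
import numpy as np

# Hypothetical sensing matrix: rows = sensors (lockin demod outputs),
# columns = misaligned DoFs (e.g. ITM pit, ETM pit, ITM yaw, ETM yaw).
# Each column is (demod outputs with that DoF misaligned) minus
# (demod outputs when everything is aligned by hand).
S = np.array([[1.00, 0.30, 0.05, 0.02],
              [0.40, 1.20, 0.01, 0.03],
              [0.06, 0.02, 0.90, 0.50],
              [0.01, 0.04, 0.30, 1.10]])

# Zero the pitch/yaw coupling elements, as described above
S_clean = S.copy()
S_clean[0:2, 2:4] = 0.0
S_clean[2:4, 0:2] = 0.0

# Invert to get the matrix that maps sensor signals back to DoF errors
S_inv = np.linalg.inv(S_clean)

# Check: a pure misalignment of the first DoF should come out on only
# that DoF after the inverse matrix
err = S_inv @ S_clean[:, 0]
print(np.round(err, 6))   # ~[1, 0, 0, 0]
```

If the pitch DoFs aren't converging while yaw does, one suspect would be the discarded pit/yaw coupling terms, or a sign/normalization in one of the pitch columns.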
Simplant for ETMX was left on, so I didn't have control of ETMX. Not cool. The IFO should be left in its 'regular' state (all optics restored to saved alignments, no simplant, LSC/ALS/ASS loops off) if you're not actively working on it.
What this did point out, however, is that we need a big ol' indicator on the IFO_ALIGN / LSC / Watchdog / Overview screens to indicate that simplant is on for a particular optic, or whatever simplant might be controlling that takes away 'regular' control. I probably would have continued being frustrated and confused for a lot longer if Eric didn't mention that simplant could have been left on. Thanks Eric!
Symptoms, which perhaps would have eventually pointed me to simplant, were that there was some weird moving beam on the AS camera that was flashing fabry-perot fringes, and the POX signal looked like junk. After some looking around, I noticed that ETMX, while it claimed to have all the damping loops on, and the oplev on, was swinging a lot (rms levels of 4 - 7, rather than the usual < 2 ). I said something out loud, and Eric suggested looking at Simplant. After putting Simplant back to Reality, things are back to normal.