The HEPA filter was running at 90% of max. I reduced it to 20%, and the acoustic noise went down.
The range of the MCL oscillations has also decreased, but fluctuations in the frequency range 10-100 Hz are still present.
MCL is much more stable now.
The main page of the connection error monitor can be a diagram of computers and models with their connections. If there is an error in a connection, the corresponding arrow turns from green to red.
Each arrow is a link to more detailed information about the channels.
The PMC behavior has not changed.
A mic in the PSL showed that fluctuations in the MCL in the frequency range 10-100 Hz are due to acoustic noise. I measured MCL and MCL/PSL-mic coherence twice, 300 seconds apart.
Surprisingly, the acoustic noise level did not change, but the MC is sometimes more sensitive to acoustic noise, sometimes less.
I wrote an auto-locker for the PMC. It is called autolocker_pmc, is located in the scripts directory, and is committed to SVN. I connected it to the channel C1:PSL-PMC_LOCK. It is currently running on rosalba. The MC autolocker runs on op340m, but I could not execute the script on that machine:
./autolock_pmc: Stale NFS file handle.
I did several tests; usually, the script locks the PMC in a few seconds. However, if the PMC DC output has drifted significantly, it might take longer, as discussed below.
while the autolocker is enabled, monitor the PSL-PMC_PMCTRANSPD channel
if TRANS is less than 0.4, start locking:
disengage the PMC servo by enabling PMC TEST 1
change PSL-PMC_RAMP until TRANS is higher than 0.4 (*)
engage the PMC servo by disabling PMC TEST 1
else sleep for 1 sec
(*) is tricky. If RAMP (the DC offset) is specified, then TRANS will be oscillating in the range (TRANS_MIN, TRANS_MAX). We are interested only in TRANS_MAX. To make sure we estimate it correctly, the TRANS channel is read 10 times and the maximum value is chosen. This works well.
The next problem is to find the proper range and step for varying the DC offset RAMP. Of course, we could choose the maximum range (-7, 0) and the minimum step 0.0001, but it would take too long to find the proper DC offset. For that reason the autolocker tries to find a resonance close to the previous DC offset, in the range (RAMP_OLD - delta, RAMP_OLD + delta), with initial delta 0.03 and step 0.003. If a resonance is not found in this region, delta is multiplied by a factor of 2, and so on. During this process the RAMP range is constrained not to be wider than (-7, 0).
There might be a better way to do this. For example, use a gradient descent algorithm and control the step adaptively. I'll do that if this implementation turns out to be too slow.
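The two tricks above (max-of-10 TRANS reads, expanding RAMP search) can be sketched roughly like this. This is a minimal sketch, not the actual script: the helper functions `read_trans` / `read_trans_at` stand in for EPICS channel access to C1:PSL-PMC_PMCTRANSPD and C1:PSL-PMC_RAMP, and the names are made up.

```python
# Sketch of the autolocker's TRANS-max estimate and expanding RAMP search.
# read_trans / read_trans_at are hypothetical stand-ins for EPICS caget/caput.

def trans_max(read_trans, n=10):
    """TRANS oscillates in (TRANS_MIN, TRANS_MAX) while the servo is off,
    so read it n times and keep the maximum."""
    return max(read_trans() for _ in range(n))

def scan_ramp(ramp_old, read_trans_at, threshold=0.4,
              delta0=0.03, step=0.003, lo=-7.0, hi=0.0):
    """Look for a resonance near the previous DC offset; double the search
    window each time nothing is found, clamping the range to (lo, hi)."""
    delta = delta0
    while True:
        start, stop = max(lo, ramp_old - delta), min(hi, ramp_old + delta)
        ramp = start
        while ramp <= stop:
            if read_trans_at(ramp) > threshold:
                return ramp          # resonance found at this DC offset
            ramp += step
        if start <= lo and stop >= hi:
            return None              # scanned the full allowed range
        delta *= 2
```

A gradient-descent version, as suggested above, would replace the inner linear scan with an adaptive step toward increasing TRANS.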
I've disabled autolocker_pmc for the night.
I just sat down in the control room, and discovered the PMC (and everything else) unlocked. I relocked the PMC, but the MC wasn't coming back. After a moment of looking around, I discovered that the WFS were on, and railing. I ran the "turn WFS off" script, and the MC came back right away, and the WFS came on as they should.
We need to relook at the WFS script, or the MC down script, to make sure that any time the MC is unlocked, no matter why it unlocked, the WFS output is off and the filter histories are cleared.
The only script that can currently take this action is the MC autolocker. If that is disabled first and the PMC unlocks later, the WFS will not be turned off. During the last round of discussions we had about the autolocker script, sometime last Nov, we decided that too much automation is not desirable and that the autolocker should be kept as simple as possible.
We tried to locate the sixteen analog output channels we need to control the four tip-tilts (four coils on each). We have only 8 available channels on the C1SUS machine.
So we will have to plug in a new DAC output card on one of the machines, and it would be logical to do that on the C1IOO machine, as the active tip-tilts are conceptually part of the IOO sub-system. We have to procure this card if we do not already have it. We have to make an interface between this card's output and a front panel on the 1X2 rack. We may have to move some of the sub-racks on the 1X2 rack to accommodate this front panel.
We checked out the availability of cards (De-whitening, Anti-imaging, SOS coil drivers) yesterday. In summary: we have all the cards we need (and some spares too). As the De-whitening and Anti-imaging cards each have 8 channels, we need only two of each to address the sixteen channels. And we need four of the SOS coil drivers, one for each tip-tilt. There are 9 slots available on the C1IOO satellite expansion chassis (1X1 rack), where these eight cards could be accommodated.
There are two 25-pin feed-throughs where the PZT drive signals currently enter the BS chamber. We will have to route the SOS coil driver outputs to these two feed-throughs.
Inside the BS chamber, there are cables which carry the PZT signals from the chamber wall to the table top, where they are anchored to a post (L-bracket). We need a 25-pin-to-25-pin cable (~2m length) to go from the post to the tip-tilt (one for each tip-tilt). And then, of course, we need quadrapus cables (requested from Rich) which fit inside each tip-tilt to go to the BOSEMs.
I am summarising it all here to give an overview of the work involved.
Align Ygreen beam - JENNE, YUTA
Arm cavity sweeps, mode scan - JENNE, YUTA
ASS doesn't run on Ubuntu or CentOS! Fix it! - YUTA, JENNE, JAMIE's help
Input matrices, output filters to tune SUS. Check after upgrade. - JENNE
POX11 whitening is not toggling the analog whitening??? - JAMIE, JENNE, KOJI
Decide on plots for 40m Summary page - DEN, STEVE, JENNE, KOJI, JAMIE, YUTA, SURESH, RANA, DUNCAN from Cardiff/AEI
Look into PMC PZT drift - PZT failing? Real MC length change? - JENNE, KOJI, YUTA
THE FULL LIST:
cd /opt/rtcds/caltech/c1/burt/autoburt/today/ -
New self checking emergency back up lights were installed at 13 locations in the 40m lab.
I locked the PMC and the MC followed instantly.
I've put 5 EM172 microphones close together and measured their signals and coherence. They are plugged into accelerometer channels.
Then I suspended the microphones around the MC: 2 at MC2, 2 at MC1,3 and 1 at the PSL. The amplifier box is above the STS readout box.
The microphone close to the PSL gave strong coherence with MC_F, as we had already seen using the Blue Bird microphone.
ACC_MC2_XY channels <=> MC2 microphones
ACC_MC1_XY channels <=> MC1,3 microphones
ACC_MC1_Z channel <=> PSL microphone
I fixed the problem Jamie pointed out in elog #6657 and #6659.
What I did:
1. Created the following template files in the /opt/rtcds/userapps/trunk/sus/c1/medm/templates/ directory.
To open these files, you have to define $(OPTIC) and $(DCU_ID).
For SUS_SINGLE_TO_COIL_X_X.adl, you also have to define $(FILTER_NUMBER). See SUS_SINGLE_TO_COIL_MASTER.adl.
2. Fixed the following screens so that they open SUS_SINGLE.adl.
Now same as pianosa and rosalba. I'll upgrade allegra on Friday.
I've created transmission error monitors in the rfm, oaf, sus, lsc, scx, scy and ioo models. I tried to get data from every channel transmitted through PCIe and RFM. I also included some shared-memory channels.
The MEDM screen is under FE STATUS -> TEM. It shows 16384 for the channels that come from the simulation plant. Others are 0; that's fine.
Actually, it looks like we're not quite done here. All the paths in the SUS_SINGLE screen need to be updated to reflect the move. We should probably make a macro that points to /opt/rtcds/caltech/c1/screens, and update all the paths accordingly.
Hey, folks. Please remember to commit all changes to the SVN in a timely manner. If you don't, multiple changes will get lumped together into one commit and we won't have a good log of the changes we're making. You might also end up just losing all of your work. SVN COMMIT when you're done! But please don't commit broken or untested code.
pianosa:release 0> svn status | grep -v '^?'
Very nice, Yuta! Don't forget to commit your changes to the SVN. I took the liberty of doing that for you. I also tweaked the file a bit, so we don't have to specify IFO and SYS, since those aren't going to ever change. So the arguments are now only: OPTIC=MC1,DCU_ID=36. I updated the sitemap accordingly.
Yuta, if you could go ahead and modify the calls to these screens in other places that would be great. The WATCHDOG, LSC_OVERVIEW, MC_ALIGN screens are ones that immediately come to mind.
And also feel free to make cool new ones. We could try to make a simplified version of the suspension screens now being used at the sites, which are quite nice.
The typical sign of a dying gas laser is that it glows for a few minutes only. The power supplies are fine.
Two new JDS Uniphase 1103P lasers (NT64-104) are arriving on Monday, May 21.
Yesterday I swapped in a new He-Ne laser with output power 3.5 mW. The return spot on the QPD is large, ~6 mm in diameter, at 20,500 counts.
The spot size reduction requires a layout similar to the ETMX oplev.
I've started to create channels and an medm screen to monitor the errors that occur during the transmission through the RFM model. The screen will show the amount of lost data per second for each channel.
Not all channels are ready yet. For the channels already created, the number of errors is 0, which is good.
We need more organized MEDM screens. Let's use macros.
What I did:
1. Edited /opt/rtcds/userapps/trunk/sus/c1/medm/templates/SUS_SINGLE.adl using replacements below;
sed -i s/#IFO#SUS_#PART_NAME#/'$(IFO)$(SYS)_$(OPTIC)'/g SUS_SINGLE.adl
sed -i s/#IFO#SUS#_#PART_NAME#/'$(IFO)$(SYS)_$(OPTIC)'/g SUS_SINGLE.adl
sed -i s/#IFO#:FEC-#DCU_ID#/'$(IFO):FEC-$(DCU_ID)'/g SUS_SINGLE.adl
sed -i s/#CHANNEL#/'$(IFO):$(SYS)-$(OPTIC)'/g SUS_SINGLE.adl
sed -i s/#PART_NAME#/'$(OPTIC)'/g SUS_SINGLE.adl
2. Edited sitemap.adl so that it opens SUS_SINGLE.adl with macro arguments (e.g. OPTIC=MC1,DCU_ID=36) instead of opening ./c1mcs/C1SUS_MC1.adl.
3. I also fixed white blocks in the LOCKIN part.
Now you don't have to generate every suspension screen. Just edit SUS_SINGLE.adl.
Things to do:
- fix all other MEDM screens that open suspension screens, so that they open SUS_SINGLE.adl
- make SUS_SINGLE.adl more cool
I rebooted rossa and the X server has stalled.
I've soldered EM172 microphones to BNC connectors to get data from them.
Then I built an amplifier for them. The circuit is
I built 6 such circuits inside 1 box. It needs +15 V on A3 and GND on A2. The A1 power channel is not used.
A LISO model for this circuit was created, and the simulation results were compared to measurements of each channel.
The measured noise curve (green) is the SR785's own noise.
The ETMX oplev had a 6 mm diameter beam on the QPD. I relayed the beam path with 2 lenses to get back to a 3 mm beam on the QPD.
A BRC 037-100 bi-concave lens and a PCX 25 200 VIS do the job. Unfortunately, the concave lens has the AR coating for 1064 nm.
The uncoated bi-concave lens was replaced by an AR-coated one, KBC 037-100 AR.14, resulting in a 35% count increase on the QPD.
The aluminum posts and SS clamps for the green glass traps in vacuum will be out of the shops by June 6, 2012 at the latest.
Guralp 1 is on the south side of IOO chamber
It doesn't look all that different, but the first image didn't have that much lit up in it to begin with.
This is totally cool! You can see that the OSEM lights are almost entirely gone in the subtracted image.
Can you switch to trying with one of the *TM*F cameras? (ITMXF, ITMYF, ETMYF, ETMXF) They tend to have more background, so there should be a more dramatic subtraction. Den or Suresh should be able to lock one of the arms for you.
I acquired 2 raw frames of MC2 using "/users/mjenson/sensoray/sdk_2253_1.2.2_linux/capture -n -s 720x480 -f 1", one while the laser was off the mode cleaner and another while it was on:
I then used "/users/mjenson/sensoray/sdk_2253_1.2.2_linux/imsub/display-image.py" to generate bitmaps of the raw images, which I then subtracted using the Python Imaging Library to generate a new image:
It doesn't look all that different, but the first image didn't have that much lit up in it to begin with. I should be able to write a script that does all of this without needing to generate new files in between acquisition and subtraction.
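A minimal sketch of that subtraction step, assuming Pillow (the maintained PIL fork); the file paths and function name here are hypothetical placeholders, not the actual script:

```python
# Subtract the laser-off frame from the laser-on frame, pixel by pixel.
# Paths are placeholders; the real frames come from the Sensoray capture tool.
from PIL import Image, ImageChops

def subtract_frames(on_path, off_path, out_path):
    """Per-pixel (on - off), clipped at zero, so only the light that
    appeared when the laser was on survives."""
    on = Image.open(on_path).convert("L")
    off = Image.open(off_path).convert("L")
    diff = ImageChops.subtract(on, off)  # clips negative values to 0
    diff.save(out_path)
    return diff
```

Doing it in memory like this avoids writing intermediate bitmaps between acquisition and subtraction.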
ETMX sus damping restored
ETMY sus damping restored.
I've compared offline Wiener filtering with online static + adaptive filtering for MC_F, with GUR1_XYZ and GUR2_XYZ as witness signals.
Note: the online filter works up to 32 Hz (an AI filter at 32 Hz is used). There is no subtraction above this frequency; MC_F was simply measured at different times for the online and offline filtering. The difference in MC_F in the frequency range 20-100 Hz showed up again, as before during the microphone testing. One can see it in 1 minute. Something is noisy.
1. FIR -> IIR conversion produces errors. Right now I'm using a VECTFIT approximation with 16 poles (split into 2 filter banks); this is not enough. I tried to use 50 poles split into 5 filter banks, but this scheme does not work: the zpk -> sos conversion produces errors and the resulting filter is completely wrong.
2. Actuator TF. VECTFIT works very well here - we have only 1 resonance. However, it should be measured precisely.
3. Account for the AA and AI filters, which rotate the phase by ~10 degrees at 1-10 Hz.
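For reference, the zpk -> sos conversion and bank splitting in item 1 might look like this. A sketch assuming scipy; the function name and bank size are assumptions, not the actual code:

```python
# Convert a fitted zeros/poles/gain model to second-order sections and
# chunk the sections into filter banks (the bank size here is an
# assumption, not the real front-end limit).
from scipy import signal

def zpk_to_banks(z, p, k, sections_per_bank=8):
    sos = signal.zpk2sos(z, p, k)  # pairs poles/zeros into biquads
    return [sos[i:i + sections_per_bank]
            for i in range(0, len(sos), sections_per_bank)]
```

Splitting after the sos pairing (rather than splitting the pole list first and converting each chunk) lets zpk2sos choose numerically sensible pole/zero pairings across the whole filter.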
No leaving the OAF running until you're sure (sure-sure, not kind of sure, not pretty sure, but I've enabled guardians to make sure nothing bad can happen, and I've been sitting here watching it for 24+ hours and it is fine) that it works okay.
The OAF (both adaptive and static paths) was left enabled, which was kicking MC2 a lot. Not quite enough that the watchdog tripped, but close. The LSCPOS output for MC2 was in the tens or hundreds of thousands of counts. Not okay.
This brings up the point, though, which I hadn't actively thought through before, that we need an OAF watchdog. An OAF ogre? But a benevolent ogre. If the OAF gets out of control, its output should be shut off. Eventually we can make it more sophisticated, so that it resets the adaptive filter and lets things start over, or something.
But until we have a reliable OAF ogre, no leaving the adaptive path enabled if you're not sitting in the control room. The static path should be fine, since it can't get out of control on its own.
Especially no leaving things like this enabled without a "I'm leaving this enabled, I'll be back in a few hours to check on it" elog!
Already for the second time today, all computers lost their connection to the framebuilder. When I ssh'd to the framebuilder, the daqd process was not running. I started it:
controls@fb ~ 130$ sudo /sbin/init q
Just to be clear, "init q" does not start the framebuilder. It just tells the init process to reparse /etc/inittab. And since init is supposed to be configured to restart daqd when it dies, it restarted daqd after reloading /etc/inittab. You and Alex must have forgotten to do that after you modified the inittab while you were trying to fix daqd last week.
daqd is known to crash without reason. It usually just goes unnoticed because init always restarts it automatically. But we've known about this problem for a while.
But I do not know what causes this problem. Maybe it is a memory issue. On fb:
Mem: 7678472k total, 7598368k used, 80104k free
Practically all memory is used. If more is needed and swap is off, the daqd process may die.
This doesn't really mean anything, since the computer always ends up using all available memory. It doesn't indicate a lack of memory. If the machine is really running out of memory you would see lots of ugly messages in dmesg.
ADC 3 INPUT 4 (#3 in the c1pem model if you count from 0) is bad. It adds DC = ~1 V to the signal as well as noise. I plugged in GUR2 channels to STS1 channels (7-9).
I'm combining the IFO check-up list (elog 6595) and last week's action items list (elog 6597). I thought about making it a wiki page, but this way everyone has to at least scroll past the list ~1/week.
Feel free to cross things out as you complete them, but don't delete them. Also, if there's a WHO?? and you feel inspired, just do it!
Dither-align arm to get IR on actuation nodes, align green beam - JENNE
Arm cavity sweeps, mode scan - JENNE
ASS doesn't run on Ubuntu or CentOS! Fix it! - JENNE, JAMIE's help
OAF comparison plot, both online and offline, comparing static, adaptive and static+adaptive - DEN
However, this did not help; C1RFM did not start. I decided to restart all the models on the C1SUS machine, in the hope that C1RFM depends on some other models and couldn't connect to them, but this suspended the C1SUS machine.
This happened because of a code bug:
// If PCIE comms show errors, may want to add this cache flushing
if(ipcInfo[ii].netType == IPCIE)
clflush_cache_range (&(ipcInfo[ii].pIpcData->dBlock[sendBlock][ipcIndex].data), 16); // & was missing - Alex fixed this
After this bug was fixed and the code was recompiled, C1:OAF_MCL_IN is OK, no errors occur during the transmission C1:OAF-MCL_ERR=0.
So the problem was that the PCIe card could not send that amount of data, and the last channel (MCL is the last) was corrupted. Now that Alex has added cache flushing, the problem is fixed.
Den and Alex left things not burt-restored, and Den mentioned to me that it might need doing.
I burt-restored all of our epics .snaps to today's 1am snapshot. We lost a few hours of striptool trends on the projector, but now they're back (things like the BLRMS don't work if the filters aren't engaged in the PEM model, so it makes sense).
I added PCIE memory cache flushing to c1rfm model by changing 0 to 1 in /opt/rtcds/rtscore/release/src/fe/commData2.c on line 159, recompiled and restarted c1rfm.
However, this did not help; C1RFM did not start. I decided to restart all the models on the C1SUS machine, in the hope that C1RFM depends on some other models and couldn't connect to them, but this suspended the C1SUS machine. After the reboot I encountered the same C1SUS -> FB communication error and fixed it in the same way as in the previous case of a C1SUS reboot. This has now happened both times (out of 2) after a C1SUS machine reboot.
I changed /opt/rtcds/rtscore/release/src/fe/commData2.c back, recompiled and restarted c1rfm. Now everything is back. C1RFM -> C1OAF is still bad.
Den noticed this, and will write more later, I just wanted to sum up what Alex said / did while he was here a few minutes ago....
From my point of view, during the rfm -> oaf transmission through Dolphin we lose a significant part of the signal. To check that, I've created an MEDM screen to monitor the transmission errors in the OAF model. It shows how many errors occur per second. For the MCL channel this number turned out to be 2046 +/- 1. This makes sense to me: the sampling rate is 2048 Hz, so we actually receive only 1-3 data points per second. We can see this in the dataviewer.
C1:OAF-MCL_IN follows C1:IOO-MC_F in the sense that the scales of the 2 signals are the same in 2 states, MC locked and unlocked. It seems that we lose 2046 out of 2048 points per second.
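The counting itself is simple. Here is a toy model of the per-second error counter, under the assumption (hypothetical, not the real-time code) that a failed receive shows up as a held, i.e. repeated, sample:

```python
# Count per-second transmission errors by flagging samples that merely
# repeat the previous value (a hold suggests a failed receive).
# A toy model of the monitor, not the actual front-end code.

def errors_per_second(samples, fs=2048):
    counts = []
    for start in range(0, len(samples), fs):
        block = samples[start:start + fs]
        held = sum(1 for i in range(1, len(block)) if block[i] == block[i - 1])
        counts.append(held)
    return counts
```

With fs = 2048 and only 1-3 fresh samples per second, such a counter would read ~2046 errors per second, matching the number above.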
Is this enhancement of spectrum caused by the lock? Or by the actuation?
If this is also seen with approximately the same amount of actuation on PRM POS,
this is just a suspension problem.
If this is only seen with the PRM locked, this is somehow related to the opt-mechanical coupling.
c1iscey is much less happy - neither the IOP nor the scy model are willing to talk to fb. I might give up on them after another few minutes, and wait for some daytime support, since I wanted to do DRMI stuff tonight.
Yeah, giving up now on c1iscey (Jamie....ideas are welcome). I can lock just fine, including the Yarm, I just can't save data or see data about ETMY specifically. But I can see LSC data, so I can lock, and I can now take spectra of corner optics.
This is the mx_stream issue reported previously. The symptom is that all models on a single front end lose contact with the frame builder, as opposed to *all* models on *all* front ends losing contact with the frame builder. That indicates that the problem is a common fb communication issue on the single front end, and that's all handled by mx_stream.
ssh'ing into c1iscey and running "sudo /etc/init.d/mx_stream restart" fixed the problem.
A few things tonight. Locked both arms simultaneously (IR only). Locked MICH, Locked PRMI, although it doesn't like staying locked for more than a minute or so, and not always that long.
Locking PRCL was possible by getting rid of the power normalization. We need to get some triggering going on for the power norm. I think it's a good idea for after the cavity is locked, but when PRCL is not locked, POP22 is ~0, so Refl33/Pop22 is ~inf. The PRCL loop kept railing at the Limit that was set. Getting rid of the power normalization fixed this railing.
I took some spectra of PRM's oplev while PRMI was locked and unlocked. The PRM is definitely moving more when the cavity is locked. I'm not sure yet what to do about this, but the result was repeatable many times (~6 or 7 over an hour or so).

The OpLev spectra when PRMI was locked didn't depend too strongly on the PRM's alignment, although I think that's partly because I wasn't able to really get the PRM to optimal alignment. I think POP22I is supposed to get to 7 or so...last week with Koji it was at least flashing that high. But tonight I couldn't get POP22I above 4, and most of the time it wouldn't go above 3. As I was aligning PRM and the circulating SB power increased, POP22I fluctuations increased significantly, then the cavity unlocked. So maybe this is because as I get closer, PRM gets more wiggly.

I tried playing 'chicken' with it, and took spectra as I was aligning PRM (align, get some improvement, stop to take spectra, then align more, stop to take spectra....) but usually it would fall out of lock after 1-2 iterations of this incremental alignment and I'd have to start over. When it relocked, it usually wouldn't come back to the same level of POP22I, which was kind of disappointing.
In the PDF attached, pink and light blue are when the PRMI is locked, and red and dark blue are no PRCL feedback. The effect is more pronounced with Pitch, but it's there for both Pitch and Yaw.
Also, I need to relook at my new restore/misalign scripts. They were acting funny tonight, so I'm taking back my "they're awesome, use them without thinking about it" certification.
UPDATE UPDATE: Genius me just checked the FE status screen again. It was fine ~an hour ago when I sat down to start interferometer-izing for the night, but now the SUS model and both of the ETMY computer models are having problems connecting to the fb. *sigh*
Restarted SUS model - it's now happy.
c1iscey is much less happy - neither the IOP nor the scy model are willing to talk to fb. I might give up on them after another few minutes, and wait for some daytime support, since I wanted to do DRMI stuff tonight.
Upgrades suck. Or at least making everything work again after the upgrade.
On the to-do list tonight: look at OSEM sensor and OpLev spectra for PRM, when PRMI is locked and unlocked. Goal is to see if the PRM is really moving wildly ("crazy" as Kiwamu always described it) when it's nicely aligned and PRMI is locked, or if it's an artifact of lever arm between PRM and the cameras (REFL and AS).
However, I can't get signals on DTT. So far I've checked a bunch of signals for SUS-PRM, and they all either (a) are just digital 0 or (b) are ADC noise. Lame.
Steve's elog 5630 shows what reasonable OpLev spectra should look like: exactly what you'd expect.
Attached below is a small sampling of different SUS-PRM signals. I'm going to check some other optics, other models on c1sus, etc, to see if I can narrow down where the problem is. LSC signals are fine (I checked AS55Q, for example).
UPDATE: SRM channels show the same ADC noise. MC1 channels are totally fine. And Den had been looking at channels in the RFM model earlier today, which were fine.
ETMY channels - C1:SUS-ETMY_LLCOIL_IN1 and C1:SUS-ETMY_SUSPOS_IN1 both returned "unable to obtain measurement data". OSEM sensor channels and OpLev _PERROR channel were digital zeros.
ETMX channels were fine
Errors are probably really happening. The c1oaf computer-status 4-bit thing reads GRGG; the red bit indicates receiving errors. Probably the oaf model is doing a sample-and-hold thing: sampling every time it gets a successful receive (~1 or 2 times per second), then holding that value until it gets another successful receive.
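That suspected sample-and-hold behavior, as a toy model (this is a guess at the behavior, not the actual front-end code; None marks a failed receive):

```python
# Toy model of sample-and-hold on a lossy receive path: yield the latest
# successfully received value, holding the previous good value across
# failed receives (represented here by None).

def receive_with_hold(stream, initial=0.0):
    held = initial
    for sample in stream:
        if sample is not None:
            held = sample
        yield held
```

This reproduces what the dataviewer shows: a stepped signal that only updates on the rare successful receives.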
Den is adding EPICS channels to record the ERR out of the PCIE dolphin memory CDS_PART, so that we can see what the error is, not just that one happened.
Alex restarted oaf model: sudo rmmod c1oaf.ko, sudo insmod c1oaf.ko . Clicked "diag reset" on oaf cds screen several times, nothing changed. Restarted c1oaf again, same rmmod, insmod commands.
Den, Alex and I went into the IFO room, and looked at the LSC computer, SUS computer, SUS I/O chassis, LSC I/O chassis and the dolphin switch that is on the back of the rack, behind the SUS IO chassis. All were blinking happily, none showed symptoms of errors.
Alex restarted the IOP process: sudo rmmod c1x04, sudo insmod c1x04. Chans on dataviewer still bad, so this didn't help, i.e. it wasn't just a synchronization problem. oaf status: RRGG. lsc status: RGGG. ass status: RGGG.
sudo insmod c1lsc.ko, sudo insmod c1ass.ko, sudo insmod c1oaf.ko . oaf status: GRGG. lsc status: GGGG. ass status: GGGG. This probably means lsc needs to send something to oaf, which works now that lsc has been restarted, although oaf is still not receiving happily.
Alex left to go talk to Rolf again, because he's still confused.
Comment, while writing elog later: c1rfm status is RRRG, c1sus status is RRGG, c1oaf status is GRGG, both c1scy and c1scx are RGRG. All others are GGGG.