I've put 5 EM172 microphones close together and measured their signals and coherence. They are plugged into accelerometer channels.
Then I've suspended the microphones around the MC: 2 at MC2, 2 at MC1,3, and 1 at the PSL. The amplifier box is above the STS readout box.
The microphone close to the PSL showed strong coherence with MC_F, as we already saw using the Blue Bird microphone.
ACC_MC2_XY channels <=> MC2 microphones
ACC_MC1_XY channels <=> MC1,3 microphones
ACC_MC1_Z channel <=> PSL microphones
I fixed the problem Jamie pointed out in elogs #6657 and #6659.
What I did:
1. Created the following template files in the /opt/rtcds/userapps/trunk/sus/c1/medm/templates/ directory.
To open these files, you have to define $(OPTIC) and $(DCU_ID).
For SUS_SINGLE_TO_COIL_X_X.adl, you also have to define $(FILTER_NUMBER). See SUS_SINGLE_TO_COIL_MASTER.adl.
2. Fixed the following screens so that they open SUS_SINGLE.adl.
Now it's the same as pianosa and rosalba. I'll upgrade allegra on Friday.
I've created transmission error monitors in the rfm, oaf, sus, lsc, scx, scy and ioo models. I tried to get data from every channel transmitted through PCIe and RFM. I also included some shared memory channels.
The MEDM screen is under FE STATUS -> TEM. It shows 16384 for the channels that come from the simulated plant. The others are 0; that's fine.
I just sat down in the control room, and discovered the PMC (and everything else) unlocked. I relocked the PMC, but the MC wasn't coming back. After a moment of looking around, I discovered that the WFS were on, and railing. I ran the "turn WFS off" script, and the MC came back right away, and the WFS came on as they should.
We need to take another look at the WFS script, or the MC down script, to make sure that any time the MC unlocks, no matter why, the WFS output is off and the filter histories are cleared.
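Something like this sketch is what the down script should guarantee (channel names here are hypothetical placeholders, and RSET=2 is the standard "clear history" write for CDS filter modules):

#!/bin/bash
# Sketch: force the WFS feedback off and clear the filter histories
# after an MC lockloss. Channel names are hypothetical placeholders.
for fm in WFS1_PIT WFS1_YAW WFS2_PIT WFS2_YAW; do
    caput C1:IOO-${fm}_GAIN 0    # zero the servo gain
    caput C1:IOO-${fm}_RSET 2    # clear the filter module history
done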
Actually, it looks like we're not quite done here. All the paths in the SUS_SINGLE screen need to be updated to reflect the move. We should probably make a macro that points to /opt/rtcds/caltech/c1/screens, and update all the paths accordingly.
Hey, folks. Please remember to commit all changes to the SVN in a timely manner. If you don't, multiple changes will get lumped together into one commit and we won't have a good log of the changes we're making. You might also end up just losing all of your work. SVN COMMIT when you're done! But please don't commit broken or untested code.
pianosa:release 0> svn status | grep -v '^?'
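That command lists local modifications while hiding unversioned files. For anyone who's forgotten, the minimal workflow is just something like:

cd /opt/rtcds/userapps/trunk       # or wherever your working copy lives
svn status | grep -v '^?'          # see what you've modified
svn add new_screen.adl             # only needed for files not yet in the repo
svn commit -m "one line about what changed and why"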
Very nice, Yuta! Don't forget to commit your changes to the SVN. I took the liberty of doing that for you. I also tweaked the file a bit, so we don't have to specify IFO and SYS, since those aren't going to ever change. So the arguments are now only: OPTIC=MC1,DCU_ID=36. I updated the sitemap accordingly.
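For reference, opening the generic screen by hand then looks something like this (a sketch using MEDM's macro option):

medm -x -macro "OPTIC=MC1,DCU_ID=36" /opt/rtcds/userapps/trunk/sus/c1/medm/templates/SUS_SINGLE.adl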
Yuta, if you could go ahead and modify the calls to these screens in other places that would be great. The WATCHDOG, LSC_OVERVIEW, MC_ALIGN screens are ones that immediately come to mind.
And also feel free to make cool new ones. We could try to make a simplified version of the suspension screens now being used at the sites, which are quite nice.
The typical sign of a dying gas laser is that it glows for a few minutes only. The power supplies are fine.
Two new JDS Uniphase 1103P lasers (NT64-104) are arriving on Monday, May 21.
Yesterday I swapped in a new HeNe laser with 3.5 mW output power. The return spot on the QPD is large, ~6 mm in diameter, giving 20,500 counts.
Reducing the spot size requires a layout similar to the ETMX oplev.
I've started to create channels and an MEDM screen to monitor the errors that occur during transmission through the RFM model. The screen will show the amount of lost data per second for each channel.
Not all channels are ready yet. For the channels already created, the error count is 0, which is good.
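Once a channel exists, its counter can be watched from the shell; e.g. for the MCL monitor (channel name as it appears later in this log; the others should follow the same _ERR pattern):

caget C1:OAF-MCL_ERR         # lost samples in the last second
camonitor C1:OAF-MCL_ERR     # watch the counter update live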
We need more organized MEDM screens. Let's use macros.
What I did:
1. Edited /opt/rtcds/userapps/trunk/sus/c1/medm/templates/SUS_SINGLE.adl using replacements below;
sed -i s/#IFO#SUS_#PART_NAME#/'$(IFO)$(SYS)_$(OPTIC)'/g SUS_SINGLE.adl
sed -i s/#IFO#SUS#_#PART_NAME#/'$(IFO)$(SYS)_$(OPTIC)'/g SUS_SINGLE.adl
sed -i s/#IFO#:FEC-#DCU_ID#/'$(IFO):FEC-$(DCU_ID)'/g SUS_SINGLE.adl
sed -i s/#CHANNEL#/'$(IFO):$(SYS)-$(OPTIC)'/g SUS_SINGLE.adl
sed -i s/#PART_NAME#/'$(OPTIC)'/g SUS_SINGLE.adl
2. Edited sitemap.adl so that it opens SUS_SINGLE.adl with macro arguments instead of opening ./c1mcs/C1SUS_MC1.adl.
3. I also fixed the white blocks in the LOCKIN part.
Now you don't have to generate every suspension screen separately. Just edit SUS_SINGLE.adl.
Things to do:
- fix every other MEDM screen that opens suspension screens, so that they open SUS_SINGLE.adl
- make SUS_SINGLE.adl cooler
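One quick sanity check after the sed substitutions above: make sure no unreplaced placeholders survive (a sketch):

grep -n '#IFO#\|#PART_NAME#\|#CHANNEL#\|#DCU_ID#' SUS_SINGLE.adl    # should print nothing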
I rebooted rossa and the X server stalled.
I locked the PMC and the MC followed instantly.
I've soldered EM172 microphones to BNC connectors to get data from them.
Then I built an amplifier for them. The circuit is:
I've built 6 such circuits inside one box. It needs +15 V on A3 and GND on A2. The A1 power channel is not used.
A LISO model of this circuit was created, and the simulation results were compared to measurements of each channel.
The measured noise curve (green) is the SR785's own noise.
The ETMX oplev had a 6 mm diameter beam on the QPD. I relayed the beam path with 2 lenses to get back a 3 mm beam on the QPD.
A BRC037 f = -100 mm bi-concave lens and a PCX25 f = 200 mm VIS lens do the job. Unfortunately, the bi-concave lens has the AR coating for 1064.
That bi-concave lens was replaced by an AR-coated one, KBC037 -100 AR.14, resulting in a 35% count increase on the QPD.
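As a sanity check on the factor of 2, assuming the two lenses are used as a Galilean beam reducer with the beam hitting the f = +200 mm lens first: d_out / d_in = |f2 / f1| = 100 mm / 200 mm = 1/2, so a 6 mm beam comes out at 3 mm.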
Aluminum posts and SS clamps for the green glass traps in vacuum will be out of the shops by June 6, 2012 at the latest.
Guralp 1 is on the south side of IOO chamber
This is totally cool! You can see that the OSEM lights are almost entirely gone in the subtracted image.
Can you switch to trying with one of the *TM*F cameras? (ITMXF, ITMYF, ETMYF, ETMXF) They tend to have more background, so there should be a more dramatic subtraction. Den or Suresh should be able to lock one of the arms for you.
I acquired 2 raw frames of MC2 using "/users/mjenson/sensoray/sdk_2253_1.2.2_linux/capture -n -s 720x480 -f 1", one while the laser was off the mode cleaner and another while it was on:
I then used "/users/mjenson/sensoray/sdk_2253_1.2.2_linux/imsub/display-image.py" to generate bitmaps of the raw images, which I then subtracted using the Python Imaging Library to generate a new image:
It doesn't look all that different, but the first image didn't have that much lit up in it to begin with. I should be able to write a script that does all of this without needing to generate new files in between acquisition and subtraction.
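As an aside, if ImageMagick is available, the subtraction step itself can be a one-liner (a sketch; file names are hypothetical, and this computes |on - off| rather than a signed difference):

convert mc2_on.bmp mc2_off.bmp -compose difference -composite mc2_diff.bmp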
ETMX sus damping restored.
ETMY sus damping restored.
I've compared offline Wiener filtering with online static + adaptive filtering for MC_F with GUR1_XYZ and GUR2_XYZ as witness signals
Note: the online filter works up to 32 Hz (an AI filter at 32 Hz is used). There is no subtraction above this frequency; MC_F was just measured at different times for the online and offline filtering. This difference in MC_F in the 20-100 Hz range showed up again, as it did before during the microphone testing; one can see it within 1 minute. Something is noisy.
1. The FIR -> IIR conversion produces errors. Right now I'm using a VECTFIT approximation with 16 poles (split into 2 filter banks), and this is not enough. I tried to use 50 poles and split them into 5 filter banks, but this scheme is not working: the zpk -> sos conversion produces errors and the resulting filter behaves completely wrongly.
2. Actuator TF. VECTFIT works very well here - we have only 1 resonance. However, it should be measured precisely.
3. Account for the AA and AI filters, which rotate the phase at 1-10 Hz by ~10 degrees.
No leaving the OAF running until you're sure (sure-sure, not kind of sure, not pretty sure, but I've enabled guardians to make sure nothing bad can happen, and I've been sitting here watching it for 24+ hours and it is fine) that it works okay.
OAF (both adaptive and static paths) were left enabled, which was kicking MC2 a lot. Not quite enough that the watchdog tripped, but close. The LSCPOS output for MC2 was in the 10's or 100's of thousands of counts. Not okay.
This brings up the point though, which I hadn't actively thought through before, that we need an OAF watchdog. An OAF ogre? But a benevolent ogre. If the OAF gets out of control, its output should be shut off. Eventually we can make it more sophisticated, so that it resets the adaptive filter and lets things start over, or something.
But until we have a reliable OAF ogre, no leaving the adaptive path enabled if you're not sitting in the control room. The static path should be fine, since it can't get out of control on its own.
Especially no leaving things like this enabled without a "I'm leaving this enabled, I'll be back in a few hours to check on it" elog!
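Even a dumb shell-loop ogre would beat nothing - a sketch, with hypothetical channel names and threshold:

#!/bin/bash
# Crude OAF ogre sketch: if the correction signal blows up, shut the output off.
# Channel names and the threshold are hypothetical placeholders.
CH=C1:OAF-SUS_MC2_OUT16
LIMIT=10000
while sleep 1; do
    VAL=$(caget -t $CH)
    TRIP=$(echo "$VAL $LIMIT" | awk '{v = $1; if (v < 0) v = -v; if (v > $2) print 1; else print 0}')
    if [ "$TRIP" = "1" ]; then
        caput C1:OAF-SUS_MC2_GAIN 0    # benevolent: just zero the output gain
        echo "OAF ogre tripped at $(date)"
    fi
done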
Already for the second time today, all computers have lost their connection to the framebuilder. When I sshed into the framebuilder, the daqd process was not running. I started it:
controls@fb ~ 130$ sudo /sbin/init q
Just to be clear, "init q" does not start the framebuilder. It just tells the init process to reparse /etc/inittab. And since init is supposed to be configured to restart daqd when it dies, it restarted it after the reloading of /etc/inittab. You and Alex must have forgotten to do that after you modified the inittab while you were trying to fix daqd last week.
daqd is known to crash without reason. It usually just goes unnoticed because init always restarts it automatically. But we've known about this problem for a while.
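For context, that auto-restart is just an /etc/inittab respawn entry, something like the sketch below (the exact path and flags on fb may differ):

# /etc/inittab entry: init restarts daqd whenever the process dies
daqd:345:respawn:/opt/rtcds/caltech/c1/target/fb/daqd -c /opt/rtcds/caltech/c1/target/fb/daqdrc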
But I do not know what causes this problem. Maybe it is a memory issue. On fb:
Mem: 7678472k total, 7598368k used, 80104k free
Practically all the memory is used. If more is needed and swap is off, the daqd process may die.
This doesn't really mean anything, since the computer always ends up using all available memory. It doesn't indicate a lack of memory. If the machine is really running out of memory you would see lots of ugly messages in dmesg.
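For next time, two quick checks (a sketch):

free -m                            # 'used' includes reclaimable buffers/cache
dmesg | grep -i 'out of memory'    # a real OOM leaves kernel messages like this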
ADC 3 INPUT 4 (#3 in the c1pem model if you count from 0) is bad. It adds a DC offset of ~1 V to the signal, as well as noise. I plugged the GUR2 channels into the STS1 channels (7-9).
I'm combining the IFO check-up list (elog 6595) and last week's action items list (elog 6597). I thought about making it a wiki page, but this way everyone has to at least scroll past the list ~1/week.
Feel free to cross things out as you complete them, but don't delete them. Also, if there's a WHO?? and you feel inspired, just do it!
Dither-align arm to get IR on actuation nodes, align green beam - JENNE
Arm cavity sweeps, mode scan - JENNE
ASS doesn't run on Ubuntu or CentOS! Fix it! - JENNE, with JAMIE's help
Input matrices, output filters to tune SUS; check after upgrade. - JENNE
POX11 whitening is not toggling the analog whitening??? - JAMIE, JENNE, KOJI
OAF comparison plot, both online and offline, comparing static, adaptive and static+adaptive - DEN
THE FULL LIST:
cd /opt/rtcds/caltech/c1/burt/autoburt/today/
However, this did not help; C1RFM did not start. I decided to restart all the models on the C1SUS machine, in the hope that C1RFM depends on some other models and can't connect to them, but this suspended the C1SUS machine.
This happened because of a code bug:
// If PCIE comms show errors, may want to add this cache flushing
if(ipcInfo[ii].netType == IPCIE)
    clflush_cache_range(&(ipcInfo[ii].pIpcData->dBlock[sendBlock][ipcIndex].data), 16); // & was missing - Alex fixed this
After this bug was fixed and the code was recompiled, C1:OAF-MCL_IN is OK; no errors occur during the transmission (C1:OAF-MCL_ERR = 0).
So the problem was that the PCIe card could not send that amount of data, and the last channel (MCL is the last) was getting corrupted. Now that Alex has added the cache flushing, the problem is fixed.
Den and Alex left things not burt-restored, and Den mentioned to me that it might need doing.
I burt restored all of our epics.snaps to the 1am today snapshot. We lost a few hours of striptool trends on the projector, but now they're back (things like the BLRMS don't work if the filters aren't engaged on the PEM model, so it makes sense).
I added PCIE memory cache flushing to c1rfm model by changing 0 to 1 in /opt/rtcds/rtscore/release/src/fe/commData2.c on line 159, recompiled and restarted c1rfm.
However, this did not help; C1RFM did not start. I decided to restart all the models on the C1SUS machine, in the hope that C1RFM depends on some other models and can't connect to them, but this suspended the C1SUS machine. After the reboot I encountered the same C1SUS -> FB communication error and fixed it in the same way as in the previous C1SUS reboot. This has now happened both times (out of 2) after a C1SUS machine reboot.
I changed /opt/rtcds/rtscore/release/src/fe/commData2.c back, recompiled and restarted c1rfm. Now everything is back. C1RFM -> C1OAF is still bad.
Den noticed this, and will write more later, I just wanted to sum up what Alex said / did while he was here a few minutes ago....
From my point of view, during the rfm -> oaf transmission through Dolphin we lose a significant part of the signal. To check that, I've created an MEDM screen to monitor the transmission errors in the OAF model. It shows how many errors occur per second. For the MCL channel this number turned out to be 2046 +/- 1. This makes sense to me: the sampling rate is 2048 Hz, so we actually receive only 1-3 data points per second. We can see this in the dataviewer.
C1:OAF-MCL_IN follows C1:IOO-MC_F in the sense that the scales of the two signals are the same in both states, MC locked and unlocked. It seems that we lose 2046 out of 2048 points per second.
Is this enhancement of the spectrum caused by the lock, or by the actuation?
If this is also seen with approximately the same amount of actuation on PRM POS, this is just a suspension problem.
If this is only seen with the PRM locked, this is somehow related to opto-mechanical coupling.
Yeah, giving up now on c1iscey (Jamie....ideas are welcome). I can lock just fine, including the Yarm, I just can't save data or see data about ETMY specifically. But I can see LSC data, so I can lock, and I can now take spectra of corner optics.
This is the mx_stream issue reported previously. The symptom is that all models on a single front end lose contact with the frame builder, as opposed to *all* models on *all* front ends losing contact with the frame builder. That indicates that the problem is a common fb communication issue on that single front end, and that's all handled by mx_stream.
ssh'ing into c1iscey and running "sudo /etc/init.d/mx_stream restart" fixed the problem.
A few things tonight. Locked both arms simultaneously (IR only). Locked MICH. Locked PRMI, although it doesn't like staying locked for more than a minute or so, and not always that long.
Locking PRCL was possible by getting rid of the power normalization. We need to get some triggering going for the power norm. I think it's a good idea for after the cavity is locked, but when PRCL is not locked, POP22 is ~0, so REFL33/POP22 is ~inf. The PRCL loop kept railing at the limit that was set. Getting rid of the power normalization fixed this railing.
I took some spectra of PRM's oplev while PRMI was locked and unlocked. The PRM is definitely moving more when the cavity is locked. I'm not sure yet what to do about this, but the result was repeatable many times (~6 or 7 over an hour or so). The oplev spectra when PRMI was locked didn't depend too strongly on the PRM's alignment, although I think that's partly because I wasn't able to really get the PRM to optimal alignment.
I think POP22I is supposed to get to 7 or so...last week with Koji it was at least flashing that high. But tonight I couldn't get POP22I above 4, and most of the time it wouldn't go above 3. As I was aligning PRM and the circulating SB power increased, the POP22I fluctuations increased significantly, then the cavity unlocked. So maybe this is because as I get closer, PRM gets more wiggly.
I tried playing 'chicken' with it, and took spectra as I was aligning PRM (align, get some improvement, stop to take spectra, then align more, stop to take spectra....), but usually it would fall out of lock after 1-2 iterations of this incremental alignment and I'd have to start over. When it relocked, it usually wouldn't come back to the same level of POP22I, which was kind of disappointing.
In the PDF attached, pink and light blue are when the PRMI is locked, and red and dark blue are no PRCL feedback. The effect is more pronounced with Pitch, but it's there for both Pitch and Yaw.
Also, I need to take another look at my new restore/misalign scripts. They were acting funny tonight, so I'm taking back my "they're awesome, use them without thinking about it" certification.
UPDATE UPDATE: Genius me just checked the FE status screen again. It was fine ~an hour ago when I sat down to start interferometer-izing for the night, but now the SUS model and both models on the ETMY computer are having problems connecting to the fb. *sigh*
Restarted SUS model - it's now happy.
c1iscey is much less happy - neither the IOP nor the scy model are willing to talk to fb. I might give up on them after another few minutes, and wait for some daytime support, since I wanted to do DRMI stuff tonight.
Upgrades suck. Or at least making everything work again after the upgrade.
On the to-do list tonight: look at OSEM sensor and OpLev spectra for PRM, when PRMI is locked and unlocked. Goal is to see if the PRM is really moving wildly ("crazy" as Kiwamu always described it) when it's nicely aligned and PRMI is locked, or if it's an artifact of lever arm between PRM and the cameras (REFL and AS).
However, I can't get signals on DTT. So far I've checked a bunch of signals for SUS-PRM, and they all either (a) are just digital 0 or (b) are ADC noise. Lame.
Steve's elog 5630 shows what reasonable OpLev spectra should look like: exactly what you'd expect.
Attached below is a small sampling of different SUS-PRM signals. I'm going to check some other optics, other models on c1sus, etc, to see if I can narrow down where the problem is. LSC signals are fine (I checked AS55Q, for example).
UPDATE: SRM channels are the same ADC noise. MC1 channels are totally fine. And Den had been looking at channels on the RFM model earlier today, which were fine.
ETMY channels - C1:SUS-ETMY_LLCOIL_IN1 and C1:SUS-ETMY_SUSPOS_IN1 both returned "unable to obtain measurement data". OSEM sensor channels and OpLev _PERROR channel were digital zeros.
ETMX channels were fine
Errors are probably really happening.... The c1oaf computer-status 4-bit thing is GRGG; the red bit indicates receiving errors. Probably the oaf model is doing a sample-and-hold thing: sampling every time (~1 or 2 times per sec) it gets a successful receive, and then holding that value until it gets another successful receive.
Den is adding EPICS channels to record the ERR out of the PCIE dolphin memory CDS_PART, so that we can see what the error is, not just that one happened.
Alex restarted the oaf model: sudo rmmod c1oaf.ko, sudo insmod c1oaf.ko. Clicked "diag reset" on the oaf CDS screen several times; nothing changed. Restarted c1oaf again, with the same rmmod/insmod commands.
Den, Alex and I went into the IFO room, and looked at the LSC computer, SUS computer, SUS I/O chassis, LSC I/O chassis and the dolphin switch that is on the back of the rack, behind the SUS IO chassis. All were blinking happily, none showed symptoms of errors.
Alex restarted the IOP process: sudo rmmod c1x04, sudo insmod c1x04. Chans on dataviewer still bad, so this didn't help, i.e. it wasn't just a synchronization problem. oaf status: RRGG. lsc status: RGGG. ass status: RGGG.
sudo insmod c1lsc.ko, sudo insmod c1ass.ko, sudo insmod c1oaf.ko. oaf status: GRGG. lsc status: GGGG. ass status: GGGG. This probably means lsc needs to send something to oaf, which now works since lsc was restarted, although oaf is still not receiving happily.
Alex left to go talk to Rolf again, because he's still confused.
Comment, while writing elog later: c1rfm status is RRRG, c1sus status is RRGG, c1oaf status is GRGG, both c1scy and c1scx are RGRG. All others are GGGG.
Rana theorized that we're having problems with the MC error signal in the OAF model (separate elog by Den to follow) because we've named a channel "C1:IOO-MC_F", and such a channel already used to exist. So, Rana and I went out to do some brief cable tracing.
MC Servo Board has 3 outputs that are interesting: "DAQ OUT" which is a 4-pin LEMO, "SERVO OUT" which is a 2-pin LEMO, and "OUT1", which is a BNC->2pin LEMO right now.
DAQ OUT should have the actual MC_F signal, which goes through to the laser's PZT. This is the signal that we want to be using for the OAF model.
SERVO OUT should be a copy of this actual MC_F signal going to the laser's PZT. This is also acceptable for use with the OAF model.
OUT1 is a monitor of the slow(er) MC_L signal, which used to be fed back to the MC2 suspension. We want to keep this naming convention, in case we ever decide to go back and feed back to the suspensions for freq. stabilization.
Right now, OUT1 is going to the first channel of ADC0 on c1ioo. SERVOout is going to the 7th channel on ADC0. DAQout is going to the ~12th channel of ADC1 on c1ioo. OUT1 and SERVOout both go to the 2-pin LEMO whitening board, which goes to some new aLIGO-style ADC breakout boards with ribbon cables, which then goes to ADC0. DAQout goes to the 4pin LEMO ADC breakout, (J7 connector) which then directly goes to ADC1 on c1ioo.
So, to sum up, OUT1 should be "adc0_0" in the simulink model, SERVOout should be "adc0_6" on the simulink model, and DAQout should be "adc1_12" (or something....I always get mixed up with the channel counting on 4pin ADC breakout / AA boards).
In the current simulink setup, OUT1 (adc0_0) is given the channel name C1:IOO-MC_F, and is fed to the OAF model. We need to change it to C1:IOO-MC_L to be consistent with the old regime.
In the current simulink setup, SERVOout (adc0_6) is given the channel name C1:IOO-MC_SERVO. It should be called C1:IOO-MC_F, and should go to the OAF model.
In the current simulink setup, DAQout (~adc1_12) doesn't go anywhere. It's completely not in the system. Since the cable in the back of this AA / ADC breakout board box goes directly to the c1ioo I/O chassis, I don't think we have a degenerate MC_F naming situation. We've incorrectly labeled MC_L as MC_F, but we don't currently have 2 signals both called MC_F.
Okay, that doesn't explain precisely why we see funny business with the OAF model's version of MCL, but I think it goes in the direction of ruling out a degenerate MC_F name.
Problem: If you look at the screen cap, both simulink models are running on the same computer (c1ioo), so when they both refer to ADC0, they're really referring to the same physical card. Both of these models have adc0_6 defined, but they're defined as completely different things. Since we can trace / see the cable going from the MC Servo Board to the whitening card, I think the MC_SERVO definition is correct. Which means that this Green_PH_ADC is not really what it claims to be. I'm not sure what this channel is used for, but I think we should be very cautious and look into this before doing any more green locking. It would be dumb to fail because we're using the wrong signals.
I wanted to switch the implementation of IIR_FILTER from DIRECT FORM II to BIQUAD form in the C1IOO and C1SUS models. I modified the RCG file /opt/rtcds/rtscore/release/src/fe/controller.c by adding a #define CORE_BIQUAD line:
I am really not ok with anyone modifying controller.c. If we're going to be messing around with that we need to change procedure significantly. This is the code that runs all the models, and we don't currently have any way to track changes in the code.
Did you change it back? If not, do so immediately and stop messing with it. Please consult with us first before embarking on these kinds of severe changes to our code. This is the kind of shit that other people have done that has bit us in the ass in the past.
Furthermore, there is already a way to enable biquad filters in the new version without modifying the RCG source. All you need to do is set biquad=1 in the cdsParameters block for your model.
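i.e., something like this in the model's cdsParameters block (a sketch; the values other than biquad=1 are illustrative, not necessarily ours):

# excerpt of a cdsParameters block
site=caltech
rate=16K
dcuid=36
biquad=1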
Restarting mx_stream yesterday was in vain, as C1SUS did not see fb:
controls@c1sus ~ 0$ /opt/open-mx/bin/omx_info
Open-MX version 1.3.901
build: root@fb:/root/open-mx-1.3.901 Wed Feb 23 11:13:17 PST 2011
Found 1 boards (32 max) supporting 32 endpoints each:
c1sus:0 (board #0 name eth1 addr 00:25:90:06:59:f3)
managed by driver 'igb'
Peer table is ready, mapper is 00:60:dd:46:ea:ec
0) 00:25:90:06:59:f3 c1sus:0
1) 00:60:dd:46:ea:ec fb:0 // this line was missing
2) 00:14:4f:40:64:25 c1ioo:0
3) 00:30:48:be:11:5d c1iscex:0
4) 00:30:48:bf:69:4f c1lsc:0
5) 00:30:48:d6:11:17 c1iscey:0
controls@fb ~ 0$ /opt/mx/bin/mx_info
MX Version: 1.2.12
MX Build: root@fb:/root/mx-1.2.12 Mon Nov 1 13:34:38 PDT 2010
1 Myrinet board installed.
The MX driver is configured to support a maximum of:
8 endpoints per NIC, 1024 NICs on the network, 32 NICs per host
Instance #0: 299.8 MHz LANai, PCI-E x8, 2 MB SRAM, on NUMA node 0
Status: Running, P0: Link Up
Network: Ethernet 10G
MAC Address: 00:60:dd:46:ea:ec
Product code: 10G-PCIE-8AL-S
Part number: 09-03916
Serial number: 352143
Mapper: 00:60:dd:46:ea:ec, version = 0x00000000, configured
Mapped hosts: 6
INDEX MAC ADDRESS HOST NAME P0
----- ----------- --------- ---
0) 00:60:dd:46:ea:ec fb:0 1,0
1) 00:30:48:d6:11:17 c1iscey:0 1,0
2) 00:30:48:be:11:5d c1iscex:0 1,0
3) 00:30:48:bf:69:4f c1lsc:0 1,0
4) 00:25:90:06:59:f3 c1sus:0 1,0
5) 00:14:4f:40:64:25 c1ioo:0 1,0
controls@fb ~ 0$ sudo /sbin/init q
controls@fb ~ 0$ sudo /etc/init.d/mx restart
We officially are *failures* at svn-ing our scripts and screens. This is NOT OKAY. I checked in a few things, since there were already folders in the svn, but many things don't have folders created. It's a hot mess. We need to get our shit together, and become as disciplined about MEDM and scripts as we have been (under Jamie's watchful eye) about the simulink models.
I'm not going to start fixing it all right now. It might not even happen at this point until after GWADW, but it needs to happen.
Since Den wasn't able to fix c1sus (make it talk to the framebuilder) before he left a few hours ago, I decided to do some housekeeping rather than actual locking.
I wrote new save / misalign / restore scripts for all of the suspended optics on the C1IFO_ALIGN screen. Adding save / restore versions for the mode cleaner optics should be quick and easy. Now when you use the ! button for each optic, it points you to the new scripts. I still have the burt capabilities there, but the restore script has the burt-restore line commented out.
SAVE: burt-save the PIT_COMM and YAW_COMM values, as well as write those values and the date to a text file.
MISALIGN: Turn off oplevs, move 100 steps of 0.01 in the "+" direction.
RESTORE: Move ~100 steps toward the saved value, until you're within 0.001 of the saved value (step size is "saved val" minus "current val" divided by 100). Then just write the saved value to the slider (otherwise if the slider were touched between the last "save" and the restore, we might not be able to step precisely to the value we want). Turn oplevs back on.
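For the curious, the RESTORE stepping logic boils down to something like this sketch (channel and file names are hypothetical placeholders, not the actual script contents):

#!/bin/bash
# Sketch of the RESTORE logic for one DOF. Channel/file names are hypothetical.
CH=C1:SUS-PRM_PIT_COMM
SAVED=$(cat saved_PRM_PIT.txt)    # value written by the SAVE script
STEP=$(echo "$SAVED $(caget -t $CH)" | awk '{print ($1 - $2) / 100}')
for i in $(seq 1 100); do
    CUR=$(caget -t $CH)
    CLOSE=$(echo "$SAVED $CUR" | awk '{d = $1 - $2; if (d < 0) d = -d; if (d < 0.001) print 1; else print 0}')
    [ "$CLOSE" = "1" ] && break    # within 0.001 of the saved value
    caput $CH $(echo "$CUR $STEP" | awk '{print $1 + $2}') > /dev/null
    sleep 0.1
done
caput $CH $SAVED > /dev/null      # final write, in case the slider moved since the save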
Scripts are in the same place the old ones used to live: ...../caltech/c1/medm/c1ifo/cmd/ New scripts are C1IFO_OPTIC(save/restore/misalign)_soft.cmd
I'm checking this one off of the to-do list.
Good things: (a) I remembered / re-learned / just plain learned a lot about scripting. (b) The optics are now walked slowly over to their misaligned state, and slowly walked back. The past regime had the optics suddenly kicked over by a lot, sometimes enough to trip / come close to tripping watchdogs, which was never good.
Bad things: it took a long time. Now it's bedtime.
We decided to reboot the C1SUS machine in the hope that this would fix the problem with the seismic channels. After the reboot, the machine could not connect to the framebuilder. We restarted mx_stream, but this did not help. Then we manually executed
/opt/rtcds/caltech/c1/target/fb/mx_stream -s c1x02 c1sus c1mcs c1rfm c1pem -d fb:0 -l /opt/rtcds/caltech/c1/target/fb/mx_stream_logs/c1sus.log
but c1sus still could not connect to fb. This script returned the following error:
controls@c1sus ~ 128$ cat /opt/rtcds/caltech/c1/target/fb/mx_stream_logs/c1sus.log
mmapped address is 0x7fb5ef8cc000
mapped at 0x7fb5ef8cc000
mmapped address is 0x7fb5eb8cc000
mapped at 0x7fb5eb8cc000
mmapped address is 0x7fb5e78cc000
mapped at 0x7fb5e78cc000
mmapped address is 0x7fb5e38cc000
mapped at 0x7fb5e38cc000
mmapped address is 0x7fb5df8cc000
mapped at 0x7fb5df8cc000
send len = 263596
OMX: Failed to find peer index of board 00:00:00:00:00:00 (Peer Not Found in the Table)
Looks like a CDS error. We are leaving the WATCHDOGS OFF for the night.
GUR1_XYZ_IN1 and GUR2_XYZ_IN1 are the same and equal to GUR2_XYZ. This is bad since GUR1_XYZ_IN1 should be equal to GUR1_XYZ. Note that GUR#_XYZ are copies of GUR#_XYZ_OUT, so there may be (although there isn't right now) filtering between the _IN1's and the _OUT's. But certainly GUR1 should look like GUR1, not GUR2!!!
Looks like a CDS problem, maybe some channel-hopping going on? I'm trying a restart of the c1sus computer right now, to see if that helps.....
Figure: Green and red should be the same, yellow and blue should be the same. Note however that green matches yellow and blue, not red. Bad.
I'm still not super happy with the low power level of the ETMX oplev, so I went to investigate.
This is a 3-year plot. For the first ~year in the plot, the oplev sum is ~15000 counts, and in the most recent year it's ~1000 counts. A new HeNe was installed in May 2011 (elog 4686), with an output of 2.6 mW, after the old one had died. When the new one was installed, Steve said that it was giving ~1400 counts, so maybe 1000 isn't too, too embarrassing.
There is, however, a lens on the injection side, which is AR coated for 1064. The power before this lens (measured with the Ophir, set to 632 nm) was ~2.6 mW. The power after this lens was ~1.5 mW. Now THAT is embarrassing. I'm adding replacing that lens to the to-do list (elog 6595), although I don't want to do it until such a time (maybe in an hour, maybe in a few days?) when I've got the Xarm locked / aligned, so I can nicely re-center the oplev.
UPDATE: The lens is a KBC037 (-100) lens, and the sticker on the lens mount says coated for 1064. We don't have any KBC037s in the visible lens kits, so we need to get one before I can do this replacement (PURCHASED 10pm).
There is an elog (elog 5004) from July 2011, which mentions that these channels have not been saved for a long time, so that's why there's the year-long gap.
The C1IOO model compiled, installed, and is running now. The C1SUS model compiled, but during installation I got an error:
controls@c1sus ~ 0$ rtcds install c1sus
Installing system=c1sus site=caltech ifo=C1,c1
Installing start and stop scripts
Updating testpoint.par config file
/opt/rtcds/rtscore/branches/branch-2.5/src/epics/util/updateTestpointPar.pl -par_file=/opt/rtcds/caltech/c1/target/gds/param/archive/testpoint_120507_205359.par -gds_node=21 -site_letter=C -system=c1sus -host=c1sus
Installing GDS node 21 configuration file
Installing auto-generated DAQ configuration file
Installing EDCU ini file
Installing Epics MEDM screens
Running post-build script
ERROR: Could not find file: test.py
Searched path: :/opt/rtcds/userapps/release/cds/c1/scripts:/opt/rtcds/userapps/release/cds/common/scripts:/opt/rtcds/userapps/release/isc/c1/scripts:/opt/rtcds/userapps/release/isc/common/scripts:/opt/rtcds/userapps/release/sus/c1/scripts:/opt/rtcds/userapps/release/sus/common/scripts:/opt/rtcds/userapps/release/psl/c1/scripts:/opt/rtcds/userapps/release/psl/common/scripts
make: *** [install-c1sus] Error 1
Jamie, what is this test.py?
OK. Then we should make this number bigger such that the coherence is still completely maintained.
Is this set in the auto locker? Or manually set?
This effect is due to C1:IOO-WFS1_PIT_LIMIT=2000. When I turned it off, the coherence between the C1:IOO-WFS_PIT input and output signals was restored.
The LIMIT is set manually; the auto locker does not change it. I've set C1:IOO-WFS1_PIT_LIMIT=4000, which seems to be fine for now.