ID   Date   Author   Type   Category   Subject
  280   Mon Jan 28 15:11:38 2008   rob   HowTo   DMF   running compiled matlab DMF tools

I compiled Rana's seisBLRMS monitor, and it's now running on mafalda. To start your own DMF tools, here is a procedure:

1) build your tool in mDV, get it working the way you'd like.
2) Make a new directory /cvs/cds/caltech/apps/DMF/compiled_matlab/{your_new_directory} and copy the *.m file there.
3) Make the *.m in your new directory into a function with no args (just add a function line at the top)
4) compile it (from within a fully mDV-configured matlab) with mcc -m -R -nojvm {yourfile}.m at the matlab command line.
5) add a line corresponding to your new tool to the script /cvs/cds/caltech/apps/DMF/scripts/start_all
6) Run the start_all script referenced in part (5).

NB: Steps (4) and (6) must be carried out on mafalda.
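
As a concrete (hypothetical) example of steps (3) and (4): if your working script is myBLRMS.m, step (3) just means adding a function line at the top,

function myBLRMS()
% ... existing script body, unchanged ...

and step (4) is then, at the prompt of a fully mDV-configured matlab session on mafalda:

mcc -m -R -nojvm myBLRMS.m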
  284   Tue Jan 29 14:56:39 2008   rob   Update   DMF   seisBLRMS 1.0

The seisBLRMS 1.0 program crashed at ~7:20 pm last night, so we didn't get data from overnight. It crashed when framecaching failed. I added
a try-catch-end statement around the call to dttfft2 to let the program survive this, then compiled it and started it on mafalda. After ~45 minutes, the compiled version encountered the same error, and while it didn't crash per se, after 20 minutes it still wasn't able to read data. We may have to dig deeper into the guts of mDV to make this stuff run more robustly.
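
The guard is essentially this, inside the main data loop (the dttfft2 argument list here is schematic, not the real signature):

try
  spec = dttfft2(chan_list, t_start, t_duration);   % schematic arguments
catch
  % framecaching failed -- note it, wait a bit, and move on to the next segment
  disp(['dttfft2 failed at GPS ' num2str(t_start) ', skipping this segment']);
  pause(30);
  continue
end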
  287   Wed Jan 30 20:39:31 2008   rob   Update   DMF   seisBLRMS


In order to reduce the probability of seisBLRMS crashing due to unavailability of data, I edited seisBLRMS.m so that it displays data from 6 minutes in the past, rather than 3. After compiling, this version ran for ~8 hours without crashing. I've killed the process now because it seems to interfere with alignment scripts that use ezcademod, causing "DATA RECEIVING ERROR 4608" messages. These don't cause ezcademod to crash, but so many of them are spit out that the scripts don't work very well. I guess running DMF constantly is just making the framebuilder work too hard with disk access. In the near term, we can maybe work around this by having DMF programs check the AutoDithering bit of the IFO state vector, and just not try to get data when we're running these sorts of scripts.
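
A possible shape for that check (the channel name, bit position, and the EPICS reader readEpics are all placeholders here, since the state-vector details aren't pinned down in this entry):

AUTODITHER_BIT = 4;                                   % hypothetical bit position
state = readEpics('C1:IFO-STATE_VECTOR');             % hypothetical channel name
if bitand(uint32(state), bitshift(uint32(1), AUTODITHER_BIT-1))
  pause(60);    % alignment/dither scripts are running; skip the data fetch this cycle
else
  % ... fetch data and compute the BLRMS as usual ...
end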
  291   Fri Feb 1 12:37:39 2008   rob   Update   DMF   seisBLRMS trends

Here are DV trends of the output of seisBLRMS over the last ~36 hours (which is how long it's been running), and another of the last 2 hours (which shows the construction crew taking what appears to be a lunch break).
Attachment 1: seis36hours.png
Attachment 2: seis2hours.png
  307   Sun Feb 10 21:43:16 2008   rob   Configuration   IOO   MC alignment tweaked

I adjusted the alignment of the free hanging mode cleaner to best transmit the PSL beam.
  309   Mon Feb 11 22:44:29 2008   rob   Configuration   DAQ   Change in channel trending on C1SUS2

I removed the MC2 optical lever related channels from trends, and the SRM POUT and YOUT (as these are redundant if we're also trending PERROR and YERROR). I did this because the c1susvme2 processor was having bursts of un-syncy lateness every ~15 seconds or so, and I suspected this might interfere with locking activities. This behaviour appears to have been happening for a month or so, and has been getting steadily worse. Rebooting did not fix the issue, but it appears that removing some trends has actually helped. Attached is a 50 day trend of the c1susvme2 sync-fe monitor.
Attachment 1: srm_sync.png
  310   Tue Feb 12 13:53:27 2008   rob   Omnistructure   VAC   Return of the RGA

The new RGA head was installed a few days ago. I just ran the RGAlogger script to see if it works, which it does. I also edited the crontab file on op340m to run the RGAlogger script every night at 1:25 AM. It should run tonight.
Attachment 1: RGA-080212.png
  311   Tue Feb 12 16:18:29 2008   rob   Configuration   Computer Scripts / Programs   autoburt cron moved to op340m

I moved the autoburt cron job from op440m to op340m. For some reason, burtrb requires gcc to run (I gather it uses the C-preprocessor to parse input files), so I had to install that on op340m to get it to work properly.

There are no more cron jobs running on op440m now.
  312   Tue Feb 12 16:34:07 2008   rob   DAQ   DMF   seisBLRMS 1.1
The compiled version of seisBLRMS had been running ~2 weeks without crashing as of last night, when I killed it
so it wouldn't interfere with alignment scripts.  I added an EPICS channel C1:DMF-ENABLE, and updated the DMF
executables to check this channel while running.  So far it seems to work.  When you're running alignment scripts,
simply click the DISABLE button on the C1DMF_MASTER.adl screen, and then re-ENABLE when the scripts finish.
It's not clear why this is necessary, though.  Theories include: the constant disk access keeps the
framebuilder too busy to deal with ezcademod commands, or the DMF programs simply flood the
network with so much traffic that ezcademod-related packets run late and get ignored.
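
The check amounts to something like this at the top of each pass through the DMF loop (readEpics stands in for whatever channel-access call the compiled tools actually use):

enabled = readEpics('C1:DMF-ENABLE');
if ~enabled
  pause(30);    % DISABLEd from the C1DMF_MASTER.adl screen; idle until re-ENABLEd
  continue
end
% ... otherwise proceed with the normal data fetch and BLRMS calculation ...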

Also, for reasons of aesthetics, I changed the data delay from 6 minutes to 5 minutes.  We'll see if that's enough.
  313   Tue Feb 12 16:39:52 2008   rob   Update   Locking   report

Did some locking work on DRFPMI on Sunday and (with John) on Monday nights. So far progress has not been terribly encouraging.

Problems include the DD_handoffs not working and the CARM->MCL handoff not working so well. To get around the DD signals trouble, I decided for now to just ignore 67% of the DD signals. We should be able to run with PRC & MICH on single demod signals, and SRC on a DD signal. This seems to work well in a DRMI state, and it also works well in a DRMI+2ARMs state.

The CARM->MCL handoff actually works, but it doesn't take kindly to the AO path and it doesn't work very stably. I guess this was always the most fragile part of the whole locking procedure, and its fragility is really coming to light now. Investigation continues.
  316   Thu Feb 14 15:04:53 2008   rob   DAQ   DMF   DMF delay

Some time ago I edited seisBLRMS to keep track of how long it was taking to write RMS data (that is, the delay between the accelerometer data and the write of the EPICS RMS data). Here's a plot of that info, showing how the delay increases over time. I think this indicates a logical flaw in the timing of the seisBLRMS program, which sort of relies on everything running consistently well; this should not be difficult to fix. I'll maybe try increasing the delay to ~10 minutes and making it relatively inflexible.
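
A sketch of the "inflexible delay" idea (gps_now and segment_length are stand-ins for whatever seisBLRMS actually uses): always request the segment that ends a fixed lag behind the current time, instead of the segment after the last one processed, so a slow cycle can't push the program progressively further behind:

fixed_lag = 600;                          % seconds, i.e. the ~10 minute delay
t_end   = floor(gps_now()) - fixed_lag;   % gps_now() is a hypothetical helper
t_start = t_end - segment_length;
% fetch [t_start, t_end], compute the BLRMS, write EPICS, then sleep until the
% next segment boundary rather than starting the next pass immediately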
Attachment 1: DMFdelay.png
  317   Thu Feb 14 15:05:18 2008   rob   DAQ   DMF   seisBLRMS 1.1
> 
> Also, for reasons of aesthetics, I changed the data delay from 6 minutes to 5 minutes.  We'll see if that's enough.

5 minutes didn't work.  
  333   Fri Feb 22 11:11:00 2008   rob   Update   Electronics   REFLDD problem found

I used a network analyzer that actually works to find a problem in the REFLDD electronics chain. There was a loose (=bad) SMA-BNC adaptor on the output of channel one of the HP RF Amplifier. It worked intermittently, so going onto the ISCT and fiddling with cables could sometimes temporarily fix the problem. The bad adaptor has been given to Andrey to discard.
  334   Fri Feb 22 11:13:15 2008   rob   Update   Electronics   RF Monitor (StocMon)

Quote:
It took some times because the written procedure to start the chiller is not very precise.


It is actually very precise. Precisely wrong.
  337   Fri Feb 22 16:47:54 2008   rob   Update   Electronics   Baloney
Well I guess Rana didn't study too hard at Professor School, either. If he'd even bothered to actually read John's entry, he might have looked at the RF Out from the HP Analyzer. As it is, this experience so far has been like taking your car to a highly respected mechanic, telling him it's having acceleration problems, and then he takes a rag and wipes some dirt off the hood and then tells you "It's running fine. That'll be 500 bucks."

I make the current score:

Snarkiness: 2
Education:  0



I did RTFM, and it doesn't mention anything about crazy behaviour on the RF Output. So, I set up the analyzer to do a sweep from 500MHz to 1MHz, with output power of 0dBm, and plugged the output directly into the 300MHz scope with the input set to 50 Ohm impedance. The swept sine output looks totally normal from 500MHz to 150MHz (measuring ~220mVrms below 300MHz -- 0dBm), where it abruptly transitions to a distorted waveform which the scope measures as having a frequency of ~25MHz and with 450mVrms (+6dBm). It then transitions again at some other part of the sweep to a cleaner-looking 25MHz waveform with ~1.2Vrms (+15dBm). See the attached Quicktime movie. The screeching in the background is the PSL door.
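
For reference, those dBm figures follow from the rms voltage into the scope's 50 Ohm input:

dBm = @(Vrms) 10*log10((Vrms.^2/50)/1e-3);   % power in dB relative to 1 mW
dBm(0.22)    % ~ -0.1 dBm  (the normal 0dBm sweep level)
dBm(0.45)    % ~ +6.1 dBm  (first distorted region)
dBm(1.2)     % ~ +14.6 dBm (the ~25MHz waveform)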

With this bizarre behaviour, it's actually possible that even someone who does everything carefully and correctly could break sensitive electronics with this machine. Let's get it fixed or get a new one.

Don't use the HP4195A anymore unless you want broken electronics.


Quote:
I'm not sure where Ward and Miller went to Analyzer school, but it was probably uncredited.
I turned it on and used 2 BNC cables and a T to hook up the source to the 2 inputs and measured the always-exciting TF of cable.

Score:  HP Analyzer  1
        Rob & John   0


I have left the analyzer on in this complicated configuration. RTFM boys.


Quote:
The HP 4195A network analyser may be broken, measurements below 150MHz are not reliable. Above 150MHz everything looks normal. This may be caused by a problem with its output (the one you'd use as an excitation) which is varying in amplitude in a strange way.

Attachment 1: bustedHP.mov
  343   Thu Feb 28 12:31:33 2008   rob   Bureaucracy   Computer Scripts / Programs   MDV library does not work at "LINUX 2"

Quote:

While working on Thursday evening with Matlab scripts "dttfft2" and "get_data", I noticed that mDV library does not work at computer "LINUX 2" (the third computer in the control-room if you enter it from the restroom). There are multiple error messages if we try to run "hello_world", "dttfft2" or "get_data". In order to take data from accelerometers, I changed the computer - I was working from "LINUX 3" computer, the most right computer in the control room, but for the future someone should resolve the issue at "LINUX 2". I am not experienced enough to revive the correct work of mDV directory at "LINUX 2".

Andrey.


This turned out to be due to /frames not being mounted on linux2, as a result of a reboot. The issue is discussed in entry 270. I remounted /frames, and added a line to mdv_config.m to check whether the frames are mounted.
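
The added check is essentially the following (paraphrased -- the exact path and message in mdv_config.m may differ; the trick is to test for a directory that only exists when the frames are actually mounted):

if ~exist('/frames/full', 'dir')
  error('mdv_config: /frames does not appear to be mounted on this machine -- see elog entry 270');
end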
  344   Thu Feb 28 13:04:59 2008   rob   Update   VAC   rga logging working again

Quote:
The rga head and controller are running fine, but the data logging is not.


It should run tonight at 1:25 AM. To get the cron job to work properly on op340m, I had to make a wrapper sh script which defines the perl library before calling the actual RGAlogger script.
  346   Thu Feb 28 19:37:41 2008   rob   Configuration   Computers   multiple cameras running and seisBLRMS

Quote:
1) Mafalda is now connected via an orange Cat5E ethernet cord to the gigabit ethernet switch in rack in the office space. It has been labeled at both ends with "mafalda".

2) Both the GC650M camera (from MIT) and the GC750M are working. I can run the sampleviewer code and get images simultaneously. Unfortunately, the fps on both cameras seems to drop roughly in half (not an exact measurement) when displaying both simultaneously at full resolution.

3)Discovered the Gigabit ethernet card in Mafalda doesn't support jumbo packets (packets of up to 9k bytes), which is what they recommend for optimum speed.

4)However, connecting the cameras through only gigabit switches to Mafalda did seem to increase the data rate anyways, roughly by a factor of 2. (Used to take about 80 seconds to get 1000 frames saved, now takes roughly 40 seconds).

5)Need to determine the bottleneck on the cameras. It may be the ethernet card, although it's possible to connect multiple gigabit cards to a single computer (depending on the number of PCI slots it has). Given the ethernet cards are cheap ($300 for 20) compared to even a single camera (~$800-1500), it might be worthwhile outfitting a computer with multiple cards.


I found the SampleViewer running and displaying images from the two cameras. This kept mafalda's network so busy that the seisBLRMS program fell behind by a half-hour from its nominal delay (so 45 minutes instead of 12), and was probably getting steadily further behind. I killed the SampleViewer display on linux2, and seisBLRMS is catching up.
  347   Thu Feb 28 19:49:21 2008   rob   Update   Electronics   RF Monitor (StocMon)


Quote:

With Ben, we hooked up the RF Monitor box into the PSL rack and created 4 EPICS channels for the outputs:

C1:IOO_RF_STOC_MON_33
C1:IOO_RF_STOC_MON_133
C1:IOO_RF_STOC_MON_166
C1:IOO_RF_STOC_MON_199

The power cable bringing +15V to the preamplifier on the PSL table should be replaced eventually.


I changed the names of these channels to the more appropriate (and informative, as they're coming from the RFAMPD):

C1:IOO-RFAMPD_33MHZ
C1:IOO-RFAMPD_133MHZ
C1:IOO-RFAMPD_166MHZ
C1:IOO-RFAMPD_199MHZ

I also added them in an aesthetically sound manner to the C1IOO_LockMC.adl screen and put them in trends. Along the way, I also lost whatever Alberto had done to make these monitors read zero when there's no light on the diode. It doesn't appear to be written down anywhere, and would have been lost with a reboot anyway. We'll need a more permanent & automatable solution for this.
  355   Tue Mar 4 10:08:21 2008   rob   Update   Computers   green lights unreliable when c0daqctrl down

So far I've tried powering off the framebuilder, power-cycling the RAID (it was showing an error message about bad IDE channel #4), and rebooting the LSC (just for fun). When I reset the LSC, its green light on the RFM_NETWORK screen did not turn red, making all these lights suspect. The iscepics40m process is what controls these red/green lights, so maybe it's gone wonky. It appears to be running on c1dcuepics, however, and it also seems to be functioning correctly in other ways (it's communicating correctly with the LSC).

Update: Alex and Jay came by. The solution was to reset the c0daqctrl processor, which apparently was not done in Rana's rebooting spree. Or maybe it needed to be done last.
  358   Tue Mar 4 23:22:32 2008   rob   DAQ   Computers   c1susvme1&2 rebooted

I found that some channels from c1susvme1 and c1susvme2 were not being recorded by the DAQ (and were not showing up in DV). I rebooted these processors, which fixed the problem. If you see other cases of this (signal exactly zero, but not a testpoint problem), just reboot the corresponding processor.
  362   Thu Mar 6 00:17:37 2008   rob   Update   Locking   DD handoff working
Got the DD (double demod) handoff scripts working tonight, with just the DRMI. So, now acquisition with the single demod signals is working well, and handoffs to all double demod signals using the input matrix ramping worked several times with the scripts. Up next will be more work with the DRM+ARMs.
  366   Mon Mar 10 02:05:08 2008   rob   Update   Locking   DRMI+2ARMs working better

Some encouraging progress on the locking front tonight. After the work on the DRM loops last week and a review of the settings for initial lock acquisition (loop gains, tickle amplitude, filter states, so on), the DRMI+2ARMS locking is working pretty well. That's to say, it takes from 5-15 minutes generally for the IFO to lock in the offset CARM state, with the arm powers at 0.5. It's then possible to raise the arm powers slightly, and handing off control of CARM to MCL works at low power, but engaging the AO path (using PO_DC as an error signal) is not working so well. Taking swept sines indicates that the PO_DC should be a good error signal. The next good thing to try might be just using PO_DC as an error signal for the length path, without using the AO path at all, to see if it's something in the hardware.
  380   Fri Mar 14 15:06:24 2008   rob   Update   Computer Scripts / Programs   routing PEM -> ASS -> SUS_MCL

Quote:

on ASS RFM 1 has PEM signals at

float at 0x100000 has c0dcu1 first ICS110B chan 1
float at 0x100004 has chan 2
etc.

ASS sends to RFM 0

float at 0x100000 goes to PRM MCL
0x100004 to BS MCL
0x100008 to ITMX MCL
0x10000c to ITMY MCL
0x100010 to SRM MCL
0x100018 to MC1 MCL
0x10001c to MC3 MCL
0x100020 to ETMX MCL
0x100024 to ETMY MCL


You can differentiate between RFM 0 and RFM 1 in the simulink model by adding 0x4000000 to the offsets for RFM 1.
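
For example (matlab hex arithmetic), the address to use in the model for PEM channel n on RFM 1 works out to:

rfm1_base = hex2dec('4000000');    % offset added to flag RFM 1 in the model
pem_base  = hex2dec('100000');     % first ICS110B channel on RFM 1
pem_addr  = @(n) dec2hex(rfm1_base + pem_base + 4*(n-1));   % floats are 4 bytes apart
pem_addr(1)   % '4100000' -> PEM channel 1
pem_addr(2)   % '4100004' -> PEM channel 2, and so on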
  381   Fri Mar 14 15:52:07 2008   rob   Configuration   LSC   LSC code change

I've edited the LSC code to send different signals to the ASS box. Now, instead of the previously selected error signals deemed to be acceptable for the Alignment Sensing and Stabilization system, it sends the LSC control signals for each suspension to the ASS box (in its new incarnation as the Adaptive Susurration Subtraction system). These are the signals after the output matrix, and also after the LSC-[SUS] filter modules.
  383   Sun Mar 16 17:03:32 2008   rob   Configuration   CDS   ASS code change

I've updated the ass.mdl file in the directory:

/cvs/cds/caltech/users/alex/cds/advLigo/src/epics/simLink/

to get us started in the adaptive PEM noise subtraction.

After several iterations of remote help from Alex, the code compiles and runs, receives signals from the LSC, PEM, and MC2, and communicates with the suspension controllers. I've also adapted the .par file from the code generator, but haven't got the testpoints working with the new ASS code. There are no MEDM screens yet, and Matt's adaptive filter code has not been installed (there's a matrix as a placeholder).

Putting in the adaptive code should be simple, building the MEDM screens tedious, and getting the testpoints working uncertain. I noticed that the new testpoint.par file starts at a different channel number than the previous (working) version, which is strange. I probably have a script somewhere to change all these numbers by a constant offset, but I don't know if that's the actual problem--maybe stuff just needs to be rebooted.

The code receives as input the first 24 channels from the PEM ADCU, the eight suspension control signals from the LSC, and the output of the MCL filter from MC2. It outputs to the MCL filter input of each suspension (except MC2).
  386   Thu Mar 20 16:06:27 2008   rob   Configuration   LSC   LSC code change

I changed the LSC code again. I noticed that when turning off the LSC (e.g., going from LA to OFF), the cpu time would jump from ~50 to ~80, and irrevocably de-sync all the SUS controllers. This was because turning off the LSC would suddenly zero the inputs to the decimation filters that send information to the ASS box, which for some reason greatly increases the computation time of the iir filter function call. I changed the code so that these inputs are never zeroed. The ASS receives inputs from the LSC all the time now.

I also noticed that the ASS machine was running in ~2400 usec. Yes, 2,400 microseconds. I don't know how long it's been doing that, but I restarted it. Immediately after restart, it ran at 1700 microseconds. After using the "RESET" field in the adaptOnline code, that dropped to ~100 usec. Now it's not doing any adaptive filtering, as I don't know what the good settings are and no-one has been elogging their IFO work the last few days.
  389   Fri Mar 21 11:54:38 2008   rob   Update   VAC   tp 2 failed

Quote:
Small turbo #2 is the forepump of the maglev.
It failed last night, shut down the maglev and interlock closed V1
Ifo pressure is 20 mTorr now. The Yarm was still locked at 8am this morning.
The PSL beam to MC was blocked just before the output periscope.
The PSL mechanical shutter did not work from the EPICS screen.


The PSL mechanical shutter actually did trip last night, greatly confusing me and Rana. Not realizing that the software vacuum interlock had tripped, we manually re-opened the shutter. I'll modify the relevant MEDM screens to indicate when the EPICS interlock trips.
  398   Mon Mar 24 13:03:54 2008   rob   Update   Electronics   HP4195A is back



Quote:
The swept sine output looks totally normal from 500MHz to 150MHz (measuring ~220mVrms below 300MHz -- 0dBm), where it abruptly transitions to a distorted waveform which the scope measures as having a frequency of ~25MHz and with 450mVrms (+6dBm). It then transitions again at some other part of the sweep to a cleaner-looking 25MHz waveform with ~1.2Vrms (+15dBm).


The HP4195A is back from repair. At first, it exhibited exactly the same behaviour for which it was sent in for repair, and which is described above (pillaged from entry 337). After speaking with the repair tech on the phone, who tried to imply that the digital scope was tricking us, I plugged the output into our HP8591E spectrum analyzer, just to have firm ammunition to combat the repair guy's looniness. This led to even weirder behaviour, like no output and overload signals on the inputs (with nothing connected). After turning the unit on and off several times, and firmly seating (and screwing in) the DB9 connectors in the back of the unit, it appears to be working properly. Except for a brief glitch as it passes through 150MHz, the swept sine signal now appears normal, both on the scope and in the spectrum analyzer.

Apparently the whole thing is due to a loose connection somewhere in the box, which wasn't actually fixed by the repair, but has at least been temporarily fixed by me stumbling around with a screwdriver and then pushing the power button a couple of times.
  400   Tue Mar 25 10:44:24 2008   rob   Update   Computers   c1susvme2

Quote:
c1susvme2 isn't behaving itself. It keeps getting out of sync and/or giving a red status light.

After going through the usual restart procedures a few times (unsuccessfully) we power cycled the c1susvme & c1sosvme crates. We think everything came back okay.

We still can't get the status and CRC (cyclic redundancy check) to return to normal on c1susvme2. If Alex is around tomorrow please ask him to take a look.


I rebooted it again this morning. The ASS machine is currently not running its process, for whatever reason (did someone turn it off?). Let's leave it like this for a day and see how c1susvme2 does. The other recent change is Steve's installation of a cooling fan--maybe that's causing the problem.
  403   Tue Mar 25 16:34:47 2008   rob   Update   Computers   c1susvme2

Quote:

Quote:
c1susvme2 isn't behaving itself. It keeps getting out of sync and/or giving a red status light.

After going through the usual restart procedures a few times (unsuccessfully) we power cycled the c1susvme & c1sosvme crates. We think everything came back okay.

We still can't get the status and CRC (cyclic redundancy check) to return to normal on c1susvme2. If Alex is around tomorrow please ask him to take a look.


I rebooted it again this morning. The ASS machine is currently not running its process, for whatever reason (did someone turn it off?). Let's leave it like this for a day and see how c1susvme2 does. The other recent change is Steve's installation of a cooling fan--maybe that's causing the problem.


Now c1susvme1 is joining the action. Since leaving the ASS off doesn't change anything, we can probably absolve it of blame. I now suspect the 4-pin LEMO cables going from the CLK DRIVER modules to the clock fanout modules. These cables are being squeezed/shaken by Steve's new fan setup, and may have been the culprit all along. John will do some testing to see if they are indeed the problem.
  406   Fri Mar 28 16:18:18 2008   rob   Update   Computers   c1susvme2 status
c1susvme2 is getting worse and worse. It won't run for more than ~45 minutes without fatally de-syncing. For now I've turned off c1iovme (which sends the MCL signal) to see if that's causing the problem. Next I'll swap the boards for c1susvme1 and c1susvme2 to see if it's the CPU (or maybe the RFM card) itself, rather than the timing/pentek systems.
  408   Mon Mar 31 14:14:16 2008   rob   Update   Computers   c1susvme2 status

Quote:
c1susvme2 is getting worse and worse. It won't run for more than ~45 minutes without fatally de-syncing. For now I've turned off c1iovme (which sends the MCL signal) to see if that's causing the problem. Next I'll swap the boards for c1susvme1 and c1susvme2 to see if it's the CPU (or maybe the RFM card) itself, rather than the timing/pentek systems.


I swapped the processors for c1susvme1 and c1susvme2. So for now, to start up, you should ssh into c1susvme1 and run the startup.cmd for c1susvme2, and vice versa.
  426   Fri Apr 18 16:27:04 2008   rob   Update   SUS   end station sus front-end bug fix

Quote:
installed and started new susEtmx.o and susEtmy.o to fix a problem with ETMY optical lever variables.


But where is the code?
  432   Mon Apr 21 12:58:42 2008   rob   Update   ASS   check adaptive

Quote:


Caryn Palatchi (a Caltech undergrad who just started working with us)
illustrated to me today that using even 1000 FIR taps is not very effective
for low frequency noise cancellation if you have a 2048 Hz sample rate. More
precisely, the asymptotic Wiener filter which our 'LMS' algorithm converges
to, can often amplify the noise at frequencies below f_sample/N_taps.

A less obvious thing that she also noticed is that there is almost no cancellation
of the 16.25 Hz bounce mode when using such a short filter. That's because that
mode is fairly high Q: the transfer function from the Z-ACC to the cavity signal
goes through the high-Q vertical suspension resonance; the FF signal we send back
goes through the low-Q horizontal pendulum response only. Therefore the filter
needs to be able to simulate ~100 cycles at 16.25 Hz in order to cancel that peak.

Duh.

The message here is: we need to find a computationally efficient way to do FIR filtering
or it's not going to ever be cool enough to help us find the Crab.


This is the reason for the "RDNSAMP" parameter in the ASS code. The FIR filtering is applied at the downsampled rate, not the machine rate. So, if RDNSAMP=32, the effective sampling rate of the FIR filter is 64 Hz, and thus noise cancellation should be good down to 64 Hz/1000, or 64 mHz, and the filter has an impulse response that extends to ~15 seconds. I'm not convinced the filter length is what's limiting the performance at the bounce mode, but I agree that a faster FIR implementation would be good.
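
For reference, the arithmetic behind those numbers:

f_machine = 2048;                 % Hz, front-end rate
RDNSAMP   = 32;                   % downsampling factor in the ASS code
N_taps    = 1000;                 % FIR filter length
f_eff     = f_machine/RDNSAMP     % = 64 Hz, the rate at which the FIR actually runs
f_floor   = f_eff/N_taps          % = 0.064 Hz, roughly the lowest useful frequency
T_imp     = N_taps/f_eff          % = 15.6 s impulse-response length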
  433   Mon Apr 21 13:12:21 2008   rob   Update   Computer Scripts / Programs   tdsread bugs

Quote:
There seems to be a problem with reading the C1:IOO-MASTER_OVERFLOW field
when it is read in as part of an array. The only way for me to describe it
is to just attach the terminal output in this entry...this is mainly for
Matt and Rob
.


I first noticed that the output of the MC-WFS sensing matrix was different than
the outputs from a year ago, namely that the excitation channel was not being
processed and outputted to the file. This made the output matrix diagonalization
scripts fail.

I noticed that there are several different copies of tdsread.cc sitting around.
Looks like they have been hacked in the last year but I am not sure if this
excitation channel readback is an intentional change; email has been sent to the
authors to find out -- they will probably post some kind of response in the log
to resolve what's up.


My guess is that the problem with the IOO channel is not related, but I'm not sure:
op440m:WFS>set ioo_head = "${ifo}:IOO-"
op440m:WFS>set sus_head = "${ifo}:SUS-"
op440m:WFS>set oflows = `tdsread ${sus_head}MC1_MASTER_OVERFLOW ${sus_head}MC2_MASTER_OVERFLOW ${sus_head}MC3
_MASTER_OVERFLOW ${ioo_head}MASTER_OVERFLOW`
ERROR: C1:IOO-MASTER_OVERFLOW value not read
op440m:WFS>set oflows = `tdsread ${sus_head}MC1_MASTER_OVERFLOW ${sus_head}MC2_MASTER_OVERFLOW ${ioo_head}MAS
TER_OVERFLOW`
ERROR: C1:IOO-MASTER_OVERFLOW value not read
op440m:WFS>set oflows = `tdsread ${ioo_head}MASTER_OVERFLOW`
op440m:WFS>set oflows = `tdsread ${sus_head}MC1_MASTER_OVERFLOW ${ioo_head}MASTER_OVERFLOW`
ERROR: C1:IOO-MASTER_OVERFLOW value not read
op440m:WFS>set oflows = `tdsread ${sus_head}MC1_MASTER_OVERFLOW ${ioo_head}MASTER_OVERFLOW ${sus_head}MC2_MASTER_OVERFLOW`
ERROR: C1:IOO-MASTER_OVERFLOW value not read
op440m:WFS>echo $oflows
0
op440m:WFS>set oflows = `tdsread ${ioo_head}MASTER_OVERFLOW`
op440m:WFS>echo $oflows
0
op440m:WFS>set oflows = `tdsread ${sus_head}MC1_MASTER_OVERFLOW ${sus_head}MC2_MASTER_OVERFLOW ${sus_head}MC3_MASTER_OVERFLOW`
op440m:WFS>echo $oflows
0 0 0
op440m:WFS>echo `tdsread ${sus_head}MC1_MASTER_OVERFLOW ${ioo_head}MASTER_OVERFLOW ${sus_head}MC2_MASTER_OVERFLOW`
ERROR: C1:IOO-MASTER_OVERFLOW value not read
0
op440m:WFS>echo "tdsread ${sus_head}MC1_MASTER_OVERFLOW ${ioo_head}MASTER_OVERFLOW ${sus_head}MC2_MASTER_OVERFLOW"
tdsread C1:SUS-MC1_MASTER_OVERFLOW C1:IOO-MASTER_OVERFLOW C1:SUS-MC2_MASTER_OVERFLOW
op440m:WFS>



This is the same bug described in entry 180. I believe it has nothing to do with tdsread, which did not change in the time period before the bug appeared, but perhaps has something to do with other EPICS libraries somewhere (tdsread relies on these epics libraries to do its dirty work). Here is entry 180 for reference:


Quote:
tdsread has developed a strange new illness, whereby it cannot read EPICS values from two subsystems at once (e.g., getting an LSC and SUS value simultaneously). I thought this might have something to do with the fact that both losepics and iscepics are running on the same box,
but the same thing happens with IOO EPICS records, so that's not the culprit.

This is new behaviour, and it's only happening on the solaris machines. I suspect some ENV/cshrc juju has caused it, as the tdsread executable is the same one from April, and I don't think our EPICS infrastructure has changed otherwise. In the near term we can either try running the scripts on linux, or modify the IFO scripts to not do these types of calls.


The solution that's been in effect for the past few months has just been to modify the scripts to not make these kinds of calls.
  435   Tue Apr 22 10:59:24 2008   rob   Update   SUS   MC1 electronics busted

Quote:
I spent some time trying to fix the utter programming fiasco which was our MCWFS diagonalization script.

However, it still didn't work. Loops unstable. Using the matrix in the screen snapshot is OK, however.

Finally, I realized from looking at the imaginary part of the output matrix that there was something
wrong with the MC1 drive. The attached JPG shows TFs from pit-drives of the MC mirrors to WFS1.

MC1 & MC3 are supposed to have 28 Hz elliptic low pass filters in hardware for dewhitening. The MC2
hardware is different and so we have given it a software 28 Hz ELP to compensate. But it looks like
MC1 doesn't have the low pass (no phase lag). I tried switching its COIL FM10 filters to make it
switch but no luck.

We'll have to engage the filters to make the McWFS work right and to get the MC noise down. This
needs someone to go check out the hardware I think.

I have turned the gain way down and this has stabilized the MC REFL signal as you can see from the StripTool screen.


This was just because the XYCOM was set to switch the "dewhites" based on FM9 rather than FM10. To check whether the hardware ellipDW filters were engaged, I drove MC1 & MC3 in position (using the MCL bank), and looked at the transfer functions MC2_MCL/MC1_MCL and MC2_MCL/MC3_MCL. This method uses the mode cleaner length servo to enable a relatively clear transfer function measurement of the ellipDW, modulo the loop gain of MCL and the fact that it's really hard to measure an ELP cascaded with a suspension. The hardware and the switching appear to be working fine.

It's now set up such that the hardware is ENGAGED when the coil FM10 filters are OFF, and I deleted all the FM10 filters from the coils of MC1 and MC3. Since we don't switch these filters on and off regularly, I see no need to waste precious SUS processor power on filters that just calculate "1".
  436   Tue Apr 22 16:17:48 2008   rob   Update   SUS   end station sus front-end bug fix

Quote:
installed and started new susEtmx.o and susEtmy.o to fix a problem with ETMY optical lever variables.


What Alex means is that the EPICS values for the ETMY optical levers were being clobbered in the RFM. The calculations were being done correctly in the FE, so the DAQ/testpoints were working--it was just the EPICS/RFM communication via c1losepics that was bugged. This was a result of the recent SUS code changes to accept inputs from the ASS for adaptive feedforward.
  438   Tue Apr 22 22:19:02 2008   rob   Metaphysics   lore   jiggling sliders

In the interests of tacit communication of scientific knowledge, I here reveal a nugget of knowledge which may or may not prove useful to new LIGOites: sometimes when front-end machines are rebooted, the hardware they control can wind up in a state which is not accurately represented by the EPICS values you may see. This can be easily rectified by momentarily changing the EPICS settings in question. For reference, this came up tonight in the context of the whitening gain sliders for the TransMon QPDs.
  442   Thu Apr 24 14:10:26 2008   rob   Update   Locking   locking work
Rob, Johnnie

We made some progress on locking last night (Wed night), namely that we were able to hand off (briefly) the CARM-MCL path to the REFL-DC error signal. We tried this because we suspect that the reason the PO-DC is not a good CARM error signal is that at low powers, the DC light level in the recycling cavity is dominated by the +f2 RF sideband. Thus, REFL-DC should work a bit better at low powers, which it did. It wasn't super stable, though, so this will require a bit of work to make the transition reliable & stable. The next things to work on include setting the AO path gain properly and possibly going to higher arm powers before handing off (thus increasing the discriminant).

Another thing we found is that the alignment scripts are not working in an ideal fashion. Running the alignment scripts for the two arms (XARM & YARM) leaves the Michelson badly misaligned, making it impossible to get good DRM alignment. This will have to be fixed.
  456   Sun Apr 27 18:11:58 2008   rob   DAQ   Computers   br40m?

The testpoint manager (which runs on fb40m) crashed this afternoon. Upon re-starting it, I found there was a rogue dtt process on op440m and also a daqd daemon running on br40m. One or both of these caused the tpman to crash. br40m is the frame broadcaster, which is never used here as we don't run DMT. I killed the daqd process there.

The way to find out whether there is a rogue process is to watch the console output from the tpman when you start it:

Allocate new TP handle 56 by 131.215.113.203
Allocate new TP handle 57 by 131.215.113.203
Allocate new TP handle 58 by 131.215.113.203
Allocate new TP handle 59 by 131.215.113.203
Allocate new TP handle 60 by 131.215.113.203
Allocate new TP handle 61 by 131.215.113.203
Allocate new TP handle 62 by 131.215.113.203
Allocate new TP handle 63 by 131.215.113.203
Allocate new TP handle 64 by 131.215.113.203
Allocate new TP handle 65 by 131.215.113.203
Allocate new TP handle 66 by 131.215.113.203
Allocate new TP handle 67 by 131.215.113.203
Allocate new TP handle 68 by 131.215.113.203


If you see something like this, with a new TP handle being allocated every few seconds, you need to log in to the corresponding host and kill whatever process has run away.
  464   Mon May 5 11:04:30 2008   rob   Omnistructure   Computers   Network setup

Mafalda was not connected to the network, and so our DMF-based seisBLRMS has not been running for ~1 week. I traced this to a broken ethernet cable connecting mafalda to the network switch in the rack next to the B&W printer. This cable has a broken connector at the switch side, which means it can't stay connected if there's any tension. It needs to be replaced.
  466   Tue May 6 17:28:39 2008   rob   Configuration   LSC   AP33 -> POX33

I am in the process of switching the POX166 and AP33 photodetectors, so that they become POX33 and AP166. The IFO_CONFIGURE buttons won't work until I finish.
  467   Wed May 7 15:25:41 2008   rob   Configuration   LSC   AP33 -> POX33

Quote:

I am in the process of switching the POX166 and AP33 photodetectors, so that they become POX33 and AP166. The IFO_CONFIGURE buttons won't work until I finish.


Done. We're now in the 40m CDD configuration.
  490   Wed May 21 15:21:33 2008   rob   Update   Computer Scripts / Programs   autolockers and cron

I added hourly cron jobs to op340m to ensure that

MC autolocker
FSS Slow Servo
PSL watch

are running. I've also edited the wiki procedure to reflect the fact that these no longer need to be restarted by hand.
  507   Fri May 30 12:37:45 2008   rob   Update   SUS   etmy oplev is back

Quote:
I relayed the optics for ETMY-oplev as shown in pictures below.
The reflected beam goes directly to the qpd


I turned on the servo. UGFs in PIT and YAW are ~3Hz. I had to flip the sign of the YAW.
  531   Thu Jun 12 01:51:23 2008   rob   Update   Locking   report
rob, john

We've been working (nights) on getting the IFO locked this week. There's been fairly steady incremental progress each night, and tonight we managed to control CARM(MCL) using PO-DC, with the CARM(AO) path also on PO-DC. In the past, reaching this state has usually meant we're home free, as we could just crank the gain on the common mode servo and merrily reduce the CARM offset. Tonight, however, this state has been very twitchy, and efforts to ramp up the gain have been unsuccessful.

I've attached a diagram which I hope makes clear where we are in the stages of lock acquisition.
Attachment 1: lock_control_sequence.png
  533   Thu Jun 12 15:55:15 2008   rob   Update   Locking   report

Quote:
Rob: Awesome figure. As you can imagine, I have lots of questions, and hope that you will consider this figure to be the beginning, leading to ever-more detailed versions. But for now, I just want to ask whether you understand *what* is twitchy, and what the twitchiness does to prevent you from taking this further?


I definitely don't understand what's twitchy, but I have suspicions. Tonight we'll try to start by revisiting the other loops (the non-CARM loops) and see how they're dealing with the changing power levels. It may be that the DARM loop is going unstable due to gain variations (due to either increasing power or to rotation of demod phase) or it could be the PODD (or SPOB) saturating with increased power in the recycling cavity. I just hope the glitchiness doesn't have a digital origin.
  537   Wed Jun 18 00:19:29 2008   rob   Update   PSL   MOPA trend
15 day trend of MOPA channels. The NPRO temperature fluctuations are real, and are causing the PMC to consistently run up against its rails. The cause of the temperature fluctuations is unknown. This, combined with the MZ glitches and Miller kicking off DC power supplies, is making locking rather tetchy tonight. Hopefully Yoichi will find the problem with the laser and fix it by tomorrow night.
Attachment 1: MOPAtrend.png
  538   Wed Jun 18 16:07:57 2008   rob   Summary   Computers   RFM network down

The RFM network tripped off around noon today. It's still down. The problem appears to be with the EPICS interface (c1dcuepics). Trying to restart one of the end stations yields the error: No response from EPICS.

Possible causes include (but are not limited to): a busted RFM card on c1dcuepics, a busted PMC bus on c1dcuepics, or a busted fiber from c1dcuepics to the RFM switch. We need Alex.