ID   Date   Author   Type   Category   Subject
  346   Thu Feb 28 19:37:41 2008   rob   Configuration   Computers   multiple cameras running and seisBLRMS

Quote:
1) Mafalda is now connected via an orange Cat5E ethernet cord to the gigabit ethernet switch in rack in the office space. It has been labeled at both ends with "mafalda".

2) Both the GC650M camera (from MIT) and the GC750M are working. I can run the SampleViewer code and get images from both simultaneously. Unfortunately, the fps on both cameras seems to drop roughly in half (not an exact measurement) when displaying both simultaneously at full resolution.

3) Discovered that the gigabit ethernet card in Mafalda doesn't support jumbo packets (packets of up to 9k bytes), which is what they recommend for optimum speed.

4) However, connecting the cameras to Mafalda through only gigabit switches did seem to increase the data rate anyway, roughly by a factor of 2. (It used to take about 80 seconds to save 1000 frames; now it takes roughly 40 seconds.)

5) Need to determine the bottleneck on the cameras. It may be the ethernet card, although it's possible to connect multiple gigabit cards to a single computer (depending on the number of PCI slots it has). Given that the ethernet cards are cheap ($300 for 20) compared to even a single camera (~$800-1500), it might be worthwhile outfitting a computer with multiple cards.


I found the SampleViewer running and displaying images from the two cameras. This kept mafalda's network so busy that the seisBLRMS program fell behind its nominal delay by about half an hour (45 minutes instead of 12), and was probably falling steadily further behind. I killed the SampleViewer display on linux2, and seisBLRMS is catching up.
  347   Thu Feb 28 19:49:21 2008   rob   Update   Electronics   RF Monitor (StocMon)


Quote:

With Ben, we hooked the RF Monitor box up in the PSL rack and created 4 EPICS channels for the outputs:

C1:IOO_RF_STOC_MON_33
C1:IOO_RF_STOC_MON_133
C1:IOO_RF_STOC_MON_166
C1:IOO_RF_STOC_MON_199

The power cable bringing +15V to the preamplifier on the PSL table should be replaced eventually.


I changed the names of these channels to something more appropriate (and informative, as they're coming from the RFAMPD):

C1:IOO-RFAMPD_33MHZ
C1:IOO-RFAMPD_133MHZ
C1:IOO-RFAMPD_166MHZ
C1:IOO-RFAMPD_199MHZ

I also added them in an aesthetically sound manner to the C1IOO_LockMC.adl screen and put them in trends. Along the way, I also lost whatever Alberto had done to make these monitors read zero when there's no light on the diode. It doesn't appear to be written down anywhere, and would have been lost with a reboot anyway. We'll need a more permanent & automatable solution for this.
  355   Tue Mar 4 10:08:21 2008   rob   Update   Computers   green lights unreliable when c0daqctrl down

So far I've tried powering off the framebuilder, power-cycling the RAID (it was showing an error message about bad IDE channel #4), and rebooting the LSC (just for fun). When I reset the LSC, its green light on the RFM_NETWORK screen did not turn red, making all these lights suspect. The iscepics40m process is what controls these red/green lights, so maybe it's gone wonky. It appears to be running, however, on c1dcuepics, and it also seems to be functioning correctly in other ways (it's communicating correctly with the LSC).

Update: Alex and Jay came by. The solution was to reset the c0daqctrl processor, which apparently was not done in Rana's rebooting spree. Or maybe it needed to be done last.
  358   Tue Mar 4 23:22:32 2008   rob   DAQ   Computers   c1susvme1&2 rebooted

I found that some channels from c1susvme1 and c1susvme2 were not being recorded by the DAQ (and were not showing up in DV). I rebooted these processors, which fixed the problem. If you see other cases of this (signal exactly zero, but not a testpoint problem), just reboot the corresponding processor.
  362   Thu Mar 6 00:17:37 2008   rob   Update   Locking   DD handoff working
Got the DD (double demod) handoff scripts working tonight, with just the DRMI. So, now acquisition with the single demod signals is working well, and handoffs to all double demod signals using the input matrix ramping worked several times with the scripts. Up next will be more work with the DRM+ARMs.
  366   Mon Mar 10 02:05:08 2008   rob   Update   Locking   DRMI+2ARMs working better

Some encouraging progress on the locking front tonight. After the work on the DRM loops last week and a review of the settings for initial lock acquisition (loop gains, tickle amplitude, filter states, and so on), the DRMI+2ARMS locking is working pretty well. That is to say, it generally takes 5-15 minutes for the IFO to lock in the offset CARM state, with the arm powers at 0.5. It's then possible to raise the arm powers slightly, and handing off control of CARM to MCL works at low power, but engaging the AO path (using PO_DC as an error signal) is not working so well. Taking swept sines indicates that PO_DC should be a good error signal. The next good thing to try might be just using PO_DC as an error signal for the length path, without using the AO path at all, to see if it's something in the hardware.
  380   Fri Mar 14 15:06:24 2008   rob   Update   Computer Scripts / Programs   routing PEM -> ASS -> SUS_MCL

Quote:

on ASS RFM 1 has PEM signals at

float at 0x100000 has c0dcu1 first ICS110B chan 1
float at 0x100004 has chan 2
etc.

ASS sends to RFM 0

float at 0x100000 goes to PRM MCL
0x100004 to BS MCL
0x100008 to IMTX MCL
0x10000c to ITMY MCL
0x100010 to SRM MCL
0x100018 to MC1 MCL
0x10001c to MC3 MCL
0x100020 to ETMX MCL
0x100024 to ETMY MCL


You can differentiate between RFM 0 and RFM 1 in the simulink model by adding 0x4000000 to the offsets for RFM 1.
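
Since the mapping above is just base-plus-stride arithmetic over 4-byte floats, with RFM 1 selected by folding in the 0x4000000 flag, here's a small sketch of the bookkeeping (the helper and names are mine, for illustration only, not anything in the simulink model):

# Hypothetical helper illustrating the offset arithmetic described above.
RFM1_FLAG = 0x4000000   # add to an offset to address RFM 1 instead of RFM 0
BASE = 0x100000         # first float slot
FLOAT_SIZE = 4          # each channel is a 4-byte float

def rfm_offset(slot, rfm=0):
    """Offset of the Nth float slot (0-based) on RFM 0 or RFM 1."""
    return BASE + FLOAT_SIZE * slot + (RFM1_FLAG if rfm == 1 else 0)

# PEM signals arrive on RFM 1: chan 1 of the first ICS110B is slot 0
assert rfm_offset(0, rfm=1) == 0x4100000
# ASS outputs go to RFM 0: PRM MCL is slot 0, BS MCL is slot 1, ...
assert rfm_offset(1) == 0x100004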
  381   Fri Mar 14 15:52:07 2008   rob   Configuration   LSC   LSC code change

I've edited the LSC code to send different signals to the ASS box. Now, instead of the previously selected error signals deemed to be acceptable for the Alignment Sensing and Stabilization system, it sends the LSC control signals for each suspension to the ASS box (in its new incarnation as the Adaptive Susurration Subtraction system). These are the signals after the output matrix, and also after the LSC-[SUS] filter modules.
  383   Sun Mar 16 17:03:32 2008   rob   Configuration   CDS   ASS code change

I've updated the ass.mdl file in the directory:

/cvs/cds/caltech/users/alex/cds/advLigo/src/epics/simLink/

to get us started in the adaptive PEM noise subtraction.

After several iterations of remote help from Alex, the code compiles and runs, receives signals from the LSC, PEM, and MC2, and communicates with the suspension controllers. I've also adapted the .par file from the code generator, but haven't got the testpoints working with the new ASS code. There are no MEDM screens yet, and Matt's adaptive filter code has not been installed (there's a matrix as a placeholder).

Putting in the adaptive code should be simple, building the MEDM screens tedious, and getting the testpoints working uncertain. I noticed that the new testpoint.par file starts at a different channel number than the previous (working) version, which is strange. I probably have a script somewhere to change all these numbers by a constant offset, but I don't know if that's the actual problem--maybe stuff just needs to be rebooted.

The code receives as input the first 24 channels from the PEM ADCU, the eight suspension control signals from the LSC, and the output of the MCL filter from MC2. It outputs to the MCL filter input of each suspension (except MC2).
  386   Thu Mar 20 16:06:27 2008   rob   Configuration   LSC   LSC code change

I changed the LSC code again. I noticed that when turning off the LSC (e.g., going from LA to OFF), the cpu time would jump from ~50 to ~80, and irrevocably de-sync all the SUS controllers. This was because turning off the LSC would suddenly zero the inputs to the decimation filters that send information to the ASS box, which for some reason greatly increases the computation time of the IIR filter function call (possibly because the zeroed filter histories decay into denormalized floats, which are slow to process on many CPUs). I changed the code so that these inputs are never zeroed. The ASS receives inputs from the LSC all the time now.

I also noticed that the ASS machine was running at ~2400 usec. Yes, 2,400 microseconds. I don't know how long it's been doing that, but I restarted it. Immediately after restart, it ran at 1700 microseconds. After using the "RESET" field in the adaptOnline code, that dropped to ~100 usec. Now it's not doing any adaptive filtering, as I don't know what the good settings are and no one has been elogging their IFO work the last few days.
  389   Fri Mar 21 11:54:38 2008   rob   Update   VAC   tp 2 failed

Quote:
Small turbo #2 is the forepump of the maglev.
It failed last night, which shut down the maglev, and the interlock closed V1.
IFO pressure is 20 mTorr now. The Yarm was still locked at 8am this morning.
The PSL beam to the MC was blocked just before the output periscope.
The PSL mechanical shutter did not work from the EPICS screen.


The PSL mechanical shutter actually did trip last night, greatly confusing me and Rana. Not realizing that the software vacuum interlock had tripped, we manually re-opened the shutter. I'll modify the relevant MEDM screens to indicate when the EPICS interlock trips.
  398   Mon Mar 24 13:03:54 2008   rob   Update   Electronics   HP4195A is back



Quote:
The swept sine output looks totally normal from 500MHz to 150MHz (measuring ~220mVrms below 300MHz -- 0dBm), where it abruptly transitions to a distorted waveform which the scope measures as having a frequency of ~25MHz and with 450mVrms (+6dBm). It then transitions again at some other part of the sweep to a cleaner-looking 25MHz waveform with ~1.2Vrms (+15dBm).


The HP4195A is back from repair. At first, it exhibited exactly the same behaviour for which it was sent in, described above (pillaged from entry 337). After speaking with the repair tech on the phone, who tried to imply that the digital scope was tricking us, I plugged the output into our HP8591E spectrum analyzer, just to have firm ammunition to combat the repair guy's looniness. This led to even weirder behaviour, like no output and overload signals on the inputs (with nothing connected). After turning the unit on and off several times, and firmly seating (and screwing in) the DB9 connectors in the back of the unit, it appears to be working properly. Except for a brief glitch as it passes through 150MHz, the swept sine signal now appears normal, both on the scope and on the spectrum analyzer.

Apparently the whole thing is due to a loose connection somewhere in the box, which wasn't actually fixed by the repair, but has at least been temporarily fixed by me stumbling around with a screwdriver and then pushing the power button a couple of times.
  400   Tue Mar 25 10:44:24 2008   rob   Update   Computers   c1susvme2

Quote:
c1susvme2 isn't behaving itself. It keeps getting out of sync and/or giving a red status light.

After going through the usual restart procedures a few times (unsuccessfully), we power cycled the c1susvme & c1sosvme crates. We think everything came back okay.

We still can't get the status and CRC (cyclic redundancy check) to return to normal on c1susvme2. If Alex is around tomorrow please ask him to take a look.


I rebooted it again this morning. The ASS machine is currently not running its process, for whatever reason (did someone turn it off?). Let's leave it like this for a day and see how c1susvme2 does. The other recent change is Steve's install of a cooling fan -- maybe that's causing the problem.
  403   Tue Mar 25 16:34:47 2008   rob   Update   Computers   c1susvme2

Quote:

Quote:
c1susvme2 isn't behaving itself. It keeps getting out of sync and/or giving a red status light.

After going through the usual restart procedures a few times (unsuccessfully), we power cycled the c1susvme & c1sosvme crates. We think everything came back okay.

We still can't get the status and CRC (cyclic redundancy check) to return to normal on c1susvme2. If Alex is around tomorrow please ask him to take a look.


I rebooted it again this morning. The ASS machine is currently not running its process, for whatever reason (did someone turn it off?). Let's leave it like this for a day and see how c1susvme2 does. The other recent change is Steve's install of a cooling fan -- maybe that's causing the problem.


Now c1susvme1 is joining the action. Since leaving the ASS off doesn't change anything, we can probably absolve it of blame. I now suspect the 4-pin LEMO cables going from the CLK DRIVER modules to the clock fanout modules. These cables are being squeezed/shaken by Steve's new fan setup, and may have been the culprit all along. John will do some testing to see if they are indeed the problem.
  406   Fri Mar 28 16:18:18 2008   rob   Update   Computers   c1susvme2 status
c1susvme2 is getting worse and worse. It won't run for more than ~45 minutes without fatally de-syncing. For now I've turned off c1iovme (which sends the MCL signal) to see if that's causing the problem. Next I'll swap the boards for c1susvme1 and c1susvme2 to see if it's the CPU (or maybe the RFM card) itself, rather than the timing/pentek systems.
  408   Mon Mar 31 14:14:16 2008   rob   Update   Computers   c1susvme2 status

Quote:
c1susvme2 is getting worse and worse. It won't run for more than ~45 minutes without fatally de-syncing. For now I've turned off c1iovme (which sends the MCL signal) to see if that's causing the problem. Next I'll swap the boards for c1susvme1 and c1susvme2 to see if it's the CPU (or maybe the RFM card) itself, rather than the timing/pentek systems.


I swapped the processors for c1susvme1 and c1susvme2. So for now, to startup, you should ssh into c1susvme1 and run the startup.cmd for c1susvme2, and vice versa.
  426   Fri Apr 18 16:27:04 2008   rob   Update   SUS   end station sus front-end bug fix

Quote:
Installed and started new susEtmx.o and susEtmy.o to fix a problem with ETMY optical lever variables.


But where is the code?
  432   Mon Apr 21 12:58:42 2008   rob   Update   ASS   check adaptive

Quote:


Caryn Palatchi (a Caltech undergrad who just started working with us) illustrated to me today that using even 1000 FIR taps is not very effective for low-frequency noise cancellation if you have a 2048 Hz sample rate. More precisely, the asymptotic Wiener filter which our 'LMS' algorithm converges to can often amplify the noise at frequencies below f_sample/N_taps.

A less obvious thing that she also noticed is that there is almost no cancellation of the 16.25 Hz bounce mode when using such a short filter. That's because that mode is fairly high Q: the transfer function from the Z-ACC to the cavity signal goes through the high-Q vertical suspension resonance, while the FF signal we send back goes through only the low-Q horizontal pendulum response. Therefore the filter needs to be able to simulate ~100 cycles at 16.25 Hz in order to cancel that peak.

Duh.

The message here is: we need to find a computationally efficient way to do FIR filtering or it's not going to ever be cool enough to help us find the Crab.


This is the reason for "RDNSAMP" parameter in the ASS code. The FIR filtration is applied at the downsampled rate, not the machine rate. So, if RDNSAMP=32, the effective sampling rate of the FIR filter is 64Hz, and thus noise cancellation should be good down to 64Hz/1000, or 64mHz, and the filter has an impulse response time that extends to 15 secs. I'm not convinced the filter length is what's limiting the performance at the bounce mode, but I agree that a faster FIR implementation would be good.
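
For the curious, here's a minimal numpy sketch of the downsampled-LMS idea. RDNSAMP and the tap count follow the numbers above; the step size and the toy plant are invented for illustration, and this is not the actual ASS code:

import numpy as np

fs, RDNSAMP, NTAPS, mu = 2048, 32, 1000, 1e-4
fs_fir = fs / RDNSAMP          # 64 Hz effective rate -> usable down to ~64 mHz

w = np.zeros(NTAPS)            # adaptive FIR taps
hist = np.zeros(NTAPS)         # history of the downsampled witness channel

def lms_step(witness, desired):
    """One LMS update at the downsampled rate: predict, subtract, adapt."""
    global hist
    hist = np.roll(hist, 1); hist[0] = witness
    y = w @ hist               # FIR prediction of the noise in the target
    e = desired - y            # residual after feedforward subtraction
    w[:] += mu * e * hist      # steepest-descent tap update
    return y, e

# Toy data; a real implementation decimates with an anti-alias filter rather
# than just taking every RDNSAMP-th machine-rate sample as done here.
witness = np.random.randn(fs * 60)                          # e.g. a seismometer
target = np.convolve(witness, np.ones(8) / 8, mode="same")  # toy coupling path
for n in range(0, len(witness), RDNSAMP):
    y, e = lms_step(witness[n], target[n])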
  433   Mon Apr 21 13:12:21 2008   rob   Update   Computer Scripts / Programs   tdsread bugs

Quote:
There seems to be a problem with reading the C1:IOO-MASTER_OVERFLOW field when it is read in as part of an array. The only way for me to describe it is to just attach the terminal output in this entry...this is mainly for Matt and Rob.

I first noticed that the output of the MC-WFS sensing matrix was different than the outputs from a year ago, namely that the excitation channel was not being processed and outputted to the file. This made the output matrix diagonalization scripts fail.

I noticed that there are several different copies of tdsread.cc sitting around. Looks like they have been hacked in the last year but I am not sure if this excitation channel readback is an intentional change; email has been sent to the authors to find out -- they will probably post some kind of response in the log to resolve what's up.


My guess is that the problem with the IOO channel is not related, but I'm not sure:
op440m:WFS>set ioo_head = "${ifo}:IOO-"
op440m:WFS>set sus_head = "${ifo}:SUS-"
op440m:WFS>set oflows = `tdsread ${sus_head}MC1_MASTER_OVERFLOW ${sus_head}MC2_MASTER_OVERFLOW ${sus_head}MC3_MASTER_OVERFLOW ${ioo_head}MASTER_OVERFLOW`
ERROR: C1:IOO-MASTER_OVERFLOW value not read
op440m:WFS>set oflows = `tdsread ${sus_head}MC1_MASTER_OVERFLOW ${sus_head}MC2_MASTER_OVERFLOW ${ioo_head}MASTER_OVERFLOW`
ERROR: C1:IOO-MASTER_OVERFLOW value not read
op440m:WFS>set oflows = `tdsread ${ioo_head}MASTER_OVERFLOW`
op440m:WFS>set oflows = `tdsread ${sus_head}MC1_MASTER_OVERFLOW ${ioo_head}MASTER_OVERFLOW`
ERROR: C1:IOO-MASTER_OVERFLOW value not read
op440m:WFS>set oflows = `tdsread ${sus_head}MC1_MASTER_OVERFLOW ${ioo_head}MASTER_OVERFLOW ${sus_head}MC2_MASTER_OVERFLOW`
ERROR: C1:IOO-MASTER_OVERFLOW value not read
op440m:WFS>echo $oflows
0
op440m:WFS>set oflows = `tdsread ${ioo_head}MASTER_OVERFLOW`
op440m:WFS>echo $oflows
0
op440m:WFS>set oflows = `tdsread ${sus_head}MC1_MASTER_OVERFLOW ${sus_head}MC2_MASTER_OVERFLOW ${sus_head}MC3_MASTER_OVERFLOW`
op440m:WFS>echo $oflows
0 0 0
op440m:WFS>echo `tdsread ${sus_head}MC1_MASTER_OVERFLOW ${ioo_head}MASTER_OVERFLOW ${sus_head}MC2_MASTER_OVERFLOW`
ERROR: C1:IOO-MASTER_OVERFLOW value not read
0
op440m:WFS>echo "tdsread ${sus_head}MC1_MASTER_OVERFLOW ${ioo_head}MASTER_OVERFLOW ${sus_head}MC2_MASTER_OVERFLOW"
tdsread C1:SUS-MC1_MASTER_OVERFLOW C1:IOO-MASTER_OVERFLOW C1:SUS-MC2_MASTER_OVERFLOW
op440m:WFS>



This is the same bug described in entry 180. I believe it has nothing to do with tdsread, which did not change in the time period before the bug appeared, but perhaps has something to do with other EPICS libraries somewhere (tdsread relies on these EPICS libraries to do its dirty work). Here is entry 180 for reference:


Quote:
tdsread has developed a strange new illness, whereby it cannot read EPICS values from two subsystems at once (e.g., getting an LSC and SUS value simultaneously). I thought this might have something to do with the fact that both losepics and iscepics are running on the same box, but the same thing happens with IOO EPICS records, so that's not the culprit.

This is new behaviour, and it's only happening on the solaris machines. I suspect some ENV/cshrc juju has caused it, as the tdsread executable is the same one from April, and I don't think our EPICS infrastructure has changed otherwise. In the near term we can either try running the scripts on linux, or modify the IFO scripts to not do these types of calls.


The solution that's been in effect for the past few months has just been to modify the scripts to not make these kinds of calls.
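
Since the fix is organizational, it's easy to script: group the channels by subsystem and issue one tdsread call per group. A hypothetical sketch of the idea in Python (the scripts themselves are csh; tdsread's whitespace-separated output is taken from the transcript above):

import subprocess

def tdsread_grouped(channels):
    """Read EPICS channels via tdsread, one invocation per subsystem prefix."""
    groups = {}
    for ch in channels:
        prefix = ch.split("-")[0]              # e.g. "C1:SUS" or "C1:IOO"
        groups.setdefault(prefix, []).append(ch)
    values = {}
    for chans in groups.values():
        out = subprocess.run(["tdsread"] + chans, capture_output=True,
                             text=True, check=True).stdout
        values.update(zip(chans, out.split()))
    return values

vals = tdsread_grouped(["C1:SUS-MC1_MASTER_OVERFLOW",
                        "C1:SUS-MC2_MASTER_OVERFLOW",
                        "C1:IOO-MASTER_OVERFLOW"])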
  435   Tue Apr 22 10:59:24 2008   rob   Update   SUS   MC1 electronics busted

Quote:
I spent some time trying to fix the utter programming fiasco which was our MCWFS diagonalization script.

However, it still didn't work. Loops unstable. Using the matrix in the screen snapshot is OK, however.

Finally, I realized from looking at the imaginary part of the output matrix that there was something wrong with the MC1 drive. The attached JPG shows TFs from pit-drives of the MC mirrors to WFS1.

MC1 & MC3 are supposed to have 28 Hz elliptic low-pass filters in hardware for dewhitening. The MC2 hardware is different and so we have given it a software 28 Hz ELP to compensate. But it looks like MC1 doesn't have the low pass (no phase lag). I tried switching its COIL FM10 filters to make it switch but no luck.

We'll have to engage the filters to make the McWFS work right and to get the MC noise down. This needs someone to go check out the hardware, I think.

I have turned the gain way down and this has stabilized the MC REFL signal as you can see from the StripTool screen.


This was just because the XYCOM was set to switch the "dewhites" based on FM9 rather than FM10. To check whether the hardware ellipDW filters were engaged, I drove MC1 & MC3 in position (using the MCL bank), and looked at the transfer functions MC2_MCL/MC1_MCL and MC2_MCL/MC3_MCL. This method uses the mode cleaner length servo to enable a relatively clear transfer function measurement of the ellipDW, modulo the loop gain of MCL and the fact that it's really hard to measure an ELP cascaded with a suspension. The hardware and the switching appear to be working fine.

It's now set up such that the hardware is ENGAGED when the coil FM10 filters are OFF, and I deleted all the FM10 filters from the coils of MC1 and MC3. Since we don't switch these filters on and off regularly, I see no need to waste precious SUS processor power on filters that just calculate "1".
  436   Tue Apr 22 16:17:48 2008   rob   Update   SUS   end station sus front-end bug fix

Quote:
Installed and started new susEtmx.o and susEtmy.o to fix a problem with ETMY optical lever variables.


What Alex means is that the EPICS values for the ETMY optical levers were being clobbered in the RFM. The calculations were being done correctly in the FE, so the DAQ/testpoints were working--it was just the EPICS/RFM communication via c1losepics that was bugged. This was a result of the recent SUS code changes to accept inputs from the ASS for adaptive feedforward.
  438   Tue Apr 22 22:19:02 2008   rob   Metaphysics   lore   jiggling sliders

In the interests of tacit communication of scientific knowledge, I here reveal a nugget of knowledge which may or may not prove useful to new LIGOites: sometimes when front-end machines are rebooted, the hardware they control can wind up in a state which is not accurately represented by the EPICS values you may see. This can be easily rectified by momentarily changing the EPICS settings in question. For reference, this came up tonight in the context of the whitening gain sliders for the TransMon QPDs.
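
Scripted, the trick is just: read the setting, nudge it, put it back. ezcawrite appears elsewhere in this log, but the ezcaread output parsing and the channel name below are assumptions, so treat this as an illustrative sketch only:

import subprocess

def jiggle(channel, delta=1.0):
    """Momentarily change an EPICS setting and restore it, to refresh hardware."""
    out = subprocess.run(["ezcaread", channel], capture_output=True,
                         text=True, check=True).stdout
    value = float(out.split()[-1])       # assumes the value is the last token
    subprocess.run(["ezcawrite", channel, str(value + delta)], check=True)
    subprocess.run(["ezcawrite", channel, str(value)], check=True)

# e.g. refresh a TransMon QPD whitening gain after a reboot
# (channel name is made up for the example)
jiggle("C1:ASC-QPDX_WHITEN_GAIN")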
  442   Thu Apr 24 14:10:26 2008   rob   Update   Locking   locking work
Rob, Johnnie

We made some progress on locking last night (Wed night), namely that we were able to hand off (briefly) the CARM-MCL path to the REFL-DC error signal. We tried this because we suspect that the reason PO-DC is not a good CARM error signal is that at low powers, the DC light level in the recycling cavity is dominated by the +f2 RF sideband. Thus, REFL-DC should work a bit better at low powers, which it did. It wasn't super stable, though, so this will require a bit of work to make the transition reliable & stable. The next things to work on include setting the AO path gain properly and possibly going to higher arm powers before handing off (thus increasing the discriminant).

Another thing we found is that the alignment scripts are not working in an ideal fashion. Running the alignment scripts for the two arms (XARM & YARM) leaves the Michelson badly misaligned, making it impossible to get good DRM alignment. This will have to be fixed.
  456   Sun Apr 27 18:11:58 2008   rob   DAQ   Computers   br40m?

The testpoint manager (which runs on fb40m) crashed this afternoon. Upon re-starting it, I found there was a rogue dtt process on op440m and also a daqd daemon running on br40m. One or both of these caused the tpman to crash. br40m is the frame broadcaster, which is never used here as we don't run DMT. I killed the daqd process there.

The way to find if there is a rogue process is to watch the output to the console from the tpman when you start it:

Allocate new TP handle 56 by 131.215.113.203
Allocate new TP handle 57 by 131.215.113.203
Allocate new TP handle 58 by 131.215.113.203
Allocate new TP handle 59 by 131.215.113.203
Allocate new TP handle 60 by 131.215.113.203
Allocate new TP handle 61 by 131.215.113.203
Allocate new TP handle 62 by 131.215.113.203
Allocate new TP handle 63 by 131.215.113.203
Allocate new TP handle 64 by 131.215.113.203
Allocate new TP handle 65 by 131.215.113.203
Allocate new TP handle 66 by 131.215.113.203
Allocate new TP handle 67 by 131.215.113.203
Allocate new TP handle 68 by 131.215.113.203


If you see something like this, with a new TP handle being allocated every few seconds, you need to log in to the corresponding host and kill whatever process has run away.
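
If you'd rather not eyeball it, something like this hypothetical filter can watch for the pattern (the line format is copied from the output above):

import re, sys, time

# Pipe tpman's console output into this; it flags hosts that allocate new TP
# handles every few seconds, the signature of a rogue client.
last_seen = None
for line in sys.stdin:
    m = re.match(r"Allocate new TP handle (\d+) by ([\d.]+)", line)
    if m:
        now = time.time()
        if last_seen is not None and now - last_seen < 5:
            print(f"possible rogue client at {m.group(2)} (handle {m.group(1)})")
        last_seen = now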
  464   Mon May 5 11:04:30 2008   rob   Omnistructure   Computers   Network setup

Mafalda was not connected to the network, and so our DMF-based seisBLRMS has not been running for ~1 week. I traced this to a broken ethernet cable connecting mafalda to the network switch in the rack next to the B&W printer. This cable has a broken connector at the switch side, which means it can't stay connected if there's any tension. It needs to be replaced.
  466   Tue May 6 17:28:39 2008   rob   Configuration   LSC   AP33 -> POX33

I am in the process of switching the POX166 and AP33 photodetectors, so that they become POX33 and AP166. The IFO_CONFIGURE buttons won't work until I finish.
  467   Wed May 7 15:25:41 2008   rob   Configuration   LSC   AP33 -> POX33

Quote:

I am in the process of switching the POX166 and AP33 photodetectors, so that they become POX33 and AP166. The IFO_CONFIGURE buttons won't work until I finish.


Done. We're now in the 40m CDD configuration.
  490   Wed May 21 15:21:33 2008   rob   Update   Computer Scripts / Programs   autolockers and cron

I added hourly cron jobs to op340m to ensure that

MC autolocker
FSS Slow Servo
PSL watch

are running. I've also edited the wiki procedure to reflect the fact that these no longer need to be restarted by hand.
  507   Fri May 30 12:37:45 2008   rob   Update   SUS   etmy oplev is back

Quote:
I relayed the optics for the ETMY oplev as shown in the pictures below.
The reflected beam goes directly to the QPD.


I turned on the servo. UGFs in PIT and YAW are ~3Hz. I had to flip the sign of the YAW.
  531   Thu Jun 12 01:51:23 2008   rob   Update   Locking   report
rob, john

We've been working (nights) on getting the IFO locked this week. There's been fairly steady incremental progress each night, and tonight we managed to control CARM(MCL) using PO-DC, with the CARM(AO) path also on PO-DC. In the past, reaching this state has usually meant we're home free, as we could just crank the gain on the common mode servo and merrily reduce the CARM offset. Tonight, however, this state has been very twitchy, and efforts to ramp up the gain have been unsuccessful.

I've attached a diagram which I hope makes clear where we are in the stages of lock acquisition.
Attachment 1: lock_control_sequence.png
  533   Thu Jun 12 15:55:15 2008   rob   Update   Locking   report

Quote:
Rob: Awesome figure. As you can imagine, I have lots of questions, and hope that you will consider this figure to be the beginning, leading to ever-more detailed versions. But for now, I just want to ask whether you understand *what* is twitchy, and what the twitchiness does to prevent you from taking this further?


I definitely don't understand what's twitchy, but I have suspicions. Tonight we'll try to start by revisiting the other loops (the non-CARM loops) and see how they're dealing with the changing power levels. It may be that the DARM loop is going unstable due to gain variations (from either the increasing power or rotation of the demod phase), or it could be the PODD (or SPOB) saturating with increased power in the recycling cavity. I just hope the glitchiness doesn't have a digital origin.
  537   Wed Jun 18 00:19:29 2008   rob   Update   PSL   MOPA trend
15 day trend of MOPA channels. The NPRO temperature fluctuations are real, and are causing the PMC to consistently run up against its rails. The cause of the temperature fluctuations is unknown. This, combined with the MZ glitches and Miller kicking off DC power supplies, is making locking rather tetchy tonight. Hopefully Yoichi will find the problem with the laser and fix it by tomorrow night.
Attachment 1: MOPAtrend.png
  538   Wed Jun 18 16:07:57 2008   rob   Summary   Computers   RFM network down

The RFM network tripped off around noon today. It's still down. The problem appears to be with the EPICS interface (c1dcuepics). Trying to restart one of the end stations yields the error: No response from EPICS.

Possible causes include (but are not limited to): a busted RFM card on c1dcuepics, a busted PMC bus on c1dcuepics, or a busted fiber from c1dcuepics to the RFM switch. We need Alex.
  551   Sun Jun 22 21:38:49 2008   rob   HowTo   General   IFO CONFIGURE

Now that we're getting back into locking, it's nice to have a stable alignment of the interferometer. Thus, after you're done with your experiment using subsets of the interferometer (such as a single arm),

please use the IFO_CONFIGURE screen, and click "Restore last Auto-Alignment" in the yellow "Full IFO" section.

If you don't know what this means/how to do this, you shouldn't be using the interferometer on your own.
  583   Fri Jun 27 15:20:52 2008   rob   DAQ   LSC   .ini file change

I removed C1:LSC-XARM_CTRL from the frames and added C1:LSC-CARM_ERR.
  587   Sat Jun 28 03:10:25 2008   rob   Update   Computers   c1iovme

Quote:
C1susvme2 and C1iovme crashed, which sent the optics swinging and tripped the watchdogs.

Koji and I were able to restore c1susvme2 without any trouble.

We have been unable to revive c1iovme. We have tried telnetting in and running startup.cmd; the process runs for a while, then hangs with "DAQ init failed -- exiting".

Resetting the board doesn't help. I didn't try keying the whole crate.

All optics are back to normal with damping restored.


I tried keying the crate, then keying the DAQ controller & AWG, then powering down & restarting the framebuilder. On coming up, the framebuilder doesn't start a daqd process, and I can't get one to start by hand (it just prints "652", and then stops). No error messages, and daqd doesn't appear in prstat.

I then tried keying the DAQ controller again (after the fb0 reboot), which blew the watchdogs on all the suspensions. So then I went around and keyed all the crates.

Now, the suspension controllers are back online. Still no c1iovme, and now the framebuilder/DAQ/AWG are also hosed. We can try keying all the crates again, in the order that Yoichi did last week.

After some more poking around, I found the daqd log file. It's now complaining about

Jun 28 03:00:39 fb daqd[546]: [ID 355684 user.info] Fatal error: channel `C1: PSL-FSS_MIXERM_F' is duplicated 126

This is the second error message like this. It first complained about C1: PSL-FSS_FAST_F, so I commented that out of C1IOOF.ini and rebooted the framebuilder (note this is an actual reboot of the full Solaris machine). Eventually I discovered that C1IOOF.ini and C1IOO.ini are essentially identical. We will presumably keep getting these duplicate-channel errors until one of the two files is completely removed.

C1IOO.ini has a modification time of seven PM on Friday night. Who did this and didn't elog it? I've now modified C1IOOF.ini, and I don't remember when it was last modified.
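
Since daqd dies on the first duplicate it hits, it's handy to list all the collisions at once. A hypothetical checker, assuming the usual one-[channel]-section-per-entry .ini layout:

import re, sys
from collections import defaultdict

def find_duplicates(ini_files):
    """Report channel names that appear in more than one DAQ .ini file."""
    seen = defaultdict(list)
    for path in ini_files:
        with open(path) as f:
            for line in f:
                m = re.match(r"\[(C\d+:.+)\]", line.strip())  # e.g. [C1:PSL-FSS_FAST_F]
                if m:
                    seen[m.group(1)].append(path)
    return {ch: paths for ch, paths in seen.items() if len(paths) > 1}

for ch, paths in find_duplicates(sys.argv[1:]).items():
    print(ch, "->", ", ".join(paths))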
  592   Sun Jun 29 14:53:02 2008   rob   Update   Computers   Rebooting

Quote:
All of the computers are now showing green lights.

Remaining problems:

Alignment scripts are failing with "ERROR: LDS - NDS server error #13"
I think this is a server transmission error.

Dataviewer shows all channels as zero.


Fixed. Just started the testpoint manager on fb40m.


su
/usr/controls/tpman &
  614   Tue Jul 1 13:34:29 2008   rob   Update   Computers   RFM network back

Quote:

For some reason, the computers requiring startup.cmd (like c1lsc) halt after running this command. Actually the computer is running ok, but the command freezes. Basically, what it does is simply to load a kernel module. I don't know what is wrong.
Anyway, I just closed the terminal after running startup.cmd and it seems fine for now.


This is normal. On the linux RTFEs (Real-Time Front Ends), the real-time code totally hijacks the kernel, disallowing any interrupts. The system thus becomes totally unresponsive while the code is running, and communicates only through the RFM and the VME backplane.
  615   Tue Jul 1 14:24:58 2008   rob   HowTo   Computer Scripts / Programs   conlog time machine

I've written a perl script (now in the $SCRIPTS/general directory) which implements a "conlog restore" command, restoring channels matching a regexp to a given time using the conlog records and the EpicsTools.pm perl module. The script is called time_machine_conlog:


Quote:


op440m:~>time_machine_conlog

time_machine_conlog restores EPICS control settings using a conlog time
usage: time_machine_conlog [<--dryrun>] <date=yyyy/mm/dd,hh:mm:ss> <timezone> <regexp>

Can also accept a gps time, in which case timezone=gps.
Use the option <--dryrun> to see conlog output without restoring any settings.

EXAMPLE: time_machine_conlog 2008/05/30,12:00:00 PDT "C1:SUS-MC.*_(PIT|YAW)_COMM"



It sometimes returns an error message even when the command is successful--this is because conlog stores EPICS settings to an absurd level of precision, but ezcawrite will not write EPICS values to this level (or at least won't indicate if it did). I consider this a bug in ezcawrite so I'm not touching it.

The script is untested with regards to switch settings (such as ENABLE/DISABLE). It's mainly intended for numerical values.
  617   Tue Jul 1 21:27:27 2008   rob   HowTo   Computer Scripts / Programs   slider twiddling after reboot

Sometimes after we reboot the front-end machines, some of the hardware gets stuck in an unknown state. We generally fix this by twiddling EPICS settings, which refresh the hardware somehow and put it into a known state. I've started a script (slider_twiddle) which we can just run after reboots to do this for us. Right now it just has the QPD whitening gain settings. As we find more stuff, we can add to it. It's in $SCRIPTS/Admin/.
  631   Thu Jul 3 13:54:26 2008   rob   Configuration   Computers   mDV on rosalba

Does mDV work on rosalba? It can't find NDS_GetChannels. Looking on mafalda, I see that NDS_GetChannels is a mexglx (a 32-bit Linux MATLAB MEX binary). I think this means someone may need to compile it for 64-bit MATLAB before we can have mDV on rosalba. When that's done, we should get mDV running on megatron.
  632   Thu Jul 3 16:18:51 2008   rob   Summary   Locking   specgrams
I used ligoDV to make some spectrograms of DARM_ERR (1), QPDX (2), and QPDY (3). These show the massive instability from 30-40Hz growing in the XARM in the last two minutes of a reasonably high power lock (arm powers up to 30). It's strange that it only shows up in one arm.

CARM is on PO-DC, for both the MCL and the AO path.
DARM is on AS166Q.
Attachment 1: darm_specg.png
Attachment 2: qpdx_specg.png
Attachment 3: qpdy_specg.png
  655   Thu Jul 10 14:59:01 2008   rob   Update   Locking   RF common mode at zero offset
rob, john, yoichi

Last night we succeeded in reducing the CARM offset to zero.

We handed off control of the common mode servo from PO-DC to POX-I.

We pushed the common mode servo bandwidth to ~19kHz. Without the boosts, it had ~80 degs of phase margin. Didn't measure it after engaging the boosts (Boost + 1 superboost). Trying to engage the second superboost stage broke the lock.

The process is fully scripted, and the script worked all the way through several times.

The DARM UGF was ~200Hz. The RSE peak could clearly be seen. No optical spring, as expected (we're locking in anti-spring mode).

Engaging test mass de-whitening filters did not work (broke the lock).

I'm attaching a lock control sequence diagram and a trend of the arm power during a scripted up-sequence. I think the script can be sped up significantly (especially the long ramp period).

Up next:

Calibrated DARM spectrum
Noise hunting (start with dewhites)
DC readout
Lock to the springy side.
Attachment 1: lock_control_sequence_worked.png
Attachment 2: trendpowerbuild.png
  658   Fri Jul 11 00:30:24 2008   rob   Metaphysics   Computers   strange SUS controllers

rob, johnnieM

We were hampered early tonight by the fact that someone sneakily turned off the HP RF amplifier on the AS table.

After that, we were hampered further by mode cleaner strangeness. It would occasionally spontaneously unlock & blow its watchdogs. It never made it through the ontoMCL script (putting DC-CARM onto the MCL). After some investigation, we found that c1susvme1 and c1susvme2 were running stochastically late (SYNC_FE != 0), even though their computation times never got above 61. Also, the end SUS controllers were never late.

Weird.

After rebooting the vertex SUS controllers and the c1lsc, things appear to be working again.
  701   Fri Jul 18 23:24:24 2008   rob   Update   PSL   PMC PZT investigation

Quote:
I measured the HV coming to the PMC PZT by unplugging it from the PZT and hooking it up to a DVM.
The DVM reading is pretty much consistent with the reading on EPICS: I got 287V on the DVM when EPICS says 290V.

Then I used a T to monitor the same voltage while it is connected to the PZT. I attached a plot of the actual voltage measured by the DVM vs the EPICS reading.
It shows a hysteresis.
Also, the actual voltage drops by more than half when the PZT is connected. The output impedance of the HV amp is 64k (according to the schematic). If I believe this number, the impedance of the PZT should also be 64k. The current flowing through the PZT is 1.6mA at a 200V EPICS reading.
The impedance of the PZT measured directly by the DVM is 1.5M ohm, which is significantly different from the value expected above. I will check the actual output impedance of the HV amp later.
The capacitance of the PZT measured by the DVM is 300nF. I don't know if I can believe the DVM's ability to measure C.

I noticed that when a high voltage is applied, the actual voltage across the PZT shows a decay.
The second plot shows the step response of the actual voltage.
The voltage going to the PZT was T-ed off and reduced by a factor of 30 with a high-impedance voltage divider so it could be recorded by an ADC.
The PMCTRANSPD channel is temporarily used to monitor this signal.
After the voltage applied to the PZT was increased abruptly (to ~230V), the actual voltage started to decrease exponentially.
When the HV was reduced to ~30V, the actual voltage went up. This behavior explains the weird exponential motion of the PZT feedback signal when the PMC is locked.
The cause of the actual voltage drop is not understood yet.
From the above measurements, we can almost certainly conclude that the problem of the PMC is in the PZT, not in the HV amp nor the readback.


I'd believe the Fluke's measurement of capacitance. Here's some info from PK about the PZT:


Quote:

But the PMC ones were something like 0.750 in. x 0.287 in. thick: 2 microns of displacement per 200 V, resonant frequency greater than 65 kHz. Typical capacitance is around 0.66 uF.


If the PZT capacitance has dropped by a factor of two, that seems like a bad sign. I don't know what to expect for a resistance value of the PZT, but I wouldn't be surprised if it's non-Ohmic. The 64k is the series resistor after the PA85, not the modeled resistance of the PZT itself.
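
As a sanity check of Yoichi's numbers (my arithmetic, not from his entry): treating the series resistor and the PZT's DC load as a divider,

V_PZT = V_HV * R_PZT / (R_out + R_PZT) ~ V_HV / 2  =>  R_PZT ~ R_out = 64 kOhm,
I = (V_HV - V_PZT) / R_out ~ 100 V / 64 kOhm ~ 1.6 mA,

which matches the quoted 1.6 mA, yet is wildly inconsistent with the 1.5 MOhm the Fluke reads directly (an ohmmeter probes with only a few volts).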
  702   Sat Jul 19 19:39:44 2008   rob   Update   PSL   PMC PZT investigation

Quote:

Quote:
The 64k is the series resistor after the PA85, not the modeled resistance of the PZT itself.

Yes. What I meant was that because the measured voltage across the PZT was half of the open-circuit voltage of the HV amp, the DC impedance of the PZT is expected to be similar to the output impedance of the HV amp. Of course, I don't think the DC impedance of a normal PZT should be so low.
I'm puzzled by the discrepancy between this expected DC impedance and the impedance measured directly by the Fluke DVM (1.5M Ohm).
One possibility is that the PZT leaks current only when a high voltage is applied.
  714   Tue Jul 22 13:15:14 2008   rob   Update   PSL   Note from R. Abbott re: the PMC

Quote:
an email from Rich:
Your PZT is broken.

R


Quelle surprise

:(
  727   Wed Jul 23 21:48:30 2008   rob   Configuration   General   restore IFO when you're done with it

When you are done with the IFO, please click "Restore last auto-alignment" on the yellow IFO portion of the C1IFO_CONFIGURE.adl screen. Failure to comply will be interpreted as antagonism toward the lock acquisition effort and will be met with excoriation.
  729   Thu Jul 24 01:04:01 2008   rob   Configuration   LSC   IFR2023A (aka MARCONI) settings

Quote:


P.S.: We made a test by changing the frequency of the local oscillator by a little bit and then coming back to the original value. We observed that the phase of the signal can change, so every time this frequency is moved the 3f demod phase needs to be retuned.



We discovered this little tidbit in March, and remembered it tonight. Basically we found that whenever you change the frequency on one of these signal generators (and maybe any other setting as well), the phase of the signal can change (it's probably just the sign, but still...), meaning that when you return settings to their initial value, not everything is exactly as it once was. For most applications, this doesn't matter. For us, where we use one Marconi to demodulate the product of two other Marconis, it means we can easily cause a great deal of grief for ourselves, as the demod phase for the double demod signals can appear to change.

Pragmatically, what this means is that every time you touch a Marconi you must elog it. Especially if you change a setting and then put it back.
  731   Thu Jul 24 02:57:26 2008   rob   Update   LSC   Arm cavity g-factor measurement

Quote:

So, now I feel that the method for the TEM01 quest should be reconsidered.

If we have any unbalanced resonance for the phase modulation sidebands, an offset in the error signal will be observed even with the carrier exactly on resonance. We don't need to shake or move the cavity mirrors.

The presence of the MC makes things more complicated. Changing the frequency of the modulation that goes through the MC is a bit tricky, as the detuning produces FM-AM conversion; i.e., the beam incident on the arm cavity may be not only phase modulated but also amplitude modulated. This makes the measurement of the offset described above difficult.

The setup of the abs length measurement (FSR measurement) can easily be used for the measurement of the transverse mode spacings. But it needs some more time to be realized.


We should be able to see 166MHz sideband resonances using the double demodulated photodetectors. With these, the 33MHz sidebands will act as the LO when the 166MHz sideband (or mode) resonates. Some modeling may be necessary to determine if the SNR will be good enough to make this worthwhile, however.