ID   Date   Author   Type   Category   Subject
  403   Tue Mar 25 16:34:47 2008   rob   Update   Computers   c1susvme2

Quote:

Quote:
c1susvme2 isn't behaving itself. It keeps getting out of sync and/or giving a red status light.

After going through the usual restart procedures a few times (unsuccessfully) we power cycled the c1susvme & c1sosvme crates. We think everything came back okay.

We still can't get the status and CRC (cyclic redundancy check) to return to normal on c1susvme2. If Alex is around tomorrow please ask him to take a look.


I rebooted it again this morning. The ASS machine is currently not running its process, for whatever reason (did someone turn it off?). Let's leave it like this for a day and see how c1susvme2 does. The other recent change is Steve's install of a cooling fan--maybe that's causing the problem.


Now c1susvme1 is joining the action. Since leaving the ASS off doesn't change anything, we can probably absolve it of blame. I now suspect the 4-pin LEMO cables going from the CLK DRIVER modules to the clock fanout modules. These cables are being squeezed/shaken by Steve's new fan setup, and may have been the culprit all along. John will do some testing to see if they are indeed the problem.
  406   Fri Mar 28 16:18:18 2008   rob   Update   Computers   c1susvme2 status
c1susvme2 is getting worse and worse. It won't run for more than ~45 minutes without fatally de-syncing. For now I've turned off c1iovme (which sends the MCL signal) to see if that's causing the problem. Next I'll swap the boards for c1susvme1 and c1susvme2 to see if it's the CPU (or maybe the RFM card) itself, rather than the timing/Pentek systems.
  408   Mon Mar 31 14:14:16 2008   rob   Update   Computers   c1susvme2 status

Quote:
c1susvme2 is getting worse and worse. It won't run for more than ~45 minutes without fatally de-syncing. For now I've turned off c1iovme (which sends the MCL signal) to see if that's causing the problem. Next I'll swap the boards for c1susvme1 and c1susvme2 to see if it's the CPU (or maybe the RFM card) itself, rather than the timing/Pentek systems.


I swapped the processors for c1susvme1 and c1susvme2. So for now, to start up, you should ssh into c1susvme1 and run the startup.cmd for c1susvme2, and vice versa.
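For reference, the swapped startup looks something like this (a sketch; the target directory path is an assumption based on the usual convention):

ssh c1susvme1
# processors are swapped, so c1susvme1 now boots c1susvme2's code
cd /cvs/cds/caltech/target/c1susvme2     # assumed target path
./startup.cmd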
  426   Fri Apr 18 16:27:04 2008   rob   Update   SUS   end station sus front-end bug fix

Quote:
installed and started new susEtmx.o and susEtmy.o to fix a problem with ETMY optical lever variables.


But where is the code?
  432   Mon Apr 21 12:58:42 2008   rob   Update   ASS   check adaptive

Quote:


Caryn Palatchi (a Caltech undergrad who just started working with us)
illustrated to me today that using even 1000 FIR taps is not very effective
for low frequency noise cancellation if you have a 2048 Hz sample rate. More
precisely, the asymptotic Wiener filter which our 'LMS' algorithm converges
to can often amplify the noise at frequencies below f_sample/N_taps.

A less obvious thing that she also noticed is that there is almost no cancellation
of the 16.25 Hz bounce mode when using such a short filter. That's because that
mode is fairly high Q: the transfer function from the Z-ACC to the cavity signal
goes through the high-Q vertical suspension resonance; the FF signal we send back
goes through the low-Q horizontal pendulum response only. Therefore the filter
needs to be able to simulate ~100 cycles at 16.25 Hz in order to cancel that peak.

Duh.

The message here is: we need to find a computationally efficient way to do FIR filtering,
or it's never going to be cool enough to help us find the Crab.


This is the reason for the "RDNSAMP" parameter in the ASS code. The FIR filtering is applied at the downsampled rate, not the machine rate. So, if RDNSAMP=32, the effective sampling rate of the FIR filter is 64 Hz, and thus noise cancellation should be good down to 64 Hz/1000, or 64 mHz, and the filter has an impulse response that extends to ~15 secs. I'm not convinced the filter length is what's limiting the performance at the bounce mode, but I agree that a faster FIR implementation would be good.
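To spell out the arithmetic:

effective FIR rate:    2048 Hz / 32 (RDNSAMP)  =  64 Hz
low-frequency reach:   64 Hz / 1000 taps       =  64 mHz
impulse response:      1000 taps / 64 Hz       ~  15.6 sec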
  433   Mon Apr 21 13:12:21 2008   rob   Update   Computer Scripts / Programs   tdsread bugs

Quote:
There seems to be a problem with reading the C1:IOO-MASTER_OVERFLOW field
when it is read in as part of an array. The only way for me to describe it
is to just attach the terminal output in this entry...this is mainly for
Matt and Rob.


I first noticed that the output of the MC-WFS sensing matrix was different from
the outputs from a year ago, namely that the excitation channel was not being
processed and written to the file. This made the output matrix diagonalization
scripts fail.

I noticed that there are several different copies of tdsread.cc sitting around.
Looks like they have been hacked in the last year but I am not sure if this
excitation channel readback is an intentional change; email has been sent to the
authors to find out -- they will probably post some kind of response in the log
to resolve what's up.


My guess is that the problem with the IOO channel is not related, but I'm not sure:
op440m:WFS>set ioo_head = "${ifo}:IOO-"
op440m:WFS>set sus_head = "${ifo}:SUS-"
op440m:WFS>set oflows = `tdsread ${sus_head}MC1_MASTER_OVERFLOW ${sus_head}MC2_MASTER_OVERFLOW ${sus_head}MC3_MASTER_OVERFLOW ${ioo_head}MASTER_OVERFLOW`
ERROR: C1:IOO-MASTER_OVERFLOW value not read
op440m:WFS>set oflows = `tdsread ${sus_head}MC1_MASTER_OVERFLOW ${sus_head}MC2_MASTER_OVERFLOW ${ioo_head}MASTER_OVERFLOW`
ERROR: C1:IOO-MASTER_OVERFLOW value not read
op440m:WFS>set oflows = `tdsread ${ioo_head}MASTER_OVERFLOW`
op440m:WFS>set oflows = `tdsread ${sus_head}MC1_MASTER_OVERFLOW ${ioo_head}MASTER_OVERFLOW`
ERROR: C1:IOO-MASTER_OVERFLOW value not read
op440m:WFS>set oflows = `tdsread ${sus_head}MC1_MASTER_OVERFLOW ${ioo_head}MASTER_OVERFLOW ${sus_head}MC2_MASTER_OVERFLOW`
ERROR: C1:IOO-MASTER_OVERFLOW value not read
op440m:WFS>echo $oflows
0
op440m:WFS>set oflows = `tdsread ${ioo_head}MASTER_OVERFLOW`
op440m:WFS>echo $oflows
0
op440m:WFS>set oflows = `tdsread ${sus_head}MC1_MASTER_OVERFLOW ${sus_head}MC2_MASTER_OVERFLOW ${sus_head}MC3_MASTER_OVERFLOW`
op440m:WFS>echo $oflows
0 0 0
op440m:WFS>echo `tdsread ${sus_head}MC1_MASTER_OVERFLOW ${ioo_head}MASTER_OVERFLOW ${sus_head}MC2_MASTER_OVERFLOW`
ERROR: C1:IOO-MASTER_OVERFLOW value not read
0
op440m:WFS>echo "tdsread ${sus_head}MC1_MASTER_OVERFLOW ${ioo_head}MASTER_OVERFLOW ${sus_head}MC2_MASTER_OVERFLOW"
tdsread C1:SUS-MC1_MASTER_OVERFLOW C1:IOO-MASTER_OVERFLOW C1:SUS-MC2_MASTER_OVERFLOW
op440m:WFS>



This is the same bug described in entry 180. I believe it has nothing to do with tdsread, which did not change in the time period before the bug appeared, but perhaps has something to do with other EPICS libraries somewhere (tdsread relies on these EPICS libraries to do its dirty work). Here is entry 180 for reference:


Quote:
tdsread has developed a strange new illness, whereby it cannot read EPICS values from two subsystems at once (e.g., getting an LSC and SUS value simultaneously). I thought this might have something to do with the fact that both losepics and iscepics are running on the same box,
but the same thing happens with IOO EPICS records, so that's not the culprit.

This is new behaviour, and it's only happening on the solaris machines. I suspect some ENV/cshrc juju has caused it, as the tdsread executable is the same one from April, and I don't think our EPICS infrastructure has changed otherwise. In the near term we can either try running the scripts on linux, or modify the IFO scripts to not do these types of calls.


The solution that's been in effect for the past few months has just been to modify the scripts to not make these kinds of calls.
  435   Tue Apr 22 10:59:24 2008   rob   Update   SUS   MC1 electronics busted

Quote:
I spent some time trying to fix the utter programming fiasco which was our MCWFS diagonalization script.

However, it still didn't work. Loops unstable. Using the matrix in the screen snapshot is OK, however.

Finally, I realized from looking at the imaginary part of the output matrix that there was something
wrong with the MC1 drive. The attached JPG shows TFs from pit-drives of the MC mirrors to WFS1.

MC1 & MC3 are supposed to have 28 Hz elliptic low pass filters in hardware for dewhitening. The MC2
hardware is different and so we have given it a software 28 Hz ELP to compensate. But it looks like
MC1 doesn't have the low pass (no phase lag). I tried switching its COIL FM10 filters to make it
switch but no luck.

We'll have to engage the filters to make the McWFS work right and to get the MC noise down. This
needs someone to go check out the hardware I think.

I have turned the gain way down and this has stabilized the MC REFL signal as you can see from the StripTool screen.


This was just because the XYCOM was set to switch the "dewhites" based on FM9 rather than FM10. To check whether the hardware ellipDW filters were engaged, I drove MC1 & MC3 in position (using the MCL bank), and looked at the transfer functions MC2_MCL/MC1_MCL and MC2_MCL/MC3_MCL. This method uses the mode cleaner length servo to enable a relatively clear transfer function measurement of the ellipDW, modulo the loop gain of MCL and the fact that it's really hard to measure an ELP cascaded with a suspension. The hardware and the switching appear to be working fine.

It's now set up such that the hardware is ENGAGED when the coil FM10 filters are OFF, and I deleted all the FM10 filters from the coils of MC1 and MC3. Since we don't switch these filters on and off regularly, I see no need to waste precious SUS processor power on filters that just calculate "1".
  436   Tue Apr 22 16:17:48 2008   rob   Update   SUS   end station sus front-end bug fix

Quote:
installed and started new susEtmx.o and susEtmy.o to fix a problem with ETMY optical lever variables.


What Alex means is that the EPICS values for the ETMY optical levers were being clobbered in the RFM. The calculations were being done correctly in the FE, so the DAQ/testpoints were working--it was just the EPICS/RFM communication via c1losepics that was bugged. This was a result of the recent SUS code changes to accept inputs from the ASS for adaptive feedforward.
  438   Tue Apr 22 22:19:02 2008   rob   Metaphysics   lore   jiggling sliders

In the interests of tacit communication of scientific knowledge, I here reveal a nugget of knowledge which may or may not prove useful to new LIGOites: sometimes when front-end machines are rebooted, the hardware they control can wind up in a state which is not accurately represented by the EPICS values you may see. This can be easily rectified by momentarily changing the EPICS settings in question. For reference, this came up tonight in the context of the whitening gain sliders for the TransMon QPDs.
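For the record, the jiggle is nothing fancier than this (a sketch; the channel name is a made-up placeholder):

set chan = C1:ASC-QPDX_WHITEN_GAIN               # placeholder channel name
set val  = `ezcaread $chan | awk '{print $NF}'`  # remember the current setting
ezcawrite $chan 0                                # momentarily change it...
ezcawrite $chan $val                             # ...and put it back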
  442   Thu Apr 24 14:10:26 2008   rob   Update   Locking   locking work
Rob, Johnnie

We made some progress on locking last night (Wed night), namely that we were able to hand off (briefly) the CARM-MCL path to the REFL-DC error signal. We tried this because we suspect that the reason PO-DC is not a good CARM error signal is that at low powers, the DC light level in the recycling cavity is dominated by the +f2 RF sideband. Thus, REFL-DC should work a bit better at low powers, which it did. It wasn't super stable, though, so this will require a bit of work to make the transition reliable & stable. The next things to work on include setting the AO path gain properly and possibly going to higher arm powers before handing off (thus increasing the discriminant).

Another thing we found is that the alignment scripts are not working in an ideal fashion. Running the alignment scripts for the two arms (XARM & YARM) leaves the Michelson badly misaligned, making it impossible to get good DRM alignment. This will have to be fixed.
  456   Sun Apr 27 18:11:58 2008   rob   DAQ   Computers   br40m?

The testpoint manager (which runs on fb40m) crashed this afternoon. Upon re-starting it, I found there was a rogue dtt process on op440m and also a daqd daemon running on br40m. One or both of these caused the tpman to crash. br40m is the frame broadcaster, which is never used here as we don't run DMT. I killed the daqd process there.

The way to find if there is a rogue process is to watch the output to the console from the tpman when you start it:

Allocate new TP handle 56 by 131.215.113.203
Allocate new TP handle 57 by 131.215.113.203
Allocate new TP handle 58 by 131.215.113.203
Allocate new TP handle 59 by 131.215.113.203
Allocate new TP handle 60 by 131.215.113.203
Allocate new TP handle 61 by 131.215.113.203
Allocate new TP handle 62 by 131.215.113.203
Allocate new TP handle 63 by 131.215.113.203
Allocate new TP handle 64 by 131.215.113.203
Allocate new TP handle 65 by 131.215.113.203
Allocate new TP handle 66 by 131.215.113.203
Allocate new TP handle 67 by 131.215.113.203
Allocate new TP handle 68 by 131.215.113.203


If you see something like this, with a new TP handle being allocated every few seconds, you need to log in to the corresponding host and kill whatever process has run away.
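Concretely, the hunt looks something like this (a sketch; the user and host names are assumptions--log in to whatever machine the IP in the tpman output belongs to):

host 131.215.113.203                        # identify the offending machine
ssh controls@op440m                         # log in to it (assumed user/host)
ps -ef | egrep 'daqd|dtt' | grep -v egrep   # find the runaway process
kill <pid>                                  # and kill it by PID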
  464   Mon May 5 11:04:30 2008   rob   Omnistructure   Computers   Network setup

Mafalda was not connected to the network, and so our DMF-based seisBLRMS has not been running for ~1 week. I traced this to a broken ethernet cable connecting mafalda to the network switch in the rack next to the B&W printer. This cable has a broken connector at the switch side, which means it can't stay connected if there's any tension. It needs to be replaced.
  466   Tue May 6 17:28:39 2008   rob   Configuration   LSC   AP33 -> POX33

I am in the process of switching the POX166 and AP33 photodetectors, so that they become POX33 and AP166. The IFO_CONFIGURE buttons won't work until I finish.
  467   Wed May 7 15:25:41 2008   rob   Configuration   LSC   AP33 -> POX33

Quote:

I am in the process of switching the POX166 and AP33 photodetectors, so that they become POX33 and AP166. The IFO_CONFIGURE buttons won't work until I finish.


Done. We're now in the 40m CDD configuration.
  490   Wed May 21 15:21:33 2008   rob   Update   Computer Scripts / Programs   autolockers and cron

I added hourly cron jobs to op340m to ensure that

MC autolocker
FSS Slow Servo
PSL watch

are running. I've also edited the wiki procedure to reflect the fact that these no longer need to be restarted by hand.
  507   Fri May 30 12:37:45 2008   rob   Update   SUS   etmy oplev is back

Quote:
I relayed the optics for ETMY-oplev as shown in pictures below.
The reflected beam goes directly to the QPD.


I turned on the servo. UGFs in PIT and YAW are ~3 Hz. I had to flip the sign of the YAW loop.
  531   Thu Jun 12 01:51:23 2008   rob   Update   Locking   report
rob, john

We've been working (nights) on getting the IFO locked this week. There's been fairly steady incremental progress each night, and tonight we managed to control CARM(MCL) using PO-DC, with the CARM(AO) path also on PO-DC. In the past, reaching this state has usually meant we're home free, as we could just crank the gain on the common mode servo and merrily reduce the CARM offset. Tonight, however, this state has been very twitchy, and efforts to ramp up the gain have been unsuccessful.

I've attached a diagram which I hope makes clear where we are in the stages of lock acquisition.
Attachment 1: lock_control_sequence.png
  533   Thu Jun 12 15:55:15 2008   rob   Update   Locking   report

Quote:
Rob: Awesome figure. As you can imagine, I have lots of questions, and hope that you will consider this figure to be the beginning, leading to ever-more detailed versions. But for now, I just want to ask whether you understand *what* is twitchy, and what the twitchiness does to prevent you from taking this further?


I definitely don't understand what's twitchy, but I have suspicions. Tonight we'll try to start by revisiting the other loops (the non-CARM loops) and see how they're dealing with the changing power levels. It may be that the DARM loop is going unstable due to gain variations (due to either increasing power or to rotation of demod phase) or it could be the PODD (or SPOB) saturating with increased power in the recycling cavity. I just hope the glitchiness doesn't have a digital origin.
  537   Wed Jun 18 00:19:29 2008   rob   Update   PSL   MOPA trend
15 day trend of MOPA channels. The NPRO temperature fluctuations are real, and are causing the PMC to consistently run up against its rails. The cause of the temperature fluctuations is unknown. This, combined with the MZ glitches and Miller kicking off DC power supplies, is making locking rather tetchy tonight. Hopefully Yoichi will find the problem with the laser and fix it by tomorrow night.
Attachment 1: MOPAtrend.png
  538   Wed Jun 18 16:07:57 2008   rob   Summary   Computers   RFM network down

The RFM network tripped off around noon today. It's still down. The problem appears to be with the EPICS interface (c1dcuepics). Trying to restart one of the end stations yields the error: No response from EPICS.

Possible causes include (but are not limited to): a busted RFM card on c1dcuepics, a busted PMC bus on c1dcuepics, or a busted fiber from c1dcuepics to the RFM switch. We need Alex.
  551   Sun Jun 22 21:38:49 2008   rob   HowTo   General   IFO CONFIGURE

Now that we're getting back into locking, it's nice to have a stable alignment of the interferometer.
Thus, after you're done with your experiment using subsets of the interferometer (such as a single arm),

please use the IFO_CONFIGURE screen, and click "Restore last Auto-Alignment" in the yellow "Full IFO" section.

If you don't know what this means/how to do this, you shouldn't be using the interferometer on your own.
  583   Fri Jun 27 15:20:52 2008   rob   DAQ   LSC   .ini file change

I removed C1:LSC-XARM_CTRL from the frames and added C1:LSC-CARM_ERR
  587   Sat Jun 28 03:10:25 2008   rob   Update   Computers   c1iovme

Quote:
C1susvme2 and C1iovme crashed which sent the optics swinging and tripped the watchdogs.

Koji and I were able to restore c1susvme2 without any trouble.

We have been unable to revive c1iovme. We have tried telneting in and running startup.cmd,
the process runs for a while then hangs with "DAQ init failed -- exiting".

Resetting the board doesn't help. I didn't try keying the whole crate.

All optics are back to normal with damping restored.


I tried keying the crate, then keying the DAQ controller & AWG, then powering down & restarting the framebuilder.
On coming up, the framebuilder doesn't start a daqd process, and I can't get one to start by hand (it just prints "652", and then stops).
There are no error messages, and daqd doesn't appear in prstat.

I then tried keying the DAQ controller again (after the fb0 reboot), which blew the watchdogs on all the suspensions. So then I went around and keyed all the crates.

Now, the suspension controllers are back online. Still no c1iovme, and now the framebuilder/DAQ/AWG are also hosed. We can try keying all the crates again, in the order that Yoichi did last week.

After some more poking around, I found the daqd log file. It's now complaining about

Jun 28 03:00:39 fb daqd[546]: [ID 355684 user.info] Fatal error: channel `C1:PSL-FSS_MIXERM_F' is duplicated 126

This is the second error message like this. It first complained about C1:PSL-FSS_FAST_F, so I commented that out of C1IOOF.ini and rebooted the framebuilder (note this is an actual reboot of the full Solaris machine). Eventually I discovered that C1IOOF.ini and C1IOO.ini are essentially identical. We will presumably keep getting these duplicate-channel errors until one of the files is removed entirely.

C1IOO.ini has a modification time of seven PM on Friday night. Who did this and didn't elog it? I've now modified C1IOOF.ini myself, so I can no longer tell when that one was originally changed.
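One way to hunt for such duplicates (a sketch; the chans directory path is an assumption, and it relies on the DAQ .ini files declaring one [C1:...] section per channel):

cd /cvs/cds/caltech/chans/daq                           # assumed .ini file location
grep -h '^\[C1:' C1IOO.ini C1IOOF.ini | sort | uniq -d  # channels defined in both files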
  592   Sun Jun 29 14:53:02 2008   rob   Update   Computers   Rebooting

Quote:
All of the computers are now showing green lights.

Remaining problems:

Alignment scripts are failing with "ERROR: LDS - NDS server error #13"
I think this is a server transmission error.

Dataviewer shows all channels as zero.


Fixed. Just started the testpoint manager on fb40m.


su
/usr/controls/tpman &
  614   Tue Jul 1 13:34:29 2008   rob   Update   Computers   RFM network back

Quote:

For some reason, the computers requiring startup.cmd (like c1lsc) halt after running this command. Actually the computer is running ok, but the command freezes. Basically, what it does is simply to load a kernel module. I don't know what is wrong.
Anyway, I just closed the terminal after running startup.cmd and it seems fine for now.


This is normal. On the Linux RTFEs (Real-Time Front Ends), the real-time code totally hijacks the kernel, disallowing any interrupts. The system thus becomes totally unresponsive while the code is running, and communicates only through the RFM and the VME backplane.
  615   Tue Jul 1 14:24:58 2008   rob   HowTo   Computer Scripts / Programs   conlog time machine

I've written a perl script (now in the $SCRIPTS/general directory) which implements a "conlog restore" command, restoring channels matching a regexp to a given time using the conlog records and the EpicsTools.pm perl module. The script is called time_machine_conlog:


Quote:


op440m:~>time_machine_conlog

time_machine_conlog restores EPICS control settings using a conlog time
usage: time_machine_conlog [<--dryrun>] <date=yyyy/mm/dd,hh:mm:ss> <timezone> <regexp>

Can also accept a gps time, in which case timezone=gps.
Use the option <--dryrun> to see conlog output without restoring any settings.

EXAMPLE: time_machine_conlog 2008/05/30,12:00:00 PDT "C1:SUS-MC.*_(PIT|YAW)_COMM"



It sometimes returns an error message even when the command is successful--this is because conlog stores EPICS settings to an absurd level of precision, but ezcawrite will not write EPICS values to this level (or at least won't indicate if it did). I consider this a bug in ezcawrite so I'm not touching it.

The script is untested with regards to switch settings (such as ENABLE/DISABLE). It's mainly intended for numerical values.
  617   Tue Jul 1 21:27:27 2008   rob   HowTo   Computer Scripts / Programs   slider twiddling after reboot

Sometimes after we reboot the front-end machines, some of the hardware gets stuck in an unknown state. We generally fix this by twiddling EPICS settings, which refresh the hardware somehow and put it into a known state. I've started a script (slider_twiddle) which we can just run after reboots to do this for us. Right now it just has the QPD whitening gain settings. As we find more stuff, we can add to it. It's in $SCRIPTS/Admin/.
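The guts of the script are just save/poke/restore operations, along these lines (a sketch; the channel names are placeholders):

#!/bin/csh
# twiddle each setting to push the hardware into a known state
foreach chan ( C1:ASC-QPDX_WHITEN_GAIN C1:ASC-QPDY_WHITEN_GAIN )  # placeholders
  set val = `ezcaread $chan | awk '{print $NF}'`   # remember the setting
  ezcawrite $chan 0                                # momentarily change it...
  ezcawrite $chan $val                             # ...and restore it
end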
  631   Thu Jul 3 13:54:26 2008   rob   Configuration   Computers   mDV on rosalba

Does mDV work on rosalba? It can't find NDS_GetChannels. Looking on mafalda, I see that NDS_GetChannels is a mexglx (32-bit Linux MEX binary). I think this means someone may need to compile it for 64-bit matlab before we can have mDV on rosalba. When that's done, we should get mDV running on megatron.
  632   Thu Jul 3 16:18:51 2008   rob   Summary   Locking   specgrams
I used ligoDV to make some spectrograms of DARM_ERR (1), QPDX (2), and QPDY (3). These show the massive instability from 30-40Hz growing in the XARM in the last two minutes of a reasonably high power lock (arm powers up to 30). It's strange that it only shows up in one arm.

CARM is on PO-DC, for both the MCL and the AO path.
DARM is on AS166Q.
Attachment 1: darm_specg.png
Attachment 2: qpdx_specg.png
Attachment 3: qpdy_specg.png
  655   Thu Jul 10 14:59:01 2008   rob   Update   Locking   RF common mode at zero offset
rob, john, yoichi

Last night we succeeded in reducing the CARM offset to zero.

We handed off control of the common mode servo from PO-DC to POX-I.

We pushed the common mode servo bandwidth to ~19kHz. Without the boosts, it had ~80 degs of phase margin. Didn't measure it after engaging the boosts (Boost + 1 superboost). Trying to engage the second superboost stage broke the lock.

The process is fully scripted, and the script worked all the way through several times.

The DARM UGF was ~200 Hz. The RSE peak could clearly be seen. No optical spring, as expected (we're locking in anti-spring mode).

Engaging test mass de-whitening filters did not work (broke the lock).

I'm attaching a lock control sequence diagram and a trend of the arm power during a scripted up-sequence. I think the script can be sped up significantly (especially the long ramp period).

Up next:

Calibrated DARM spectrum
Noise hunting (start with dewhites)
DC - Readout
Lock to the springy side.
Attachment 1: lock_control_sequence_worked.png
Attachment 2: trendpowerbuild.png
  658   Fri Jul 11 00:30:24 2008   rob   Metaphysics   Computers   strange SUS controllers

rob, johnnieM

We were hampered early tonight by the fact that someone sneakily turned off the HP RF Amplifier on the AS table.

After that, we were hampered further by mode cleaner strangeness. It would occasionally spontaneously unlock & blow its watchdogs. It never made it through the ontoMCL script (putting DC-CARM onto the MCL). After some investigation, we found that c1susvme1 and c1susvme2 were running stochastically late (SYNC_FE != 0), even though their computation times never got above 61. Also, the end SUS controllers were never late.

Weird.

After rebooting the vertex SUS controllers and the c1lsc, things appear to be working again.
  701   Fri Jul 18 23:24:24 2008   rob   Update   PSL   PMC PZT investigation

Quote:
I measured the HV coming to the PMC PZT by plugging it off from the PZT and hooking it up to a DVM.
The reading of DVM is pretty much consistent with the reading on EPICS. I got 287V on the DVM when the EPICS says 290V.

Then I used a T to monitor the same voltage while it is connected to the PZT. I attached a plot of the actual voltage measured by the DVM vs the EPICS reading.
It shows a hysteresis.
Also, the actual voltage drops by more than half when the PZT is connected. The output impedance of the HV amp is 64k (according to the schematic). If I believe this number, the impedance of the PZT should also be 64k. The current flowing through the PZT is 1.6mA at a 200V EPICS reading.
The impedance of the PZT directly measured by the DVM is 1.5M ohm, which is significantly different from the value expected above. I will check the actual output impedance of the HV amp later.
The capacitance of the PZT measured by the DVM is 300nF. I don't know if I can believe the DVM's ability to measure C.

I noticed that when a high voltage is applied, the actual voltage across the PZT shows a decay.
The second plot shows the step response of the actual voltage.
The voltage coming to the PZT was T-ed and reduced by a factor of 30 using a high impedance voltage divider to be recorded by an ADC.
The PMCTRANSPD channel is temporarily used to monitor this signal.
After the voltage applied to the PZT was increased abruptly (to ~230V), the actual voltage starts to exponentially decrease.
When the HV was reduced to ~30V, the actual voltage goes up. This behavior explains the weird exponential motion of the PZT feedback signal when the PMC is locked.
The cause of the actual voltage drop is not understood yet.
From the above measurements, we can almost certainly conclude that the problem of the PMC is in the PZT, not in the HV amp nor the read back.


I'd believe the Fluke's measurement of capacitance. Here's some info from PK about the PZT:


Quote:

But the PMC ones were something like
0.750 in. thick x 0.287 in. thick. 2 microns per 200 V displacement,
resonant frequency greater than 65 kHz. Typical capacitance is around 0.66
uF.


If the PZT capacitance has dropped by a factor of two, that seems like a bad sign. I don't know what to expect for a resistance value of the PZT, but I wouldn't be surprised if it's non-Ohmic. The 64k is the series resistor after the PA85, not the modeled resistance of the PZT itself.
  702   Sat Jul 19 19:39:44 2008   rob   Update   PSL   PMC PZT investigation

Quote:

Quote:
The 64k is the series resistor after the PA85, not the modeled resistance of the PZT itself.

Yes. What I meant was that because the measured voltage across the PZT was half of the open voltage of the HV amp, the DC impedance of the PZT is expected to be similar to the output impedance of the HV amp. Of course, I don't think the DC impedance of a normal PZT should be so low.
I'm puzzled by the discrepancy between this expected DC impedance and the directly measured impedance by the Fluke DVM (1.5M Ohm).
One possibility is that the PZT leaks current only when a high voltage is applied.
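For the record, the divider arithmetic behind that inference: with the HV amp's 64k output impedance, V_pzt/V_open = Z_pzt/(Z_pzt + 64k), so a measured V_pzt of roughly half V_open implies Z_pzt ~ 64k under load--versus the 1.5M the Fluke reads at its low test voltage. The two numbers are consistent only if, as suggested above, the PZT leaks much more current at high voltage.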
  714   Tue Jul 22 13:15:14 2008   rob   Update   PSL   Note from R. Abbott re: the PMC

Quote:
an email from Rich:
Your PZT is broken.

R


Quelle surprise

  727   Wed Jul 23 21:48:30 2008   rob   Configuration   General   restore IFO when you're done with it

when you are done with the IFO, please click "Restore last auto-alignment" on the yellow IFO portion of the C1IFO_CONFIGURE.adl screen. Failure to comply will be interpreted as antagonism toward the lock acquisition effort and will be met with excoriation.
  729   Thu Jul 24 01:04:01 2008   rob   Configuration   LSC   IFR2023A (aka MARCONI) settings

Quote:


P.S.: We made a test by changing the frequency of the local oscillator by a little bit and then coming back to the original value. We observed that the phase of the signal can change, so every time this frequency is moved the 3f demod phase needs to be retuned.



We discovered this little tidbit in March, and remembered it tonight. Basically we found that whenever you change the frequency on one of these signal generators (and maybe any other setting as well), the phase of the signal can change (it's probably just the sign, but still...), meaning that when you return settings to their initial value, not everything is exactly as it once was. For most applications, this doesn't matter. For us, where we use one Marconi to demodulate the product of two other Marconis, it means we can easily cause a great deal of grief for ourselves, as the demod phase for the double demod signals can appear to change.

Practically, what this means is that every time you touch a Marconi you must elog it--especially if you change a setting and then put it back.
  731   Thu Jul 24 02:57:26 2008   rob   Update   LSC   Arm cavity g-factor measurement

Quote:

So, now I feel that the method for the TEM01 quest should be reconsidered.

If we have any unbalanced resonance for the phase modulation sidebands, the offset of the error signal is to be observed even with the carrier exactly at the resonance. We don't need to shake or move the cavity mirrors.

Presence of the MC makes things more complicated. Changing the frequency of the modulation that goes through the MC is a bit tricky, as the detuning produces FM-AM conversion, i.e. the beam incident on the arm cavity may be not only phase modulated but also amplitude modulated. This makes the measurement of the offset described above difficult.

The setup of the abs length measurement (FSR measurement) will be easily used for the measurement of the transverse mode spacings. But it needs some more time to be realized.


We should be able to see 166MHz sideband resonances using the double demodulated photodetectors. With these, the 33MHz sidebands will be acting as LO when the 166MHz sideband (or mode) resonates. Some modeling may be necessary to determine if the SNR will be good enough to make this worthwhile, however.
  732   Thu Jul 24 03:08:20 2008   rob   Update   Locking   +f2 DRMI+2ARMS

rob, john, yoichi

Tonight we tried to move the 166MHz (f2) sideband frequency by changing the settings on the Marconi. Reducing the frequency by 4kHz reduced the amplitude of the 166MHz sidebands, but we were still able to lock the DRMI with the +-f2 sidebands by electronically compensating for the gain decrease, and also to lock the DRMI+2ARMs while resonating the -f2 sideband. No luck with the +f2.

Then, on a lark, we tried increasing the frequency by 4kHz, which ~doubled the f2 sideband transmission through the MC. This means our frequencies/MC length have been mismatched for months. Apparently I had explained away the level of the f2 sidebands by imagining that I (or someone) had set the modulation depth at that level some time in the past.

It's a miracle any locking worked at all in this state. Once this was done and we worked out a few kinks in the script, adjusting some gains to compensate, we managed to get the DRMI+2ARMS to lock a couple of times while resonating the +f2 sideband. It takes a while, but at least it happens. Tomorrow we'll measure the length of the mode cleaner properly and then try again. No need to vent just yet.
  751   Mon Jul 28 23:41:07 2008   rob   Configuration   PSL   FSS/MC gains twiddled

I found the FSS and MC gain settings in a weird state. The FSS was showing excess PC drive and the MC wouldn't lock--even when it did, the boost stage would pull it off resonance. I adjusted the nominal FSS gains and edited the mcup and mcdown scripts. The FSS common gain goes to 30dB, the Fast gain to 22dB, and the MCL gain goes to 1 (which puts the crossover back around ~85 Hz, where the phase rises above 40 degrees).
  752   Tue Jul 29 01:03:17 2008   rob   Configuration   IOO   MC length measurement
rob, yoichi

We measured the length of the mode cleaner tonight, using a variant of the Sigg-Frolov method. We used c1omc DAC outputs to inject a signal (at 2023Hz) into the AO path of the mode cleaner and another at DC into the EXT MOD input of the 166MHz IFR2023A. We then moved an offset slider to change the 166MHz modulation frequency until we could not see the 2023Hz excitation in a single-bounce REFL166. This technique could actually be taken a step further if we were really cool--we could actually demodulate the signal at 2023Hz and look for a zero crossing rather than just a powerspec minimum. In any case, we set the frequency on the Marconi by looking at the frequency counter when the Marconi setting+EXT MOD input were correct, then changed the Marconi frequency to be within a couple of Hz of that reading after removing the EXT MOD input. We then did some arithmetic to set the other Marconis.

The new f2 frequency is:

New (Hz)         Old (Hz)
--------------------------
165983145        165977195
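If f2 stays on a fixed multiple of the MC free spectral range (my assumption), the 5950 Hz shift corresponds to a fractional length change of dL/L = -df/f = -5950/165983145 ~ -3.6e-5, or about half a millimeter on the ~13.5 m mode cleaner.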

  756   Tue Jul 29 14:38:02 2008   rob   Update   SUS   ETMY and PRM have EQ related problems

Quote:
The attached trend shows that ETMY and PRM both had large steps in their sensors
around the time of the EQ and didn't return afterwards. The calibration of the
OSEM sensors is ~0.5 mm/V. The PRM sensors respond when we give it huge biases
but there is very little change in the ETMY. Almost certainly true that the
optics have shifted in their wire slings and that we will have to vent to
examine and repair at least ETMY.

Jenne is looking at the spectra of the other suspensions to see if there are
other, more subtle issues.


Some additional notes/update:

ETMY, PRM, & MC2 had OSEM signals at a rail (indicating stuck optics). Driving the optics with the full-scale DAC output freed ETMY and MC2, so while these may have shifted in their slings it may be possible to avoid a repair vent. PRM is still stuck. One OSEM appears to respond with full range to large drives, but the other three face OSEMs remain disturbingly near the rail (HIGH, which is what would happen if a magnet fell off).
  757   Tue Jul 29 18:15:36 2008   rob   Update   IOO   MC locked

I used the SUS DRIFT MON screen to return the MC suspensions to near their pre-quake values. This required fairly large steps in the angle biases. Once I returned to the printed values on the DRIFT screen (from 3/08), I could see HOM flashes in the MC. It was then pretty easy to get back to a good alignment and get the MC locked.
  771   Wed Jul 30 15:28:08 2008   rob   Update   LSC   Y arm locked

By using a combination of the SUS-DRIFT mon screen and the optical levers (which turned out pretty well) I steered the BS, ITMY, and ETMY back to their previous positions, and was able to lock the Y arm. The "Restore Y Arm" script on the IFO_CONFIGURE screen works. I couldn't test the alignment script, as a dump truck/construction vehicle showed up and started unlocking the MC.
  848   Mon Aug 18 17:37:14 2008   rob   Update   Locking   recovery progress

I removed the beam block after the PSL periscope and opened the PSL shutter.

There was no MC Refl beam on the camera, so I decided to trust the PSL launch
and aligned the MC to the PSL beam. Here are the old and new values for
the MC angle biases:
 __EPICS_Channel_Name______    ___Old____     ___New___
 C1:SUS-MC1_PIT_COMM            4.490900       3.246900
 C1:SUS-MC1_YAW_COMM            0.105500      -0.912500
 C1:SUS-MC2_PIT_COMM            3.809700       3.658600
 C1:SUS-MC2_YAW_COMM           -1.837100      -1.217100
 C1:SUS-MC3_PIT_COMM           -0.614200      -0.812200
 C1:SUS-MC3_YAW_COMM           -3.696800      -3.303800

After this, the beam looks a *little low* going into the Faraday Isolator.
Nonetheless, after turning on the IFO input steering PZTs, I was able to
quickly steer the PRM to get a beam on the REFL camera and into the REFL OSA.
The PRM optical lever beam is also striking the quad.

I then used the ETMX optical lever as a reference for realigning. After
steering around the input PZTs and ITMX, I saw some flashes in Xarm trans, then got
it locked and ran the alignment script ~5 times. The arm power went
up to 0.9, so I tweaked the MC1 to put the MC refl beam back on MCWFS.
The XARM power then went up to 0.96. Good enough for now.

Then I started to try and re-align the YARM. Since the oplevs for both ITMY
and the BS are untrustworthy, I first tried to get the beam bouncing off ITMX
and the BS back into the AS OSA, to try and recover some BS alignment. This
didn't work, as the AS OSA may not be a good reference anyways. After
wandering around in the dark for a little while, I decided to try an automated
scan of the alignment space. I used the trianglewave script to scan
the angle biases of BS, ITMY, & ETMY, then looked at the trend of the transmitted
power to find the gps time when there were flashes. I then used
time_machine_conlog to restore the biases to that time. This was close
enough to easily recover the alignment. After several rounds of aligning &
centering oplevs, things look good.
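That restore step is just the time_machine_conlog usage from elog 615 with the scan time plugged in, something like (the timestamp here is a placeholder):

time_machine_conlog 2008/08/18,15:00:00 PDT "C1:SUS-(BS|ITMY|ETMY)_(PIT|YAW)_COMM"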

Also locked a PRM. Will work on the DRM tomorrow.

I'm leaving the optics in their "aligned" states over night, so they can
start their "training."

Note: The MC is not staying locked. Needs investigation.

For tomorrow:

lock up the DRM
fix the mode cleaner
re-align mode cleaner to optimize beam through Faraday
re-align all optics again (will be much easier than today)
re-align beam onto all PDs after good alignment of suspended optics is established.
Attachment 1: flatlissa.png
  862   Wed Aug 20 13:23:32 2008   rob   Update   Locking   DRMI locked

I was able to lock the DRMI this afternoon. All the optical levers have been centered.
  952   Wed Sep 17 12:55:28 2008   rob   Configuration   IOO   MC length
I measured the mode cleaner length last night:

SR620 (Hz)           Marconi (Hz)
                     199178070
165981524            165981725
                     132785380
                      33196345


I did the division in Marconi-land, rather than SR620-land.
If someone wants to do this in SR620-land, feel free to do it and post the numbers.
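As a sanity check (my arithmetic, assuming the 33 MHz modulation sits on the 3rd multiple of the MC free spectral range): all four Marconi settings are integer multiples (3, 12, 15, 18) of FSR = 33196345 Hz / 3 = 11065448.3 Hz, which implies a mode cleaner length of L = c/(2*FSR) = 299792458/(2 x 11065448.3) ~ 13.546 m.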
  953   Wed Sep 17 12:58:12 2008   rob   Update   Locking   bad

Locking was pretty unsuccessful last night. All the subparts were locked (ARMs, PRM, DRM) and
aligned, but no DRMI+2ARMs locks. The alignment may have drifted significantly by the time I
got around to working the full shebang, however.

We should get back into the habit of clicking the
yellow "Restore last auto-alignment" button when we finish using the interferometer.
  961   Thu Sep 18 01:14:23 2008   rob   Summary   Computers   EPICS BAD

Somehow the EPICS system got hosed tonight. We're pretty much dead in the water till we can get it sorted.

The alignment scripts were not working: the SUS_[opt]_[dof]_COMM CA clients were having consistent network failures.
I figured it might be related to the network work going on recently--I tried rebooting the c1susaux (the EPICS VME
processor in 1Y5 which controls all the vertex angle biases and watchdogs). This machine didn't come back after
multiple attempts at keying the crate and pressing the reset button. All the other cards in the crate are displaying
red FAIL lights. The MEDM screens which show channels from this processor are white. It appears that the default
watchdog switch position is OFF, so the suspensions are not receiving any control signals. I've left the damping loops
off for now. I'm not sure what's going on, as there's no way to plug in a monitor and see why the processor is not coming up.

A bit later, the c1psl also stopped communicating with MEDM, so all the screens with PSL controls are also white. I didn't try
rebooting that one, so all the switches are still in their nominal state.
  975   Mon Sep 22 12:06:58 2008   rob   Update   SUS   ITMY UL OSEM


Last week I found the ITMY UL OSEM dead. I went around and checked the connections on the various flat ribbon cables
in the suspension control chain; pushing hard on the rack end of the long cable that goes from the sus electronics rack to the
ITMY sat amplifier fixed the problem. It's been fine since then.

NB: A visual inspection of the cable connection would not have revealed a problem. You just can't trust those flat
ribbon connectors with the hook latches.
  985   Tue Sep 23 13:25:07 2008   rob   Update   Locking   a bit better
I've been spending time working on the short DOF loops (PRC,MICH,SRC) in an attempt to make the
initial stage of lock acquisition (the DRMI+2ARMs, no spring) better. This seems to have been
largely successful, as last night there were several locks of the DRMI+2ARMs with pretty short
wait times.

The output matrix for the short DOFs is a bit strange, though. The MICH->PRM element is about
3 times too small, which seems to indicate something broken in hardware. The MICH->SRM element
seems normal, though, which suggests the BS isn't broken--either the PRM has had a sudden
actuation increase or it's a problem with the sensing.