ID   Date   Author   Type   Category   Subject
  14220   Mon Oct 1 12:03:41 2018   not yuki   Configuration   ASC   PZT driver board verification

I assume this QPD set is a D1600079/D1600273 combo.

How much was the SUM output during the measurement? Also how much were the beam radii of this beam (from the error func fittings)?
Then the calibration [V/m] is going to be a linear / inverse-linear function of the incident power and the beam radius.

You mean the linear range is +/-50mV (for a given beam), I guess.
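For reference, here is a minimal sketch (not the D1600079/D1600273 calibration itself) of the expected slope for an ideal split detector and a Gaussian beam with intensity ∝ exp(-2r^2/w^2), the same convention as the error-function fits mentioned above; the numbers plugged in at the end are placeholders, not measured values.

import numpy as np
from scipy.special import erf

def qpd_slope(sum_volts, w):
    """Difference-signal slope [V/m] at beam center for SUM output sum_volts [V]
    and 1/e^2 beam radius w [m]: DIFF(x) = SUM * erf(sqrt(2) x / w)."""
    return sum_volts * np.sqrt(8.0 / np.pi) / w

# placeholder numbers: 5 V SUM, 300 um beam radius
print(qpd_slope(5.0, 300e-6))                                      # ~2.7e4 V/m

# linearity check: at x = w/4 the erf response is only ~4% below the linear slope
x, w = 75e-6, 300e-6
print(erf(np.sqrt(2) * x / w) / (np.sqrt(8.0 / np.pi) * x / w))    # ~0.96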

 

  16251   Mon Jul 19 22:16:08 2021   paco   Update   LSC   PRFPMI locking

[gautam, paco]

Gautam managed to lock PRFPMI a little before ~ 22:00 local time. The ALS to RF handoff logic was found to be repeatable, which enabled us to lock a total of 4 times this evening. Under this nominal state, we can work on PRFPMI to narrow down less known issues and carry out systematic optimization. The second time we achieved lock, we ran sensing lines before entering the ASC stage (which we knew would destroy the lock), and offline analysis of the sensing matrix is pending (gpstime = 1310792709 + 5 min).

Things to note:

(a) there is an unexpected offset suggesting that the ALS and RF disagreed on what the lock setpoint should be, and it is still unclear where the offset is coming from.

(b) the first time the lock was reached, the ASC up stage destroyed it, suggesting these loops need some care. We were able to engage the ASC loops at low gains (0.2 instead of 1), but as soon as we enabled some integrators this consistently destroyed the lock.

(c) gautam had burt-restored the settings from back in March, when the PRFPMI was last locked, suggesting there was a small but somehow significant difference in the IFO configuration that helped today relative to last week.


Take home message --> The mere fact that we were able to lock the PRFPMI rules out the more serious potential problems with the signal-chain electronics or processing. This should also be a good starting point for further debugging and optimization.


gautam: the circulating power, when the ASC was tweaked, hit 400 (normalized to a single arm locked with the PRM misaligned), suggesting a recycling gain of 22.5 and an average arm loss of ~30 ppm round trip (assuming 2% loss in the PRC).
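A quick sanity check of the quoted recycling gain, assuming a PRM power transmission of roughly 5.6% (an assumed number, not stated in this entry): with the PRM misaligned the arms see only T_PRM of the input power, so a normalized arm transmission of 400 implies G_PRC ≈ 400 × T_PRM.

# Sketch: infer the power recycling gain from TRX normalized to the
# single-arm (PRM misaligned) level. T_PRM ~ 5.6% is an assumed value.
T_PRM = 0.056            # assumed PRM power transmission
TRX_normalized = 400     # circulating arm power / single-arm level (from this entry)
G_prc = TRX_normalized * T_PRM
print(f"implied PRC gain ~ {G_prc:.1f}")   # ~22.4, consistent with the quoted 22.5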

  16269   Wed Aug 4 18:19:26 2021   paco   Update   General   Added infrasensing temperature unit to martian network

[ian, anchal, paco]

We hooked up the infrasensing unit to power and changed its IP address from 192.168.11.160 (factory default) to 192.168.113.240 in the martian network. The sensor is now online at that IP address; its user controls use the usual password shared by most workstations.

  16274   Tue Aug 10 17:24:26 2021   paco   Update   General   Five day trend

Attachment 1 shows a five-and-a-half-day minute-trend of the three temperature sensors. Logging started last Thursday ~2 pm, when all sensors were finally deployed. While it appears that there is a 7 degree gradient along the XARM, the "vertex" sensor (really closer to ITMX) was just placed on top of a network switch (which feels lukewarm to the touch), so this needs to be fixed. A similar situation is observed for the ETMY sensor. I shall fix this later today.


Done. The temperature readings should now be more independent of nearby instruments.


Wed Aug 11 09:34:10 2021: I updated the plot with the full trend before and after rearranging the sensors.

  17798   Tue Aug 22 10:29:14 2023   paco   Update   Optical Levers   Storm and earthquake recovery -- ETMY oplev laser dead, ITMY stuck?

[JC, paco]

This morning we noted most optics were tripped, probably as a result of a recent M>5 earthquake in the area (on Sun 08/20). Most optics were restored and damped nicely, except for ITMY.

PMC locked to HOM --> realigned and locked

We aligned the PMC to maximize its transmission to ~0.670; after this the IMC was locked and we engaged the WFS to recover the alignment.

ETMY oplev laser --> replaced, realigned, and OpLev loops verified

Most suspended optics were restored, but we noticed the OpLev sums on ETMY and ITMY were too low, so we checked the lasers on both optics. The ITMY HeNe laser is on, but the one on ETMY is off. JC tested the controller with a new laser head and determined it to be good. We then tried resetting the previous laser head (labeled Oct 25 2020) but had no luck, so yet another HeNe laser has died. We removed the old one, and luckily our spare had the same form factor, so it wasn't hard to recover the nominal alignment. After this we verified that the OpLev loops on ETMY were working.

ITMY local damping --> still "stuck" or worse

The local damping on ITMY is not working properly. This puts it in a weird alignment state, which is why we also don't see a large OpLev sum count on the QPD. The shadow sensor (OSEM) signals are all small, the available rms monitors read ~0.0-0.1 mV, and kicking the optic around doesn't produce a corresponding OSEM signal, even when undamped. Therefore we believe ITMY is either stuck (UR/LR) or worse. We tried the usual "shake" technique but didn't see any sensors recover.

  17800   Tue Aug 22 11:31:38 2023   paco   Update   Optical Levers   Storm and earthquake recovery -- ITMY restored

[JC, Koji-remote, paco]

ITMY stuck --> Shaken remotely and restored, ARMS aligned

With Koji's assistance we restored ITMY (it was stuck) and finished aligning both arms. Then JC centered the OpLevs for the ETMs, ITMs, and BS.

ITMY camera blinking --> Replaced camera

JC checked the situation with our ITMYF (face) camera, as the image seemed faulty and was blinking. The issue this time was not in the power supply, as it has been before, but rather in the CCD itself. After replacing the unit and aligning the arm cavity, we redrew the marker "guides" on the control room screen for quick reference.

  951   Tue Sep 16 16:47:01 2008   pete   Configuration   PSL   Prototype FSS reference installed
After verifying output, I installed the new prototype 21.5 MHz FSS reference (Wenzel crystal oscillator and ZHL-2 amp). Yoichi and I successfully locked the MC, and have left the new reference in place. It's temporarily sitting on the corner of the big black optics table (AP table?).
  986   Tue Sep 23 15:28:06 2008   pete   Configuration   PSL   new 21.5 MHz FSS reference installed
The new 21.5 MHz FSS reference is now installed in the rack with the 7 Sorensen PS. Both outputs give 18.7 dBm. The MC seems happy.

Bob did the +24 V and +15 V hookups for the amp and the Wenzel oscillator, respectively, off the DIN strips on the right of the rack.

I have attached two photographs. One shows the front of the box as mounted in the rack, and the other shows the inside of the box; the circuit is apparent from the second photo. The black wire coming in carries ground, the green +15 V, and the white +24 V. After the switches, ground and +15 V go to the Wenzel crystal oscillator, and ground and +24 V go to the Mini-Circuits amp. There is 5 dB of attenuation between the Wenzel 21.5 MHz output and the amp input, and 3 dB of attenuation between the amp output and the splitter.

The Wenzel crystal oscillator is their "streamline" model and puts out 13.2 dBm. The amp is a Mini-Circuits ZHL-2.
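As a rough consistency check of the chain described above (13.2 dBm from the Wenzel, 5 dB pad, ZHL-2, 3 dB pad, splitter, 18.7 dBm per output), and assuming a typical 2-way splitter insertion loss of ~3.5 dB (an assumption, not a measurement), the implied amplifier gain comes out around 17 dB, plausible for a ZHL-2.

# Power-budget sketch for the 21.5 MHz reference box (dBm / dB).
# The splitter loss is an assumed typical value, not a measurement.
p_wenzel   = 13.2   # Wenzel oscillator output (from this entry)
att_in     = 5.0    # pad between oscillator and amp
att_out    = 3.0    # pad between amp and splitter
split_loss = 3.5    # assumed 2-way splitter insertion loss
p_out      = 18.7   # measured level at each output

gain_amp = p_out - (p_wenzel - att_in - att_out - split_loss)
print(f"implied ZHL-2 gain ~ {gain_amp:.1f} dB")   # ~17 dB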
  1043   Mon Oct 13 13:51:49 2008   pete   Configuration   PSL   attempt to measure FSS ref phase
On Friday I began a measurement of the FSS reference phase. The setup involves the following:
+ turn off the 166 MHz LO (top signal generator on 1Y2 rack)
+ bring FSS LO 21.5 MHz to the 166 MHz delay line phase shifter, and back out the phase shifter with a second length of cable
+ add a length of cable to the RF 21.5 MHz in preparation for measuring FSS IN2 as a function of delay

We had trouble locking the FSS and ran out of time before the measurement could be performed.
  1046   Tue Oct 14 14:19:36 2008   pete   Configuration   PSL   FSS ref phase
Today I made several measurements which should yield the optimized phase for the FSS 21.5 MHz reference. I made two sets of measurements, using the 166 MHz delay-line phase shifter. For each phase value I made 5 measurements of a 500 kHz injection into TEST2 at 1 Vpp, with the 4195 spectrum analyzer on IN1 using the high-impedance probe (51 points, 10 kHz span). It was surprisingly noisy. I will make plots in matlab to find the maximum, and hope for consistent results between the two sets of measurements. If it is too noisy or inconsistent I will repeat the measurement at ~800 kHz.

Once I find the phase which yields peak amplitude in in1, I will measure the relative phase between LO and RF going in to the FSS, measure the speed of light in RG58 cable, and construct a new cable which will implement the desired relative phase.
  1050   Wed Oct 15 22:07:52 2008   pete   Configuration   PSL   FSS ref phase measurements
Optimizing the FSS LO/RF phase at 500 kHz, above the servo band, proved to be noisy and those measurements were useless. Today I repeated the measurement at 35 kHz and got good signal to noise. I've attached a plot of the 35 kHz peak in dBm, as measured at IN2 by the SR785 with an injection into TEST2 at 35 kHz and 0.2 Vpp, as a function of the delay in ns given by the delay phase shifter normally used for the 166 MHz. I fit the bottom (quadratic) portion of this curve and found an optimum delay of 25.8 ns, which can be implemented as 25.81 ns on the phase shift box (25 + 1/2 + 1/4 + 1/16). This is an uncalibrated number and meaningless on its own: for all these measurements a very long SMA cable (length not measured) was inserted on the RF output of the 21.5 MHz reference box, and the actual phase difference depends on these cable lengths, which I didn't measure.

To determine the actual phase difference I compared the LO and RF input points with the 25.81 ns delay, using a scope with poor man's averaging (33 manual triggers, recording the phase measurement each time). The phase difference was 8.24 degrees with an error on the mean of 3.4%, with the LO having the longer effective cable (cable plus delay from the phase delay box). As a sanity check I set the phase delay box to 20 ns and re-measured, and found 49.5 degrees. (1/21.5 MHz) * (49.5 - 8.24)/360 = 5.3 ns, which is about the difference between 20 ns and 25.81 ns. I did the same with 0 ns dialed in and found a difference of 21.5 ns (I expected 25.8 ns), so it is possible that the phase delay box isn't very precise.

Finally, to determine the length of cable needed to implement 8.24 degrees of phase at 21.5 MHz with RG58 cable, I made some phase measurements using the FSS reference box and mismatched cables. I used three cable lengths (93 cm, 140.5 cm, and 169.5 cm) and two mismatched pairs with dL of 29 and 76.5 cm. For each pair I took the average of 20 measurements, finding a mean of 22.54 degrees for the dL = 29 cm pair (0.78 degrees/cm, or a speed of light in the cable of 1.0e10 cm/s, or 10.6 cm of cable length on the LO) and a mean of 43.57 degrees for the dL = 76.5 cm pair (0.57 degrees/cm, or a speed of light of 1.4e10 cm/s, or 14.5 cm of cable length on the LO). I expected more precise agreement.
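A quick check of the numbers above (phase slope in degrees/cm, propagation speed, and the implied extra LO cable length for 8.24 degrees at 21.5 MHz):

# Sketch: reproduce the deg/cm -> propagation speed and LO cable length numbers.
f = 21.5e6          # Hz
phase_lo = 8.24     # deg of LO/RF phase to implement

for dphi, dL in [(22.54, 29.0), (43.57, 76.5)]:   # measured (degrees, cm) pairs
    slope = dphi / dL                              # deg per cm of cable
    v = 360.0 * f / slope                          # cm/s (360 deg per wavelength)
    length = phase_lo / slope                      # cm of RG58 on the LO side
    print(f"{slope:.2f} deg/cm -> v ~ {v:.2e} cm/s, LO length ~ {length:.1f} cm")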

Maybe the 21.5 MHz reference box is not zero phase between the outputs. This could be easily tested. It might be interesting to repeat this measurement with a few more dL values.
  1053   Thu Oct 16 13:12:58 2008   pete   Configuration   PSL   phase between FSS reference outputs
I verified the phase between the FSS reference outputs (used for LO and RF) using matched BNC cables. I measured 0.95 degree (average of 12 scope measurements).
  1054   Thu Oct 16 16:26:26 2008   pete   Configuration   PSL   FSS phase matching cable installed
RG 405 cable has a solid teflon dielectric, and a velocity factor of 0.69. To get the 8.2 degrees of additional phase on the LO output at 21.5 MHz then requires 22 cm of cable. I made a cable that ended up being 21 cm long after I'd gained some experience putting on the connector. It gives a phase difference between LO and RF of about 10 degrees. It is currently installed.
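The 22 cm figure follows directly from the velocity factor: one wavelength at 21.5 MHz in RG-405 is 0.69*c/f ≈ 9.6 m, and 8.2/360 of that is about 22 cm. A minimal check:

# Sketch: length of RG-405 (velocity factor 0.69) giving 8.2 deg at 21.5 MHz.
c  = 2.998e8        # m/s
f  = 21.5e6         # Hz
vf = 0.69           # RG-405 velocity factor
wavelength = vf * c / f                 # ~9.6 m in the cable
length = (8.2 / 360.0) * wavelength     # ~0.22 m
print(f"{length * 100:.1f} cm")         # ~21.9 cm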
  1078   Thu Oct 23 20:47:28 2008   pete   Configuration   PSL   FSS LO calibration for MEDM
Today I took a quick series of measurements to calibrate the FSS LO power readback in MEDM. This was done by using the spectrum analyzer to measure the 21.5 MHz peak in dBm at the LO input to the FSS box on the PSL table, and recording the MEDM value, for attenuations applied at the FSS REF box output giving levels ranging from -5 dBm to -30 dBm.

I measured the loss due to the BNC cable I used, which was (19.66 - 19.50) = 0.16 dB. I accounted for this and plotted ln(MEDM) vs. dBm on the attached plot. A linear fit of this gives the CALC field of a calc record for the IOC db:
6.29*LOGE(A)+5.36

Since no one knew how to do this nonlinear conversion in EPICS I will describe how to do it in detail tomorrow. It is simple, although it requires power cycling the scipe3 bunch (typing "reboot" or "ctl-x" at the command prompt took it down, but it did not come back). I did power cycle those computers a few times today.
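In Python form, the fitted conversion and a round-trip check look like this (the same expression appears as the EPICS CALC field in the next entry); this is only a sketch of the published fit, not a re-derivation from the data.

import numpy as np

def lo_dbm_from_medm(medm_counts):
    # LO power in dBm from the raw MEDM value, per the fit quoted above
    return 6.29 * np.log(medm_counts) + 5.36

# round-trip check: the MEDM count that should correspond to, e.g., -10 dBm
medm_at_minus10 = np.exp((-10 - 5.36) / 6.29)
print(medm_at_minus10, lo_dbm_from_medm(medm_at_minus10))   # recovers -10.0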
  1083   Fri Oct 24 11:21:26 2008   pete   Configuration   PSL   FSS LO input calibrated in dBm
Based on the measurements described in my previous elog, I created a new calc record in the file /cvs/cds/caltech/target/c1psl/psl.db:
grecord(calc, "C1:PSL-FSS_LOCALC")
{
        field(INPA,"C1:PSL-FSS_LODET")
        field(SCAN,".1 second")
        field(PREC,"4")
        field(CALC,"6.29*LOGE(A)+5.36")
}

After restarting scipe3 to load this change, I told C1PSL_FSS.adl to look at this record instead of *LODET. That MEDM screen now shows LO input calibrated in dBm.

For reference, the operators available for use in the CALC field are listed in the EPICS Record ref manual, Chapter 9. The manual can be found here:
http://www.aps.anl.gov/epics/EpicsDocumentation/AppDevManuals/RecordRef/Recordref-3.html

Yoichi said he was fixing an SVN problem, so I have not yet committed the two files I changed: /cvs/cds/caltech/target/c1psl/psl.db and /cvs/cds/caltech/medm/c1/psl/C1PSL_FSS.adl.
  1245   Thu Jan 22 12:08:59 2009   pete   Update   oplevs   oplev calibration
Following the procedure described in Royal Reinecke's 2006 SURF report, I've calibrated the ETMY yaw oplev DOF. The idea is to sweep the mirror tilt, measuring the transmitted cavity power and the oplev error signal. The cavity power can be related to the mirror tilt in radians following D. Anderson APPLIED OPTICS, Vol. 23, No. 17, 1984.

I've made a simple matlab script which spits out the final number; it calls Royal's perl script to do the sweep. I get 420 microrad/ct for ETMY yaw. In 2006 Royal got 250 microrad/ct. Could something have changed this much, or is one of us wrong? I'll double check my procedure and do the other arm cavity oplevs, and describe it in detail when I have more confidence in it.

Kakeru and I plan to extend this to handle the PRM, SRM, and BS. One script to rule them all.
  1247   Thu Jan 22 23:36:50 2009   pete   HowTo   oplevs   arm cavity oplev calibration
Calibrated the y-arm oplevs. The procedure is contained in a matlab script; the whereabouts of this script will be revealed in a future log entry.

ITMYpit 140 microrad/ct
ITMYyaw 98 microrad/ct
ETMYpit 400 microrad/ct
ETMYyaw 440 microrad/ct (previous measurement gave 420 microrad/ct)

procedure:

1) Start with a single arm aligned and locked. Dither the mirror tilt in a DOF. Measure arm cavity power and oplev error signal. See the first attached plot.

2) Fit the plot to a gaussian and determine mu and sigma.

3) For a spherical ETM optic, the power in the cavity P(a), as a function of translational beam-axis displacement a = R*sin(theta), is proportional to exp[-a^2/(2*x^2)], where x is the waist size (D. Anderson, APPLIED OPTICS, Vol. 23, No. 17, 1984). The power as a function of mirror tilt in counts, P(tilt), is proportional to exp[-(tilt-mu)^2/(2*sigma^2)]. So if R is the mirror radius then theta = arcsin(a/R) = arcsin[(1/R)*(tilt-mu)*(x/sigma)]. (A numerical sketch of steps 2-4 is given after this list.)

4) Fit theta versus mirror tilt to get the calibration. See the second attached plot.

5) For a flat ITM optic, mirror tilt causes an angular displacement of the beam. The math for this case is given in Anderson.
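A minimal numerical sketch of steps 2-4, assuming arrays tilt (oplev error signal in counts) and power (arm transmission) from the dither sweep, plus the mirror radius R and waist size x; the variable names and the placeholder values in the usage comment are illustrative, not those of the actual matlab/perl scripts.

import numpy as np
from scipy.optimize import curve_fit

def gaussian(t, A, mu, sigma):
    return A * np.exp(-(t - mu) ** 2 / (2.0 * sigma ** 2))

def calibrate_oplev(tilt, power, R, x):
    # step 2: fit P(tilt) to a Gaussian to get mu and sigma (in counts)
    (A, mu, sigma), _ = curve_fit(gaussian, tilt, power,
                                  p0=[power.max(), tilt[np.argmax(power)], tilt.std()])
    # step 3: convert oplev counts to mirror angle [rad]; assumes the sweep
    # stays within the region where the arcsin argument is < 1
    theta = np.arcsin((tilt - mu) * x / (R * sigma))
    # step 4: linear fit of theta vs counts gives the calibration [rad/count]
    return np.polyfit(tilt, theta, 1)[0] * 1e6   # microrad/count

# usage with placeholder (not real) numbers for R [m] and x [m]:
# cal = calibrate_oplev(tilt, power, R=57.0, x=5e-3)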
  1251   Fri Jan 23 16:33:27 2009   pete   Update   oplevs   x-arm oplev calibrations
ITMXpit 71 microrad/ct
ITMXyaw 77 microrad/ct
ETMXpit 430 microrad/ct
ETMXyaw 430 microrad/ct

As with the y-arm, my ITM measurements agree with Kakeru's and Royal's, but my ETM measurements are not quite a factor of 2 higher. Kakeru and I are investigating.
  1327   Thu Feb 19 23:50:31 2009   pete   Update   Locking   aligned pd's on AP table

Yoichi, Peter

While continuing our efforts to lock, we noticed the procedure failed at a point it had gotten past last night:  turning on the bounce/roll filters in MICH, PRC, and SRC.  We checked the MICH transfer function and noticed that the unity gain point was ~10 Hz, well below the bounce modes.   We tried increasing the gain but found saturation, and Rob suggested that there could be misalignment on the AP table, which Steve worked on today.  We went out and found two of the PDs (ASDD133 and AS166) to be badly misaligned probably due to a bumped optic upstream.  We re-aligned.

 

 

  1335   Tue Feb 24 18:42:15 2009   pete   Update   Locking   mc board repair
Peter, Yoichi
Last night:


Quote:
However, when we measured the MCL loop gain with several different AO path gains, the loop shape did not change at all. This led us to suspect the AO path may not be connected. The cabling from the common mode board to the MC board seemed ok. We tested the signal flow in the MC board using a signal generator and an oscilloscope. Then we found that a signal injected to the IN2 (AO path) does not reach to the TP1A (right after the boost stages), though the signal is visible in the OUT2 (monitor BNC right after the initial amplifier (B-amp) for the AO path). The signal from IN1 (MC REFL) can be observed at TP1A. This means something is broken between the B-amp and the sum-amp in the AO path. We will check the MC board tomorrow.


Today we examined the MC board. With the extension board in place everything seemed fine. Without the extension board we could replicate the problem. Jiggling the IN2 jack caused the injected signal observed at TP1A to come and go. These jacks are unfortunately mounted directly on the board. We traced the problem to a resistor in this path (R30) which looked fishy. We soldered on a new 2K resistor with OWC and it fixed the problem.
  1435   Fri Mar 27 02:40:06 2009   pete   Summary   IOO   MC glitch investigation

Yoichi, Pete

The MC loses lock due to glitches in the MC1 coils. 
We do not know which coil for sure, and we do not know if it is a problem going into the board, or a problem on the board. 
We suspect either the UL or LR coil bias circuits (Pete would bet on UL).  If you look at the bottom 4 plots in the attached file, you can see a relatively large 3 minute dip in the UL OSEM output, with a corresponding bump in the LR (and smaller dips in the other diagonal).  
These bumps do not show up in the VMONs, which is why we are suspicious of the bias.
To test this we are monitoring 4 points via test channels for UL and LR: both going into the bias driver circuit, and coming out of the current buffer before going into the coils.
 

We ran cable from the suspension rack to the IOO rack to record the signals with DAQ channels.

The test channels:

UL coil      C1:IOO-MC_DRUM1  (Caryn was using, we will replace when we are done)

UL input   C1:IOO-MC_TMP1 (Caryn was using, we will replace when we are done)

LR coil      C1:PEM-OSA_SPTEMP

LR input   C1:PEM-OSA_APTEMP

We will leave these overnight; we intend to remove them tomorrow or Monday.

We closed the PSL shutter and killed the MC autolocker.

  1463   Thu Apr 9 12:23:49 2009   pete   Update   Locking   tuning ETM common mode

Pete, Yoichi

Last night, we put the IFO in FP Michelson configuration.  We took transfer functions of CARM and DARM, first using CM excitations directly on the ETMs, and then using modulations of the laser frequency via MC excitation.  We found that there was basically no coupling into DARM using the MC excitation, but that there was coherence in DARM using the ETM excitation.  Therefore, I tuned the ETM common mode in the output matrix.  I did this by taking transfer functions of PD1_Q with PD2_I (see attached plot).  I changed the  drdown_bang script to set C1:LSC-BTMTRX_14 0.98 and C1:LSC-BTMTRX_24 1.02.

  1489   Thu Apr 16 16:26:57 2009   pete   Update   Locking   Wed. night locking
yoichi, pete

We installed the watchLockLoss script in scripts/AutoDTT/. This script monitors the arm power and uses command-line DTT to save a 5 s snapshot of the interferometer when it senses loss of lock. We ran it on linux and it seemed to save an xml file about half the time; we'll try it on solaris.
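For reference, a minimal sketch of the watcher logic described above, with a hypothetical channel name and a placeholder in place of the command-line DTT call; this is not the actual watchLockLoss script.

import subprocess
import time
from epics import caget   # pyepics; assumes EPICS channel access is available

ARM_POWER_CHANNEL = "C1:LSC-TRX_OUT"   # hypothetical channel name
LOCK_THRESHOLD = 0.5                   # placeholder arm-power threshold

def watch_lock_loss():
    locked = False
    while True:
        power = caget(ARM_POWER_CHANNEL)
        if power is not None and power > LOCK_THRESHOLD:
            locked = True
        elif locked:                   # power just dropped below threshold
            locked = False
            # stand-in for the DTT call that saves a 5 s snapshot to xml
            subprocess.run(["./save_lockloss_snapshot"], check=False)
        time.sleep(1.0)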

I managed to get up to an arm power of about 20 a couple of times. The IFO lost lock a couple of times after turning off the moving zero. MC2 would often get tripped by lock loss and need resetting. Maybe we will try to stiffen the oplevs.
  1525   Tue Apr 28 01:48:45 2009   pete   Configuration   VAC   VC1 open

At about 1 am, Yoichi and I opened VC1. CC1 had fallen to about 5e-5 torr.

  1557   Thu May 7 18:12:12 2009   pete   Update   Locking   arm power curve

I've plotted TRX, TRY, PD12I and PD11Q.  Arm powers after locking increase for a few tens of minutes, peak out, and then decrease before lock is lost.

 

 

  1558   Thu May 7 23:21:04 2009   pete   Update   Locking   arm power curve

Quote:

I've plotted TRX, TRY, PD12I and PD11Q.  Arm powers after locking increase for a few tens of minutes, peak out, and then decrease before lock is lost.

 

 

 I should have mentioned that the AS port camera image seems to get progressively uglier over the course of these locks.  Maybe we can use the JoeCam to make a movie of it. 

  1560   Fri May 8 02:08:59 2009   pete   Update   Locking   lock stretches

Locks last for about an hour; this was true last night as well (see the "arm power curve" entries). The second lock shown here evolves differently for unknown reasons. The jumps in the arm powers of the first lock are due to turning on DC readout. Length-to-angle needs tuning.

 

 

  1565   Fri May 8 15:40:44 2009   pete   Update   Locking   progressively weaker locks

The align script was run after the third lock here. It would have been interesting to see the arm powers in a 4th lock.

  1578   Tue May 12 17:26:56 2009   pete   Update   oplevs   etmy oplev quad was bad

Pete, Rob

After looking at some oplev noise spectra in DTT, we discovered that the ETMY quad (serial number 115) was noisy; in particular, in the XX_OUT and XX_IN1 channels, quadrant 2 was noisy by a bit more than an order of magnitude over the ETMX reference and quadrant 4 by a bit less than an order of magnitude. We went out and looked at the signals coming out of the oplev interface board; again, channels 2 and 4 were noisy compared to 1 and 3 by about these same amounts. I popped in the ETMX quad and everything looked fine. I put the ETMX quad back at ETMX and popped in Steve's scatterometer quad (serial number 121, or possibly 151, it's not terribly legible), and it looks fine. We zeroed via the offsets in the control room, and I went out and centered both the ETMX and ETMY quads.

Attached is a plot.  The reference curves are with the faulty quad (115).  The others are with the 121.

 

  1580   Wed May 13 03:05:13 2009   pete   Update   oplevs   etmy oplev quad was bad

Quote:

Pete, Rob

After looking at some oplev noise spectra in DTT, we discovered that the ETMY quad (serial number 115)  was noisy.  Particularly, in the XX_OUT and XX_IN1 channels, quadrants 2 (by a bit more than an order of magnitude over the ETMX ref) and 4 (by a bit less than an order of mag).  We went out and looked at the signals coming out of the oplev interface board; again, channels 2 and 4 were noise compared to 1 and 3 by about these same amounts.  I popped in the ETMX quad and everything looked fine.  I put the ETMX quad back at ETMX, and popped in Steve's scatterometer quad (serial number 121 or possibly 151, it's not terribly legible), and it looks fine.  We zeroed via the offsets in the control room, and I went out and centered both the ETMX and ETMY quads. 

Attached is a plot.  The reference curves are with the faulty quad (115).  The others are with the 121.

 

 I adjusted the ETMY quad gains up by a factor of 10 so that the SUM is similar to what it was before.

  1585   Thu May 14 02:36:05 2009   pete   Update   Locking   unstable IFO

It seems that the MC3 problem is intermittent (one-day trend attached).  I tried to take advantage of a "clean MC3" night, but the watch script would usually fail at the transition to DC CARM and DARM.  It got past this twice and then failed later, during powering up.   I need to check the handoff.

 

  1587   Thu May 14 16:07:20 2009   pete   Summary   SUS   Channel Hopping: That ancient enemy (MC problems)

Quote:

Quote:
The MC side problem could also be the side tramp unit problem. Set the tramp to 0 and see if that helps.


This started around April 23, around the time that TP1 failed and we switched to the cryopump, and also when there was a mag 4 earthquake in LA. My money's on the EQ. But I don't know how.


I wonder if this is still a problem. It has been quiet for a day now. I've attached a day-long trend. Let's see what happens.
  1588   Fri May 15 00:02:34 2009   pete   Update   SUS   ETMX coils look OK

I checked the four rear coils on ETMX by exciting the XXCOIL_EXC channels in DTT with amplitude 1000 @ 500 Hz and observing the oplev PERROR and YERROR channels. Each coil showed a clear signal in PERROR, about 2e-6 cts. Anyway, the coils passed this test.

 

  1610   Wed May 20 01:41:19 2009   pete   Update   VAC   cryopump probably not it

I found some neat signal analysis software for my mac (http://www.faberacoustical.com/products/), and took a spectrum of the ambient noise coming from the cryopump.  The two main noise peaks from that bad boy were nowhere near 3.7 kHz.

  1616   Thu May 21 18:05:03 2009   pete   Update   SUS   ETMX coils look OK

Quote:

I checked the four rear coils on ETMX by exciting XXCOIL_EXC channel in DTT with amplitude 1000@ 500 Hz and observing the oplev PERROR and YERROR channels.  Each coil showed a clear signal in PERROR, about 2e-6 cts.  Anyway, the coils passed this test.

 

I also made transfer functions from the 4 piston coils on ETMY and ETMX to OL_PIT. (I looked at all 4 even though the attached plot only shows three.) So it looks like the coils are OK.

  1620   Fri May 22 01:27:14 2009   pete   Update   SUS   200 days of MC3 side

Looks like something went nuts in late April.  We have yet to try a hard reboot.

  1641   Tue Jun 2 02:28:58 2009   pete   Update   Locking   DD handoff work

alberto, pete

 

We worked on tuning the DD handoff tonight.  We checked the DD PD alignments and they looked fine.  First I tuned the 3 demod phases to minimize offsets.  Then I noticed that the post-handoff MICH xfer function needed an increase in gain to look like the pre-handoff xfer function (which has a UGF of about 25 Hz).  I increased the MICH PD9_Q gain from 2 to 7 in the input matrix.   But, the handoff to PRC still failed, so tomorrow we will try to find out why.

In the plot, ref0 is before the MICH handoff, and ref1 is after the MICH handoff. There is also a PRC trace (before the PRC handoff).

 

 

  1643   Tue Jun 2 23:53:12 2009   pete   DAQ   Computers   reset c1susvme1

rob, alberto, rana, pete

We reset this computer, which was out of sync (16384 in the FE_SYNC field instead of 0).

  1645   Wed Jun 3 03:22:16 2009   pete   Update   Locking   DD handoff

Rana, Alberto, Pete

We have the DD handoff nominally working.  Sometimes, increasing the SRC gain at the end makes MICH get unstable.  This could be due to a non-diagonal term in the matrix, or possibly because the DRM locks in a funky mode sometimes. 

To get the DD handoff working, first we tuned the demod phases in order to zero the offsets in the PD signals being handed off to. Based on transfer function measurements, I set the PRC PD6_I element to 0.1, and set the PD8_I signal to 0, since it didn't seem to be contributing much. We also commented out the MICH gain increase at the end of the DD_handoff script.

It could still be more stable, but it seems to work most of the time.

 

 

  1652   Thu Jun 4 16:54:19 2009   pete   Update   Locking   daytime DD handoff

I played with the DD handoff during the day.  The DRM dark port was flickering like a candle flame in Dracula's castle.  The demod offsets for the handoff signals looked fine.  After MICH handoff, the MICH_CTRL started to get unstable at some low frequency, maybe 3 Hz (I didn't measure).  So I increased the MICH gain from 0.1 to 0.17 and it settled down.  PRC and SRC went fine.  Then the DD_handoff script raised the MICH gain to 0.7, and an instability started to grow in MICH_CTRL (at some higher frequency).  I decreased the MICH gain from 0.7 to 0.5, and it settled down and stayed stable.

  1653   Thu Jun 4 23:39:23 2009   pete   Update   PEM   5 days, 20 days of accelerometers

Looks like yesterday was particularly noisy. It's unclear to me why the diurnal variation is much more visible in MC1_Y, and why the floor wanders.

 

The first plot shows 5 days.  The second plot shows 20 days.

  1658   Fri Jun 5 17:22:55 2009   pete   Update   Locking   daytime locking

After fixing the tp problem, I tried locking again. Grabbing and DD handoff: no problem. The lock died earlier than last night, while handing off CARM to REFL_DC, around an arm power of 4 or so. It seems to happen after turning off the moving zero; Rob says it might be touchy in the daytime.

  1679   Tue Jun 16 16:10:01 2009   pete   Update   Locking   input matrix experiments

Last night Rob ran senseDRM and loadDRMImatrixData and came up with the following for the input matrix:

tdswrite C1:LSC-ITMTRX_b2 0.065778 \
C1:LSC-ITMTRX_d2 2.2709 \
C1:LSC-ITMTRX_f2 2.9361 \
C1:LSC-ITMTRX_122 0.42826 \
C1:LSC-ITMTRX_b3 -0.064839 \
C1:LSC-ITMTRX_d3 -0.016913 \
C1:LSC-ITMTRX_f3 -0.021576 \
C1:LSC-ITMTRX_123 -0.0025243 \
C1:LSC-ITMTRX_b5 0.3719 \
C1:LSC-ITMTRX_d5 1.3109 \
C1:LSC-ITMTRX_f5 -0.16412 \
C1:LSC-ITMTRX_125 0.39574 \
C1:LSC-ITMTRX_33 0 \
C1:LSC-ITMTRX_42 0 \
C1:LSC-ITMTRX_155 0

Today, I reran these and got the following, and DD_handoff remained happy:

tdswrite C1:LSC-ITMTRX_b2 -0.10329 \
C1:LSC-ITMTRX_d2 2.0344 \
C1:LSC-ITMTRX_f2 3.2804 \
C1:LSC-ITMTRX_122 0.22516 \
C1:LSC-ITMTRX_b3 -0.076292 \
C1:LSC-ITMTRX_d3 -0.014603 \
C1:LSC-ITMTRX_f3 -0.12101 \
C1:LSC-ITMTRX_123 0.0054128 \
C1:LSC-ITMTRX_b5 0.33521 \
C1:LSC-ITMTRX_d5 1.1425 \
C1:LSC-ITMTRX_f5 -0.32759 \
C1:LSC-ITMTRX_125 0.25877 \
C1:LSC-ITMTRX_33 0 \
C1:LSC-ITMTRX_42 0 \
C1:LSC-ITMTRX_155 0

I wanted to remeasure with the canonical output matrix (-0.7 from MICH to PRM and 0.7 from MICH to SRM), but the DRM freaked out when MICH to PRM went below -0.3.
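As a rough repeatability check between the two senseDRM runs, the dominant elements can be compared directly; a sketch using values copied from the two lists above (the names are the ITMTRX element suffixes):

# Sketch: fractional change of the dominant input-matrix elements between runs.
elements = {
    "d2": (2.2709, 2.0344),
    "f2": (2.9361, 3.2804),
    "d5": (1.3109, 1.1425),
    "b5": (0.3719, 0.33521),
}
for name, (first, second) in elements.items():
    print(f"{name}: {first:+.3f} -> {second:+.3f}  ({100 * (second - first) / first:+.0f}%)")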

  1769   Tue Jul 21 17:01:18 2009   pete   DAQ   DAQ   temp channel PEM-PETER_FE

I added a temporary channel to input 9 on the PEM ADCU. Beware inputs 30, 31, and 32: I tried 32 and it only gave noise.

 

 

  1781   Wed Jul 22 20:11:26 2009   pete   Update   Computers   RCG front end

I compiled and ran a simple (i.e. empty) front end controller on scipe12 at Wilson House. I hooked a signal into the ADC and watched it in the auto-generated medm screens.

There were a couple of gotchas:

1. Add an entry for SYS to the /etc/setup_shmem.rtl line in the file /etc/rc.local, where SYS.mdl is the system's model file.

2. If necessary, do a BURT restore. Or, in the case of a mockup, set the BURT Restore bit (in SYS_GDS_TP.adl) to 1.

 

  1805   Wed Jul 29 12:14:40 2009   pete   Update   Computers   RCG work

Koji, Pete 

Yesterday, Jay brought over the IO box for megatron, and got it working.  We plan to firewall megatron this afternoon, with the help of Jay and Alex, so we can set up GDS there and play without worrying about breaking things.  In the meantime, we went to Wilson House to get some breakout boards so we can take transfer functions with the 785, for an ETMX controller.  We put in a sine wave, and all looks good on the auto-generated epics screens, with an "empty" system (no filters on). Next we'll load in filters and take transfer functions.

Unfortunately we promised to return the breakout boards by 1pm today.  This is because, according to denizens of Wilson House, Osamu "borrowed" all their breakout boards and these were the last two!  If we can't locate Osamu's cache, they expect to have more in a day or two.

Here is the transfer function of the through filter running at 16 kHz sampling. It looks fine except for the fact that the DC gain is ~0.8. Koji is going to characterize the digital down-sampling filter in order to compare with the generated code and the filter coefficients.
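One way to characterize such a down-sampling stage offline is to evaluate the filter's response at DC directly from its coefficients, e.g. with scipy; the filter below is a placeholder, not the RCG's actual decimation filter.

from scipy import signal

# Placeholder decimation filter (not the RCG's coefficients): a 4th-order
# elliptic low-pass for a 2x down-sampling stage.
sos = signal.ellip(4, 1, 60, 0.4, output='sos')

# DC gain is the magnitude of the response at zero frequency; for this example
# the 1 dB passband ripple already pulls the DC gain below unity, and the same
# check on the real coefficients would show whether they explain the ~0.8.
w, h = signal.sosfreqz(sos, worN=[0.0])
print(f"DC gain = {abs(h[0]):.3f}")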


  1819   Mon Aug 3 13:47:42 2009   pete   Update   Computers   RCG work

Alex has firewalled megatron.  We have started a framebuilder there and added testpoints.  Now it is possible to take transfer functions with the shared memory MDC+MDP sandbox system.  I have also copied filters into MDC (the controller) and made a really ugly medm master screen for the system, which I will show to no one.

  1826   Tue Aug 4 13:40:17 2009   pete   Update   Computers   RCG work - rate

Koji, Pete

 

Yesterday we found that the channel C1:MDP-POS_EXC looked distorted in dataviewer and had what appeared to be doubled frequency components. This was because the dcu_rate in the file /caltech/target/fb/daqdrc was set to 16K while the adl file was set to 32K. When daqdrc was corrected it was fixed. I am going to recompile and run all these models at 16K. Once the 40m moves over to the new front end system, we may find it advantageous to take advantage of the faster speeds, but maybe it's a good idea to get everything working at 16K first.

  1829   Tue Aug 4 17:51:25 2009   pete   Update   Computers   RCG work

Koji, Peter

 

We put a simple pendulum into the MDP model, and everything communicates. We're still having some kind of TP or daq problem, so we're still in debugging mode. We went back to 32K in the .adl's, and when driving MDP, MDC-ETMX_POS_OUT is nasty: it follows the sine-wave envelope but goes to zero 16 times per second.

 

The breakout boards have arrived.  The plan is to fix this daq problem, then demonstrate the model MDC/MDP system.  Then we'll switch to the "external" system (called SAM) and match control TF to the model.  Then we'd like to hook up ETMX, and run the system isolated from the rest of the IFO.  Finally we'd like to tie it into the IFO using reflective memory.

  1839   Wed Aug 5 17:41:54 2009   pete   Update   Computers   RCG work - daq fixed

The daq on megatron was nuts. Alex and I discovered that there was no gds installation for site_letter=C (i.e. Caltech), so the default M (for MIT) was being used. Apparently we are the first Caltech installation. We added the appropriate line to the RCG Makefile and recompiled and reinstalled (at 16K). Now DV looks good on MDP and MDC, and I made a transfer function that replicates the bounce-roll filter. So DTT works too.
