ID   Date   Author   Type   Category   Subject
  11056   Fri Feb 20 19:09:48 2015   ericq   Update   ASC   QPD frontend code unified

I have changed all of the oplevs and transmon QPDs to use the common ISC QPD library block, which differs from what we had mainly in its divide-by-zero protection.

c1scx.mdl and c1scy.mdl were directly changed for the transmon QPDs. The oplevs were done by changing the sus_single_control.mdl library part, which is used for all of the SOSs. 

Then, because of the newly introduced underscore (i.e. OLPIT becomes OL_PIT, because there is now an OL block), I went on a sed safari to swap the new channel names into:

  • The filter ini files
  • various MEDM screens
  • The optic misaligning scripts (which currently live in medm/MISC/ifoalign, and need to get moved to scripts/)
  • A recent BURT snapshot, to restore all of the switches and settings easily. 
  • scripts/activateDQ.py, which is responsible for renaming OL_PIT_IN1_DQ to OPLEV_PERROR, etc.

I've fixed everything that occurred to me, and the usual ways I'm used to interacting with the oplevs all seem to work at this time, but it's entirely possible I've overlooked something.

One important note is: because we are now using an effectively immutable QPD library block, the oplev urad conversion has to take place in the DoF matrix. The EPICS records C1:SUS-[OPTIC]_OL_[DOF]_CALIB still exist, but do not multiply the fast signals. Rather, the OL_MTRX elements are multiples of the CALIB value. I thought about making a new QPD_CALIBRATED part or something, but then we're right back to using custom code, which is what we're trying to avoid. 
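
For reference, a minimal sketch of the idea (using pyepics; the OL matrix element record names here are assumptions for illustration, not necessarily the real ones):

# Sketch: fold the urad/count calibration into the OL DoF matrix elements.
from epics import caget, caput

optic = 'ETMY'
rows = {1: 'PIT', 2: 'YAW'}                 # assumed row ordering of the OL matrix
base = {1: [ 1,  1, -1, -1],                # assumed raw quadrant combination for PIT
        2: [ 1, -1, -1,  1]}                # assumed raw quadrant combination for YAW

for row, dof in rows.items():
    calib = caget('C1:SUS-{0}_OL_{1}_CALIB'.format(optic, dof))   # urad per count
    for col, element in enumerate(base[row], start=1):
        caput('C1:SUS-{0}_OL_MTRX_{1}_{2}'.format(optic, row, col), element * calib)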

All of the oplev DoFs are stable; I checked a few loop TFs, like ETMY pitch and PRM yaw, and they looked normal. 

  11061   Tue Feb 24 18:54:26 2015   ericq   Update   ASC   Single arm QPD ASC stability

I've lowered the UGFs for the transmission QPD servos to ~1-2Hz, and made each one just an integrator. I left the arms locked with the QPD servos on for a few hours during the daytime today, and they successfully prevented the Y arm from losing power from alignment drift for ~4 hours. Turning the servo off caused TRY to drop to ~0.6 or so. 

The X arm was only held for 2 hours or so, because after some unlock/drift event the power was below the servo trigger threshold. However, after gently nudging ETMX to get the transmission above the threshold, the servo kicked in, and brought it right back to TRX=1.0

Unfortunately, daqd was dead for much of the day, so I don't have much data to show; the trend was inferred from the wall striptool. 

It is not proven that there aren't further issues that prevent this from working with higher / more dynamic arm powers, but this is at least a point in favor of it working. 

EDIT: Here's a screenshot of the wall StripTool. Brown is TRY, blue is TRX. The downturn at the very end is me deactivating the servos. 

There is no scientific justification for the 0.9 threshold. Really, I should look at the noise/SNR again, now that there is some ND filtering on the QPDs. 

Attachment 1: trend.png
  11063   Wed Feb 25 04:21:58 2015   ericq   Update   CDS   Some model updates

I changed the suspension library block to acquire the SUS_[optic]_LSC_OUT channels at 16k for sensing matrix investigations. We could save the FB some load by disabling these and oplev channels in the mode cleaner optic suspensions. 

I removed nonexistent PDs from c1cal, to try and speed it up from its constantly overflowing state. It's still pretty bad, but under 60us most of the time. 

I also cleaned out the unused IPCs for simulated plant stuff from c1scx and c1sus, to get rid of red blinkeys. 

  11072   Thu Feb 26 00:20:54 2015   ericq   Update   LSC   Modelled effect of relative modulation phase

I'm working on some more modelling investigations of this whole situation. The main thing I wanted to do was to look at the complex field amplitudes / IFO reflectivity to see how the PDH signal is affected by different field components. 

I still have plenty more to do, but I got a result which I thought I should share. In addition to Jenne's simulation, I also see that between our "nominal" and "canceled" states as defined in Koji's ELOG 11036, there is a factor of ~20 difference in the PRCL signal in REFL33. 

The plots below are kind of like "PDH Signal Budgets" of the two states. 

Specifically, the reason our gain gets reduced is that, in the "canceled" state, the 44*11 and 55*22 products conspire to weaken the signal by having a slope opposite to the -11*22 type products. In contrast, in our "nominal" case, all of the products slope together. 

However, this also predicts that the nominal REFL33 is more sensitive to Carrier*33 than to the signal we desire, -11*22. The only reason it ever worked seems to be the biggest contributor, the unexpected 44*11! 

The "residual" trace is the difference of REFL33 and the sum of the field products shown, to justify that all relevant products had been included. 

The simulation that produced this was set up to create 4 orders of modulation at each EOM, with 3 orders of sidebands on sidebands. The demodulation phase was taken by lining up a PRM excitation entirely along I, as we would do on the actual instrument. 

MIST Simulation files attached!

Attachment 1: 33budget_canceled.png
Attachment 2: 33budget_nominal.png
Attachment 3: 2015-02-ModPhase.zip
  11073   Thu Feb 26 01:51:39 2015   ericq   Update   LSC   Sideband HOMs

So, my previous post suggested that 44*11 products might be the dominating signals in our "nominal" setup. I suppose that this could be not so bad, since it's not too unlike -11*22; the 11MHz field couples into the PRC and reflects with a rapidly changing phase around PRC resonance, and 44MHz is antiresonant, so it is a good local oscillator for REFL33. 

However, it then occurred to me that my previous HOM analysis only looked at the 11MHz and 55MHz sidebands. 

When extending this to all of the sidebands within 55MHz, I discovered a troubling fact:

With the IFO parameters I have, the second spatial order +22MHz and fourth spatial order +44MHz fields almost exactly co-resonate with the carrier in the PRFPMI! (DR, too)

If this is true, then any REFL33 signal seems to be in danger if we have an appreciable amount of these modes from, say, imperfect modematching.

On the other hand, we've been able to hold the PRMI with REFL33 when ALS is "on resonance," so I'm not sure what to think. (As a reminder, this analysis is done with analytic formulae for the complex reflectivities of the arm cavities and coupled recycling cavities as a function of CARM, spatial order and field frequency. Source is attached.)

It seems the Y arm geometry is to blame for this.

Maybe we should try to measure/confirm the Y arm g-factor...

Attachment 1: C1_HOMcurves_PR.png
Attachment 2: C1_HOMcurves_Y.png
Attachment 3: C1_HOMcurves_X.png
Attachment 4: C1_HOMlist.zip
  11076   Thu Feb 26 13:17:31 2015   ericq   Update   Computer Scripts / Programs   FB IO load

Over the past few days, I've occasionally been peeking at the framebuilder IO load to see if I could correlate anything with it, but it's usually been low when I looked. I.e. with daqd and all models running, the %wa time was a few percent at most.

Just now, I was seeing some EPICS sluggishness, and sure enough, the %wa was in the 50-60 range. I used iostat -xmh 5 on the framebuilder to see that /dev/sda, the /frames drive, was at 100% utilization, which means it was reading and writing as fast as it possibly could. 

I ssh'd over to nodus, and with iotop found that an rsync job was running (rsync -am --exclude .*.gwf full 131.215.114.19::40m/full), and its IO rates corresponded very closely to the data read rates on the framebuilder from /frames. 

I killed the rsync process on nodus, and the %wa time on the framebuilder dropped to near zero. The ASS striptools, where I had noticed the sluggishness, immediately started updating faster.

While rsync is supposed to play nice with a system's IO demands, maybe it only knows about nodus's IO usage, not fb's, which is the underlying NFS server where the frames live. I think it would be good to throttle these jobs to a specific bandwidth. 50MB/s seemed like too much, so maybe 10MB/s is ok?

  11084   Fri Feb 27 11:20:49 2015   ericq   Update   Computer Scripts / Programs   iPython Notebook for LSC Sensing Matrix
Quote:

** along the way, I noticed that the reason this notebook hasn't been working since last night is that someone sadly installed a new anaconda python distro today without telling anyone by ELOG. This new distro didn't have all the packages of the previous one. I've updated it with the astropy and uncertainties packages.

My bad, sorry! 

Yesterday, I was trying to install a package with anaconda's package manager, conda, but it was crashing in some weird way. I wasn't able to fix it, which led me to create a fresh installation. 

  11087   Mon Mar 2 17:02:01 2015   ericq   Update   LSC   BS - PRM decoupling

Using PRX, I remeasured the relative actuation strengths of the BS and PRM to see if the PRM correction coefficient we're using is good. 

My result is that we should be using MICH -> -0.2655 x PRM + 0.5*BS.

This is very close to our current value of -0.2625 x PRM, so I don't think it will really change anything.


Measurement details:

The reason that the BS needs to be compensated is that it really just changes the PRM->ITMX distance, lx, while leaving the PRM-ITMY distance, ly, alone. I confirmed this by locking PRY and seeing no effect on the error signal, no matter how hard I drove the BS. 

I then locked PRX, and drove an 804Hz oscillation on the BS and PRM in turn, and averaged the resultant peak heights. I then tried to cancel the signal by sending the excitation with opposite signs to each mirror, according to their relative measured strength.

In this way, I was able to get 23dB of cancellation by driving 1.0 x PRM - 0.9416 * BS. 

Now, in the PRMI case, we don't want to fully decouple like this, because this kind of cancellation just leaves lx invariant, when really, we want MICH to move (lx-ly) and PRCL to move (lx+ly). So, we use half of the PRM cancellation to cancel half of the lx motion, and introduce that half motion to ly, making a good MICH signal. Thus, the right ratio is 0.5*(1.0/0.9416) = 0.531. Then, since we use BS x 0.5, we divide by two again to get 0.2655. Et voila.
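
Just to spell out the arithmetic:

# Reproduce the MICH -> PRM coefficient from the measured cancellation ratio.
bs_per_prm = 0.9416                 # measured: 1.0 x PRM cancels 0.9416 x BS in PRX
half_lx = 0.5 * (1.0 / bs_per_prm)  # cancel half of the lx motion -> 0.531
prm_coeff = half_lx * 0.5           # account for the BS x 0.5 element -> 0.2655
print(prm_coeff)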

  11094   Tue Mar 3 19:19:15 2015   ericq   Update   IOO   PC Drive / FSS Slow correlation

Jenne and I were musing the other night that the PC drive RMS may have a "favorite" laser temperature, as controlled by the FSS Slow servo; maybe around 0.2.

I downloaded the past 30 days of mean minute trend data for MC Trans, FSS Slow and PC Drive, and took the subset of data points where transmission was more than 15k, and the FSS slow output was within 1 count of zero. (This was to exclude some outliers when it ran away to 3 for some days). This was about 76% of the data. I then made some 2D histograms, to try and suss out any correlations. 
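
For reference, the selection and histogramming boils down to something like this (a sketch; the array names are placeholders for the minute-trend data):

import numpy as np
import matplotlib.pyplot as plt

def joint_hist(mc_trans, fss_slow, pc_drive):
    # mc_trans, fss_slow, pc_drive: equal-length arrays of mean minute-trend values
    good = (mc_trans > 15000) & (np.abs(fss_slow) < 1.0)   # transmission and slow-output cuts
    plt.hist2d(fss_slow[good], pc_drive[good], bins=50)
    plt.xlabel('FSS slow output [counts]')
    plt.ylabel('PC drive RMS [a.u.]')
    plt.colorbar(label='minutes')
    plt.show()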

Indeed, the FSS slow servo does like to hang out around 0.2, but this does not seem to correlate with better MC transmission nor lower PC drive.

In the following grid of plots, the diagonal plots are the 1D histograms of each variable in the selected time period. The off-diagonal elements are the 2D histograms. They're all pretty blob-y, with no clear correlation. 

Attachment 1: jointplot.png
  11098   Wed Mar 4 19:03:19 2015   ericq   Update   LSC   Arm length remeasurement

As discussed at today's meeting, we would like to (re)measure the Arm cavity lengths to ~mm precision, and their g-factors. Any arm length mismatch affects the reflection phase of the sidebands in the PRMI, which might be one source of our woes. Also, as I mentioned in a previous elog, the g-factors influence whether our 2f sidebands are getting pulled into the interferometer or not.

These both can be done by scanning the arm on ALS and measuring the green beat frequency at each IR resonance. (Misaligning the input beam will enhance the TM10 mode content, and let us measure its Gouy phase shift.)

I started working on this today, but I have more measurements to do, since during today's measurements I was fooled by the limits of the ALS offset sliders into thinking I could only scan through two FSRs. Looking back at Manasa's previous measurement (ELOG 9804), I see now that more FSRs are possible.

Ways I will try to improve the measurement:

  • Jenne claims that the main limitation on ALS scanning range is the length to pitch coupling of the ETMs. If so, I should be able to get even more FSRs by scanning with MC2, as I did today, since the IMC cavity length is shorter, meaning more arm FSRs/unit length. More FSRs mean better statistics on the FSR slope fitting.
  • FSR error:
    • I am measuring the out-of-loop PDH signal of the arm at the same time as the beat spectrum is being measured, to know the magnitude of displacement fluctuations and any overall offset from the PDH zero crossing.
  • Beat frequency error:
    • I updated the HP8591E gpib scripts to be able to set the bandwidth and averaging settings, in order to really nail down the observed beat frequency.
    • I've written some code to fit the spectrum to a Lorentzian profile, for evaluation of the linewidth/frequency uncertainty (a sketch follows this list).
    • I am also considering beating the analyzer with a rubidium clock to compensate for systematic errors, since ELOG 9837 says the analyzer is off by 140Hz/10MHz, i.e. 10ppm. Since we're trying to measure 1mm/40m~25ppm, this can matter.
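
The Lorentzian fit mentioned above is essentially this (a sketch with scipy; the real code has more bells and whistles):

import numpy as np
from scipy.optimize import curve_fit

def lorentzian(f, f0, hwhm, amp, offset):
    return amp * hwhm**2 / ((f - f0)**2 + hwhm**2) + offset

def fit_beat(f, psd):
    # f [Hz], psd: one averaged analyzer trace of the beat
    p0 = [f[np.argmax(psd)], 1e3, psd.max(), psd.min()]
    popt, pcov = curve_fit(lorentzian, f, psd, p0=p0)
    return popt[0], np.sqrt(pcov[0, 0])     # beat frequency and its 1-sigma uncertainty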

Just for kicks, here are scans from today.

Attachment 1: Xscan.png
Attachment 2: Yscan.png
  11099   Thu Mar 5 04:29:13 2015   ericq   Update   LSC   Locking work tonight

Brief elog of my activities tonight:

I was able to transition the digital CARM control to REFL11 through the common mode board a total of one time; the lock broke after a few seconds.

My suspicion was that when we did this on Monday, we unintentionally had a reasonable DARM offset, which reduced the finesse enough to let us take linear transfer functions and hop over. So, tonight, I intentionally looked at transitioning to CM_SLOW at some DARM offset. Using a DARM offset of a few times 0.1 really calms the "buzzing" down, and makes it fairly straightforward to measure linear CARM sensing TFs. However, the CARM optical plant seems to change a fair amount depending on the DARM offset, in such a way that I was not able to compensate well enough to repeatedly transition.


Before I did anything else tonight, I measured the ALS noise down to 0.1 Hz, as a benchmark of how things are behaving.

With the arms locked on POX/POY, the HZ calibrated ALS channels reported

  • ALSX : 471Hz RMS
  • ALSY: 298 Hz RMS

Then, with the arms CARM/DARM locked on ALS, the PDH signals reported (using a line and the HZ channels for conversion)

  • Xarm : 552 Hz RMS
  • Yarm : 264 Hz RMS

Not bad! I roughly estimate this to mean ~90pm RMS CARM/DARM motion. (If X was as good as Y, it would be ~50pm...)


Some things I feel are worth noting:

  • In an effort to avoid the ETMX issues that Jenne had last night, I used MC2 to actuate CARM, and 2xETMY to actuate DARM. None of my locklosses appeared to be due to saturation of DARM, so I think it worked fine. The main drawback seems to be that if you have a violent lock loss, you may have to wait a bit for the IMC to relock; this only happened once tonight.
  • After the IR resonance finding scripts, I would run a z servo to try and get the PDH signal to cross zero. This made the ALS CARM and DARM zeros closer to the real resonating zeros than I usually see.
  • It is lately possible to sit at higher powers (albeit with very high RIN) for sizable amounts of time. In my last lock, I was in the range of 10-60x single arm power for around 30 minutes before I blew it with a failed transition attempt.
  • The set points for the QPD servos don't change much from lock to lock. I didn't have any problem using them tonight.

Tomorrow, I'll post some transfer functions of the difference between the ALS and CARM plants that I measured.

  11111   Fri Mar 6 14:51:59 2015   ericq   Update   General   X arm linewidth, loss

The fit FWHM is 10.444kHz +-55Hz. 

If we take the FSR from ELOG 9804, this implies an Xarm finesse of 380 +- 2. 

Assuming an ITMX transmission of 1.4%, this means an Xarm loss of 240 +- 90ppm. 

This is substantially lower than the ~500ppm I had measured via the unlocked/locked ASDC power method, but still pretty high. 

Since we were able to get continuous frequency counter values into the digital system, I decided to give it a quick spin with a calibrated single arm ALS scan. This should be repeated when amplifiers are in place, because the Y IR beatnote is wandering around in a way I don't trust and I'm not sure if the frequency counters have good absolute calibration...

Nevertheless, I did a 5 minute scan through the Xarm, and fit it nicely to a Lorentzian peak. 

  11126   Tue Mar 10 03:37:03 2015   ericq   Update   LSC   Locking efforts

[Q, J]

Not much luck locking tonight; we made the RF transition to CARM numerous times, but it never lasted more than a minute or so. We were able to take a couple of loop and spectrum measurements as we transitioned. 

Here are some spectra showing the noise evolution of CARM_IN1 and DARM_IN1 as we start to transition CARM to RF. We did not manage to grab spectra while CARM was RF only; we can go back in the DQ to find some data. 

As we transition, our phase bubble is shrinking, which may explain our poor stability. On the following plot, I actually mistyped the legend. The cyan trace is ALL RF. I'm not sure why we have a 1/f^2 shape from 100->200Hz. 



We adjusted the pole compensation frequency by looking at REFL11/ALS during a CARM swept sine measurement; the -3dB/-45 degree point looked more like 80Hz. Strangely, the compensated REFL11 signal appears to lag the ALS signal around the UGF. Maybe this is a loop effect? 


In terms of practical improvements, I've written a script that reliably transitions from POX/POY IR lock to ALS CARM/DARM lock already on resonance. This is saving us a bunch of time. I've svn'd the new ALS script and the new carm_cm_up that uses it. 


We looked into the odd oplev behavior as well. We had earlier seen what looked like railed values on the FM output medm screen (which seemed unexpected for an AC coupled loop), but dataviewer showed it was actually ringing/railing at some 10+Hz as the oplev beam fell off the QPD. The ringing continues even after the quadrant values stop crossing zero, so I think it may be the filters themselves misbehaving. Why there is new behavior here is still beyond me. 


We lost a fair bit of time to a fussy mode cleaner tonight; there was a good 45 minute stretch where it refused to lock for more than a minute or so, the PC drive angrily never falling below 5. The thing I changed when it started working was using the fast C1:IOO-MC_F channel instead of the slow C1:IOO-MC_FAST_MON as a readback for the FSS input offset; oddly there is a DC difference between the two. This has resulted in a FSS offset of ~4.2, whereas it was previously ~1.8. After this change, the PC drive fell to ~1.0 levels, and the IMC has been mostly ok. 


Given our problems stabilizing the RF lock, we attempted to give the FOOL path a shot, since we now had a better idea of the necessary REFL11 gain. In short, no luck. Every attempt to use some RF signal just disturbed the lock further. We didn't really pursue it too much after a couple of attempts showed little promise. 

Attachment 1: 2015-03-10_rfCarmOLG.png
Attachment 2: 2015-03-10_rfTransitions.png
  11130   Tue Mar 10 20:08:23 2015   ericq   Update   CDS   cdsRampMuxMatrix Backported to 40m RCG

I have successfully backported the cdsRampMuxMatrix part for use in our RCG v2.5 system. This involved grabbing new files, merging changes, and hacking around missing features from RCG 2.9. 

The added/changed files, with paths relative to /opt/rtcds/rtscore/release/src/, are:

  • M include/drv/tRamp.c
    • New C functions to directly report current ramping value and ramping state 
  • M epics/util/feCodeGen.pl
    • Added if statements to main simulink parser to properly handle the part
  • AM epics/util/lib/RampMuxMatrix.pm
    • New Perl script that writes the frontend code for a given ramp matrix
  • A epics/util/mkrampmatrix.pl
    • New perl script that creates the default MEDM screen
  • A epics/simLink/lib/cdsRampMuxMatrix.mdl
    • Simulink block for the part
  • M epics/simLink/CDS_PARTS.mdl
    • Added block and doc for the part (which is missing an underscore in its definition of EPICS field names)

[A means the file was added, M means the file was modified]

Most of the trouble came from the EPICS reporting of the live ramping value and ramping state, since this depended on some future RCG value masking function. I had to rewrite the C-code writing perl script to define and update these EPICS variables in a more old-school way. 

This leaves us vulnerable to the fact that a user/program can directly write to the live matrix element and ramping state, which would cause bad and unexpected behavior of the matrix.

So, when using a ramping matrix: NEVER WRITE to [MAT]_[N]_[M] as you would for a normal matrix. Use [MAT]_SETTING_[N]_[M] and trigger [MAT]_LOAD_MATRIX.

Similarly, [MAT]_RAMPING_[N]_[M] is off limits. 
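
For example, the safe write pattern from Python looks like this (a sketch using pyepics, with the channel names of the c1tst test setup described below):

from epics import caput

# Write the new 2x2 matrix into the SETTING records...
new = [[1.0, 0.0],
       [0.0, 1.0]]
for i, row in enumerate(new, start=1):
    for j, val in enumerate(row, start=1):
        caput('C1:TST-RAMP_SETTING_%d_%d' % (i, j), val)

# ...then trigger the ramp; never write to C1:TST-RAMP_[i]_[j] directly.
caput('C1:TST-RAMP_LOAD_MATRIX', 1)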

I tested the new part in the c1tst model. There are two EPICS inputs (TST-RAMP1_IN and TST-RAMP2_IN) that are the inputs to a 2x2 ramp matrix called TST-RAMP, and the outputs go to two testpoints (TST-RAMP1_OUT and TST-RAMP2_OUT) and two EPICS outputs (TST-RAMP1_OUTMON and TST-RAMP2_OUTMON). You can write something to the inputs from ezca or whatever, and use the C1TST_RAMP.adl medm screen in the c1tst directory to try it out. The buttons turn red when you've input a new matrix, yellow when a ramp is ongoing, and green when the live value agrees with the setting. 

At this time, I have not rebuilt any of our operational models in search of potential issues.

I have created backups of the files I modified; e.g. the original feCodeGen.pl was saved as feCodeGen.40m.pl, next to the modified file. I am open to more robust ways of doing the backup; since our RCG source is an svn checkout of the v2.5 branch (with local modifications, to boot), I suppose we don't want to commit there. Maybe we make a 40m branch? A separate repo? 

  11152   Fri Mar 20 16:44:49 2015   ericq   Update   IOO   Waking up the IFO

X arm ASS is having some issues. ITMX oplev was recentered with ITMX in a good hand-aligned state. 

The martian wifi network wasn't showing up, so I power cycled the wifi router. Seems to be fine now. 

  11159   Mon Mar 23 10:36:55 2015   ericq   Update   VAC   Pressure watch script

Based on Jenne's chiara disk usage monitoring script, I made a script that checks the N2 pressure, which will send an email to myself, Jenne, Rana, Koji, and Steve, should the pressure fall below 60psi. I also updated the chiara disk checking script to work on the new Nodus setup. I tested the two, only emailing myself, and they appear to work as expected. 

The scripts are committed to the svn. Nodus' crontab now includes these two scripts, as well as the crontab backup script. (It occurs to me that the crontab backup script could be a little smarter, only backing it up if a change is made, but the archive is only a few MB, so it's probably not so important...)

  11160   Mon Mar 23 13:27:33 2015   ericq   Update   SUS   ITMX oplev quadrant gains unbalanced

I've been poking around the oplev situation. One thing I came across regarding ITMX was that the gain on segment 4 seems to be somewhat higher than on the other segments. I was led to believe this by steering the optic around, and looking at the counts on each quadrant when the other 3 were dark.

Putting a gain of 0.86 (the ratio of the other segments' max counts over segment 4's max counts) in the segment 4 FM flattens the 1 Hz peak in the ITMX_OL_SUM spectrum, as well as significantly reducing the sub-Hz coherence of the sum with the individual quadrant counts. This is what I would expect from reducing the coupling of angular motion into the sum due to quadrant gain mismatch. 
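
The balancing gain is just the ratio of maxima; schematically (the counts here are placeholders, only the 0.86-ish result reflects the number quoted above):

import numpy as np

# seg_max[i]: peak counts seen on quadrant i while steering the beam across it
seg_max = {1: 8600., 2: 8500., 3: 8700., 4: 10000.}   # placeholder values

others = np.mean([seg_max[i] for i in (1, 2, 3)])
seg4_gain = others / seg_max[4]     # goes into the segment 4 filter module gain (~0.86 here)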

Here are the ITMX_OL_SUM spectra before and after (oplev servos are off).

The "burps" and control filter saturations are still unexplained. Investigations continue...

Attachment 1: olsum.png
  11162   Mon Mar 23 22:56:54 2015   ericq   Update   Computer Scripts / Programs   Nodus web things

Back when Diego and I were getting all of the web services running on the new nodus, we inexplicably were not able to get the hosting of the public_html directory and wikis to share the same port of 30889. In ELOG 10793, we stated that public_html was hosted on a new port, 30888, though we didn't really draw much attention to that new fact. 

Unbeknownst to us at the time, this broke other links/bookmarks/sites that people had been using. Koji pointed this out to me the other day, but I have not yet come up with a resolution. For now, the public_html directory, and the sites therein, have been taken offline. 


In other nodus news, Jamie has set up Nodus' apache service with a certificate for SSL goodness. We want to extend this to the ELOG, which uses a built-in webserver, rather than apache. 

He set up a proxy at the https address which will later host the secured elog: https://nodus.ligo.caltech.edu:8081/

When we make the switch to running the ELOG with HTTPS on by default, living on port 8081, we will set up apache to point 8080 at 8081, to preserve all of the old links. 

I.e. this change should effectively be invisible to ELOG users if we implement it right. 

  11163   Tue Mar 24 05:05:09 2015   ericq   Update   LSC   AO Path engaged

[J, Q]

Terse tonight, more verbose tomorrow. 

We have successfully achieved multiple kHz bandwidth using the CARM AO path. The CM board super boosts are at too high of a frequency to use effectively, given the flattening of the AO TF. 


Jenne's totally, completely, and in all possible ways uncalibrated plot.  Calibration lines are in here (numbers in control room notebook).  I'm going to export and replot the data tomorrow, in real units.


Attachment 1: CARM_DARM_AOengaged_23March2015.pdf
Attachment 2: loops.png
  11167   Tue Mar 24 18:22:11 2015   ericq   Update   LSC   AO Path engaged

For increased flatness of the AO response, and thus less gain peaking in the CARM loop, I recommend turning down the MC servo VCO gain to 22dB, i.e. -6dB from the current setting. 

From there, we should be able to up the overall CARM gain by another 10dB, and turn on a super boost. 


I measured the IN1/IN2 response of the IMC loop with the Agilent analyzer providing the IN2 excitation, to see the transfer function of the AO actuation. The hump in the TF explains the flattening out of the CARM OLTF we saw last night. Turning down the gain by 6dB flattens this bump, and more importantly, has around 10dB less gain when the phase goes through -180, meaning more gain margin for the CARM loop. 

Oddly, when I back out the MC OLG from these measurements, the loop shape is different than what Koji and Rana measured in December (ELOG 10841). Specifically, there is some new flattening of the loop shape around 300-400kHz that lowers the frequency where the phase hits -180. What could have caused this???

The -6dB that I mentioned was determined by putting the MC UGF at about 100kHz, at the peak of the phase bubble. This should allow us to safely have a CARM UGF of 40kHz since the MC loop has around +10dB loop gain there, which Rana once quoted as a rule of thumb for these loops. At that UGF, at least one CM board super boost should be fine, based on the loop shapes measured last night. 

Lastly, I also checked out whether the 3 MC super boosts were limiting the AO shape; I did not observe any difference in the AO TF when turning off one super boost. It's likely totally fine. 

Attachment 1: IMC_ao_Mar242015.png
Attachment 2: IMC_olgs_Mar242015.png
  11168   Tue Mar 24 18:47:10 2015   ericq   Update   LSC   AO Path engaged

Jenne has more detailed notes about how things went down last night, but I figure I should write about how we got the AO path stably up. 

As the carm_cm_up script stood after Jenne and Den's work last week, the CARM loop looked like the gold trace in the loop shape plot I posted in the previous elog. The phase bubble was clearly enlarged by the AO path, but there was some bad crossover instability brewing at 400 Hz. This was evident as a large noise peak, and would lead to lock loss if we tried to increase the overall CARM gain.

As with our single arm CM board locking adventures, it was useful to have a filter that made the digital loop shape steeper around the crossover region, so that the 1/f AO+cavity pole shape played nice with the digital slope. As in the single arm trials, this effectively meant undoing the cavity pole compensating zero with a corresponding pole, letting the physical cavity pole do the steepening. This is only possible once the AO path has bestowed some phase upon you. A zero at a somewhat higher frequency (500Hz) gives the digital loop back some phase, which is necessary to stay locked when the loop has only a few hundred Hz UGF, and the digital phase still matters. This gives us the purple trace. 

This provided us with a loop shape that could smoothly be ramped up in overall gain towards UGFs of multiple kHz (red trace). At this point we could reliably turn on the first boost, which will help in transitioning the PRMI to 1f signals (green trace). We didn't want to ramp it up too much, as we saw that the phase bubble likely ended not much higher than 100kHz, and the OLG magnitude was flattening pretty clearly around 40kHz. While we could turn on a super boost, it didn't look too nice, as we would have to stay at low phase margin to avoid bad gain peaking (blue trace).

In the noise spectra that Jenne showed, you can see the violin notches in the CARM noise. This means we are injecting the digital loop noise all over the place. We attempted rolling off the digital loop (by undoing the zero at 500Hz), but found this made the gain at ~200Hz crash down, almost becoming unstable. We likely haven't positioned the crossover frequency in the ideal place for doing this. 

We didn't really give the interferometer any time to see how the long term stability was, since we wanted to poke around and measure as much as we could. While not every attempt would get us all the way there, the current carm_cm_up's success rate at achieving multi-kHz CARM bandwidth was pretty good (probably more than 50%) and the whole thing is still pretty snappy. 

  11182   Tue Mar 31 02:51:39 2015   ericq   Update   LSC   Some locks

I had a handful of ~10 minute locks tonight. I intended to work on the 1f PRMI transition, but ended up just familiarizing myself with the current scheme. 

Before touching anything, I committed the locking scripts to the svn. Unfortunately, the up script as I found it never worked for me tonight. I had to reintroduce the digital crossover helper in CM_SLOW to get past the ramping up of the overall REFL11 gain. (With this in place, there is some bad ringing around 200Hz for a time, but it goes away... or unlocks)

I did phase the PD formerly known as REFL55 with an 800Hz PRM excitation while in full lock; the demod phase went from 42 to 102 degrees, with a ~30dB ratio between the I and Q peaks. However, come to think of it, how much does the CARM loop interfere with this?

The locklosses I had seemed to be due to a large fluctuation in all cavities' power. Maybe this will be helped by better PRC angular control, but it might also help to normalize the digital part of the CARM loop by the arm transmissions once lock is acquired. 

  11185   Tue Mar 31 18:27:58 2015   ericq   Update   LSC   Some locks
Quote:

Can we plot the arm power trend for multiple locks to see if it is associated with any thermal phenomenon in the IFO?

I'm currently more inclined to believe that the arm power trends have more to do with the arm alignments. Here's a 10 minute lock from last night, where the QPD servos were switched on about halfway through. I couldn't get Den's new servos to turn on without blowing the lock, so I reverted to my previous design, but still only actuated on the ETMs, with their oplevs still on. 

The most obvious feature is the reduction in power that seems to correspond to a ~10urad pitch deflection of ITMX when the lock begins. Is this optical spring action?

Also, it looks like the Y arm Yaw loop was badly tuned, and injecting noise. Ooops.


As of Den's QPD tuning, the QPD servos just actuate on the ETM. This next lock effectively had the QPD servos on the entire time, and we can see a similar drift in ITMX, and how ETMX then follows it to keep the QPD spot stationary. (Here, I'm plotting the QPD servo control signals, unlike above, so we can see X pitch servo output drift with the ITMX deflection)

Again, ITMX is moving in pitch by ~10urad when the interferometer starts resonating. If this is an optical spring, why does this just happen to ITMX? If it is digital shenanigans, how does it correlate with the lock, since there is nothing actuating on ITMX but oplevs and OSEM damping? Is light scattering into the ITMX OSEMs?

 

Attachment 1: qpdSwitch.png
Attachment 2: qpdAlways.png
  11191   Wed Apr 1 23:56:36 2015   ericq   Update   LSC   X Green Power drifting

Something funky is happening with the green light locked to the X arm. The green transmitted power is drifting around. Maybe something weird is happening with the doubler? The digital thermal feedback loop is not on. 

The green has been locked on a TM00 mode this whole time. The step in power is me closing the PSL green shutter, but I'm not doing anything during the smooth changes in power. IR power is steady, so the alignment should be ok. I can't recover full power with the end PZT alignment either. 

 

Attachment 1: Xgreen_drifting.png
  11194   Thu Apr 2 04:11:20 2015   ericq   Update   LSC   Not much locking, Xover measurement

A paltry two locks tonight, but not entirely useless. I had some issues keeping the PRMI locked, which some additional boosts helped with. But, my feeling was that our crossover process is not tuned well. 

At full lock, both sub-loops have high gain around the crossover region, so the usual DTT loop transfer function measurement produces a measurement of Gdigital/G_aopath (or minus that; i.e. I'm not currently 100% sure which is the bad phase in this plot, though it intuitively looks like 0). Thus, we can directly look at the crossover frequency and the effect of the different filters there. (I've also been working on an up-to-date CARM loop model today, so this will help inform that.)

Below, the black traces are the crossover at the end of the script when using the 120:500 "helper," and purple is without it. As we turn up the AO path gain, the trace "falls" from above, which explains why we can see instabilities around the violin filter. 

Having the helper on definitely made the probability of surviving the first overall CARM gain ramp higher, but it's not currently intuitively clear to me why that is the case. Afterwards, we can turn the helper off, to keep the shallower crossover shape. This is what I've put in to the up script for now. I also added a few seconds delay for when the script wants to switch DARM to RF only; I found it was maybe speeding too fast through this point.

DTT xml attached

Attachment 1: CARMxOver_Apr3.png
Attachment 2: Apr2_Xover.xml.zip
  11195   Thu Apr 2 15:34:34 2015   ericq   Update   LSC   Not much locking, Xover measurement

Here's the comparison of last night's crossover measurement to my loop model. Not stellar, but not totally off base. All of the digital filters are read directly from the foton filter file, and translated from their SOS coefficients, so they should be accurate. I may have tallied together the wrong arrangement of FMs, though. I will recheck. 

Although I don't have a measurement to compare it with yet (as I don't know where the crossover was, the filter states, etc. for the older loop measurements), here's what my current CARM loop model looks like, just for kicks. Here, only the first CM board boost is on. If we turn on some super boosting, we can probably ease up on some of the digital boosts, lower the crossover frequency, and put in some lowpass that suppresses the violin filters' effect on the crossover and reduces digital sensing noise injection. 

Lastly, I'll just note that my current MIST model predicts a CARM cavity pole at ~170Hz, and a peak arm transmission of 180 times single arm power. I saw powers of ~120 last night. 

Attachment 1: xOverModel.png
Attachment 2: loopModel.png
  11196   Thu Apr 2 17:11:28 2015   ericq   Update   LSC   Not much locking, Xover measurement

Whoops, I implemented the IOP downsampling filters wrong. Once I fixed that, it looked like just a delay mismatch, so I added two more computation cycles for a total of four 16k cycles, which is maybe not so justified... Nevertheless, model and measurement now agree much better. Here are the corrected plots. 

 

Attachment 1: xOverModel.png
Attachment 2: loopModel.png
  11206   Tue Apr 7 04:21:45 2015   ericq   Update   ASC   Angular Control during Locking

[J, Q]

Alignment is making it tough for locks to last more than 10 minutes. Many (but not all) locklosses correlate with some optic drifting away, and taking all of the light with it. The other locklosses are the quick ones that seem to pop up out of nowhere; we haven't made any headway on these. We wanted to get to a state where we could just let the interferometer sit for some minutes, to explore the data, but got caught up with alignment and PRMI things.

We're finding that both ITMs experience some DC force when entering full PRFPMI lock. I will calculate the torque expected from radiation pressure + offset beam spot, especially for ITMX, where we choose the spot position to be uncontrolled by ASS. 

I set up the QPD ASC servos to act in a common/differential way on the ETMs. The C1:ASC-XARM_[PIT/YAW] filter modules act on the common alignment, whereas the C1:ASC-YARM_[PIT/YAW] filter modules act on the differential alignment. This can soon be cleaned up with some model renaming to reduce confusion. 
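
Schematically (the signs here are my shorthand; the real ones live in the ASC output matrix):

import numpy as np

def etm_drives(xarm_out, yarm_out):
    # XARM FM output = common, YARM FM output = differential (per DoF)
    basis = np.array([[1.0,  1.0],    # ETMX gets common + differential
                      [1.0, -1.0]])   # ETMY gets common - differential
    return basis @ np.array([xarm_out, yarm_out])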

Using DC oplev values as a guide, we are hand tuning ITM alignment once the AO path is engaged and we see the DC drift occurring. Then, we set the QPD servo offsets and engage them. 

In this manner, we were able to lock the interferometer at:

  • Arm transmission 150 x single arm power
  • POPDC indicated a recycling gain of ~5.5
  • ASDC/POPDC indicated a contrast of 99.8%
  • REFLDC indicated a visibility of 80%

We made the PRMI transition to 1f numerous times, but found that the sideband power fluctuations would get significantly worse after the transition. 

We found that the gains that were previously used were too small by a factor of a few. There is a DC change visible in REFL165 before and after the transition (also, POP55, aka REFL55, is not DQ'd). Really, it isn't certain that we've zeroed the offset in the CARM board either, so REFL55's zero crossing isn't necessarily more trustworthy than REFL165's. We can go back in the data and do some 2D histogramming to see where in the error signal space the sideband power is maximized. 
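
A sketch of the 2D-histogramming idea (array names are placeholders for the DQ'd data):

import numpy as np

def best_operating_point(err1, err2, sideband_power, bins=50):
    # err1, err2: e.g. REFL55_I and REFL165_I samples; sideband_power: e.g. POP22 samples
    counts, xe, ye = np.histogram2d(err1, err2, bins=bins)
    total, _, _ = np.histogram2d(err1, err2, bins=[xe, ye], weights=sideband_power)
    mean_power = total / np.maximum(counts, 1)         # average buildup in each bin
    i, j = np.unravel_index(np.argmax(mean_power), mean_power.shape)
    return 0.5 * (xe[i] + xe[i + 1]), 0.5 * (ye[j] + ye[j + 1])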

Jenne reports:

  • The all RF transition succeeded 13/29 times. 
  • PRMI 1f transition succeeded 10/10 times. 
  11208   Wed Apr 8 13:26:47 2015   ericq   Update   LSC   REFL55 signal back to its normal ADC inputs

As the POP55 demod board is actually demodulating the REFL55 signal, I have connected its outputs to the REFL55 ADC inputs. Now, we can go back to using the REFL55 input matrix elements, and the data will be recorded. 

I have changed the relevant lines in the locking script to reflect this change. 

  11210   Thu Apr 9 02:58:26 2015   ericq   Update   LSC   All 1F, all whitened

blarg. Chrome ate my elog. 

112607010 is the start of five minutes on all whitened 1F PDs. REFL55 has more low frequency noise than REFL165; I think we may need more CARM suppression (i.e. we need to think about the required gain). This is also supported by the difference in shape of these two histograms, taken at the same time in 3f full lock. The CARM fluctuations seem to spread REFL55 out much more.  

I made some filters and scripts to do DC coupling of the ITM oplevs. This makes maintaining stable alignment in full lock much easier. 

I had a few 15+ minute locks on 3f, that only broke because I did something to break it.  

Here's one of the few "quick" locklosses I had. I think it really is CARM/AO action, since the IMC sees it right away, but I don't see anything ringing up; just a spontaneous freakout. 

Attachment 1: quickLockLoss.png
Attachment 2: 55_1.png
Attachment 3: 55_2.png
  11213   Fri Apr 10 12:09:19 2015   ericq   Update   LSC   Some small progress, may have DAC problem?
Quote:

At the very end, the last 10 seconds or so, the POP110 power goes down, and sits at about half its maximum value.  POP22 isn't quite as bad, in that it still touches the max, but the RIN is about 50%.  The carrier DC signals (TRX, TRY, POPDC) don't see this huge jump.  I don't think I was touching anything the last few tens of seconds.  I'm not sure yet how I can so significantly lose sideband power, without losing a similar amount of carrier power. 

I saw this same kind of behavior in my locklosses on Wednesday night; we should check out the 165 data, and see if the 3f PRCL error signal shows some drift away from zero.

Also, it's odd that CARM_IN1 and REFL11_I_ERR have different low frequency behavior in the plot you posted. I guess they have some difference in demodulation phase.  REFL11_I's bump at -40sec coincides with the dip in arm power and a rise in REFLDC, but ASDC seems pretty smooth, so maybe it is a real CARM fluctuation.

I set the REFL11 analog demodulation angle (via cable length) about a year ago (ELOG 9850), with some assumption about PRCL having the same demod angle as CARM, but this was probably set with the arms misaligned. We should recheck this; maybe we're coupling some other junk into CARM. 

  11214   Fri Apr 10 17:05:45 2015   ericq   Update   LSC   Relative ETM calibration (Rough MC2 calibration)

I did a quick measurement to get an idea of the ETM actuator calibration, relative to the ITMs. This will still hold if/when we revisit the ITM calibration via the Michelson. 

For the test masses, I locked the arms individually using MC2 as the actuator, and took transfer functions from the SUS-[OPTIC]_LSC_EXC point to the PO[X/Y]_I_ERR error signals. There were two points with coherence less than 99% that I threw away. I then took the fraction at each point, and am using the standard deviation of those fractions as the reported random error, since the coherence was super high for all points, making the error of each point negligible relative to their spread. 
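
The ratio/error bookkeeping is essentially this (a sketch; the ITM number is the ELOG 8242 value implied by the results below):

import numpy as np

def relative_cal(tf_num, tf_den, itm_cal=4.70e-9):
    # tf_num, tf_den: complex TFs from SUS-[OPTIC]_LSC_EXC to the arm error signal at each
    # measured frequency, already cut on coherence; itm_cal in m/count/f^2
    ratios = np.abs(np.asarray(tf_num) / np.asarray(tf_den))
    ratio, err = ratios.mean(), ratios.std()   # spread dominates at this coherence
    return ratio, err, ratio * itm_cal, err * itm_cal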

This gives:

  • ETMX/ITMX: 2.765 +- 0.046
  • ETMY/ITMY: 2.857 +- 0.029

With the data from ELOG 8242, this implies:

  • ETMX: (13.00 +- 0.22) x 10^-9 / f^2 m/counts
  • ETMY: (13.31 +- 0.15) x 10^-9 / f^2 m/counts

MC2 data was taken with the arms locked with the ETMs. The results are not so clean: the fractions don't line up, and there is some trend with excitation frequency... The ratio is around the same as the ETMs, but I'm not going to quote any sort of precision, since I don't fully understand what's happening. Kind of a bummer, because it struck me that we could get an idea of the arm length mismatch by the difference in IMC frequency / arm FSR. I'll think about this some more...

Attachment 1: quickCal.png
  11215   Fri Apr 10 18:39:39 2015   ericq   Update   LSC   Relative ETM calibration (Rough MC2 calibration)

I didn't verify that the loop gain was low enough at the excitation frequencies.

I put a 1kHz ELP in both arm servos, and got cleaner data for both. The ETM numbers are pretty much consistent with the previously posted ones, and the MC2 data now is consistent across frequencies. However, the MC2 numbers derived from each arm are not consistent.

Now:

  • ETMX / ITMX: 2.831 +- 0.043
  • MC2 / ITMX: 3.260 +- 0.062
  • ETMY / ITMY: 2.916 +- 0.041
  • MC2 / ITMY: 3.014 +- 0.036

With the data from ELOG 8242, this implies:

  • ETMX: (13.31 +- 0.21) x 10^-9 / f^2 m/counts
  • ETMY: (13.59 +- 0.20) x 10^-9 / f^2 m/counts
  • MC2 in Xarm meters: (15.32 +- 0.30) x 10^-9 / f^2 m/counts
  • MC2 in Yarm meters: (14.04 +- 0.18) x 10^-9 / f^2 m/counts
This is, of course, pretty fishy. Each arm sees the same frequency fluctuation of the light coming out of the IMC, especially given that the MC2 to arm data was taken simultaneously for both arms. Now, one possible source of this kind of mismatch would be a mismatch of the arm lengths, but there is no way they differ by 10%, as they would have to in order to explain the above numbers. To me, it seems more likely that the ITM calibrations are off. 
Attachment 1: betterCal.png
  11216   Mon Apr 13 19:34:02 2015   ericq   Update   IOO   Modulation Frequency Tuned to IMC Length

I've been fiddling with the mode cleaner and green beat box today, to try and get an absolute frequency calibration for MC2 motion. The AC measurements have all turned out weird; I get fractional power laws instead of the 1/f^2 that we expect from the MC2 pendulum. At DC, I get a rough number of 15 green kHz per MC2 count, but this translates to ~7e-10 m/count, which is in contrast to the 6e-9 m/count from 2009. I will meditate on this a bit. 


In any case, while working at the IOO rack, I tuned the 11MHz modulation frequency, as was done in ELOGs 9324 and 10314, by minimizing one of the beats of the 11MHz and 29.5MHz sidebands. 

The new modulation frequency / current IMC FSR is 11.066209 MHz +- 1 Hz, which is only a few ppm away from last July's tuning.

This implies an IMC round trip length of 27.090800m +- 2um.
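
Explicitly, the length is just c over the FSR:

c = 299792458.0          # speed of light [m/s]
f_fsr = 11066209.0       # tuned modulation frequency = IMC FSR [Hz]
print(c / f_fsr)         # ~27.0908 m round trip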

Attached is a plot showing the beat of 55-29.5 going down as I changed the Marconi frequency. 

 

Attachment 1: fMod_tuning.pdf
  11219   Wed Apr 15 02:26:41 2015   ericq   Update   LSC   Attempted DRMI + ALS arms

I spent some time tonight trying to revive DRMI locking, with the arms held off on ALS. Not much news, I haven't been able to get more than a few short spurts of resonance, using 1F signals.

I did use SRY to measure the BS->SRCL coupling by exciting each mirror and looking at their relative coupling to AS55Q. I found that we should use a value of +0.28 +-0.01 in the MICH->SRM element.

  11220   Wed Apr 15 15:14:18 2015   ericq   Update   Computer Scripts / Programs   CDSutils upgraded to v474

CDSutils has been updated to the newest version, 474; there are some matrix interface methods that will make our locking scripts easier to read, modify, and maintain.

I've tested the ALS and CARM down scripts, and the LSC offsets script, and they all work fine. 

  11249   Sat Apr 25 18:50:47 2015   ericq   Update   VAC   Pressure watch script broken

Ugh, this turns out to be because cron doesn't source the controls bashrc that defines where to find caget and all that jazz that many commands depend on. This is probably also why the AutoMX cron job isn't working either. 

Also, cron automatically emails everything from stderr to the email address that is configured for the user, which is why the n2 script blew up the foteee account and why the AutoMX script was blowing up my email yesterday. This can be avoided by doing something like this in the crontab:

0 8 * * * /bin/somecommand >> somefile.log 2>&1

(The >> part means that the standard output is appended to some log file, while the 2>&1 means send the standard error stream to the same place as stdout)

I made this change for the n2 script, so the foteee email account should be safe from this script. I haven't figured out the right way to set up cron to have all the right $PATH and other environment stuff, such as epics may need, so the script is still not working. 

  11265   Fri May 1 13:22:08 2015   ericq   Update   DAQ   PEM Slow channels added to saved frames

Rana asked me to add the slow outputs (OUT16) of the seismometer BLRMS channels to the frames. 

All of the PEM slow channels are already set up in c1/chans/daq/C1EDCU_PEM.ini, but up to this point, daqd had no knowledge of this file, since it wasn't included in c1/target/fb/master, which defines all the places to look for files describing channels to be written to disk. This file already includes lines for C1EDCU_LSC.ini and such, which, from old elogs, looks like it was set up by hand for subsystems we care about. 

Hence, since we now care about slow trends for the PEM subsystem, I have added a line to the daqd master file to tell it to save the PEM slow channels. This looks to have increased the size of the individual 16 second frame files from 57MB to 59MB, which isn't so bad.

  11273   Tue May 5 10:40:05 2015   ericq   HowTo   Computer Scripts / Programs   How to get a web page running on Nodus

How to get your own web page running on Nodus

  1. On any martian machine, put your stuff in /users/public_html/$MYPAGE/
  2. On Nodus, run: ln -s /users/public_html/$MYPAGE /export/home/
  3. Your site is now available at https://nodus.ligo.caltech.edu:30889/$MYPAGE/
  4. If you want to allow straight up directory listing to the entire internet, on Nodus run: sudoedit /etc/sites-available/nodus, and add the following lines towards the bottom:
<Directory /export/home/$MYPAGE>
    Options +Indexes
</Directory>
  11285   Tue May 12 08:51:08 2015   ericq   Update   CDS   c1lsp and c1sup removed?
Quote:

was this change not elogged??

This is my sin.

Back in February (around the 25th) I modified c1sus.mdl, removing the simulated plant connections we weren't using from c1lsp and c1sup. This was included in the model's svn log, but not elogged.

The models don't start with the rtcds restart shortcut, because I removed them from the c1lsc line in FB:/diskless/root/etc/rtsystab (or c1lsc:/etc/rtsystab). There is a commented out line in there that can be uncommented to restore them to the list of models c1lsc is allowed to run. 

However, I wouldn't suspect that the models not running should affect the suspension drift, since the connections from them to c1sus have been removed. If we still have trends from early February, we could look and see if the drift was happening before I made this change. 

  11297   Mon May 18 09:50:00 2015   ericq   Update   General   some status
Quote:

Added this to the megatron crontab and commented out the op340m crontab line. IF this works for awhile we can retire our last Solaris machine.

For some reason, my email address is the one that megatron complains to when cron commands fail; since 11:15PM last night, I've been getting emails that the rampdown.py line is failing, with the super-helpful message: expr: syntax error

  11299   Mon May 18 14:22:05 2015   ericq   Update   Computer Scripts / Programs   rsync frames to LDAS cluster
Quote:

Still seems to be running without causing FB issues.

I'm not so sure. I was just experiencing some severe network latency / EPICS channel freezes that were alleviated by killing the rsync job on nodus. It started a few minutes after ten past the hour, when the rsync job started. 

Unrelated to this, for some odd reason, there is some weirdness going on with ssh'ing to martian machines from the control room computers. I.e. on pianosa, ssh nodus fails with a failure-to-resolve-hostname message, but ssh nodus.martian succeeds. 

  11301   Mon May 18 16:28:18 2015   ericq   Update   General   some status
Quote:

4) Noticed that DAQD is restarting once per hour on the hour. Why?

It looks like daqd isn't being restarted, but in fact crashing every hour.

Going into the logs in target/fb/logs/old, it looks like at 10 seconds past the hour, every hour, daqd starts spitting out:

[Mon May 18 12:00:10 2015] main profiler warning: 1 empty blocks in the buffer                                     
[Mon May 18 12:00:11 2015] main profiler warning: 0 empty blocks in the buffer                                     
[Mon May 18 12:00:12 2015] main profiler warning: 0 empty blocks in the buffer                                     
[Mon May 18 12:00:13 2015] main profiler warning: 0 empty blocks in the buffer
...
***CRASH***

An ELOG search on this kind of phrase will get you a lot of talk about FB transfer problems. 

I noticed the framebuilder had 100% usage on its internal, non-RAID, non /frames/, HDD, which hosts the root filesystem (OS files, home directory, diskless boot files, etc), largely due to a ~110GB directory of frames from our first RF lock that had been copied over to the home directory. The HDD only has 135GB capacity. I thought that maybe this was somehow a bottleneck for files moving around, but after deleting the huge directory, daqd still died at 4PM. 

The offsite LDAS rsync happens at ten minutes past the hour, so is unlikely to be the culprit. I don't have any other clues at this point. 

  11302   Mon May 18 16:56:12 2015   ericq   HowTo   CDS   Bypassing the CDSUTILS prefix issue
Quote:

export IFO=''

This makes things act weird:

controls@pianosa|MC 1> z avg 1 "C1:LSC-TRY_OUT"
IFO environment variable not specified.

  11307   Tue May 19 11:15:09 2015   ericq   Update   Computer Scripts / Programs   Chiara Backup Hiccup

Starting on the 14th (five days ago) the local chiara rsync backup of /cvs/cds to an external HDD has been failing:

caltech/c1/scripts/backup/rsync_chiara.backup.log:

2015-05-13 07:00:01,614 INFO       Updating backup image of /cvs/cds
2015-05-13 07:49:46,266 INFO       Backup rsync job ran successfully, transferred 6504 files.
2015-05-14 07:00:01,826 INFO       Updating backup image of /cvs/cds
2015-05-14 07:50:18,709 ERROR      Backup rysnc job failed with exit code 24!
2015-05-15 07:00:01,385 INFO       Updating backup image of /cvs/cds
2015-05-15 08:09:18,527 ERROR      Backup rysnc job failed with exit code 24!
...
 

Code 24 apparently means "Partial transfer due to vanished source files."

Manually running the backup command on chiara worked fine, returning a code of 0 (success), so we are backed up. For completeness, the command is controls@chiara: sudo rsync -av --delete --stats /home/cds/ /media/40mBackup

Are the summary page jobs moving files around at this time of day? If so, one of the two should be rescheduled to not conflict. 

  11308   Tue May 19 11:24:44 2015   ericq   Update   Computer Scripts / Programs   Notification Scheme

Given some of the things we've been facing lately, it occurs to me that we could be better served by having some sort of unified human-alerting scheme in place, for things like:

  • Local/offsite backup failures
  • Vacuum system problems
  • HDD status for things like /frames/ and /cvs/cds/, whether the disks are full, or their SMART status indicates imminent mechanical failure

Currently, many of these things are just checked sporadically when it occurs to someone to do so, or when debugging random issues. Smoother IFO operation and peace of mind could be gained if we're confident that the relevant people are notified in a timely manner. 

Thoughts? Suggestions on other things to monitor, like maybe frontend/model crashes?
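
To make this concrete, here is a minimal sketch of what one such cron check could look like (channel name, threshold, and addresses are placeholders, not a real configuration):

#!/usr/bin/env python
# Minimal sketch of a "check something, email humans if it's bad" job for cron.
import smtplib
from email.mime.text import MIMEText
from epics import caget

value = caget('C1:Vac-N2_pressure')           # placeholder channel name
if value is not None and value < 60.0:        # placeholder threshold [psi]
    msg = MIMEText('N2 pressure is %.1f psi, below the 60 psi alarm level.' % value)
    msg['Subject'] = '40m alert: N2 pressure low'
    msg['From'] = '40m@example.org'
    msg['To'] = 'someone@example.org'
    s = smtplib.SMTP('localhost')
    s.sendmail(msg['From'], [msg['To']], msg.as_string())
    s.quit()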

  11310   Tue May 19 14:51:44 2015   ericq   Update   Modern Control   Brushing up on Wiener Filtering

As part of preparing for the SURF projects this summer, I grabbed ~50 minutes of MCL and STS_1 data from early this morning to do a little MISO wiener filtering. It was pretty straightforward to use the misofw.m code to achieve an offline subtraction factor of ~10 from 1-3Hz. This isn't the best ever, but doesn't compare so unfavorably to older work, especially given that I did no prefiltering, and didn't use all that long of a data stretch.
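
For reference, the core of an offline MISO Wiener/least-squares subtraction is small; a rough numpy sketch (misofw.m does this more carefully):

import numpy as np

def miso_wiener_residual(witnesses, target, ntaps):
    # Least-squares FIR (Wiener) prediction of target from lagged witness channels.
    # witnesses: (nchan, nsamp) array; target: (nsamp,) array.
    cols = []
    for w in witnesses:
        for lag in range(ntaps):
            cols.append(np.roll(w, lag))   # crude lagging; wrap-around edge effects ignored
    A = np.array(cols).T                   # (nsamp, nchan*ntaps) design matrix
    taps, *_ = np.linalg.lstsq(A, target, rcond=None)
    return target - A @ taps               # residual after subtraction

# e.g.: residual = miso_wiener_residual(np.vstack([sts_x, sts_y, sts_z]), mcl, ntaps=256)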

Code and plot (but not data) is attached. 

Attachment 1: mclData.png
Attachment 2: mclWiener.zip
  11311   Tue May 19 16:18:57 2015   ericq   Update   General   crons fixed

I wrapped rampdown.py in rampdown.sh, which is just these lines:

#!/bin/bash
source /ligo/cdscfg/workstationrc.sh
/opt/rtcds/caltech/c1/scripts/SUS/rampdown.py > /dev/null 2>&1

This is now what megatron's cron runs. It appears to be working.

I also added the workstationrc line to the n2 and chiara HDD checking scripts that run on nodus, which should resolve the issue from ELOG 11249

  11318   Wed May 20 11:41:59 2015   ericq   Update   General   some status

West cylinder is empty, east is at 2000 psi; regulated N2 pressure is 64psi. I'll replace the west one after the meeting.

  11319   Fri May 22 11:59:54 2015   ericq   Update   SUS   DampRestore script problem

PRM watchdog tripped, but the damprestore.py script wouldn't run. 

It turns out the script tries to import some ezca stuff from /users/yuta, which had been moved to /users/OLD/yuta. 

I've moved the yuta directory back to /users/ until I fix the damprestore script. 
