  40m Log, Page 133 of 344
ID Date Author Type Category Subject
  10626   Mon Oct 20 17:50:30 2014   Jenne | Update | LSC | CARM W/N TFs (Others were all wrong!)

I realized today that I had been plotting the wrong thing for all of my transfer functions for the last few weeks! 

The "CARM offsets" were correct, in that I was moving both ETMs, so all of the calculations were correct (which is good, since those took forever). But I had been plotting the transfer function between driving ETMX alone and the given photodiode. Since driving a single ETM is an admixture of CARM and DARM, those plots don't make any sense. Oops.

In these revised plots (and the .mat file attached to this elog), for each PD I extract from sigAC the transfer function between driving ETMX and the photodiode, and likewise the TF between driving ETMY and the PD.  I then average the two (sum and divide by 2), multiply by the simple pendulum (my actuator transfer function) to get to W/N, and plot.
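The averaging step can be sketched in a few lines. Everything below is a stand-in sketch, not the actual Optickle/sigAC interface: the variable names, the 1 Hz pendulum frequency, and the mass are my own placeholders.

```python
import numpy as np

f = np.logspace(1, 4, 200)  # frequency vector [Hz]

def pendulum_tf(f, f0=1.0, mass=0.25):
    """Simple pendulum force-to-displacement response [m/N].
    f0 and mass are stand-in numbers, not the real suspension values."""
    w = 2 * np.pi * f
    w0 = 2 * np.pi * f0
    return 1.0 / (mass * (w0**2 - w**2))

# Placeholder PD responses [W/m] to driving each ETM; in the real
# calculation these come out of sigAC for each photodiode.
tf_etmx = np.ones_like(f, dtype=complex)
tf_etmy = np.ones_like(f, dtype=complex)

# CARM is common motion of both ETMs: average the two single-ETM TFs,
# then multiply by the pendulum [m/N] to convert W/m -> W/N.
tf_carm = 0.5 * (tf_etmx + tf_etmy) * pendulum_tf(f)
```

With flat unit PD responses, tf_carm just reduces to the pendulum response; the point is only the (TF_X + TF_Y)/2 combination and the W/m to W/N conversion.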

The antispring plots don't change in shape, but the spring side plots do.  I think that this means that Rana's plots from last week are still true, so we can use the antispring side of TRX to get down to about 100 pm.

Here are the revised plots:

TFs_TRX_vsCARMoffset_PRFPMI_antispring.png  TFs_TRX_vsCARMoffset_PRFPMI_spring.png

TFs_REFLDC_vsCARMoffset_PRFPMI_antispring.png  TFs_REFLDC_vsCARMoffset_PRFPMI_spring.png

TFs_REFL11I_vsCARMoffset_PRFPMI_antispring.png  TFs_REFL11I_vsCARMoffset_PRFPMI_spring.png

Attachment 1: PDs_vsCARMoffset_20Oct2014.mat.zip
  10625   Fri Oct 17 17:52:55 2014   Jenne | Update | LSC | RAM offsets

Last night I measured our RAM offsets and looked at how those affect the PRMI situation.  It seems like the RAM is not creating significant offsets that we need to worry about.


Here are the details of the data gathering, calibration, and calculations:

Step 1:  Lock PRMI on sideband, drive PRM at 675.13Hz with 100 counts (675Hz notches on in both MICH and PRCL).  Find peak heights for I-phases in DTT to get calibration number.

Step 2:  Same lock, drive ITMs differentially at 675.13Hz with 2,000 counts.  find peak heights for Q-phases in DTT to get calibration number.

Step 3:  Look up actuator calibrations.  PRM = 19.6e-9/f^2 meters/count and ITMs = 4.68e-9/f^2 meters/count.  So, I was driving PRM about 4pm, and the ITMs about 20pm.

Step 4:  Unlock PRMI, allow flashes, collect time series data of REFL RF signals.

Step 5: Significantly misalign ITMs, collect RAM offset time series data.

Step 6: Close PSL shutter, collect dark offset time series data.

Step 7: Apply calibration to each PD time series.  For each I-phase of PDs, calibration is (PRM actuator / peak height from step 1).  For each Q-phase of PDs, calibration is (ITM actuator / peak height from step 2).

Step 8:  Look at DC difference between RAM offset and dark offset of each PD.  This is the first 4 rows of data in the summary table below.

Step 9:  Look at what peak-to-peak values of signals mean.  For PRCL, I used the largest pk-pk values in the plots below.  For MICH I used a calculation of what a half of a fringe is - bright to dark.  (Whole fringe distance) = (lambda/2), so I estimate that a half fringe is (lambda/4), which is 266nm for IR.  This is the next 4 rows of data in the table.

Step 10: Divide.  This ratio (RAM offset / pk-pk value) is my estimate of how important the RAM offset is to each length degree of freedom. 
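Steps 3, 7, and 9 boil down to a little arithmetic; here is a sketch. The actuator numbers and drive amplitudes come from the steps above, but the peak heights are made-up placeholders, not the measured values.

```python
f_drive = 675.13  # drive frequency [Hz]

# Step 3: actuator calibrations [m/count] at the drive frequency
prm_act = 19.6e-9 / f_drive**2   # ~4.3e-14 m/count
itm_act = 4.68e-9 / f_drive**2   # ~1.0e-14 m/count

prm_drive_m = 100 * prm_act      # ~4 pm PRM drive, as quoted above
itm_drive_m = 2000 * itm_act     # ~20 pm ITM drive, as quoted above

# Step 7: calibration factor for each PD quadrature
# (peak heights below are placeholders for the DTT measurements)
peak_I_counts = 1.2e3            # I-phase peak height from step 1
peak_Q_counts = 3.4e2            # Q-phase peak height from step 2

cal_I = prm_drive_m / peak_I_counts  # m/count for I-phase time series
cal_Q = itm_drive_m / peak_Q_counts  # m/count for Q-phase time series

# Step 9: MICH half fringe, bright to dark
lam = 1064e-9                    # IR wavelength [m]
half_fringe = lam / 4            # 266 nm, as quoted above
```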


Summary table:

                                 PRCL (I-phase)           MICH (Q-phase)
RAM offsets
  REFL11                         4e-11 m                  2.1e-9 m
  REFL33                         1.5e-11 m                ~2e-9 m
  REFL55                         2.2e-11 m                ~1e-9 m
  REFL165                        ~1e-11 m                 ~1e-9 m
Pk-pk (PDH or fringes)           PDH pk-pk from flashes   MICH fringes from calculation
  REFL11                         5.5e-9 m                 266e-9 m
  REFL33                         6.9e-9 m                 266e-9 m
  REFL55                         2.5e-9 m                 266e-9 m
  REFL165                        5.8e-9 m                 266e-9 m
Ratio: (RAM offset) / (pk-pk)
  REFL11                         7e-3                     8e-4
  REFL33                         2e-3                     7e-3
  REFL55                         9e-3                     4e-3
  REFL165                        2e-3                     4e-3

 


Plots (Left side is several PRMI flashes, right side is a zoom to see the RAM offset more clearly):

RAM_PRMI_PRMIflashing_REFL11I.pdf  RAM_PRMI_Zoom_REFL11I.pdf

RAM_PRMI_PRMIflashing_REFL11Q.pdf  RAM_PRMI_Zoom_REFL11Q.pdf

RAM_PRMI_PRMIflashing_REFL33I.pdf  RAM_PRMI_Zoom_REFL33I.pdf

RAM_PRMI_PRMIflashing_REFL33Q.pdf  RAM_PRMI_Zoom_REFL33Q.pdf

RAM_PRMI_PRMIflashing_REFL55I.pdf  RAM_PRMI_Zoom_REFL55I.pdf

RAM_PRMI_PRMIflashing_REFL55Q.pdf  RAM_PRMI_Zoom_REFL55Q.pdf

RAM_PRMI_PRMIflashing_REFL165I.pdf  RAM_PRMI_Zoom_REFL165I.pdf

RAM_PRMI_PRMIflashing_REFL165Q.pdf  RAM_PRMI_Zoom_REFL165Q.pdf

  10624   Fri Oct 17 16:54:11 2014   jamie | Update | CDS | Daqd "fixed"?

Quote:

I very tentatively declare that this particular daqd crapfest is "resolved" after Jenne rebooted fb and daqd has been running for about 40 minutes now without crapping itself.  Wee hoo.

I spent a while yesterday trying to figure out what could have been going on.  I couldn't find anything.  I found an elog that said a previous daqd crapfest was finally only resolved by rebooting fb after a similar situation, i.e. there had been an issue that was resolved, daqd was still crapping itself, we couldn't figure out why so we just rebooted, daqd started working again.

So, in summary, totally unclear what the issue was, or why a reboot solved it, but there you go.

Looks like I spoke too soon.  daqd seems to be crapping itself again:

controls@fb /opt/rtcds/caltech/c1/target/fb 0$ ls -ltr logs/old/ | tail -n 20
-rw-r--r-- 1 4294967294 4294967294    11244 Oct 17 11:34 daqd.log.1413570846
-rw-r--r-- 1 4294967294 4294967294    11086 Oct 17 11:36 daqd.log.1413570988
-rw-r--r-- 1 4294967294 4294967294    11244 Oct 17 11:38 daqd.log.1413571087
-rw-r--r-- 1 4294967294 4294967294    13377 Oct 17 11:43 daqd.log.1413571386
-rw-r--r-- 1 4294967294 4294967294    11481 Oct 17 11:45 daqd.log.1413571519
-rw-r--r-- 1 4294967294 4294967294    11985 Oct 17 11:47 daqd.log.1413571655
-rw-r--r-- 1 4294967294 4294967294    13219 Oct 17 13:00 daqd.log.1413576037
-rw-r--r-- 1 4294967294 4294967294    11150 Oct 17 14:00 daqd.log.1413579614
-rw-r--r-- 1 4294967294 4294967294     5127 Oct 17 14:07 daqd.log.1413580231
-rw-r--r-- 1 4294967294 4294967294    11165 Oct 17 14:13 daqd.log.1413580397
-rw-r--r-- 1 4294967294 4294967294     5440 Oct 17 14:20 daqd.log.1413580845
-rw-r--r-- 1 4294967294 4294967294    11352 Oct 17 14:25 daqd.log.1413581103
-rw-r--r-- 1 4294967294 4294967294    11359 Oct 17 14:28 daqd.log.1413581311
-rw-r--r-- 1 4294967294 4294967294    11195 Oct 17 14:31 daqd.log.1413581470
-rw-r--r-- 1 4294967294 4294967294    10852 Oct 17 15:45 daqd.log.1413585932
-rw-r--r-- 1 4294967294 4294967294    12696 Oct 17 16:00 daqd.log.1413586831
-rw-r--r-- 1 4294967294 4294967294    11086 Oct 17 16:02 daqd.log.1413586924
-rw-r--r-- 1 4294967294 4294967294    11165 Oct 17 16:05 daqd.log.1413587101
-rw-r--r-- 1 4294967294 4294967294    11086 Oct 17 16:21 daqd.log.1413588108
-rw-r--r-- 1 4294967294 4294967294    11097 Oct 17 16:25 daqd.log.1413588301
controls@fb /opt/rtcds/caltech/c1/target/fb 0$

The times all indicate when the daqd log was rotated, which happens every time the process restarts.  It doesn't seem to be happening so consistently, though; it's been 30 minutes since the last one.  I wonder if it is somehow correlated with actual interaction with the NDS process.  Does some sort of data request cause it to crash?
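As a quick sanity check on the rotation cadence, the epoch-second suffixes in the rotated log names can be diffed directly. Only a few of the names from the listing above are used here for illustration; the suffixes appear to be Unix timestamps of when each rotation happened.

```python
import re

# A subset of the rotated-log names from the listing above
listing = """
daqd.log.1413570846
daqd.log.1413570988
daqd.log.1413588301
"""

# Extract and sort the numeric suffixes (rotation times, epoch seconds)
stamps = sorted(int(m.group(1))
                for m in re.finditer(r"daqd\.log\.(\d+)", listing))

# Gaps between consecutive restarts, in minutes
gaps_min = [(b - a) / 60.0 for a, b in zip(stamps, stamps[1:])]
```

Running this over the full listing would show directly whether the restarts are really getting less frequent, and whether the gaps line up with known NDS data requests.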

 

  10623   Fri Oct 17 15:17:31 2014   jamie | Update | CDS | Daqd "fixed"?

I very tentatively declare that this particular daqd crapfest is "resolved" after Jenne rebooted fb and daqd has been running for about 40 minutes now without crapping itself.  Wee hoo.

I spent a while yesterday trying to figure out what could have been going on.  I couldn't find anything.  I found an elog that said a previous daqd crapfest was finally only resolved by rebooting fb after a similar situation, i.e. there had been an issue that was resolved, daqd was still crapping itself, we couldn't figure out why so we just rebooted, daqd started working again.

So, in summary, totally unclear what the issue was, or why a reboot solved it, but there you go.

  10622   Fri Oct 17 13:19:48 2014   Jenne | Update | LSC | POP22 ?!?!

We've seen this before, but we need to figure out why POP22 decreases with decreased CARM offset.  If it's just a demod phase issue, we can perhaps track this by changing the demod phase as we go, but if we are actually losing control of the PRMI, that is something that we need to look into.

In other news, nice work Q!

Quote:

TRdifflock.png

 

  10621   Fri Oct 17 03:05:00 2014   ericq | Update | LSC | DARM locked on DC Transmission difference

I've been able to repeatedly get off of ALS and onto (TRY-TRX)/(TRY+TRX). However, lock is still lost between arm powers of 10 and 20. 

I do the transition at the same place as the CARM->SqrtInv transition, i.e. arm powers of about 1.0.  Jenne started a script for the transition; I've modified it with settings that I found to work, and integrated it into the carm_cm_up script.  I've also modified carm_cm_down to zero the DARM normalization elements. 
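For reference, the normalized error signal being transitioned onto can be written as a one-liner. The function name, the offset argument, and the eps guard below are my own additions for the sketch, not something from the actual front-end script.

```python
import numpy as np

def darm_trans_err(trx, tryy, offset=0.0, eps=1e-6):
    """Normalized DC transmission DARM error signal,
    (TRY - TRX)/(TRY + TRX), minus an optional lock-point offset.
    eps guards against division by ~zero during a lockloss."""
    trx = np.asarray(trx, dtype=float)
    tryy = np.asarray(tryy, dtype=float)
    return (tryy - trx) / np.maximum(tryy + trx, eps) - offset

# Perfectly balanced arm transmissions give zero error signal
assert float(darm_trans_err(10.0, 10.0)) == 0.0
```

The normalization is what makes this signal self-squashing: as the loop drives the transmissions together, both the numerator and the fractional imbalance shrink.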

I was thwarted repeatedly by the frequent crashing of daqd, so I was not able to take OLTFs of CARM or DARM, which would've been nice. As it was, I tuned the DARM gain by looking for gain peaking in the error signal spectrum. I also couldn't really get a good look at the lock loss events. Once the FB is behaving properly, we can learn more. 

Turning over to difference in transmission as an error signal naturally squashes the difference in arm transmissions:

TRdifflock.png


I was able to grab spectra of the error and control signals, though I did not take the time to calibrate them... We can see the high frequency sensing noise for the transmission derived signals fall as the arm power increases. The low frequency mirror motion stays about the same. 

Oct17lock.pdf


So, it seems that DARM was not the main culprit in breaking lock, but it is still gratifying to get off of ALS completely, given the strong dependence of its out-of-loop noise on PSL alignment. 

  10620   Thu Oct 16 22:35:05 2014   rana | Update | LSC | CARM W/N TFs

In my last CARM loop modelling, all of the plots are phony, so don't trust them. The invbilinear function inside of StefanB's onlinefilter.m was making bogus s-domain representations of the digital filter coefficients.

So now I've just plotted the frequency response directly from the z-domain SOS coeffs using MattE's readFilterFile.m and FotonFilter.m.

Conclusions are less rosy. The anti-spring side is still easier to compensate than the spring side, but it starts to get hopeless below ~130 pm of offset, so there we really need to try to get to REFL_11/(TRX+TRY), pending some noise analysis.

** In order to get the 80 and 40 pm loops to be more stable, I've put in a tweak filter called Boost2 (FM8). As you can see, it kind of helps for 80 pm, but it's pretty hopeless after that.

Attachment 1: carm_spring.pdf
carm_spring.pdf
Attachment 2: carm_antispring.pdf
carm_antispring.pdf
  10619   Thu Oct 16 21:20:59 2014   rana | Update | LSC | misleading modelling

 I think these are all very helpful and interesting plots. We should see some better performance using either of the DC DARM signals.

BUT, what we really need (instead of just the DC sweeps) is the DC sweep with the uncertainty/noise displayed as a shaded area on the plot, as Nic did for us in the pre-CESAR modelling.

Otherwise, the DC sweeps mistakenly indicate that many channels are good, whereas they really have an RMS noise larger than 100 pm due to low power levels or normalization by a noisy signal.

  10618   Thu Oct 16 16:21:42 2014   ericq | Update | LSC | Interim DARM Signal

I've added (TRX-TRY)/(TRX+TRY) to the DC DARM sweep plots, and it looks like an even better candidate. The slope is closer to linear, and it has a zero crossing within ~10pm of the true DARM zero across the different CARM offsets, so we might not even need to use an intentional DARM offset. 

dcDARMSweep-300.pdf  dcDARMSweep-120.pdf  dcDARMSweep-50.pdf  dcDARMSweep-0.pdf

 

  10617   Thu Oct 16 12:22:43 2014   ericq | Update | CDS | Daqd segfaulting again

I've been trying to figure out why daqd keeps crashing, but nothing is fixed yet. 

I commented out the line in /etc/inittab that runs daqd automatically, so I could run it manually. Each time I run it (with ./daqd -c ./daqdrc while in c1/target/fb), it churns along fine for a little while, but eventually spits out something like:

[Thu Oct 16 12:07:23 2014] main profiler warning: 1 empty blocks in the buffer
[Thu Oct 16 12:07:24 2014] main profiler warning: 0 empty blocks in the buffer
[Thu Oct 16 12:07:25 2014] main profiler warning: 0 empty blocks in the buffer
GPS time jumped from 1097521658 to 1097521660
Segmentation fault
 
Or:
 
[Thu Oct 16 11:43:54 2014] main profiler warning: 1 empty blocks in the buffer
[Thu Oct 16 11:43:55 2014] main profiler warning: 0 empty blocks in the buffer
[Thu Oct 16 11:43:56 2014] main profiler warning: 0 empty blocks in the buffer
[Thu Oct 16 11:43:57 2014] main profiler warning: 0 empty blocks in the buffer
[Thu Oct 16 11:43:58 2014] main profiler warning: 0 empty blocks in the buffer
[Thu Oct 16 11:43:59 2014] main profiler warning: 0 empty blocks in the buffer
[Thu Oct 16 11:44:00 2014] main profiler warning: 0 empty blocks in the buffer
[Thu Oct 16 11:44:01 2014] main profiler warning: 0 empty blocks in the buffer
[Thu Oct 16 11:44:02 2014] main profiler warning: 0 empty blocks in the buffer
GPS time jumped from 1097520250 to 1097520257
FATAL: exception not rethrown
Aborted

I looked for time disagreements between the FB and the frontends, but they all seem fine. Running ntpdate only corrected things by 5ms. However, looking through /var/log/messages on FB, I found that ntp claims to have corrected the FB's time by ~111600 seconds (~31 hours) when I rebooted it on Monday.

Maybe this has something to do with the timing that the FB is getting? The FE IOPs seem happy with their sync status, but I'm not personally currently aware of how the FB timing is set up. 


Addendum:

On Monday, Jamie suggested checking out the situation with FB's RAID. Searching the elog for "empty blocks in the buffer" also brought up posts that mentioned problems with the RAID. 

I went to the JetStor RAID web interface at http://192.168.113.119, and it reports everything as healthy; no major errors in the log. Looking at the SMART status of a few of the drives shows nothing out of the ordinary. The RAID is not mounted in read-only mode either, as was the problem mentioned in previous elogs. 

  10616   Thu Oct 16 03:18:48 2014   Jenne | Update | CDS | Daqd segfaulting again

 The daqd process on the frame builder looks like it is segfaulting again.  It restarts itself every few minutes.  

The symptoms remind me of elog 9530, but /frames is only 93% full, so the cause must be different.  

Did anyone do anything to the fb today?  If you did, please post an elog to help point us in a direction for diagnostics.

Q!!!!  Can you please help?  I looked at the log files, but they are kind of mysterious to me - I can't really tell the difference between a current (bad) log file and an old (presumably fine) log file.  (I looked at 3 or 4 random, old log files, and they're all different in some ways, so I don't know which errors and warnings are real, and which are to be ignored).

  10615   Thu Oct 16 03:13:23 2014   Jenne | Update | LSC | PRMI on REFL165, and more

 The first thing I looked at tonight was locking the PRMI on REFL 165.

I locked the PRMI (no arms), and checked the REFL 165 demod phase. I also found the input matrix configuration that allowed me to acquire PRMI lock directly on REFL165.  After locking the arms on ALS, I tried to lock the PRMI with REFL 165 and failed.  So, I rechecked the demod phase and the relative transfer functions between REFL 165 and REFL 33.  The end of the story is that, even with the re-tuned demod phase for CARM offset of a few nanometers, I cannot acquire PRMI lock on REFL 165, nor can I transition from REFL 33 to REFL 165.  We need to revisit this tomorrow.

IFO configuration   CARM offset [cts]   REFL 165 demod phase [deg]
Found as-is         N/A                 +145
PRMI, no arms       N/A                 -135
PRFPMI              +3                  +110
PRFPMI              +2                  +110
PRFPMI              +1                  +110
PRFPMI              +0.5                +120

 

IFO configuration          REFL 33 I / REFL 165 I (PRCL)   REFL 33 Q / REFL 165 Q (MICH)
PRMI, no arms              +0.1                            +0.22 (easier to acquire lock with +0.1)
PRFPMI, CARM offset = +3   -0.09 (TF measured, no lock)    +0.033 (TF measured, no lock)

For the PRMI-only case, I ended up using 0.1's in the input matrix, and I added an FM1 to the MICH filter bank that is a flat gain of 2.2, and then I had it trigger along with FM2.

I turned this FM1 off (and no triggering) while trying to transition from REFL33 to REFL165 in the PRFPMI case, but that didn't help.  I think that maybe I need to remeasure my transfer functions or something, because I could put values into the REFL165 columns of the input matrix while REFL33 was still 1's, but I couldn't remove (even if done slowly) the REFL33 matrix elements without losing lock of the PRMI.  So, we need to get the input matrix elements correct.


I also recorded some time series for a quick RAM investigation that I will work on tomorrow.  

I left the PRM aligned, but significantly misaligned both ITMs to get data at the REFL port of the RAM that we see.  I also aligned the PRMI (no arms) and let it flash so that I can see the pk-pk size of our PDH signals.  I need to remember to calibrate these from counts to meters.  

Raw data is in /users/jenne/RAM/ .


I have not tried any new DARM signals, since PRMI wasn't working with 3f2.  

We should get to that as soon as we fix the PRMI-3f2 situation.

  10614   Wed Oct 15 22:39:17 2014   Jenne | Update | LSC | The Plan

 [Rana, Jenne]

We're summarizing the discussions of the last few days as to the game plan for locking.  

  1. PRMI on REFL165.  The factor of 5 in frequency will give us more MICH signal.    We want this.
    1. Drive CARM, measure coupling to PRCL, MICH while locked on REFL33.
    2. Switch to REFL165, re-measure CARM coupling.
    3. Hopefully this will reduce the AS port fluctuations, and reduce the POP22 power decrease as CARM offset decreases. 
  2. DARM transition from ALSdiff to an intermediate signal.  Simulate, and try empirically.
    1. Maybe try ASDC normalized by sum of transmissions?
    2. Maybe try difference of transmissions divided by sum of transmissions?  
  3. Look at data on disk.
    1. Do we have anything specific causing our locklosses (lately there haven't been obvious loop instabilities causing the locklosses)?
    2. How much do we think our lengths are actually changing right now (particularly DARM on ALSdiff)?
    3. Are there better ways of combining error signals that could be useful?
    4. Do we need to work on angular loops?
      1. Oplevs
      2. POP ASC for sidebands
      3. POP QPD or Trans QPDs for arms
  4.  Think about what could be causing ETMX to be annoying.  The connection that is most suspect has been ziptied, but we're still seeing ETMX move either at locklosses or sometimes just spontaneously.
  5.  RAM.  What kind of RAM levels do we have right now, and how do they affect our locking offsets?  Do we have big offsets, or negligible offsets?
  10613   Wed Oct 15 20:10:29 2014   ericq | Update | LSC | Interim DARM Signal

I've done some preliminary modeling to see if there is a good candidate for an IR DARM control signal that is available before the AS55 sign flip. From a DC sweep point of view, ASDC/(TRX+TRY) may be a candidate for further exploration. 


As a reminder, both Finesse and MIST predict a sign flip in the AS55 Q control signal for DARM in the PRFPMI configuration, at a CARM offset of around 118pm.

dcAS55_DARM.pdf  AS55Flip.pdf

The CARM offset where this sign flip occurs isn't too far off of where we're currently losing lock, so we have not had the opportunity to switch DARM control off of ALS and over to the quieter IR RF signal of AS55. 


Here are simulated DC DARM sweep plots of our current PRFPMI configuration, with a whole bunch of potential signals that struck me. 

Although the units of most traces are arbitrary in each plot (to fit on the same scale), each plot uses the same arbitrary units (if that makes any sense) so slopes and ratios of values can be read off. 

dcDARMSweep-300.pdf  dcDARMSweep-120.pdf  dcDARMSweep-50.pdf  dcDARMSweep-0.pdf

In the 300 and 120pm plot, you can see that the zero crossing of AS55 is at some considerable DARM offset, and normalizing by TRX doesn't change much about that. "Hold on a second," I hear you say. "Your first plots said that the sign flip happens at around 120pm, so why does the AS55 profile still look bad at 50pm?!" My guess is that, probably due to a combination of Schnupp and arm length asymmetry, CARM offsets move where the peak power is in the DARM coordinate. This picture makes what I mean more clear, perhaps:

2dSweep.pdf

Thus, once we're on the other side of the sign flip, I'm confident that we can use AS55 Q without much problem. 


Now, back to thoughts about an interim signal:

ASDC by itself does not really have the kind of behavior we want; but the power out of AS as a fraction of the arm power (i.e. ASDC/TRX in the plot) seems to have a rational shape that is not too unlike what the REFLDC CARM profile looks like.

Why not use POPDC or REFLDC? Well, at the CARM offsets we're currently at, POPDC is dominated by the PRC resonating sidebands, and REFLDC has barely begun to decline, and at lower CARM offsets, they each flatten out before the peak of the little ASDC hill, and so don't do much to improve the shape. Meanwhile, ASDC/TRX has a smooth response to points within some fraction of the DARM line width in all of the plots. 

Thus, as was discussed at today's meeting, I feel it may be possible to lock DARM on ASDC/(TRX+TRY) with some offset, until AS55 becomes feasible.

(In practice, I figure we would divide by the sum of the powers, to reduce the influence of the DARM component of just TRX; we don't want to have DARM/DARM in the error signal for DARM)
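The parenthetical point can be made explicit with a first-order expansion; the coupling coefficient $\alpha$ below is just a symbolic stand-in, not a measured number. Writing the transmissions as

```latex
T_X \approx T_0\,(1 + \alpha\,\delta_D), \qquad
T_Y \approx T_0\,(1 - \alpha\,\delta_D)
\quad\Longrightarrow\quad
T_X + T_Y \approx 2\,T_0 + \mathcal{O}(\delta_D^2),
```

the denominator of ASDC/(TRX+TRY) carries no first-order DARM dependence, whereas normalizing by TRX alone would put a term proportional to $\alpha\,\delta_D$ right back into the DARM error signal.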

Two caveats are:

  • The slope of this signal actually looks more quadratic than linear. Is this ok/manageable?
  • I have not yet made any investigation into the frequency dependent behavior of this thing. Transmission in the denominator will have the CARM pole in it, might get complicated. 

[Code and plots live in /svn/trunk/modeling/PRFPMI_radpressure]

 

  10612   Wed Oct 15 19:56:38 2014   Jenne | Update | LSC | Which side of optical spring are we on? Meas vs Model

 I have plotted measured data from last night (elog 10607) with a version of the result from Rana's simulink CARM loop model (elog 10593).

The measured data that was taken last night (open circles in plots) is with an injection into MC2 position, and I'm reading out TRX.  This is for the negative side of the digital CARM offset, which is the one that we can only get to arm powers of 5ish.

The modeled data (solid lines in plots) is derived from what Rana has been plotting the last few days, but it's not quite identical.  I added another excitation point to the simulink model at the same place as the "CARM OUT" measurement point.  This is to match the fact that the measured transfer functions were taken by driving MC2.  I then asked matlab to give me the transfer function between this new excitation point (CARM CTRL point) and the IN1 point of the loop, which should be equivalent to our TRX_OUT.  So, I believe that what I'm plotting is equivalent to TRX/MC2.  The difference between the 2 plots is just that one uses the modeled spring-side optical response, and the other uses the modeled antispring-side response.

AntispringModel_NegOffsetMeas_comparison.pdf

SpringModel_NegOffsetMeas_comparison.pdf

I have zoomed the X-axis of these plots to be between 30 Hz - 3 kHz, which is the range that we had coherence of better than 0.8ish last night in the measurements.  The modeled data is all given the same scale factor (even between plots), and is set so that the lowest arm power traces (pink) line up around 150 Hz. 

I conclude from these plots that we still don't know what side of the CARM resonance we are on. 

 I have not plotted the measurements from the positive side of the digital CARM offset, because those transfer functions were to sqrtInvTRX, not plain TRX, whereas the model only is for plain TRX. There should only be an overall gain difference between them though, no phase difference.  If you look at last night's data, you'll see that the positive side of the CARM offset measured phase has similar characteristics to the negative offset, i.e. the phase is not flat, but it is roughly flat in both modeled cases, so even with that data, I still say that we don't know what side of the CARM resonance we are on.

 

 

  10611   Wed Oct 15 17:18:10 2014   Jenne | Update | Computer Scripts / Programs | Dataviewer fix with Ubuntu 12.04

 

 I have modified the Dataviewer launcher (which runs when you either click the icon or type "dataviewer" in the terminal).  

A semi-old problem was that it would open in the directory /users/Templates, but our dataviewer templates live in /users/Templates/Dataviewer_Templates.  Now this is the folder that dataviewer opens into.  This was not related to the upgrade to Ubuntu 12, but it will be overwritten any time someone does a checkout of the /ligo/apps/launchers folder.

A problem that is related to the Ubuntu 12 situation, which we had been seeing on Ottavia and Pianosa for a few weeks, was that the variable NDSSERVER was set to fb:8088, which is required for cdsutils to work.  However, dataviewer wants this variable to be set to just fb.  So, locally in the dataviewer launcher script, I set NDSSERVER=fb.  NB: I do not export this variable, because I don't want to screw up the cdsutils.  This may need to be undone if we ever upgrade our Dataviewer.

  10610   Wed Oct 15 17:09:49 2014   manasa | Update | General | Diode laser test preparation

[EricG, manasa]

The He-Ne laser oplev setup was swapped with a fiber-coupled diode laser from W Bridge. The laser module and its power supply are sitting on a bench in the east side of the SP table. 

  10609   Wed Oct 15 13:38:33 2014   Jenne | Update | LSC | CARM W/N TFs

Here are the same plots, but the legend also includes the arm power that we expect at that CARM offset.  


Here is what the arm powers look like as a function of CARM offset according to Optickle.  Note that the cyan trace's maximum matches what Q has simulated in Mist with the same high losses.  For illustration I've plotted the single arm power, so that you can see it's normalized to 1.  Then, the other traces are the full PRFPMI buildup, with various amounts of arm loss.  The "no loss" case is with 0ppm loss per ETM.  The "150 ppm loss" case is with 150 ppm of loss per ETM.  The "high loss" case is representative of what Q has measured, so I have put 500 ppm loss for ETMX and 150 ppm loss for ETMY.

ArmPower_vs_Loss.png


And, the transfer functions (all these, as with all TFs in the last week, use the "high loss" situation with 500ppm for ETMX and 150ppm for ETMY).

TFs_TRX_vsCARMoffset_PRFPMI_antispring.png  TFs_TRX_vsCARMoffset_PRFPMI_spring.png

TFs_REFLDC_vsCARMoffset_PRFPMI_antispring.png  TFs_REFLDC_vsCARMoffset_PRFPMI_spring.png

TFs_REFL11I_vsCARMoffset_PRFPMI_antispring.png  TFs_REFL11I_vsCARMoffset_PRFPMI_spring.png

  10608   Wed Oct 15 02:59:04 2014   rana | Update | LSC | CARM W/N TFs

 In my previous elog in this thread, I showed the CARM OLG given some new digital filters and the varying CARM plant (spring side, not anti-spring). Jenne has subsequently produced the TFs for all of the rest of the CARM offsets.

These attached plots for several CARM offsets show that the anti-spring side is much more stable than the spring side, so we should use that. Anecdotally, we think that positive CARM offsets are more stable when going to arm powers of > 10, so perhaps this means that +CARM = -SPRING.

The first PDF shows the spring OLGs and the 2nd one shows the antispring OLGs. I have put in some gain changes to keep the UGF approximately the same as the offset is changed.

The PDF thumbnails will become visible once Q and Diego install the new nodus.

 UPDATE Oct 16: this is all wrong! Bad conversion of filters within the invbilinear.m function.

Attachment 1: spring.pdf
spring.pdf
Attachment 2: antispring.pdf
antispring.pdf
  10607   Wed Oct 15 02:58:03 2014   Jenne | Update | LSC | Which side of optical spring are we on?

Some measurements.  Unclear meaning.  

We tried both positive and negative numbers in the CARM offset, and then looked at transfer functions at various arm powers. The hope is to be able to compare these with some simulation to figure out which side of the CARM resonance we are on.

The biggest empirical take-away is that we repeatedly (3 times in a row) lost lock when holding at arm powers of about 5 with negative CARM offsets.  However, we were repeatedly (2+ times tonight) able to sit and hold at arm powers of 10+ with positive CARM offsets.

 

 

 

I am not sure that we get enough information out of these plots to tell us which side of the CARM resonance we are really on.  Q is working on taking some open loop CARM measurements (actuating and measuring at SUS-MC2_LSC) to see if we can compare those more directly to Rana's plots.

Positive number in the digital CARM offset:

PRFPMI_MC2injection_PosCARMoffset_REFLDC_TRX_response_14Oct2014_TRX.pdf

PRFPMI_MC2injection_PosCARMoffset_REFLDC_TRX_response_14Oct2014_REFLDC.pdf

Negative numbers in digital CARM offset:

PRFPMI_MC2injection_NegCARMoffset_REFLDC_TRX_response_14Oct2014_TRX.pdf

PRFPMI_MC2injection_NegCARMoffset_REFLDC_TRX_response_14Oct2014_REFLDC.pdf

  10606   Tue Oct 14 23:44:42 2014   diego | Update | Computer Scripts / Programs | Rossa and Allegra wiped, Ubuntu 12.04 installed

Allegra and Rossa were wiped and updated to Ubuntu 12.04.5 by Ericq and me; this is the procedure we followed:

1) create "controls" user with uid=1001, gid=1001
2) setup network configuration (IP, Mask, Gateway, DNS), .bashrc, /etc/resolv.conf
3) add synaptic package manager (Ubuntu Software Center used by default)
4) add package nfs-common (and possibly libnfs1) to mount nfs volumes; mount the nfs volume by adding the line "chiara:/home/cds/       /cvs/cds/       nfs     rw,bg   0    0" in /etc/fstab
5) add package firmware-linux-nonfree, needed for graphics hardware recognition (ATI Radeon HD 2400 Pro): due to the kernel and xorg-server versions in 12.04.5, and because ATI dropped support for legacy cards in their proprietary fglrx driver, the only solution is to keep the Gallium drivers
6) add packages libmotif3 grace, needed by dataviewer
7) add repository from https://www.lsc-group.phys.uwm.edu/daswg/download/repositories.html (Debian Squeeze); install lscsoft-archive-keyring as the first package or apt-get will complain
8) add package lscsoft-metaio libjpeg62, needed by diaggui/awggui (Ericq: used lalmetaio on rossa)
9) add packages python-numpy python-matplotlib python-scipy ipython
10) change ownership of /opt/ to controls:controls
11) add csh package
12) add t1-xfree86-nonfree ttf-xfree86-nonfree xfonts-75dpi xfonts-100dpi, needed by diaggui/awggui (needs reboot)
13) add openssh-server

 

Ubuntu creates the first user during installation with uid=1000 and gid=1000; if needed, they can be changed afterwards using a second user account and the following procedure (twice, if the second user gets the 1001 uid and gid):

sudo usermod -u <NEWUID> <LOGIN>   
sudo groupmod -g <NEWGID> <GROUP>
sudo find /home/ -user <OLDUID> -exec chown -h <NEWUID> {} \;
sudo find /home/ -group <OLDGID> -exec chgrp -h <NEWGID> {} \;
sudo usermod -g <NEWGID> <LOGIN>

  10605   Tue Oct 14 17:38:31 2014   ericq | Update | Computer Scripts / Programs | Rossa and Allegra wiped, Ubuntu 12.04 installed

When I came in, Rossa was booted to Ubuntu 10. I tried rebooting to select 12, but couldn't ever successfully boot again. Since Diego was setting up Allegra from scratch, I've wiped and done the same with Rossa. 

  10604   Mon Oct 13 21:59:47 2014 ranaUpdateComputer Scripts / ProgramsWhich side of optical spring are we on? (No progress)

 

 Since no one was here, I started the Ubuntu 10 - 12 upgrade on Rossa. It didn't run at first because it wanted to remove 'update-manager-kde' even though it was on the blacklist. I removed it from the command line and now it's running. Allegra, OTOH, refuses to upgrade. Someone please ask Diego to wipe it and then install Ubuntu 12 LTS on there in the morning...it's a good way to learn the Martian CDS setup.

  10603   Mon Oct 13 21:20:56 2014 JenneUpdateLSCWhich side of optical spring are we on? (No progress)

[Jenne, Diego]

In order to distinguish between the spring and antispring sides of the CARM resonance, we need to have transfer function measurements down to at least 100 Hz (although lower is better). 

We tried to get some transfer functions the same way Q did, but noticed that (a) we couldn't get any low frequency coherence, and (b) when we increased the amplitude of the white (well, lowpassed at 5kHz) noise, the coherence between the AO injection and REFL DC went down.  It's not clear why this is.

Anyhow, we tried taking good ol' fashioned swept sine transfer functions, although eventually the lightbulb came on that the AO path has a highpass in it.  Duh, Jenne.  So, we started trying to actuate on MC2 position rather than the AO path laser frequency.  We didn't get too far though before El Salvador decided to have a few 7.4 earthquakes.  We're bored of aftershocks knocking us out of lock, so we're going to come back to this tomorrow.

 

  10602   Mon Oct 13 17:09:38 2014 ericqUpdateCDSFrame builder is mad

 

This CPU load may have been me deleting some old frame files, to see if that would allow daqd to come back to life. 

Daqd was segfaulting, and behaving in a manner similar to what is described here: (stack exchange link). However, I couldn't kill or revive daqd, so I rebooted the FB. 

Things seem ok for now...

 

  10601   Mon Oct 13 16:57:26 2014 KojiUpdateCDSFrame builder is mad

CPU load seems extremely high. You need to reboot it, I think

controls@fb /proc 0$ cat loadavg
36.85 30.52 22.66 1/163 19295

  10600   Mon Oct 13 16:08:49 2014 JenneUpdateCDSFrame builder is mad

I think the daqd process isn't running on the frame builder. 

Daqd_problem_maybe.png

I tried telnetting to fb's port 8087 (telnet fb 8087) and typing "shutdown", but so far that is hanging and hasn't returned a prompt to me in the last few minutes.  Also, if I do a "ps -ef | grep daqd" in another terminal, it hangs. 

I wasn't sure if this was an ntp problem (although that has been indicated in the past by 1 red block, not 2 red blocks and a white one), so I did "sudo /etc/init.d/ntp-client restart", but that didn't make any change.  I also did an mxstream restart just in case, but that didn't help either. 

I can ssh to the frame builder, but I can't do another telnet (the first one is still hung).  I get an error "telnet: Unable to connect to remote host: Invalid argument"

Thoughts and suggestions are welcome!

  10599   Mon Oct 13 14:44:52 2014 SteveUpdateVACRGA scan pd78 -day 13

 Our first RGA scan since May 27, 2014 (elog 10585)

 The RGA is still warming up. It was turned on 3 days ago as we recovered from the second power outage.

 

Attachment 1: RGAscan13Oct2014.png
RGAscan13Oct2014.png
  10598   Mon Oct 13 12:01:28 2014 ericqUpdateLSCWhich side of optical spring are we on?

 I went back into the DQ channels to look at the TF from AO injection to REFLDC (which is easy to do with this kind of noise injection TF).  

AOInjection_SqrtInv_REFLDC.png

I fear that REFL does not seem to have as much phase under the resonance as we have modeled; it is lacking about 10-20 degrees. This could happen if the zero in the REFL DC response, which we've modeled at roughly 200 Hz, is actually at a higher frequency. I'll look into what affects the frequency of that feature. 

It is, of course, possible that this measurement doesn't properly cancel out the various digital effects, but the REFLDC phase curves do seem to settle to (+/-) 90 after the pole as expected. 

DTT XML file is attached. 

Attachment 2: AOinjection_SqrtInv_REFLDC.xml.zip
  10597   Fri Oct 10 14:41:04 2014 SteveUpdateVACPower outage II & recovery

Quote:

Quote:

Quote:

Post 30-40min unexpected power outage this morning, Steve checked the status of the vacuum and I powered up Chiara.

I brought back the FE machines and keyed all the crates to bring back the slow machines but for the vac computers.

c1vac1 is not responding as of now. All other computers have come back and are alive.

 

 IFO vacuum, air conditioning and PMC HV are still down. PSL output beam is blocked on the table.

 We are pumping again. This is a temporary configuration. The annuli are at atmosphere. The reset/reboot of c1Vac1 and 2 opened everything except the valves that were disconnected.

TP2 lost its vent solenoid power supply and dry pump during the power outage.

They were replaced, but the new small turbo controller is not set up as the old TP2 controller was, so it does not allow V4 to open. 

Tomorrow I will swap back the old controller, pump down the annuli and close off the ion pumps.

I removed the beam block from the PSL table and opened the shutter. CC4 shows the real pressure, 2e-5 Torr.

CC1 is not real.

 TP2 is controlled by the old controller. Annuli pumped down. Valve configuration: "vacuum normal"

  Ion pumps closed at  <1e-4 mT

Attachment 1: recovery_poweroutage.png
recovery_poweroutage.png
  10596   Fri Oct 10 14:27:44 2014 SteveUpdateCDSc1Vac1 and c1vac2 reboot was a failure

Quote:

Quote:

 

 I have brought back c1auxex and c1auxey.  Hopefully this elog will have some more details to add to Rana's elog 10015, so that in the end, we have the whole process documented.

The old Dell computer was already in a Minicom session, so I didn't have to start that up - hopefully it's just as easy as opening the program.

I plugged the DB9-RJ45 cable into the top of the RJ45 jacks on the computers.  Since the aux end station computers hadn't had their bootChanges done yet, the prompt was "VxWorks Boot" (or something like that).  For a computer that was already configured, for example the psl machine, the prompt was "c1psl", the name of the machine.  So, the indication that work needs to be done is either you get the Boot prompt, or the computer starts to hang while it's trying to load the operating system (since it's not where the computer expects it to be).  If the computer is hanging, key the crate again to power cycle it.  When it gets to the countdown that says "press any key to enter manual boot" or something like that, push some key.  This will get you to the "VxWorks Boot" prompt. 

Once you have this prompt, press "?" to get the boot help menu.  Press "p" to print the current boot parameters (the same list of things that you see with the bootChange command when you telnet in).  Press "c" to go line-by-line through the parameters with the option to change parameters.  I discovered that you can just type what you want the parameter to be next to the old value, and that will change the value.  (ex.  "host name   : linux1   chiara"   will change the host name from the old value of linux1 to the new value that you just typed of chiara). 

After changing the appropriate parameters (as with all the other slow computers, just the [host name] and the [host inet] parameters needed changing), key the crate one more time and let it boot.  It should boot successfully, and when it has finished and given you the name for the prompt (ex. c1auxex), you can just pull out the RJ45 end of the cable from the computer, and move on to the next one.
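As a hypothetical illustration of the procedure above (the old linux1 IP is shown as a placeholder; chiara's IP is from the fstab entries elsewhere on this page), a session at the boot prompt looks roughly like:

```
[VxWorks Boot]: p                <-- print the current parameters
host name     : linux1
host inet     : <old linux1 IP>
...
[VxWorks Boot]: c                <-- step through, typing the new value next to the old
host name     : linux1  chiara
host inet     : <old linux1 IP>  192.168.113.104
```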

 

 Koji, Jenne and Steve

 

Preparation to reboot:

1, closed VA6 and V5, disconnected cables to valves (closed all annuli)

2, closed V1, disconnected it and stopped Maglev rotation

3, closed V4, disconnected its cable

   See Atm1.  This setup ensured that there could not be any accidental valve switching to vent the vacuum envelope if reboot chaos strikes. [moving = disconnected]

4, RESET c1Vac1 and c1Vac2 one by one and together. They both went at once. We did NOT power cycle them.

    Jenne entered the new "carma" words on  the old Dell laptop and checked the good answers. The reboot was done.

    Note: c1Vac1 green-RUN indicator LED is yellow. It is fine as yellow.

5, Checked and TOGGLED valve positions to the correct values (We did not correct the small turbo pump monitor positions, but they were alive)

6,  V4 was reconnected and opened. Maglev was started.

7,  V1 cable reconnected and opened at full rotation speed of 560 Hz

8,  V5 cable reconnected, valve opened. VA6 cable reconnected and opened.

9,   Vacuum Normal valve configuration was reached.

 

 Yesterday's  reboot was prepared as stated above with one difference. 

  c1Vac1 and c1Vac2 were DOWN before reset. The disconnected valves stayed closed (plus VC1). This saved us, so the main volume was not vented.

  All others OPENED. PR1 and PR2 roughing pumps turned ON.  Ion pumps gate valve opened too. The ion pumps did not matter either because they were pumped down recently.

  We'll have to rewrite the procedure for how to reboot the vacuum system.

  

 

 

 

Attachment 1: c1vac1resetReboot.png
c1vac1resetReboot.png
Attachment 2: c1Vac1&2down.jpg
c1Vac1&2down.jpg
  10595   Fri Oct 10 03:25:11 2014 JenneUpdateLSCWhich side of optical spring are we on (simulation)

I have a simulated version of the differences that we expect to see between the 2 different sides of the CARM resonance.  The point is that we can try to compare these results with Q's measured results (elog 10594) to see if we know if we are on the spring or antispring side.


I calculated the same transfer functions vs CARM offset again, although tonight I do it in steps of 20pm because I was getting bored of waiting forever.  Anyhow, this is important because my previous post (elog 10591) didn't have spring side calculations all the way down to 1pm.

The same caveat applies to elog 10591, but here are some notes on how I am currently getting the W/N units out of Optickle.  First of all, I am still using old Optickle1.  I don't know if there are significant units ramifications for that, but just in case I'll write it down.  Nic tells me that to get [W/N] out of Optickle1, I need to multiply sigAC (units of [W/m]) by my simple pendulum (units of [m/N]).  Both of these "meters" in the last sentence are "mevans meters", which are the meters you would get per actuation if radiation pressure didn't exist.  So, I guess they're supposed to cancel out?  I need to camp out in Nic's office until I figure this out and get it untangled in my head.
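The unit conversion described above can be sketched as follows. This is a toy sketch, not the actual calculation: the pendulum mass, resonance frequency, and Q below are illustrative values, not the 40m suspension parameters, and the sigAC arrays stand in for Optickle output.

```python
import numpy as np

# Sketch of converting Optickle's sigAC [W/m] to [W/N]: average the ETMX and
# ETMY drive TFs to form CARM, then multiply by a simple pendulum [m/N].
# Pendulum numbers here are illustrative, not the actual 40m values.
f = np.logspace(0, 4, 400)                 # Hz
w = 2 * np.pi * f                          # drive frequency [rad/s]
w0 = 2 * np.pi * 1.0                       # assumed 1 Hz pendulum resonance
m, Q = 0.25, 5.0                           # assumed mass [kg] and quality factor
pend = 1.0 / (m * (w0**2 - w**2 + 1j * w0 * w / Q))   # pendulum TF [m/N]

def carm_watts_per_newton(sigAC_etmx, sigAC_etmy):
    """Average the two ETM drive TFs (CARM drive) and apply the pendulum."""
    return 0.5 * (sigAC_etmx + sigAC_etmy) * pend
```

Well above the resonance the pendulum rolls off as 1/f^2, which is why the high-frequency ends of these plots all fall with that slope.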

Plots of transfer functions for both sides of CARM resonance (same as prev. elog), as well as the ratio between the spring and antispring transfer functions at each CARM offset:

TFs_TRX_vsCARMoffset_PRFPMI_antispring.pngTFs_TRX_vsCARMoffset_PRFPMI_spring.pngTFs_TRX_vsCARMoffset_PRFPMI_differentials.png

 

TFs_REFLDC_vsCARMoffset_PRFPMI_antispring.pngTFs_REFLDC_vsCARMoffset_PRFPMI_spring.pngTFs_REFLDC_vsCARMoffset_PRFPMI_differentials.png

TFs_REFL11I_vsCARMoffset_PRFPMI_antispring.pngTFs_REFL11I_vsCARMoffset_PRFPMI_spring.pngTFs_REFL11_vsCARMoffset_PRFPMI_differentials.png

The take-away message from the 3rd column is that other than a sign flip, we don't expect to see very much difference between the 2 sides of the CARM resonance, particularly above a few hundred Hz.  (Note that we do not see the sign flip in Q's measurements because he is looking at CARM_IN1, which is after the input matrix, and the input matrix elements have opposite signs between the signs of the CARM offsets.  So, the sign flip between spring and antispring around the UGF is implied in the measurements, just not explicit).

Also, something that Rana pointed out to me, and I still don't know why it's true:  The antispring transfer functions (at least for the transmission) don't have all the phase features that we expect to see based on their magnitudes.  If you look at the TRX antispring plot, blue trace (which is about 500pm from resonance), you'll see that the magnitude starts flat at DC, has some slope in an intermediate region, and then at high frequencies has 1/f^2.  However, the phase seems to not know about this intermediate region, and magically waits until the 1kHz resonance to flip the full 180 degrees. 

Attachment 10: ForElog_9Oct2014.zip
  10594   Fri Oct 10 03:05:09 2014 ericqUpdateLSCWhich side of optical spring are we on?

 I made some measurements to try and see if any difference could be seen with different CARM offset signs. 

Specifically, at various offsets, I used a spare DAC channel to drive IN1 of the CM board, as an "AO Exciter." I used CM_SLOW to monitor the signal that was actually on the board. I used the CARM_IN1 error signal to see how the optical plant responded to the AO excitation. Rather than a swept sine, I used a noise injection kind of TF measurement. 
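The noise-injection style TF estimate is presumably the standard cross-spectrum ratio. A minimal sketch with synthetic data (a made-up 2-tap "plant" stands in for the CM board channels):

```python
import numpy as np
from scipy import signal

# Sketch of a noise-injection transfer function estimate: inject broadband
# noise x, record the response y, and take H = Pxy / Pxx.
rng = np.random.default_rng(0)
fs = 16384.0
x = rng.standard_normal(2**16)              # injected noise
y = signal.lfilter([0.5, 0.5], [1.0], x)    # toy plant: 2-tap moving average
f, Pxx = signal.welch(x, fs=fs, nperseg=1024)
f, Pxy = signal.csd(x, y, fs=fs, nperseg=1024)
H = Pxy / Pxx                               # complex TF estimate, x -> y
```

Unlike a swept sine, one injection gives the whole band at once, at the cost of SNR per bin; averaging over many segments (here ~64) is what beats the estimate variance down.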

Here are plots of CARM_IN1 / CM_SLOW at different CARM FM offsets (I chose to plot this ratio in an attempt to divide out some of the common things like AA and delays, and make the detuned CARM pole more evident). The offsets chosen correspond roughly to powers of 2, 2.5, and 3. I tried to go higher than that, but didn't remain locked for long enough to measure the TF. 

comparison.pdf

By eye, I don't see much of a difference. We can zpk fit the data, and see what happens. 

 

  10593   Fri Oct 10 00:20:37 2014 ranaUpdateLSCCARM W/N TFs

 

 Assuming that these Watts/Newton TFs are correct, I've modeled the resulting open loop gain for CARM. The goal is to design a loop that is stable under a wide range of offsets and also has enough low frequency gain.

The attached PDF shows this. I used a CARM OLG Simulink model:

carm40.png

I've replaced the 'armTF' block with a digital gain of zero. After measuring the open loop gain of all but this piece, I multiply that 'OLG' with the W/N that Jenne extracted from Optickle for CARM->TR (not sqrtInv)

I plot the resulting estimate of the actual OLG in the following plot. Since the CARM-RSE peak is moving down, we use the LP filter that Den installed for us several months ago. To account for the radiation pressure spring, we use some low frequency boosts but not the crazy FM4 filter.

As you can see, the loop is stable from 500 to 200 pm, but then goes unstable around 110 pm. I expect that we will want to do some fancy shaping there or switch from TRX+TRY into something else.

This assumes we have filters 0, 1, 3, 5, and 7 on in the CARM filter bank - still need to add the digital AA/AI to make the loop phase lag a little more accurate, but I think this is looking promising.

 

Attachment 2: carm.pdf
carm.pdf
  10592   Thu Oct 9 19:14:04 2014 ericqUpdateGeneralPower outage II & recovery

I touched up the PMC alignment. 

While bringing back the MC, I realized IOO got a really old BURT restore again... Restored from midnight last night. WFS still working.

Now aligning IFO for tonight's work

  10591   Thu Oct 9 18:30:59 2014 JenneUpdateLSCCARM W/N TFs

Okay, here (finally) is the optickle version.

I have the antispring case, starting at 501pm and going roughly every 10pm down to 1pm.  I also have the spring case, starting at -501pm and going down every 10pm to roughly -113pm.  Rossa crashed partway through the calculation, which is why it's not all the way.

In the .zip is a .mat file called PDs_vs_CARMoffset_WattsPerNewton.mat, which has (a) a list of the 50 CARM offsets, (b) a frequency vector, and (c) several transfer function arrays.  The transfer function arrays are supposed to be intuitively named, eg. REFLDC_antispring. 

In the .zip file are also the original .mat files that are a result of the tickle calculations, as well as a .m file for loading them and making the plots, etc.  For anyone who is trying to re-create the transfer function variables, I by-hand saved the variable called PD_WperN to the names like REFLDC_antispring.  Just kidding.  Those original mat files are over 100Mb each, and that's just crazy.  Anyhow, I think the .zip has everything needed to use the data from these plots.

Anyhow.  Here are plots of what are in the various transfer function arrays:

 TRX_antispring.pngTRX_spring.png

REFLDC_antispring.pngREFLDC_spring.png

REFL11I_antispring.pngREFL11I_spring.png

Attachment 6: ForElog.zip
  10590   Thu Oct 9 17:33:28 2014 SteveUpdateVACPower outage II & recovery

Quote:

Quote:

Post 30-40min unexpected power outage this morning, Steve checked the status of the vacuum and I powered up Chiara.

I brought back the FE machines and keyed all the crates to bring back the slow machines but for the vac computers.

c1vac1 is not responding as of now. All other computers have come back and are alive.

 

 IFO vacuum, air conditioning and PMC HV are still down. PSL output beam is blocked on the table.

 We are pumping again. This is a temporary configuration. The annuli are at atmosphere. The reset/reboot of c1Vac1 and 2 opened everything except the valves that were disconnected.

TP2 lost its vent solenoid power supply and dry pump during the power outage.

They were replaced, but the new small turbo controller is not set up as the old TP2 controller was, so it does not allow V4 to open. 

Tomorrow I will swap back the old controller, pump down the annuli and close off the ion pumps.

I removed the beam block from the PSL table and opened the shutter. CC4 shows the real pressure, 2e-5 Torr.

CC1 is not real.

Attachment 1: pumpingAgain.png
pumpingAgain.png
  10589   Thu Oct 9 16:31:53 2014 ericqUpdateLSCCARM W/N TFs

In my previous simulation results, I've always plotted W/m, which isn't exactly straightforward. We often think about the displacement that a given mirror actuator output will induce, but when we're locking the full IFO, radiation pressure effects modify the mechanical response depending on the current detuning, making the meaning of W/m transfer functions a little fuzzy.

So, I've redone my MIST simulations to report Watts of signal response due to actual actuator newtons, which is what we actually control with the digital system. Note, however, that these Watts are those that would be sensed by a detector directly at the given port, and don't take into account the power reduction from in-air beamsplitters, etc.

As an example, here are the SqrtInv and REFLDC CARM TFs for the anti-spring case:

carm2SQRTinv.pdfcarm2REFLDC.pdf

 

The units of the SqrtInv plot are maybe a little weird; these TFs are the exact shape of the TRX W/N TFs, with the DC value adjusted by the ratio of the DC sweep derivatives of TRX and SqrtInv. 
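That rescaling can be sketched as follows; a toy Lorentzian transmission curve stands in for the actual MIST sweep output:

```python
import numpy as np

# Sketch of rescaling a TRX W/N TF into SqrtInv units via the ratio of the
# DC sweep slopes at the operating point (toy transmission curve, not MIST).
x = np.linspace(-1e-9, 1e-9, 2001)        # CARM offset [m]
trx = 1.0 / (1.0 + (x / 1e-10)**2)        # toy Lorentzian arm transmission
sqrtinv = 1.0 / np.sqrt(trx)              # the SqrtInv signal
i = 1200                                  # some operating point off resonance
scale = np.gradient(sqrtinv, x)[i] / np.gradient(trx, x)[i]
# tf_sqrtinv = scale * tf_trx             # same shape, rescaled DC value
```

This works because, for small signals, both sensors see the same CARM motion; only their DC slopes differ.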

All of the results live in /svn/trunk/modeling/PRFPMI_radpressure/

 

  10588   Thu Oct 9 13:29:14 2014 JenneUpdatePSLPower outage II & recovery

Quote:

 

 IFO vacuum, air conditioning and PMC HV are still down. PSL output beam is blocked on the table.

 PMC is fine.  There are sliders in the Phase Shifter screen (accessible from the PMC screen) that also needed touching. 

PSL shutter is still closed until Steve is happy with the vacuum system - I guess we don't want to let high power in, in case we come all the way up to atmosphere and particulates somehow get in and get fried on the mirrors. 

  10587   Thu Oct 9 11:56:35 2014 SteveUpdateVACPower outage II & recovery

Quote:

Post 30-40min unexpected power outage this morning, Steve checked the status of the vacuum and I powered up Chiara.

I brought back the FE machines and keyed all the crates to bring back the slow machines but for the vac computers.

c1vac1 is not responding as of now. All other computers have come back and are alive.

 

 IFO vacuum, air conditioning and PMC HV are still down. PSL output beam is blocked on the table.

  10586   Thu Oct 9 10:52:37 2014 manasaUpdateGeneralPower outage II & recovery

Post 30-40min unexpected power outage this morning, Steve checked the status of the vacuum and I powered up Chiara.

I brought back the FE machines and keyed all the crates to bring back the slow machines but for the vac computers.

c1vac1 is not responding as of now. All other computers have come back and are alive.

  10585   Wed Oct 8 15:31:31 2014 JenneUpdateCDSComputer status

After the Great Computer Meltdown of 2014, we forgot about poor c0rga, which is why the RGA hasn't been recording scans for the past several months (as Steve noted in elog 10548).

Q helped me remember how to fix it.  We added 3 lines to its /etc/fstab file, so that it knows to mount from Chiara and not Linux1.  We changed the resolv.conf file, and Q made some symlinks.

Steve and I ran ..../scripts/RGA/RGAset.py on c0rga to set up the RGA's settings after the power outage, and we're checking to make sure that the RGA will run right now; then we'll set it back to the usual daily 4am run via cron.

EDIT, JCD:  Ran ..../scripts/RGA/RGAlogger.py, saw that it works and logs data again.  Also, c0rga had a slightly off time, so I ran sudo ntpdate -b -s -u pool.ntp.org, and that fixed it.

Quote:

 

In all of the fstabs, we're using chiara's IP instead of name, so that if the nameserver part isn't working, we can still get the NFS mounts.

On control room computers, we mount the NFS through /etc/fstab having lines like:

192.168.113.104:/home/cds /cvs/cds nfs rw,bg 0 0
fb:/frames /frames nfs ro,bg 0 0

Then, things like /cvs/cds/foo are locally symlinked to /opt/foo

For the diskless machines, we edited the files in /diskless/root. On FB, /diskless/root/etc/fstab becomes

master:/diskless/root                   /         nfs     sync,hard,intr,rw,nolock,rsize=8192,wsize=8192    0 0
master:/usr                             /usr      nfs     sync,hard,intr,ro,nolock,rsize=8192,wsize=8192    0 0
master:/home                            /home     nfs     sync,hard,intr,rw,nolock,rsize=8192,wsize=8192    0 0
none                                    /proc     proc    defaults          0 0
none                                    /var/log        tmpfs   size=100m,rw    0 0
none                                    /var/lib/init.d tmpfs   size=100m,rw    0 0
none                                    /dev/pts        devpts  rw,nosuid,noexec,relatime,gid=5,mode=620        0 0
none                                    /sys            sysfs   defaults        0 0
master:/opt                             /opt      nfs    async,hard,intr,rw,nolock  0 0
192.168.113.104:/home/cds/rtcds         /opt/rtcds      nfs     nolock  0 0
192.168.113.104:/home/cds/rtapps        /opt/rtapps     nfs     nolock  0 0

("master" is defined in /diskless/root/etc/hosts to be 192.168.113.202, which is fb's IP)

and /diskless/root/etc/resolv.conf becomes:

search martian

nameserver 192.168.113.104 #Chiara

 

 

 

  10584   Wed Oct 8 08:46:57 2014 SteveUpdateVACVAT valves actuator lubricant

Quote:

 Pump  spool valves V5, V4, V3 sweating a lot. VM3 and VC2 not so much.

They are VAT valves F28-62887-03, 11, 14 and so on ~15-16 years old.

 I'm speculating that some plastic is aging/breaking down at the atmospheric-pneumatic side of the valves.
The vacuum side is not affected, according to vacuum pressure readings.

Maybe some condensation from the small turbos? No

I'm looking for an identical valve to examine, but I can not find one.

We are using industrial grade 99.96% Nitrogen to actuate these valves.

Valves not affected are dry: VA6, V6, V7 and all annuli.

 

VAT's answer:

Yes, our engineers are aware of this issue.  They say:

The pneumatic actuator needs lubricant as the O-ring (Viton) slides in the cylinder. Without grease the O-ring would be abraded and leaking after only a relatively few cycles.  The lubricant used in our pneumatic actuators is an emulsion of oil and Teflon flakes.   Vibration, many cycles and sometimes high temperature lead to the separation of the oil and Teflon.   That is apparently the issue you are seeing.

VAT is and has been testing and qualifying new lubricants, and this is one of the factors we are always looking to improve.  The formula we used 15 years ago in these valves seems to have performed reasonably well.  Our formula today should perform even better.

We realize this explanation does not help you with these existing valves, but 15 years of service is not too bad is it? 

Steve - NOTE: the bonnet seal is metal, so there is no way this oil can get into our vacuum (only if the bellows leaks)

  10583   Wed Oct 8 03:49:42 2014 JenneUpdateLSCPRFPMI, other sign of CARM offset

Other thoughts from talking with Rana earlier:

  • Is it possible to suppress CARM motion enough that we can use just a digital loop?  Can we do without the AO path?  What would said digital loop have to look like?
  • Q points out that there is a zero in the relative transfer function between CARM to transmission, and CARM to REFLDC.  Is that zero invertible?
  • We should look at some limits, like saturation limits.  How much will we need to actuate?
  • Rana is looking at making a more detailed CARM loop model in simulink to see if we can stay stable throughout our CARM offset reduction journey.

Also, Q and I squished on the suspension connectors earlier tonight.  MC2 was going wonky, which we feared might be because we were in that area working on Chiara earlier.  Then, after squishing the MC connectors, the PRM started misbehaving, so we went and gave all the corner suspension connectors another squish.  No suspension glitching problems since then.

  10582   Wed Oct 8 03:37:44 2014 ericqUpdateLSCPRFPMI, other sign of CARM offset

 [ericq, Jenne]

We attempted some of the same old CARM offset reduction tonight, but from the other direction. (We have no direct knowledge of which is the spring and which is the anti-spring side)

We were able to get to, and sit at, arm powers on the order of 5. Really, we kind of wanted just to push things to try and inform our current ideas of what our limiting factor is, so as to appropriately expend our efforts. 

Candidates include:

  • ALS noise causing excess DARM motion
    • Means we need to DRMI to widen DARM linewidth, avoid sign flip in AS55, IR lock DARM sooner
  • Intolerable sensor noise makes CARM wander too much, changing our plant more than our loops can handle
    • We should work on having live calibrated CARM spectra during lock attempts, to compare with Jenne's noise estimates, and see where/how/why we exceed it. 
  • detuned CARM pole causes loop instability
    • Maybe some sort of notching can get us by
    • AO path could extend bandwidth, getting the pole into the control band 
  • SqrtInv signals losing low frequency sensitivity due to radiation pressure, or DC sensitivity due to transmission curve flattening out
    • Bring in AO path for supplementary bandwidth, which lets us turn up loop gain / engage big boosts
    • Or, switch to REFLDC in digital land, which is nontrivial, due to different optical plant shapes.

We took many digital CARM OLTFs at different offsets; it never really looked like a burgeoning pole was about to make things unstable. The low frequency OLTF data had bad SNR, so it wasn't clear if we were losing gain there. We weren't at arm powers where we would expect the DC transmission curve to flatten out yet, from simulations (which is above a few tens).

My impression from at least our last lock loss was a DARM excursion. However, using the DRMI won't get rid of the latter two points.

 

  10581   Wed Oct 8 03:20:46 2014 JenneUpdateLSCDo we need AO for acquisition?

As part of trying to determine whether we require the AO path for lock acquisition, or if we can survive on just digital loops, I looked at the noise suppression that we can get with a digital loop.

I took a spectrum of POX, and calibrated it using a line driving ETMX to match the ALSX_FINE_PHASE_OUT_HZ channel, and then I converted green Hz to meters. 

I then undid the LSC loop that was engaged at the time (XARM FMs 1,2,3,4,5,8 and the pendulum plant), to infer the free running arm motion. 

I also applied the ALS filters (CARM FMs 1,2,3,5,6) and the pendulum plant to the free running noise to infer what we expect we could do with the current digital CARM filters assuming we were not sensor noise limited.
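The loop accounting above is just multiplying and dividing by |1 + G|. A toy sketch (the loop shape and spectrum values here are illustrative, not the actual LSC/ALS filter chains or the calibrated POX data):

```python
import numpy as np

# Sketch of the loop accounting: infer free-running motion from an in-loop
# spectrum by multiplying by |1 + G|, then predict the residual under a
# candidate loop by dividing by |1 + G_new|.  G here is a toy 1/f^2 loop
# with a 150 Hz UGF, not the actual filter chain.
f = np.logspace(0, 3, 500)                  # Hz
G = (150.0 / f)**2 * np.exp(-1j * 0.1)      # toy open-loop gain, 150 Hz UGF
S_inloop = 4e-13 * np.ones_like(f)          # toy in-loop spectrum [m/rtHz]
S_free = S_inloop * np.abs(1 + G)           # inferred free-running motion
S_pred = S_free / np.abs(1 + G)             # residual under a candidate loop
```

With G_new set equal to G this just reproduces the measured spectrum; the interesting case is substituting the candidate CARM filters for G_new, which is what the figure below does.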

In the figure, we see that the free running arm displacement is inferred to be about 0.4 micrometers RMS.  The in-loop POX signal is 0.4 picometers RMS, which (although it's in-loop, so we're not really that quiet) is already better than 1/10th the coupled cavity linewidth.  Also, the CARM filters that we use for the ALS lock, and also the sqrtInvTrans lock are able to get us down to about 1 pm RMS, although that is not including sensor noise issues. 

EstimatedNoisePerformance.png

For reference, here are the open loop gains for the LSC filters+pendulum and ALS filters+pendulum that we're currently using.  The overall gain of these loops have been set so the UGF is 150Hz.

 BodeLSCvsALS.png

It seems to me that as long as our sensors are good enough, we should be able to keep the arm motion down to less than 1/10th or 1/20th the coupled cavity linewidth with only the digital system.  So, we should think about working on that rather than focusing on engaging the AO path for a while.

Attachment 3: CARMnoise_7Oct2014.zip
  10580   Tue Oct 7 19:40:58 2014 ericqUpdateLSCCM, REFL11 Wiring

I've changed the LSC rack wiring a little bit, to give us some flexibility when it comes to REFL11. 

Previously, the REFL11 demod I output was fed straight to the CM servo board, and the slow CM board output was hooked up to the REFL11 I ADC channel. Thus, it wasn't really practical to ever even look at sensing angles in REFL11, since the I and Q inputs were subject to different signal paths/gains. (Also, doing LSC offsets would do wonky things to REFL11 depending on the state of the switches on the CM board screen.)

Thus, I've hooked up the CM board slow output into the previously existing, aptly named, CM_SLOW channel. The REFL11 demod board I output is split to IN1 of the CM board and the REFL11 I ADC channel. 

So, there is no longer hidden behavior behind the REFL11 input filters, channels are what they claim to be, and the CM board output is just as easily accessible to the LSC filters as before. 

  10579   Tue Oct 7 16:55:16 2014 SteveUpdateVACUnexpected sweaty valves

 Pump  spool valves V5, V4, V3 sweating a lot. VM3 and VC2 not so much.

They are VAT valves F28-62887-03, 11, 14 and so on ~15-16 years old.

 I'm speculating that some plastic is aging/breaking down at the atmospheric-pneumatic side of the valves.
The vacuum side is not affected, according to vacuum pressure readings.

Maybe some condensation from the small turbos? No

I'm looking for an identical valve to examine, but I can not find one.

We are using industrial grade 99.96% Nitrogen to actuate these valves.

Valves not affected are dry: VA6, V6, V7 and all annuli.

 

Attachment 1: sweatyV5.jpg
sweatyV5.jpg
Attachment 2: sweatyV5f.jpg
sweatyV5f.jpg
  10578   Tue Oct 7 16:41:12 2014 JenneUpdateGeneralChiara not responding

I put a little script into ...../scripts/Admin that will check the fullness of Chiara's disk.  We only have the mailx program installed on Nodus, so for now it runs on Nodus and sends an email when the chiara disk that nodus mounts is more than 97% full.
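A minimal sketch of such a check (hypothetical: the disk path, recipient address, and script shape are assumptions; only the 97% threshold and the use of mailx come from the entry above):

```shell
#!/bin/bash
# Hypothetical sketch of a disk-fullness check run on nodus.
# DISK and the recipient address are assumptions; threshold matches the elog.
DISK=${DISK:-/}                # on nodus this would be the mounted chiara disk
THRESHOLD=97
# -P forces the portable single-line format so column 5 is always Use%
USAGE=$(df -P "$DISK" | awk 'NR==2 {gsub("%",""); print $5}')
if [ "$USAGE" -gt "$THRESHOLD" ]; then
    echo "chiara disk is ${USAGE}% full" | \
        mailx -s "chiara disk warning" controls@example.com || \
        echo "warning: mailx failed" >&2
fi
```

Dropped into a cron entry, this gives exactly the "email when over 97%" behavior described.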

  10577   Tue Oct 7 16:19:50 2014 ericqUpdateGeneralChiara not responding

We're back! It was entirely my fault.

Some months ago I wrote a script that chiara calls every night, that rsyncs its hard drive to an external drive. With the power outage yesterday, the external drive didn't automatically mount, and thus chiara tried to rsync its disk to the mount point, which was at the time just a local folder, which made it go splat. 

I'm fixing the backup script to only run if the destination of the rsync job is not a local volume. 
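A sketch of that guard (the destination path and rsync source here are hypothetical; the real script's paths are not given in the entry):

```shell
#!/bin/bash
# Hypothetical sketch of the fixed backup guard: only rsync if the destination
# is actually a mounted volume, not a bare directory on the local disk.
DEST=${DEST:-/media/backup}    # assumed mount point of the external drive
if mountpoint -q "$DEST"; then
    rsync -a --delete /home/cds/ "$DEST"/
else
    echo "$DEST is not a mounted volume; skipping backup" >&2
fi
```

The `mountpoint -q` test is what prevents a repeat of the failure mode described: if the external drive didn't mount, the rsync never runs, so nothing gets written to the bare mount-point directory on the local disk.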
