Here are a couple of preliminary plots of the noise from a 20-minute stretch of data - the new curve is the orange one, labelled 'sensing', which is the spectrum of the PIT/YAW error signal from the HeNe beam single-bounced off a single steering mirror onto the QPD, normalized to account for the difference in QPD sum. The peaky features that were absent in the dark noise are present here.
I am a bit confused about the total sum though - there is ~2.5mW of light incident on the PD, and the transimpedance gain is 10.7kohm. So I would expect 2.5e-3 W * 0.4A/W * 10.7 kV/A ~ 10.7V over 4 quadrants. The ADC is 16 bit and has a range of +/- 10V, so 10.7 V should be ~35,000 cts. But the observed QPD sum is ~14,000 counts. The reflected power was measured to be ~250uW, so ~10% of the total input power. Not sure if this is factored into the photodiode efficiency value of 0.4A/W. I guess there is some fraction of the QPD that doesn't generate any photocurrent (i.e. the grooves defining the quadrants), but is it reasonable that when the Oplev beam is well centered, ~50% of the power is not measured? I couldn't find any sneaky digital gains between the quadrant channels and the sum channel either... But in the Oplev setup, the QPD had ~250uW of power incident on it, and was reporting a sum of ~13,000 counts with a transimpedance gain of 100kohm, so at least the scaling seems to hold...
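For reference, here is the same sanity check as a few lines of python (just restating the numbers above; the counts-per-volt figure assumes the full 2^16 counts span the +/-10 V range):

P_in   = 2.5e-3           # power incident on the QPD [W]
resp   = 0.4              # Si responsivity at 632 nm [A/W]
Z      = 10.7e3           # transimpedance gain [V/A]
V_sum  = P_in * resp * Z  # expected voltage summed over the 4 quadrants, ~10.7 V
cts_per_V = 2**16 / 20.0  # 16-bit ADC spanning +/-10 V, ~3276.8 cts/V
print(V_sum, V_sum * cts_per_V)  # ~10.7 V, ~35,000 cts (vs ~14,000 cts observed)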
I guess we want to monitor this over a few days, see how stationary the noise profile is, etc. I didn't look at the spectrum of the intensity noise during this time.
Here are some pics of the setup: https://photos.app.goo.gl/DHMINAV7aVgayYcf1. None of the existing Oplev input/output steering optics were touched. Steve can make modifications as necessary, perhaps we can make similar mods to the SRM Oplev QPD and the BS one to run the HeNe test for a few days...
I've set up a test setup on the ITMY Oplev table. Details + pics to follow, but for now, be aware that
too complex; just shoot straight from the HeNe to the QPD. We lower the gain of the QPD by changing the resistors; there's no sane reason to keep the existing 100k resistors for a 2 mW beam. The specular reflection of the QPD must be dumped on a black glass V dump (not some flimsy anodized aluminum or dirty razor stack)
I've attempted to visualize the various components of the cost function as I've defined it for the current iteration of the Oplev optimal control loop design code. For each term in the cost function, the cost depends on the ratio of the abscissa value to some threshold value (set by hand for now) - if this ratio is >1, the cost is the logarithm of the ratio, whereas if the ratio is <1, the cost is the square of the ratio. Continuity is enforced at the point where this transition happens. I've plotted the cost function for some of the terms entering the code right now - indicated in dashed red lines are the approximate values of each of these costs for our current Oplev loop. The weights were chosen so that each of the costs is O(10) for the current controller, and the idea was that the optimizer could drive these down to hopefully O(1), but I've not yet gotten that to happen.
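As a minimal sketch of the piecewise cost described above (the +1 offset on the logarithmic branch is just one way to enforce continuity at the threshold; the function and variable names are placeholders, not the actual code):

import numpy as np

def term_cost(value, threshold):
    # ratio < 1: quadratic penalty; ratio > 1: logarithmic penalty,
    # with a +1 offset so the two branches meet at ratio = 1
    r = np.abs(value) / threshold
    return r**2 if r < 1 else 1.0 + np.log(r)

def total_cost(values, thresholds, weights):
    # weighted sum of the individual terms in the cost function
    return sum(w * term_cost(v, t) for v, t, w in zip(values, thresholds, weights))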
Based on the meeting yesterday, some possible ideas:
You may want to consult with the cryo Q people (Brittany, Aaron) for a Si QPD. If you want the same QPD architecture, I can look at my QPD circuit stock.
What is the best way to set this test up?
I think we need a QPD to monitor the spot rather than a single element PD, to answer this question about the sensor noise. Ideally, we want to shoot the HeNe beam straight at the QPD - but at the very least, we need a lens to size the beam down to the same size as we have for the return beam on the Oplevs. Then there is the power - Steve tells me we should expect ~2mW at the output of these HeNes. Assuming 100kohm transimpedance gain for each quadrant and Si responsivity of 0.4A/W at 632nm, this corresponds to 10V (ADC limit) for 250uW of power - so it would seem that we need to add some attenuating optics in the way.
Also, does anyone know of spare QPDs we can use for this test? We considered temporarily borrowing one of the vertex OL QPDs (mark out its current location on the optics table, and move it over to the SP table), but decided against it as the cabling arrangement would be too complicated. I'd like to use the same DAQ electronics to acquire the data from this test as that would give us the most direct estimate of the sensor noise for supposedly no motion of the spot, although by adding 3 optics between the HeNe and the QPD, we are introducing possible additional jitter couplings...
For the OL NB, we probably don't have to fudge any seismic noise, since that's a thing we want to suppress. More important is "what the noise would be if the suspended mirrors were not moving w.r.t. inertial space".
For that, we need to look at the data from the OL test setup that Steve is putting on the SP table.
For the insulation, I have decided to use this one (Buna-N/PVC Foam Insulation Sheets). We will need 3 of the 1 inch plain backing ones (9349K4) to wrap a few layers around it. I'll try two layers for now, since the insulation seems to be doing quite well according to initial testing.
Updated some values, most importantly, the k-factor. I had assumed that it was in the correct units already, but when converting it to 0.046 W/(m^2*K) from 0.26 BTU/(h*ft^2*F), I got the following plot. The time constant is still a bit larger than what we'd expect, but it's much better with these adjustments.
For our next steps, I will measure the time constant of the heater without any insulation and then decide how many layers of it we will need. I'll need to construct and calibrate a temperature sensor like the ones I've made before and use it to record the values more accurately.
I performed a test with the can last week with one layer of insulation to see how well it worked. First, I soldered two heaters together in series so that the total resistance was 48.6 ohms. I placed the heaters on the sides of the can and secured them. Then I wrapped the sides and top of the can in insulation and sealed the edges with tape, only leaving the handles open. I didn't insulate the bottom. I connected the two ends of the heater directly into the DC source and drove the current as high as possible (around 0.6A). I let the can heat up to a final value of 37.5C, turned off the current and manually measured the temperature, recording the time every half degree. I then plotted the results, along with a fit. The intersection of the red line with the data marks the time constant and the temperature at which we get the time constant. This came out to be about 1.6 hours, much longer than expected considering that only one layer instead of four was used. With only one layer, we would expect the time constant to be about 13 min, while for 4 layers it should be 53 min (the area A is 0.74 m^2 and not 2 m^2).
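For completeness, a minimal version of the exponential fit used to extract the time constant (the data arrays below are placeholders standing in for the manually recorded cooldown points, and the 24C room temperature is the value assumed in the model entry):

import numpy as np
from scipy.optimize import curve_fit

def cooling(t, T_room, dT, tau):
    # Newton-cooling model: exponential decay towards room temperature
    return T_room + dT * np.exp(-t / tau)

# placeholder arrays standing in for the manually recorded cooldown data
t_data = np.arange(0, 6 * 3600, 600.0)            # [s]
T_data = 24.0 + 13.5 * np.exp(-t_data / 5800.0)   # [deg C]

popt, pcov = curve_fit(cooling, t_data, T_data, p0=[24.0, 13.5, 3600.0])
print("time constant = %.2f hr" % (popt[2] / 3600.0))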
I made a model for our seismometer can using actual data so that we know approximately what the time constant should be when we test it out. I used the appendix in Megan Kelley's report to make a relation for the temperature in terms of time.
In our case, we will heat the can to a certain temperature and wait for it to cool on its own, so the cooling is set by the heat leaking out through the insulation.
We know that Q = k*A*dT/d, where k is the k-factor of the insulation we are using, A is the area of the surface through which heat is flowing, dT is the change in temperature across the insulation, and d is the thickness of the insulation.
This heat is drawn from the thermal energy of the can, E = m*c*T. We can take the derivative of this to get m*c*T'(t) = -(k*A/d)*(T(t) - T_room).
We can guess the solution to be T(t) = C1 + C2*exp(-t/tau),
where tau is the time constant, which we would like to find.
The boundary conditions are T(0) = 40 C and T(t -> infinity) = 24 C. I assumed we would heat up the can to 40 Celsius while the room temp is about 24. Plugging this into our equations, C1 = 24 C and C2 = 16 C.
We can plug everything back into the derivative T'(t) = -(C2/tau)*exp(-t/tau) and into the cooling equation above.
Equating the exponential terms on both sides, we can solve for tau: tau = m*c*d/(k*A).
Plugging in the values that we have, m = 12.2 kg, c = 500 J/(kg*K) (stainless steel), d = 0.1 m, k = 0.26 W/(m^2*K), A = 2 m^2, we get that the time constant is 0.326 hr. I have attached the plot that I made using these values. I would expect to see something similar to this when I actually do the test.
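The same numbers in a few lines of python, for anyone who wants to tweak the parameters:

import numpy as np

m, c = 12.2, 500.0         # mass [kg] and specific heat [J/(kg*K)] of the can
d, k, A = 0.1, 0.26, 2.0   # insulation thickness [m], k-factor, area [m^2]
tau = m * c * d / (k * A)  # time constant [s]
print(tau / 3600.0)        # ~0.326 hr

t = np.linspace(0, 5 * tau, 500)
T = 24.0 + (40.0 - 24.0) * np.exp(-t / tau)   # cooling curve from 40 C to 24 C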
To set up the experiment, I removed the can (with Steve's help) and will place a few heating pads on the outside and wrap the whole thing in a few layers of insulation to make the total thickness 0.1m. Then, we will attach the heaters to a DC source and heat the can up to 40 Celsius. We will wait for it to cool on its own and monitor the temperature to create a plot and find the experimental time constant. Later, we can use the heating circuit we used for the PSL lab and modify the parts as needed to drive a few amps through the circuit. I calculated that we'd need about 6A to get the can to 50 Celsius using the setup we used previously, but we could drive a smaller current by using a higher heater resistance.
Confirmed that this crontab is running - the daily backup of the crontab seems to have successfully executed, and there is now a file crontab_nodus.ligo.caltech.edu.20171122080001 in the directory quoted below. The $HOSTNAME seems to be "nodus.ligo.caltech.edu" whereas it was just "nodus", so the file names are a bit longer now, but I guess that's fine...
I restored the nodus crontab (copied over from the Nov 17 backup of the same at /opt/rtcds/caltech/c1/scripts/crontab/crontab_nodus.20171117080001). There wasn't a crontab, so I made one using sudo crontab -e.
This crontab is supposed to execute some backup scripts, send pizza emails, check chiara disk usage, and backup the crontab itself.
I've commented out the backup of nodus' /etc and /export for now, while we get back to a fully operational nodus (though we also have a backup of /cvs/cds/caltech/nodus_backup on the external LaCie drive); they can be re-enabled by un-commenting the appropriate lines in the crontab.
I got the SuperMicro 1U server box from Larry W on Monday and set it up in the CryoLab for initial testing.
The specs: https://www.supermicro.com/products/system/1U/5015/SYS-5015A-EHF-D525.cfm
The processor is an Intel D525 dual core atom processor with 1.8 GHz (i386 architecture, no 64-bit support). The unit has a 250GB SSD and 4GB RAM.
I installed Debian Jessie on it without any problems and compiled the most recent stable versions of EPICS base (3.15.5), asyn drivers (4-32), and modbus module (2-10-1). EPICS and asyn each took about 10 minutes, and modbus about 1 minute.
I copied the database files and port driver definitions for the cryolab from cryoaux, whose modbus services I suspended, and initialized the EPICS modbus IOC on the SuperMicro machine instead. It's working flawlessly so far, but admittedly the box is not under heavy load in the cryolab, as the framebuilder there is logging only the 16 analog channels.
I have recently worked out some kinks in the port driver and channel definitions, most importantly:
Aaron and I set 12/4 as a tentative date when we will be ready to attempt a swap. Until then the cabling needs to be finished and a channel database file needs to be prepared.
The post OS migration admin for nodus about apache, elogd, svn, iptables, etc. can be found in https://wiki-40m.ligo.caltech.edu/NodusUpgradeNov2017
Update: The svn dump from the old svn was done, and it was imported to the new svn repository structure. Now the svn command line and (simple) web interface are running. "websvn" is not installed.
Per our discussions in the meetings over the last week, I've tried to put together a simple Oplev noise budget. The only two terms in this for now are the dark noise and a model for the seismic noise, and are plotted together with the measured open-loop error signal spectra.
Update: The svn dump from the old svn was done, and it was imported to the new svn repository structure. Now the svn command line and (simple) web interface are running. And "websvn" was also installed.
The numbers I have from the fitting don't agree very well with the OSEM readouts. Attachment #1 shows the Oplev pitch and yaw channels, and also the OSEM ones, while I swept the ASC_PIT offset. The output matrix is the "naive" one of (+1,+1,-1,-1). SUSPIT_IN1 reports ~30urad of motion, while SUSYAW_IN1 reports ~10urad of motion.
From the fits, the BS calibration factors were ~x8 for pitch and x12 for yaw - so according to the Oplev channels, the applied sweep was ~80urad in pitch, and ~7urad in yaw.
Seems like either (i) neither the Oplev channels nor the OSEMs are well diagonalized and that their calibration is off by a factor of ~3 or (ii) there is some significant imbalance in the actuator gains of the BS coils...
Need to double check against OSEM readout during the sweep.
I calibrated the BS oplev PIT and YAW error signals as follows:
The numbers are:
BS Pitch 15 / 130 (old/new) urad/counts
BS Yaw 14 / 170 (old/new) urad/counts
I bet the calibration is out of date; probably we replaced the OL laser for the BS and didn't fix the cal numbers. You can use the fringe contrast of the simple Michelson to calibrate the OLs for the ITMs and BS.
I noticed yesterday evening that I wasn't able to engage the single arm locking servos - turned out that they weren't getting triggered, which in turn pointed me to the fact that the arm transmission channels seemed dead. Poking around a little, I found that there was a red light on the CDS overview screen for c1rfm.
Not sure how to debug further...
* Fix seems to be to restart the sender RFM models (c1scx, c1scy, c1asx, c1asy).
Exactly: you'll have to list explicitly what functions those channels had so that we know what we're losing before we make the switch.
I disabled the OL loops for ITMX, ITMY and BS at GPStime 1194897655 to come up with an Oplev noise budget. OL spots were reasonably well centered - by that, I mean that the PIT/YAW error signals were less than 20urad in absolute value.
Attachment #1 is a first look at the DTT spectra - I wonder why the BS Oplev signals don't agree with the ITMs at ~1Hz? Perhaps the calibration factor is off? The sensing noise is not really flat above 100Hz - I wonder what all those peaky features are. Recall that the ITM OLs have analog whitening filters before the ADC, but the BS doesn't...
In Attachment #2, I show a comparison of the error signal spectra for ITMY and SRM - they're on the same stack, but the SRM channels don't have analog whitening before the ADC.
For some reason, DTT won't let me save plots with latex in the axes labels...
I've incorporated the functionality to generate sub-budgets for the various grouped traces in the NBs (e.g. the "A2L" trace is really the quadrature sum of the A2L coupling from 6 different angular servos).
For now, I'm only doing this for the A2L coupling, and the AUX length loop coupling groups. But I've set up the machinery in such a way that doing so for more groups is easy.
Here are the sub-budget plots for last night's lock - for the OL plot, there are only 3 lines (instead of 6) because I group the PIT and YAW contributions in the function that pulls the data from the nds server, and don't ever store these data series individually. This should be rectified, because part of the point of making these sub-budgets is to see if there is a particularly bad offender in a given group.
I'll do a quick OL loop noise budget for the ITM loops tomorrow.
I also wonder if it is necessary to measure the Oplev A2L coupling from lock to lock? This coupling will be dependent on the spot position on the optic, and though I run the dither alignment servos to minimize REFL_DC, AS_DC, I don't have any intuition for how the offset from the center of the optic varies from lock to lock, and if this is at all significant. I've been using a number from a measurement made in May. Need to do some algebra...
If you go through this thread of elogs, there are lots of pictures of the SOS assembly with the optic in it from the vent last year. I think there are many different perspectives, close ups of the standoffs, and of the OSEMs in their holders in that thread.
This elog has a measurement of the pendulum resonance frequencies with ruby standoffs - although the ruby standoff used was cylindrical, and the newer generation will be in the shape of a prism. There is also a link in there to a document that tells you how to calculate the suspension resonance frequencies using analytic equations.
Pianosa just crashed and ate my elog, along with all the DTT/Foton windows I had open, so more details tomorrow... This workstation has been crashing ~once a month for the last 6 months.
Below ~100Hz, the hypothesis is that the BS oplev A2L contribution dominates the MICH displacement noise. I wanted to see if I could mitigate this by modifying the BS Oplev loop shape.
I've been banging my head against optimal loop shaping, with the OL loop as a test-case, without much success - as was the case with coating PSO, the magic is in smartly defining the cost function, but right now, my optimizer seems to be pushing most of the roots I'm making available for it to place to high frequencies. I've got a term in there that is supposed to guard against this, need to tweak further...
Attachment #2: Eye-fits of measured OL A2L coupling TFs to a 1/f^2 shape, with the gain being the parameter "fitted". I used these values, and the DQ-ed OL error signal in lock, to estimate the red curve labelled "A2L" in Attachment #1. The dots are the measurement, and the lines are the 1/f^2 estimates.
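For reference, a least-squares version of the same one-parameter gain estimate (freqs / tf_meas are placeholders standing in for the measured TF data):

import numpy as np

# placeholder arrays standing in for the measured A2L coupling TF
freqs   = np.logspace(0, 2, 50)   # [Hz]
tf_meas = 3.0e-3 / freqs**2       # measured |TF| magnitude would go here

# least-squares (in log space) estimate of the gain g in |TF(f)| = g / f^2
g = np.exp(np.mean(np.log(np.abs(tf_meas) * freqs**2)))
tf_model = g / freqs**2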
I characterized the black Ithaca 1201 pre-amp that we had sitting in the racks. It works fine and the input referred noise is < 10 nV/rHz. I also checked that the filter selection switches on the front panel did what they claim and that the gain knob gives us the correct gain.
For comparison I have also included the G=100, 1000 input referred noise of one of the best SR560 that we have (s/n 02763) in the lab. Above a few Hz, the SR560 is better, but for low frequency measurements it seems that the 1201 is our friend.
As with the SR560, you don't actually get low noise performance for G < 100, due to some fixed output noise level.
Steve: sn48332 of Ithaca 1201
PSL shutter closed at 6e-6 Torr-it
The foreline pressure of the drypump is 850 mTorr at 8,446 hrs of seal life
V1 will be closed for ~20 minutes for drypump replacement..........
9:30am dry pump replaced, PSL shutter opened at 7.7E-6 Torr-it
Valve configuration: vacuum normal as TP3 is the forepump of the Maglev & annuluses are not pumped.
TP3 drypump replaced at 655 mTorr, no load, tp3 0.3A
This seal lasted only for 33 days at 123,840 hrs
The replacement is performing well: TP3 foreline pressure is 55 mTorr, no load, tp3 0.15A at 15 min [ 13.1 mTorr at d5 ]
Valve configuration: Vacuum Normal, ITcc 8.5E-6 Torr
Dry pump of TP3 replaced after 9.5 months of operation.[ 45 mTorr d3 ]
The annuluses are pumped.
Valve configuration: vac normal, IFO pressure 4.5E-5 Torr [1.6E-5 Torr d3 ] on new ITcc gauge, RGA is not installed yet.
Note how fast the pressure is dropping when the vent is short.
IFO pressure 1.7E-4 Torr on new not logged cold cathode gauge. P1 <7E-4 Torr
Valve configuration: vac normal with annuluses closed off.
TP3 was turned off with a failing drypump. It will be replaced tomorrow.
All time stamps are blank on the MEDM screens.
We probably want to get a dedicated machine that will handle the EPICS channel serving for the Acromag system
This is the machine that Larry suggested when I asked him for his opinion on a low workload rack-mount unit. It only has an Atom processor, but I don't think it needs anything particularly powerful under the hood. He said that he will likely be able to let us borrow one of his for a couple days to see if it's up to the task. The dual ethernet is a nice touch, maybe we can keep the communication between the server and the DAQ units on their separate local network.
Jamie pointed out that the compile and install instructions are different for c1dnn:
See also: https://nodus.ligo.caltech.edu:8081/40m/13383.
I think these build instructions have to be run on the c1lsc frontend - in the past, I have been able to compile and install models on any computer with the shared drive mounted (including the control room workstations), but I'm guessing that something has changed since the RCG upgrade. Jamie can correct me on this if I'm wrong.
I wanted to use the foton.py utility for my NB tool, and I remember Chris telling me that it was shipping as standard with the newer versions of gds. It wasn't available in the versions of gds available on our workstations - the default version is 2.15.1. So I downloaded gds-2.17.15 from http://software.ligo.org/lscsoft/source/, and installed it to /ligo/apps/linux-x86_64/gds-2.17.15/gds-2.17.15. In it, there is a file at GUI/foton/foton.py.in - this is the one I needed.
Turns out this was more complicated than I expected. Building the newer version of gds throws up a bunch of compilation errors. Chris had pointed me to some pre-built binaries for ubuntu12 on the llo cds wiki, but those versions of gds do not have foton.py. I am dropping this for now.
Gautam and I measured the noise of the ADC for channels 17, 18, and 19. We plan to use those channels for measuring the noise of the temperature sensors, and we need to figure out whether or not we will need whitening and if so, how much. The figure below shows the actual measurements (red, green and blue lines), and a rough fit. I used Gautam's elog here and used the same fitting function (with units of nV/sqrt(Hz)) to fit our results; I used a = 1, b = 1e6, c = 2000. Since we are interested in measuring at lower frequencies, we must whiten the signal from the temperature sensors enough to have the ADC noise be negligible.
We want to be able to measure to a certain temperature accuracy at 1 Hz, which translates to a corresponding current accuracy from the AD590 (it outputs 1 uA/K). Since we have a 10K resistor and V = IR, the voltage accuracy we want to measure follows from the product I*R. We would need whitening at lower frequencies to see such fluctuations.
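As an illustration of the scale of the signal we need to resolve (the 1 mK/rtHz target here is just an assumed number for the example; the AD590 scale factor and the 10K readout resistor are as above):

dT_target = 1e-3    # assumed temperature resolution target [K/rtHz]
scale     = 1e-6    # AD590 output scale factor [A/K]
R         = 10e3    # readout resistor [ohm]
dV = dT_target * scale * R
print(dV)           # ~1e-8 V/rtHz, i.e. ~10 nV/rtHz, far below the ADC noise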
To do the measurements, we put a BNC end cap on the channels we wanted to measure, then took measurements from 0-900Hz with a bandwidth of 0.001Hz. This setup is shown in the last two attachments. We used the ADC in 1X7.
There hasn't been a big glitch that misaligns MC1 so much that the autolocker can't lock for at least 3 months; it seems there was one ~an hour ago.
I disabled autolocker and feedback to the PSL, manually aligned MC1 till the MC_REFL spot looked right on the CCD to me, and then re-engaged the autolocker, all seems to have gone smoothly.
We've been talking about increasing the series resistance for the coil driver path for the test masses. One consequence of this will be that we have reduced actuation range.
This may not be a big deal since for almost all of the LSC loops, we currently operate with a limiter on the output of the control filter bank. The value of the limit varies, but to get an idea of what sort of "threshold" velocities we are looking at, I calculated this for our Finesse 400 arm cavities. The calculation is rather simplistic (see Attachment #1), but I think we can still draw some useful conclusions from it:
So, from this rough calculation, it seems like we would lose ~25% efficiency in locking the arm cavity if we up the series resistance from 400ohm to 1kohm. Doesn't seem like a big deal, because currently, the single arm locking
The Oplev trace is missing for now, as I have not re-measured the A2L coupling since modifying the Oplev loop shape (specifically the low pass filter and overall gain) to allow engaging the coil de-whitening.
The averaging for the white noise TFs plotted is computed using median averaging - I have used a python transcription of Sujan's matlab code. I use scipy.signal.spectrogram to compute the fft bins (I've set some defaults like 8s fft length and a tukey window), and then take the median average using np.median(). I've also incorporated the ln(2) correction factor.
It seems like GwPy has some in-built capability to compute median (and indeed various other percentile) averages, but since we aren't using it, I just coded this up.
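For reference, the core of the median-averaged spectrum estimate looks roughly like this (the function name is just for illustration; the 8 s FFT length and Tukey window are the defaults mentioned above):

import numpy as np
from scipy import signal

def median_asd(x, fs, fftlen=8.0):
    # spectrogram with the defaults mentioned above (8 s segments, Tukey window)
    nperseg = int(fftlen * fs)
    f, t, Sxx = signal.spectrogram(x, fs=fs, window=('tukey', 0.25),
                                   nperseg=nperseg, noverlap=nperseg // 2,
                                   scaling='density', mode='psd')
    # median over the time segments; the ln(2) factor corrects the bias of the
    # median relative to the mean for exponentially distributed PSD bins
    Pxx = np.median(Sxx, axis=-1) / np.log(2)
    return f, np.sqrt(Pxx)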
why no oplev trace in the NB ?
also, this method would work better if we had a median averaging python PSD instead of mean averaging as in Welch's method.
Attachment #4 shows the noise budget from the October 8 DRMI lock with the updated SRCL->MICH and PRCL->MICH couplings (assumed flat, extrapolated from Attachment #2 in the 120-180Hz band). If these updated coupling numbers are to be believed, then there is still some unexplained noise around 100Hz before we hit the PD dark noise. To be investigated. But if Attachment #4 is to be believed, it is not surprising that there isn't significant coherence between SRCL/PRCL and MICH around 100Hz.
I tried measuring the coupling of PSL intensity noise by driving some broadband noise bandpassed between 80-300Hz using the spare DAC channel at 1Y3 that I had set up for this purpose a couple of weeks ago (via a battery powered SR560 buffer set to low-noise operation mode because I'm not sure if the DAC output can drive a ~20m long cable). I was monitoring the MC2 TRANS QPD Sum channel spectrum while driving this broadband noise - the "nominal" spectrum isn't very clean, there are a bunch of notches from a 60Hz comb and a forest of peaks over a broad hump from 300Hz-1kHz (see Attachment #1).
I was able to increase the drive to the AOM till the RIN in the band being driven increased by ~10x, and saw no change in the MICH error signal spectrum [see Attachment #1] - during this measurement, the RFPD whitening was turned on for REFL11, REFL55 and AS55, and the ITM coil drivers were de-whitened, so as to get a MICH spectrum that is about as "low-noise" as I've gotten it so far.
I tried increasing the drive further, but at this point, started seeing frequent MC locklosses - I'm not convinced this is entirely correlated to my AOM activities, so I will try some more, but at the very least, this places an upper bound on the coupling from intensity noise to MICH.
I hadn't re-locked the DRMI after the work on the AS55 demod board. Tonight, I was able to recover the DRMI locking with the old settings.
The feature in the PRCL spectrum (uncalibrated, y-axis is cts/rtHz) at ~1.6kHz is mysterious, I wonder what that's about.
Wasted some time tonight futzing around with various settings because I couldn't catch a DRMI lock, thinking I may have to re-tune demod phases etc given that I've been mucking around the LSC rack a fair bit. But fortunately, the problem turned out to be that the correct feedforward filters were not enabled in the angular feedforward path (seems like these are not SDF monitored). Clue was that there was more angular motion of the POP spot on the CCD than I'm used to seeing, even in the PRMI carrier lock.
After fixing this, lock was acquired within seconds, and the locks are as robust as I remember them - I just broke one after ~20mins locked because I went into the lab. I've been putting off looking at this angular feedforward stuff and trying out some ideas rana suggested, seems like it can be really useful.
As part of the pre-lock work, I dither aligned the arms, and then ran the PRCL/MICH dithers as well, following which I re-centered the ITM, PRM and BS Oplev spots onto their respective QPDs - they have not been centered for a couple of months now.
I'm now going to try and measure some other couplings like PSL RIN->MICH, Marconi phase noise->MICH etc.
Some days ago, I had tried to measure the SRCL->MICH and PRCL->MICH cross couplings using broadband noise injected between 120-180 Hz, a frequency band chosen arbitrarily, in hindsight, I could have done a more broadband test. I've spent some time including the infrastructure to calculate "White-Noise TFs" in the noise budgeting code, where a transfer function is estimated by injecting a "broadband" excitation into a channel of interest, and looking at the resulting response in MICH. I figured this would be useful to estimate other couplings as well, e.g. laser intensity nosie, oscillator noise etc.
I estimate the transfer function of the coupling using the relation TF = sqrt( (MICH_inj^2 - MICH_quiet^2) / (PRCL_inj^2 - PRCL_quiet^2) ), where MICH is the median ASD of the MICH error signal (the subscripts denoting the injection and quiet stretches), and similarly for PRCL.
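In code, the estimate is computed along these lines (the asd_* arrays denote the median ASDs during the injection and quiet stretches; the names are just for illustration):

import numpy as np

def whitenoise_tf(asd_mich_inj, asd_mich_quiet, asd_prcl_inj, asd_prcl_quiet):
    # quadrature-subtract the quiet-time level from the injection-time level,
    # then take the ratio of the excess MICH noise to the excess PRCL noise
    num = np.sqrt(np.maximum(asd_mich_inj**2 - asd_mich_quiet**2, 0.0))
    den = np.sqrt(np.maximum(asd_prcl_inj**2 - asd_prcl_quiet**2, 0.0))
    return num / den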
Attachments #1 and #2 show the spectra of the MICH, PRCL and SRCL signals during 'quiet' times and during the injection, while Attachment #3 shows the calculated coupling TFs using the above relation. These are significantly different (more than 10dB lower) than the numbers I reported in elog 13367, where the measurement was made using swept sine. As can be seen in the attached plots, the injected broadband excitation is visible above the nominal noise level, and I calculated the white noise TFs using ~5mins of data which should be plenty, so I'm not sure atm what to make of the answers from swept-sine and broadband injections being so different.
Attachment #4 shows the noise budget from the October 8 DRMI lock with the updated SRCL->MICH and PRCL->MICH couplings (assumed flat, extrapolated from Attachment #2 in the 120-180Hz band). If these updated coupling numbers are to be believed, then there is still some unexplained noise around 100Hz before we hit the PD dark noise. To be investigated. But if Attachment #4 is to be believed, it is not surprising that there isn't significant coherence between SRCL/PRCL and MICH around 100Hz.
Nov 8 1600: Updating NB to include estimated Oplev A2L.
This seems to be limiting us from saturating the dark noise once the coil de-whitening is engaged. But lack of coherence means the mechanism is not re-injection of SRCL/PRCL sensing noise? Need to think about what this means / how we can mitigate it.
This is the current procedure to start the c1dnn model:
Then to shut down:
The daqd already knows about this model, so nothing should need to be done to the daqd to make the dnn channels available.
Eurocrate key-turning reboots this morning for c1susaux, c1auxex and c1auxey. Usual precautions were taken to minimize the risk of ITMX getting stuck.
The IFO hasn't been aligned in ~1week, so I recovered arm and PRM alignment by locking individual arms and also PRMI on carrier. I will try recovering DRMI locking in the evening.
As far as MC1 glitching is concerned, there hasn't been any major one (i.e. one in which MC1 is kicked by such a large amount that the autolocker can't lock the IMC) for the past 2 months - but the MC WFS offsets are an indication of when smaller glitches have taken place, and there were large DC offsets on the MC WFS servo outputs, which I offloaded to the DC MC suspension sliders using the MC WFS relief script.
I'd like for the save-restore routine that runs when the slow machines reboot to set the watchdog state default to OFF (currently, after a key-turning reboot, the watchdogs are enabled by default), but I'm not really sure how this whole system works. The relevant files seem to be in the directory /cvs/cds/caltech/target/c1susaux. There is a script in there called startup.cmd, which seems to be the initialization script that runs when the slow machine is rebooted. But looking at this file, it is not clear to me where the default values are loaded from? There are a few "saverestore" files in this directory as well:
Are the "default" channel values loaded from one of these?
Our new Agilent Technology TwisTorr 84FS AG rack controller (English Manual pages 195-297), RS232/485, product number X3508-64001, sn IT1737C383
This controller, turbo, and its drypump need to be set up in our existing vacuum system. The intake valve of this turbo (V4) has to have a hardwired interlock that closes V4 when the rotation speed is less than 20% of the preset RPM speed.
The unit has an analog 10Vdc output that is proportional to rotation speed. This can be used with a comparator to drive the interlock, or there may be a software option in the controller to close the valve if the turbo slows down by more than 20%.
The last Upgrade of the 40m Vacuum System (1/2/2000) describes our vacuum system: LIGO-T000054-00-R.
There, the LabVIEW / Metrabus controls were replaced by a VME processor and an EPICS interface.
We do not have schematics of the hardware wiring.
We need help with this.
Eurocrate key-turning reboots this morning for c1psl and c1aux. c1auxex and c1auxey are also down, but I didn't bother keying them for now. The PSL FSS slow loop is now active again (its inactivity was what prompted me to check the status of the slow machines).
Note that the EPICS channels for the PSL shutter are hosted on c1aux. But it looks like the slow machine became unresponsive at some point during the weekend, so plotting the trend data for the PSL shutter channel would have you believe that the PSL shutter was open all the time. But the MC_REFL DC channel tells a different story - it suggests that the PSL shutter was closed at ~4AM on Sunday, presumably by the vacuum interlock system. I wonder:
IFO pressure 1.2e-5 Torr at 9:30am
Valve configuration: Vacuum normal
Note: TP2, running at 75 krpm, 0.25A, 26C, has a loud high-pitched sound today. Its foreline pressure is 78 mTorr. Room temp 20C
Atm. 1, This was the vacuum condition this morning.
IFO P1 9.7 mTorr, V1 open, V4 was in the closed position, ~37C warm Maglev at normal 560Hz rotation speed with foreline pressure 3.9 Torr, because V4 closed 2 days ago when TP2 failed... see Atm. 3
The error message at the TP2 controller was: fault overtemp.
I did the following to restore IFO pumping: stopped pumping of the annuluses with TP3, and valves were configured so that TP3 can be the forepump of the Maglev.
closed VM1 to protect the RGA, closed the PSL shutter... see Gautam's entry,
aux fan on to cool down Maglev-TP1, room temp 20C,
aux drypump turned on and opened to the TP3 foreline to gain pumping speed,
closed PAN to isolate annulus pumping,
opened V7 to pump the Maglev foreline with TP3 running at 50 krpm; it took 10 minutes to reach P2 1 mTorr from 3.9 Torr,
aux drypump closed off at P2 1 mTorr, TP3 foreline pressure 362 mTorr... see Atm. 2
As we are running now:
IFO pressure 7e-6 Torr on the Hornet cold cathode gauge at 15:50. We have no IFO CC1 logging now. Annuluses are in the 3-5 mTorr range and are not pumped.
TP3 as the foreline pump of TP1 at 50 krpm, 0.24A, 24C; its drypump foreline pressure is 324 mTorr
V4 valve cable is disconnected.
I need help with wiring up the logging of the Hornet cold cathode gauge.
Udit Kahndelwal received 40m specific basic safety training on Friday, Oct. 27
Backed up all the wikis. They're in wiki_backups/*.tar.xz (because xz -9e gives better compression than gzip or bzip2)
Moved old user directories into /users/OLD/
None of the 3 dd backups I made were bootable - at boot, selecting the drive put me into grub rescue mode, which seemed to suggest that the /boot partition did not exist on the backed up disk, despite the fact that I was able to mount this partition on a booted computer. Perhaps for the same reason, but maybe not.
After going through various StackOverflow posts / blogs / other googling, I decided to try cloning the drives using ddrescue instead of dd.
This seems to have worked for nodus - I was able to boot to console on the machine called rosalba which was lying around under my desk. I deliberately did not have this machine connected to the martian network during the boot process for fear of some issues because of having multiple "nodus"-es on the network, so it complained a bit about starting the elog and other network related issues, but seems like we have a plug-and-play version of the nodus root filesystem now.
chiara and fb1 rootfs backups (made using ddrescue) are still not bootable - I'm working on it.
Nov 6 2017: I am now able to boot the chiara backup as well - although mysteriously, I cannot boot it from the machine called rosalba, but can boot it from ottavia. Anyways, seems like we have usable backups of the rootfs of nodus and chiara now. FB1 is still a no-go, working on it.
Looks to have worked this time around.
controls@fb1:~ 0$ sudo dd if=/dev/sda of=/dev/sdc bs=64K conv=noerror,sync
33554416+0 records in
33554416+0 records out
2199022206976 bytes (2.2 TB) copied, 55910.3 s, 39.3 MB/s
You have new mail in /var/mail/controls
I was able to mount all the partitions on the cloned disk. Will now try booting from this disk on the spare machine I am testing in the office area now. That'd be a "real" test of if this backup is useful in the event of a disk failure.