40m Log, Page 70 of 337
ID Date Author Type Category Subject
  13443 | Wed Nov 22 00:54:18 2017 | johannes | Omnistructure | Computers | Slow DAQ replacement computer progress

I got the SuperMicro 1U server box from Larry W on Monday and set it up in the CryoLab for initial testing.

The specs: https://www.supermicro.com/products/system/1U/5015/SYS-5015A-EHF-D525.cfm

The processor is an Intel Atom D525 dual-core at 1.8 GHz (i386 architecture, no 64-bit support). The unit has a 250GB SSD and 4GB RAM.

I installed Debian Jessie on it without any problems and compiled the most recent stable versions of EPICS base (3.15.5), asyn drivers (4-32), and modbus module (2-10-1). EPICS and asyn each took about 10 minutes, and modbus about 1 minute.

I copied the database files and port driver definitions for the cryolab from cryoaux, whose modbus services I suspended, and initialized the EPICS modbus IOC on the SuperMicro machine instead. It's working flawlessly so far, but admittedly the box is not under heavy load in the cryolab, as the framebuilder there is logging only the 16 analog channels.

I have recently worked out some kinks in the port driver and channel definitions, most importantly:

  • modbus IOC initialization is performed automatically by systemd on reboot (a sketch of such a unit file follows this list)
  • If the IOC crashes or a system reboot is required, the Acromag units freeze in their last state. When the IOC is started, a single read operation of all A/D registers is performed and the result is taken as the initial value of the corresponding channel, causing no discontinuity in generated voltage EVER (except of course for the rare case when the Acromags themselves have to be restarted)
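
For reference, the systemd piece is just a small unit file. A minimal sketch (the service description, paths, and user below are placeholders for illustration, not the actual cryolab configuration):

[Unit]
Description=EPICS modbus IOC (illustrative example; paths are placeholders)
After=network.target

[Service]
Type=simple
User=controls
WorkingDirectory=/opt/epics/iocs/modbusIOC
ExecStart=/opt/epics/iocs/modbusIOC/st.cmd
Restart=on-failure

[Install]
WantedBy=multi-user.target

Enabling the unit with systemctl enable is what makes the IOC come up on every reboot.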

Aaron and I set 12/4 as a tentative date when we will be ready to attempt a swap. Until then the cabling needs to be finished and a channel database file needs to be prepared.

  13442 | Tue Nov 21 23:47:51 2017 | gautam | Configuration | Computers | nodus post OS migration admin

I restored the nodus crontab (copied over from the Nov 17 backup of the same at /opt/rtcds/caltech/c1/scripts/crontab/crontab_nodus.20171117080001). There wasn't a crontab, so I made one using sudo crontab -e.

This crontab is supposed to execute some backup scripts, send pizza emails, check chiara disk usage, and backup the crontab itself.

I've commented out the backup of nodus' /etc and /export for now, while we get back to a fully operational nodus (though we also have a backup in /cvs/cds/caltech/nodus_backup on the external LaCie drive); they can be re-enabled by un-commenting the appropriate lines in the crontab.

Quote:

The post OS migration admin for nodus (apache, elogd, svn, iptables, etc.) can be found in https://wiki-40m.ligo.caltech.edu/NodusUpgradeNov2017

Update: The svn dump from the old svn was done, and it was imported to the new svn repository structure. Now the svn command line and (simple) web interface are running. "websvn" is not installed.

 

  13441 | Tue Nov 21 23:04:12 2017 | gautam | Update | Optical Levers | Oplev "noise budget"

Per our discussions in the meetings over the last week, I've tried to put together a simple Oplev noise budget. The only two terms in it for now are the dark noise and a model for the seismic noise; these are plotted together with the measured open-loop error signal spectra.

  1. Dark noise
    • Beam was taken off the OL QPD
    • A small DC offset was added to all the oplev segment input filters to make the sum ~20-30 cts [call this testSum] (it usually varies from 4000-13000 for the BS/ITMs; call this nominalSum).
    • I downloaded 20 mins of DQ-ed error signal data and computed the ASD, dividing by a factor of nominalSum / testSum to account for the usual light intensity on the QPD (see the sketch after this list).
  2. Seismic noise
    • This is a very simplistic 1/f^2 pendulum TF with a pair of Q=2 poles at 1Hz.
    • I adjusted the overall gain such that the 1Hz peaks roughly line up in measurement and model.
    • The stack isn't modelled at all.
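
For concreteness, here is a minimal python sketch of how the two traces are computed (the sample rate, sums, and gain are illustrative placeholders, and the random array stands in for the downloaded DQ data):

import numpy as np
import scipy.signal as sig

fs = 2048                        # Hz, assumed sample rate of the DQ-ed channel
nominalSum, testSum = 8000, 25   # counts: typical in-lock sum vs. dark-test sum

dark_data = np.random.randn(20 * 60 * fs)   # placeholder for 20 mins of data

# 1. Dark noise: ASD of the dark error signal, divided by nominalSum/testSum
#    to refer it to the usual light level on the QPD
f, Pxx = sig.welch(dark_data, fs=fs, nperseg=8 * fs)
dark_asd = np.sqrt(Pxx) / (nominalSum / testSum)

# 2. Seismic model: pendulum TF with a pair of Q=2 poles at 1 Hz (1/f^2 above
#    resonance); overall gain scaled by hand so the 1 Hz peaks line up
f0, Q, gain = 1.0, 2.0, 1.0
fm = f[1:]
seismic_asd = gain / np.abs(1.0 - (fm / f0)**2 + 1j * fm / (f0 * Q))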

Some remarks:

  • The BS oplev doesn't have any whitening electronics, and so has a higher electronics noise floor compared to the ITMs. But it doesn't look like we are limited by this noise floor anywhere...
  • I wonder what all those high-frequency features seen in the ITM error signal spectra are - mechanical resonances of steering optics? They are definitely above the dark noise floor, so I am inclined to believe this is real beam motion on the QPD, but surely this can't be test-mass motion? If it were, the measured A2L would be much higher than the level it is adjudged to be at now. Perhaps it's resonances of the steering mirrors?
  • The seismic displacement @100Hz per the GWINC model is ~1e-19 m/rtHz. Assuming the model A2L = d_rms * theta(f), where d_rms is the rms offset of the beam spot from the optic center and theta(f) is the angular control signal to the optic, for a 5mm rms spot offset theta(f) must be ~2e-17 rad/rtHz @100Hz. This gives some requirement on the low pass required - I will look into adding this to the global optimization cost.

 

Attachment 1: vertexOL_noises.pdf
  13440 | Tue Nov 21 17:51:01 2017 | Koji | Configuration | Computers | nodus post OS migration admin

The post OS migration admin for nodus (apache, elogd, svn, iptables, etc.) can be found in https://wiki-40m.ligo.caltech.edu/NodusUpgradeNov2017

Update: The svn dump from the old svn was done, and it was imported to the new svn repository structure. Now the svn command line and (simple) web interface are running. And "websvn" was also implemented.

  13439 | Tue Nov 21 16:28:23 2017 | gautam | Update | Optical Levers | BS OL calibration updated

The numbers I have from the fitting don't agree very well with the OSEM readouts. Attachment #1 shows the Oplev pitch and yaw channels, and also the OSEM ones, while I swept the ASC_PIT offset. The output matrix is the "naive" one of (+1,+1,-1,-1). SUSPIT_IN1 reports ~30urad of motion, while SUSYAW_IN1 reports ~10urad of motion.

From the fits, the BS calibration factors were ~x8 for pitch and x12 for yaw - so according to the Oplev channels, the applied sweep was ~80urad in pitch, and ~7urad in yaw.

Seems like either (i) neither the Oplev channels nor the OSEMs are well diagonalized and that their calibration is off by a factor of ~3 or (ii) there is some significant imbalance in the actuator gains of the BS coils...

Quote:
 

Need to double check against OSEM readout during the sweep.

 

Attachment 1: BS_oplev_sweep.png
  13438 | Tue Nov 21 16:00:05 2017 | Kira | Update | PEM | seismometer can testing

I performed a test with the can last week with one layer of insulation to see how well it worked. First, I soldered two heaters together in series so that the total resistance was 48.6 ohms. I placed the heaters on the sides of the can and secured them. Then I wrapped the sides and top of the can in insulation and sealed the edges with tape, only leaving the handles open. I didn't insulate the bottom. I connected the two ends of the heater directly into the DC source and drove the current as high as possible (around 0.6A). I let the can heat up to a final value of 37.5C, turned off the current, and manually measured the temperature, recording the time every half degree. I then plotted the results, along with a fit. The intersection of the red line with the data marks the time constant and the temperature at which we get the time constant. This came out to be about 1.6 hours, much longer than expected considering that only one layer instead of four was used. With only one layer, we would expect the time constant to be about 13 min, while for 4 layers it should be 53 min (the area A is 0.74 m^2 and not 2 m^2).

Quote:

I made a model for our seismometer can using actual data so that we know approximately what the time constant should be when we test it out. I used the appendix in Megan Kelley's report to make a relation for the temperature in terms of time.

\frac{dQ}{dt}=mc\frac{dT(t)}{dt} so T(t)=\frac{1}{mc}\int \frac{dQ}{dt}dt=\frac{1}{mc}\int P_{net}dt and P_{net}=P_{in}-P_{out}

In our case, we will heat the can to a certain temperature and wait for it to cool on its own, so P_{in}=0

We know that P_{out}=\frac{kA\Delta T}{d} where k is the k-factor of the insulation we are using, A is the area of the surface through which heat is flowing, \Delta T is the change in temperature, d is the thickness of the insulation.

Therefore,

T(t)=\frac{1}{mc}\int_{0}^{t}\frac{kA}{d}[T_{lab}-T(t')]dt'=\frac{kA}{mcd}(T_{lab}t-\int_{0}^{t}T(t')dt')

We can take the derivative of this to get

T'(t)=\frac{kAT_{lab}}{mcd}-\frac{kA}{mcd}T(t), or T'(t)=B-CT(t) 

We can guess the solution to be

T(t)=C_{1}e^{-t/\tau}+C_{2} where tau is the time constant, which we would like to find.

The boundary conditions are T(0)=40 and T(\infty)=T_{lab}=24. I assumed we would heat up the can to 40 Celsius while the room temp is about 24. Plugging this into our equations,

C_{1}+C_{2}=40, C_{2}=24, so C_{1}=16

We can plug everything back into the derivative T'(t)

T'(t)=-\frac{16}{\tau}e^{-t/\tau}=B-C[16e^{-t/\tau}+24]

Equating the exponential terms on both sides, we can solve for tau

\frac{16}{\tau}e^{-t/\tau}=16Ce^{-t/\tau}, \frac{1}{\tau}=C, \tau=\frac{1}{C}=\frac{mcd}{kA}

Plugging in the values that we have, m = 12.2 kg, c = 500 J/(kg*K) (stainless steel), d = 0.1 m, k = 0.26 W/(m*K), A = 2 m^2, we get that the time constant is 0.326 hr. I have attached the plot that I made using these values. I would expect to see something similar to this when I actually do the test.

To set up the experiment, I removed the can (with Steve's help) and will place a few heating pads on the outside and wrap the whole thing in a few layers of insulation to make the total thickness 0.1m. Then, we will attach the heaters to a DC source and heat the can up to 40 Celsius. We will wait for it to cool on its own and monitor the temperature to create a plot and find the experimental time constant. Later, we can use the heating circuit we used for the PSL lab and modify the parts as needed to drive a few amps through the circuit. I calculated that we'd need about 6A to get the can to 50 Celsius using the setup we used previously, but we could drive a smaller current by using a higher heater resistance.

 

Attachment 1: cooling_fit.png
Attachment 2: IMG_20171121_164835.jpg
  13437 | Tue Nov 21 11:37:29 2017 | gautam | Update | Optical Levers | BS OL calibration updated

I calibrated the BS oplev PIT and YAW error signals as follows:

  1. Locked X-arm, ran dither alignment servos to maximize transmission.
  2. Applied an offset to the ASC PIT/YAW filter banks. Set the ramp time to something long, I used 60 seconds.
  3. Monitored the X arm transmission while the offset was being ramped, and also the oplev error signal with its current calibration factor.
  4. Fit the data, oplev error signal vs arm transmission, with a gaussian, and extracted the scaling factor (i.e. the number by which the current Oplev error signals have to be multiplied for the error signal to correspond to urad of angular misalignment, as per the overlap of the beam axis with the cavity axis). A sketch of this fit follows the table below.
  5. Fits are shown in Attachment #1 and #2.
  6. I haven't done any error analysis yet, but the open loop OL spectra for the BS now line up better with the other optics, see Attachment #3 (although their calibration factors may need to be updated as well...). Need to double check against OSEM readout during the sweep.
  7. New numbers have been SDF-ed.

The numbers are:

BS Pitch     15  /  130    (old/new)     urad/counts

BS Yaw       14  /  170    (old/new)     urad/counts
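
For reference, the fitting in step 4 amounts to something like the sketch below (the arrays here are synthetic placeholders; the real analysis uses the DQ-ed sweep data):

import numpy as np
from scipy.optimize import curve_fit

def gaussian(x, a, x0, sigma):
    # arm transmission as a function of oplev error signal (old calibration)
    return a * np.exp(-(x - x0)**2 / (2 * sigma**2))

ol_err = np.linspace(-50, 50, 500)                   # placeholder sweep data
tr_x = gaussian(ol_err, 1.0, 0.0, 12.0) + 0.01 * np.random.randn(500)

popt, _ = curve_fit(gaussian, ol_err, tr_x, p0=[1.0, 0.0, 10.0])
# comparing the fitted width to the expected angular width of the cavity
# mode overlap gives the factor by which the old calibration must be scaled
print("fitted width: %.1f (old calibration units)" % popt[2])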

Quote:

I bet the calibration is out of date; probably we replaced the OL laser for the BS and didn't fix the cal numbers. You can use the fringe contrast of the simple Michelson to calibrate the OLs for the ITMs and BS.

 

Attachment 1: OL_calib_BS_PERROR.pdf
Attachment 2: OL_calib_BS_YERROR.pdf
Attachment 3: VertexOLnoise_updated.pdf
  13436 | Tue Nov 21 11:21:26 2017 | gautam | Update | CDS | RFM network down

I noticed yesterday evening that I wasn't able to engage the single arm locking servos - turned out that they weren't getting triggered, which in turn pointed me to the fact that the arm transmission channels seemed dead. Poking around a little, I found that there was a red light on the CDS overview screen for c1rfm.

  • The error seems to be in the receiving model only, i.e. c1rfm, all the sending models (e.g. c1scx) don't report any errors, at least on the CDS overview screen.
  • Judging by dataviewer trending of the c1rfm status word, seems like this happened on Sunday morning, around 11am.
  • I tried restarting both sender and receiver models, but error persists.
  • I got no useful information from the dmesg logs of either c1sus (which runs c1rfm), or c1iscex (which runs c1scx).
  • There are no physical red lights in the expansion chassis that I could see - in the past, when we have had some timing errors, this would be a signature.

Not sure how to debug further...

* Fix seems to be to restart the sender RFM models (c1scx, c1scy, c1asx, c1asy).

Attachment 1: RFMerrors.png
  13435 | Fri Nov 17 17:10:53 2017 | rana | Omnistructure | Computers | Acromag wired up

Exactly: you'll have to list explicitly what functions those channels had so that we know what we're losing before we make the switch.

  13434 | Fri Nov 17 16:31:11 2017 | aaron | Omnistructure | Computers | Acromag wired up

Acromag Wireup Update

I finished wiring up the Acromags to replace the VME boxes on the x arm. I still need to cut down the bar and get them all tidy in the box, but I wanted to post the wiring maps I made.
I wanted to note specifically that a few of the connections were assigned to VME boxes but are no longer assigned in this Acromag setup. We should be sure that we actually do not need to use the following channels:

Channels no longer in use

  • From the VME analog output (VMIVME 4116) to the QPD Whitening board (no DCC number on the front), 3 channels are no longer in use
  • From the anti-image filter (D000186) to the ADC (VMIVME 3113A) 5 channels are no longer in use (these are the only channels from the anti-image filter, so this filter is no longer in use at all?)
  • From the universal dewhitening filter (D000183) to a binary I/O adapter (channels 1-16), 4 channels are no longer in use. These are the only channels from the dewhitening filter
  • From a second universal dewhitening filter (D000183) to another binary I/O adapter (channels 1-16), one channel is no longer in use (this was the only channel from this dewhitening filter).
  • From the opti-lever (D010033) to the VME ADC (VMIVME 3113A), 7 channels are no longer in use (this was all of the channels from the opti lever)
  • From the SUS PD Whitening/Interface board (D000210) to a binary I/O adapter (channels 1-16), 5 channels are no longer in use. 
  • Note that none of the binary I/O adapter channels are in use.

 

Attachment 1: AcromagWiringMaps.pdf
  13433 | Thu Nov 16 15:43:01 2017 | rana | Update | Optical Levers | Optical lever noise

I bet the calibration is out of date; probably we replaced the OL laser for the BS and didn't fix the cal numbers. You can use the fringe contrast of the simple Michelson to calibrate the OLs for the ITMs and BS.

  13432 | Thu Nov 16 13:57:01 2017 | gautam | Update | Optical Levers | Optical lever noise

I disabled the OL loops for ITMX, ITMY and BS at GPStime 1194897655 to come up with an Oplev noise budget. OL spots were reasonably well centered - by that, I mean that the PIT/YAW error signals were less than 20urad in absolute value.

Attachment #1 is a first look at the DTT spectra - I wonder why the BS Oplev signals don't agree with the ITMs at ~1Hz? Perhaps the calibration factor is off? The sensing noise is not really flat above 100Hz - I wonder what all those peaky features are. Recall that the ITM OLs have analog whitening filters before the ADC, but the BS doesn't...

In Attachment #2, I show comparison of the error signal spectra for ITMY and SRM - they're on the same stack, but the SRM channels don't have analog de-whitening before the ADC.

For some reason, DTT won't let me save plots with latex in the axes labels...

Attachment 1: VertexOLnoise.pdf
Attachment 2: ITMYvsSRM.pdf
  13431 | Thu Nov 16 00:53:26 2017 | gautam | Update | LSC | DRMI noise sub-budgets

I've incorporated the functionality to generate sub-budgets for the various grouped traces in the NBs (e.g. the "A2L" trace is really the quadrature sum of the A2L coupling from 6 different angular servos).

For now, I'm only doing this for the A2L coupling, and the AUX length loop coupling groups. But I've set up the machinery in such a way that doing so for more groups is easy.

Here are the sub-budget plots for last night's lock - for the OL plot, there are only 3 lines (instead of 6) because I group the PIT and YAW contributions in the function that pulls the data from the nds server, and don't ever store these data series individually. This should be rectified, because part of the point of making these sub-budgets is to see if there is a particularly bad offender in a given group.

I'll do a quick OL loop noise budget for the ITM loops tomorrow.

I also wonder if it is necessary to measure the Oplev A2L coupling from lock to lock. This coupling is dependent on the spot position on the optic, and though I run the dither alignment servos to minimize REFL_DC, AS_DC, I don't have any intuition for how the offset from the center of the optic varies from lock to lock, and whether it is at all significant. I've been using a number from a measurement made in May. Need to do some algebra...

Attachment 1: C1NB_a2l_40m_MICH_NB_2017-11-15.pdf
Attachment 2: C1NB_aux_40m_MICH_NB_2017-11-15.pdf
  13430 | Thu Nov 16 00:45:39 2017 | gautam | Update | SUS | SOS Sapphire Prism design

 

Quote:
 
  • If I could get pictures of the lower mirror clamp (document D960008), it would be helpful in making solidworks model. Document is unclear. Same for sensor/actuator head assembly. 

If you go through this thread of elogs, there are lots of pictures of the SOS assembly with the optic in it from the vent last year. I think there are many different perspectives, close ups of the standoffs, and of the OSEMs in their holders in that thread.

This elog has a measurement of the pendulum resonance frequencies with ruby standoffs - although the ruby standoff used was cylindrical, and the newer generation will be in the shape of a prism. There is also a link in there to a document that tells you how to calculate the suspension resonance frequencies using analytic equations.

  13429 | Thu Nov 16 00:14:47 2017 | Udit Khandelwal | Update | SUS | SOS Sapphire Prism design

Summary:

  • SOS solidworks model is nearly complete
    • Having trouble with the design of the sensor/actuator head assembly and the lower clamps
  • After Gautam's suggestion, installed Abaqus on computer. Teaching it to myself to eventually do FEM analysis and find resonant frequency of the system
    • Goal is to replicate frequency listed in the SOS documents to confirm accuracy of computer model, then replace guide rods with sapphire prisms and change geometry to get same results

 

Questions:

  • How accurate do the details (like fillets, chamfers, placement of the little vent holes) and the materials of the different SOS parts need to be in the model?
  • If I could get pictures of the lower mirror clamp (document D960008), it would be helpful in making solidworks model. Document is unclear. Same for sensor/actuator head assembly. 
  13428 | Wed Nov 15 01:37:07 2017 | gautam | Update | LSC | DRMI low freq. noise improved

Pianosa just crashed and ate my elog, along with all the DTT/Foton windows I had open, so more details tomorrow... This workstation has been crashing ~once a month for the last 6 months.

Summary:

Below ~100Hz, the hypothesis is that the BS oplev A2L contribution dominates the MICH displacement noise. I wanted to see if I could mitigate this by modifying the BS Oplev loop shape.

Details:

  • Swept sine TF measurements suggested that the BS A2L contribution is between 10-100x that of the ITM A2L
  • The Oplev loop shape for BS is different from the ITMs - specifically, there is a res-gain centered at ~3.3 Hz. The low frequency ~0.6Hz boost filter present in the ITM Oplev loops was disengaged for the BS Oplevs.
  • I turned off the BS OL loops and looked at error signal spectra - didn't seem that different from ITM OL error signals, so I decided to try turning off the res-gain and engage the 0.6Hz boost.
  • This change also gave me much more phase at ~6Hz, which is roughly the UGF of the loop. So I put in another roll-off low pass filter with corner frequency 25Hz. 
  • This worked okay - the RMS went down by ~5x (which is even better than the original config), and although the performance between ~3 and 10Hz is slightly worse than with the old combination, this region isn't the dominant contribution to the RMS. PM at the upper UGF is ~30 degrees in the new configuration.
  • I wanted to give DRMI locking a shot with the new OL loop - expectations were that the noise between 30-100Hz would improve, and perhaps the engaging of de-whitening filters on BS would also be easier given the more severe roll-off at high-frequencies.
  • Attachment #1 shows the NB for tonights lock. All MICH optics had their coil drivers de-whitened, and all the LSC PDs were whitened for this measurement.
  • I've edited the NB code to make the A2L calculation more straightforward: I now just make the coupling 1/f^2 and give the function a measured overall gain, so that this curve can easily be added to all future NBs. I've also transcribed the matlab function used for parsing Foton files into python; this allows me to convert the DQ-ed OL error signals to control signals. Will update git with changes.

Remarks:

  1. MICH noise has improved by ~2x between 40-80Hz.
  2. Not sure what to make of the broad hump around 60Hz - scatter shelf?
  3. There is still unexplained noise below 100Hz, the A2L estimate is considerably lower than the measured noise.
  4. We are still more than an order of magnitude away from the estimated seismic noise floor at low frequencies (but getting closer!).

I've been banging my head against optimal loop shaping, with the OL loop as a test-case, without much success - as was the case with coating PSO, the magic is in smartly defining the cost function, but right now my optimizer seems to be pushing most of the roots I'm making available for it to place to high frequencies. I've got a term in there that is supposed to guard against this; need to tweak further...


Attachment #2: Eye-fits of measured OL A2L coupling TFs to a 1/f^2 shape, with the gain being the "fitted" parameter. I used these values, and the DQ-ed OL error signal in lock, to estimate the red curve labelled "A2L" in Attachment #1. The dots are the measurement, and the lines are the 1/f^2 estimates.

Attachment 1: C1NB_disp_40m_MICH_NB_2017-11-15.pdf
Attachment 2: OL_A2L_couplings.pdf
  13427 | Tue Nov 14 16:02:43 2017 | Kira | Summary | PEM | seismometer can testing

I made a model for our seismometer can using actual data so that we know approximately what the time constant should be when we test it out. I used the appendix in Megan Kelley's report to make a relation for the temperature in terms of time.

\frac{dQ}{dt}=mc\frac{dT(t)}{dt} so T(t)=\frac{1}{mc}\int \frac{dQ}{dt}dt=\frac{1}{mc}\int P_{net}dt and P_{net}=P_{in}-P_{out}

In our case, we will heat the can to a certain temperature and wait for it to cool on its own, so P_{in}=0

We know that P_{out}=\frac{kA\Delta T}{d} where k is the k-factor of the insulation we are using, A is the area of the surface through which heat is flowing, \Delta T is the change in temperature, d is the thickness of the insulation.

Therefore,

T(t)=\frac{1}{mc}\int_{0}^{t}\frac{kA}{d}[T_{lab}-T(t')]dt'=\frac{kA}{mcd}(T_{lab}t-\int_{0}^{t}T(t')dt')

We can take the derivative of this to get

T'(t)=\frac{kAT_{lab}}{mcd}-\frac{kA}{mcd}T(t), or T'(t)=B-CT(t) 

We can guess the solution to be

T(t)=C_{1}e^{-t/\tau}+C_{2} where tau is the time constant, which we would like to find.

The boundary conditions are T(0)=40 and T(\infty)=T_{lab}=24. I assumed we would heat up the can to 40 Celsius while the room temp is about 24. Plugging this into our equations,

C_{1}+C_{2}=40, C_{2}=24, so C_{1}=16

We can plug everything back into the derivative T'(t)

T'(t)=-\frac{16}{\tau}e^{-t/\tau}=B-C[16e^{-t/\tau}+24]

Equating the exponential terms on both sides, we can solve for tau

\frac{16}{\tau}e^{-t/\tau}=16Ce^{-t/\tau}, \frac{1}{\tau}=C, \tau=\frac{1}{C}=\frac{mcd}{kA}

Plugging in the values that we have, m = 12.2 kg, c = 500 J/(kg*K) (stainless steel), d = 0.1 m, k = 0.26 W/(m*K), A = 2 m^2, we get that the time constant is 0.326 hr. I have attached the plot that I made using these values. I would expect to see something similar to this when I actually do the test.
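
As a quick cross-check of the arithmetic, a few lines of python reproduce the quoted time constant:

import numpy as np

m, c = 12.2, 500.0    # kg, J/(kg*K): mass and heat capacity of the can
d, k = 0.1, 0.26      # m, W/(m*K): insulation thickness and k-factor
A = 2.0               # m^2: surface area through which heat flows

tau = m * c * d / (k * A)                             # = 1173 s
print("tau = %.0f s = %.3f hr" % (tau, tau / 3600))   # 0.326 hr

t = np.linspace(0, 5 * tau, 1000)                     # cooling curve T(t)
T = 16.0 * np.exp(-t / tau) + 24.0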

To set up the experiment, I removed the can (with Steve's help) and will place a few heating pads on the outside and wrap the whole thing in a few layers of insulation to make the total thickness 0.1m. Then, we will attach the heaters to a DC source and heat the can up to 40 Celsius. We will wait for it to cool on its own and monitor the temperature to create a plot and find the experimental time constant. Later, we can use the heating circuit we used for the PSL lab and modify the parts as needed to drive a few amps through the circuit. I calculated that we'd need about 6A to get the can to 50 Celsius using the setup we used previously, but we could drive a smaller current by using a higher heater resistance.

Attachment 1: time_const.png
  13426 | Tue Nov 14 08:54:37 2017 | Steve | Update | IOO | MC1 glitching
Attachment 1: MC1_glitching.png
  13425 | Fri Nov 10 18:57:41 2017 | rana | Summary | Electronics | Ithaca 1201 vs. SR560

I characterized the black Ithaca 1201 pre-amp that we had sitting in the racks. It works fine and the input referred noise is < 10 nV/rHz. I also checked that the filter selection switches on the front panel did what they claim and that the gain knob gives us the correct gain.

For comparison I have also included the G=100, 1000 input referred noise of one of the best SR560 that we have (s/n 02763) in the lab. Above a few Hz, the SR560 is better, but for low frequency measurements it seems that the 1201 is our friend.

As with the SR560, you don't actually get low noise performance for G < 100, due to some fixed output noise level.

Steve:  sn48332 of Ithaca 1201

Attachment 1: Ithaca1201.pdf
  13424 | Fri Nov 10 13:46:26 2017 | Steve | Update | PEM | M3.1 local earthquake

BOOM SHAKA-LAKA

Attachment 1: 3.1M_local_EQ.png
  13423 | Fri Nov 10 08:52:21 2017 | Steve | Update | VAC | TP3 drypump replaced

PSL shutter closed at 6e-6 Torr-it    

   The foreline pressure of the drypump is 850 mTorr at 8,446 hrs of seal life

V1 will be closed for ~20 minutes for drypump replacement..........

9:30am dry pump replaced, PSL shutter opened at 7.7E-6 Torr-it

  Valve configuration: vacuum normal, as TP3 is the forepump of the Maglev & annuluses are not pumped.

Quote:

TP3 drypump replaced at 655 mTorr, no load, tp3 0.3A 

This seal lasted only for 33 days at  123,840 hrs

The replacement is performing well: TP3 foreline pressure is 55 mTorr, no load, tp3 0.15A at 15 min  [ 13.1 mTorr at d5 ]

 

Valve configuration: Vacuum Normal, ITcc 8.5E-6 Torr

Quote:

Dry pump of TP3 replaced after 9.5 months of operation.[ 45 mTorr d3 ]

The annuluses are pumped.

Valve configuration: vac normal, IFO pressure 4.5E-5 Torr [1.6E-5 Torr d3 ] on new ITcc gauge, RGA is not installed yet.

Note how fast the pressure is dropping when the vent is short.

Quote:

IFO pressure 1.7E-4 Torr on new not logged cold cathode gauge. P1 <7E-4 Torr

Valve configuration: vac. normal with annuluses closed off.

TP3 was turned off with a failing drypump. It will be replaced tomorrow.

All time stamps are blank on the MEDM screens.

 

 

  13422 | Thu Nov 9 15:33:08 2017 | johannes | Update | CDS | revisiting Acromag
Quote:

We probably want to get a dedicated machine that will handle the EPICS channel serving for the Acromag system

http://www.supermicro.com/products/system/1U/5015/SYS-5015A-H.cfm?typ=H

This is the machine that Larry suggested when I asked him for his opinion on a low-workload rack-mount unit. It only has an Atom processor, but I don't think it needs anything particularly powerful under the hood. He said that he will likely be able to let us borrow one of his for a couple of days to see if it's up to the task. The dual ethernet is a nice touch; maybe we can keep the communication between the server and the DAQ units on their separate local network.

  13421 | Thu Nov 9 10:51:37 2017 | gautam | Summary | LSC | current procedure for compiling and installing c1dnn code

Jamie pointed out that the compile and install instructions are different for c1dnn:

cd /opt/rtcds/caltech/c1/rtbuild/test/nn-test
make c1dnn
make install-c1dnn

See also: https://nodus.ligo.caltech.edu:8081/40m/13383.

I think these build instructions have to be run on the c1lsc frontend - in the past, I have been able to compile and install models on any computer with the shared drive mounted (including the control room workstations), but I'm guessing that something has changed since the RCG upgrade. Jamie can correct me on this if I'm wrong.

  13420 | Wed Nov 8 17:04:21 2017 | gautam | Update | CDS | gds-2.17.15 [not] installed

I wanted to use the foton.py utility for my NB tool, and I remember Chris telling me that it was shipping as standard with the newer versions of gds. It wasn't available in the versions of gds available on our workstations - the default version is 2.15.1. So I downloaded gds-2.17.15 from http://software.ligo.org/lscsoft/source/, and installed it to /ligo/apps/linux-x86_64/gds-2.17.15/gds-2.17.15. In it, there is a file at GUI/foton/foton.py.in - this is the one I needed. 


Turns out this was more complicated than I expected. Building the newer version of gds throws up a bunch of compilation errors. Chris had pointed me to some pre-built binaries for ubuntu12 on the llo cds wiki, but those versions of gds do not have foton.py. I am dropping this for now.

  13419 | Wed Nov 8 16:39:24 2017 | Kira | Update | PEM | ADC noise measurement

Gautam and I measured the noise of the ADC for channels 17, 18, and 19. We plan to use those channels for measuring the noise of the temperature sensors, and we need to figure out whether or not we will need whitening and if so, how much. The figure below shows the actual measurements (red, green and blue lines), and a rough fit. I used Gautam's elog here and used the same function, \sqrt{a^{2}(\frac{b}{f^{2}})+c^{2}} (with units of nV/sqrt(Hz)) to fit our results. I used a = 1, b = 1e6, c = 2000. Since we are interested in measuring at lower frequencies, we must whiten the signal from the temperature sensors enough to have the ADC noise be negligible.

We want to be able to measure to 1mK/\sqrt{Hz} accuracy at 1Hz, which translates to about 1nA/\sqrt{Hz} of current from the AD590 (because it gives 1\mu A/K). Since we have a 10 kohm resistor and V=IR, the voltage accuracy we want to measure will be 10^{-5}V/\sqrt{Hz}. We would need whitening at lower frequencies to see such fluctuations.

To do the measurements, we put a 50\Omega BNC end cap on the channels we wanted to measure, then took measurements from 0-900Hz with a bandwidth of 0.001Hz. This setup is shown in the last two attachments. We used the ADC in 1X7.
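
For concreteness, the fit and the whitening requirement above amount to the following arithmetic (fit parameters as quoted; this is just the arithmetic, not the measurement code):

import numpy as np

a, b, c = 1.0, 1e6, 2000.0                        # fit parameters, nV/rtHz units
adc_asd_1Hz = np.sqrt(a**2 * (b / 1.0**2) + c**2)   # ~2240 nV/rtHz at 1 Hz

# 1 mK/rtHz -> 1 nA/rtHz from the AD590 (1 uA/K), which across the 10 kohm
# resistor is 10 uV/rtHz = 1e4 nV/rtHz: only a few times above the ADC noise
# at 1 Hz, hence the need for whitening at low frequencies
signal_nV = 1e-9 * 10e3 * 1e9
print("signal %.0f nV/rtHz vs ADC %.0f nV/rtHz" % (signal_nV, adc_asd_1Hz))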

Attachment 1: ADC-fit.png
Attachment 2: IMG_20171108_162532.jpg
Attachment 3: IMG_20171108_162556.jpg
  13418 | Wed Nov 8 14:28:35 2017 | gautam | Update | General | MC1 glitches return

There hasn't been a big glitch that misaligns MC1 so much that the autolocker can't lock for at least 3 months - but it seems like there was one ~an hour ago.

I disabled autolocker and feedback to the PSL, manually aligned MC1 till the MC_REFL spot looked right on the CCD to me, and then re-engaged the autolocker, all seems to have gone smoothly.

 

Attachment 1: MC1_glitchy.png
Attachment 2: 6AFDA67D-79B1-469C-A58A-9EC5F8F01D32.jpeg
  13417 | Wed Nov 8 12:19:55 2017 | gautam | Update | SUS | coil driver series resistance

We've been talking about increasing the series resistance for the coil driver path for the test masses. One consequence of this will be that we have reduced actuation range.

This may not be a big deal since for almost all of the LSC loops, we currently operate with a limiter on the output of the control filter bank. The value of the limit varies, but to get an idea of what sort of "threshold" velocities we are looking at, I calculated this for our finesse ~400 arm cavities. The calculation is rather simplistic (see Attachment #1), but I think we can still draw some useful conclusions from it:

  • In Attachment #1, I've indicated with dashed vertical lines some series resistances that are either currently in use, or are values we are considering.
  • The table below tabulates the fraction of passages through a resonance we will be able to catch, assuming velocities sampled from a Gaussian of width ~3um/s, which a recent ALS study suggests describes our SOS optic velocity distribution pretty well (with local damping on).
  • I've assumed that the maximum DAC output voltage available for length control is 8V.
  • Presumably, this Gaussian velocity distribution will be modified because of the LSC actuation exerting impulses on the optic on failed attempts to catch lock. I don't have a good model right now for how this modification will look like, but I have some ideas.
  • It would be interesting to compare the computed success rates below with what is actually observed.
  • The implications of different series resistances on DAC noise are computed here (although the non-linear nature of the DAC noise has not been taken into account).
Series resistance [ohm]    Predicted success rate [%]    Optics with this resistance
100                        >90                           BS, PRM, SRM
400                        62                            ITMX, ITMY, ETMX, ETMY
1000                       45                            -
2000                       30                            -
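
The table can be roughly reproduced with the sketch below, assuming the threshold velocity scales as 1/sqrt(R) (constant deceleration through the linewidth) and normalizing to ~2.6 um/s at 400 ohm - these scalings are assumptions for illustration, not necessarily the actual calculation in Attachment #1:

import numpy as np
from scipy.special import erf

sigma = 3.0                              # um/s, width of the velocity distribution
R = np.array([100, 400, 1000, 2000])     # ohm, series resistance

v_th = 2.6 * np.sqrt(400.0 / R)          # um/s, assumed threshold velocity
success = erf(v_th / (np.sqrt(2) * sigma))   # P(|v| < v_th) for a zero-mean Gaussian
for r, s in zip(R, success):
    print("R = %4d ohm: %2.0f %%" % (r, 100 * s))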

So, from this rough calculation, it seems like we would lose ~25% efficiency in locking the arm cavity if we up the series resistance from 400 ohm to 1 kohm. Doesn't seem like a big deal, because currently, the single arm locking

Attachment 1: vthresh.pdf
  13416 | Wed Nov 8 09:59:12 2017 | gautam | Update | LSC | DRMI Noise Budget v3.1

The Oplev trace is missing for now, as I have not re-measured the A2L coupling since modifying the Oplev loop shape (specifically the low pass filter and overall gain) to allow engaging the coil de-whitening.

The averaging for the white noise TFs plotted is computed using median averaging - I have used a python transcription of Sujan's matlab code. I use scipy.signal.spectrogram to compute the FFT bins (I've set some defaults like 8s FFT length and a Tukey window), and then take the median average using np.median(). I've also incorporated the ln(2) correction factor. A sketch is below.

It seems like GwPy has some in-built capability to compute median (and indeed various other percentile) averages, but since we aren't using it, I just coded this up. 
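
A minimal sketch of the median-averaging scheme (the defaults here are assumptions, not the actual NB code):

import numpy as np
import scipy.signal as sig

def median_asd(x, fs, fftlen=8):
    # spectrogram -> median across time segments; the median of a chi^2
    # (2 dof) PSD estimate is ln(2) times the mean, hence the correction
    f, t, Sxx = sig.spectrogram(x, fs=fs, window=('tukey', 0.25),
                                nperseg=int(fftlen * fs))
    Pxx = np.median(Sxx, axis=-1) / np.log(2)
    return f, np.sqrt(Pxx)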

Quote:

why no oplev trace in the NB ?

also, this method would work better if we had a median averaging python PSD instead of mean averaging as in Welch's method.

 

  13415 | Wed Nov 8 09:37:45 2017 | rana | Update | LSC | DRMI Noise Budget v3.1

why no oplev trace in the NB ?

Quote:

#4 shows the noise budget from the October 8 DRMI lock with the updated SRCL->MICH and PRCL->MICH couplings (assumed flat, extrapolated from Attachment #2 in the 120-180Hz band). If these updated coupling numbers are to be believed, then there is still some unexplained noise around 100Hz before we hit the PD dark noise. To be investigated. But if Attachment #4 is to be believed, it is not surprising that there isn't significant coherence between SRCL/PRCL and MICH around 100Hz.

also, this method would work better if we had a median averaging python PSD instead of mean averaging as in Welch's method.

  13414 | Wed Nov 8 00:28:16 2017 | gautam | Update | LSC | Laser intensity coupling measurement attempt

I tried measuring the coupling of PSL intensity noise by driving some broadband noise, bandpassed between 80-300Hz, using the spare DAC channel at 1Y3 that I had set up for this purpose a couple of weeks ago (via a battery-powered SR560 buffer set to low-noise operation mode, because I'm not sure if the DAC output can drive a ~20m long cable). I was monitoring the MC2 TRANS QPD Sum channel spectrum while driving this broadband noise - the "nominal" spectrum isn't very clean; there are a bunch of notches from a 60Hz comb and a forest of peaks over a broad hump from 300Hz-1kHz (see Attachment #1).

I was able to increase the drive to the AOM till the RIN in the band being driven increased by ~10x, and saw no change in the MICH error signal spectrum [see Attachment #1] - during this measurement, the RFPD whitening was turned on for REFL11, REFL55 and AS55, and the ITM coil drivers were de-whitened, so as to get a MICH spectrum that is about as "low-noise" as I've gotten it so far.

I tried increasing the drive further, but at this point, started seeing frequent MC locklosses - I'm not convinced this is entirely correlated to my AOM activities, so I will try some more, but at the very least, this places an upper bound on the coupling from intensity noise to MICH.

Attachment 1: PSL_RIN.pdf
  13413 | Tue Nov 7 22:56:21 2017 | gautam | Update | LSC | DRMI locking recovered

I hadn't re-locked the DRMI after the work on the AS55 demod board. Tonight, I was able to recover the DRMI locking with the old settings.

The feature in the PRCL spectrum (uncalibrated, y-axis is cts/rtHz) at ~1.6kHz is mysterious, I wonder what that's about.

Wasted some time tonight futzing around with various settings because I couldn't catch a DRMI lock, thinking I might have to re-tune demod phases etc. given that I've been mucking around the LSC rack a fair bit. But fortunately, the problem turned out to be that the correct feedforward filters were not enabled in the angular feedforward path (seems like these are not SDF monitored). The clue was that there was more angular motion of the POP spot on the CCD than I'm used to seeing, even in the PRMI carrier lock.

After fixing this, lock was acquired within seconds, and the locks are as robust as I remember them - I just broke one after ~20mins locked because I went into the lab. I've been putting off looking at this angular feedforward stuff and trying out some ideas rana suggested, seems like it can be really useful.

As part of the pre-lock work, I dither-aligned the arms, and then ran the PRCL/MICH dithers as well, following which I re-centered the ITM, PRM and BS Oplev spots onto their respective QPDs - they had not been centered for a couple of months.

I'm now going to try and measure some other couplings like PSL RIN->MICH, Marconi phase noise->MICH etc.

 

Attachment 1: DRMI_7Nov20178.png
  13412 | Tue Nov 7 17:45:05 2017 | gautam | Update | LSC | DRMI Noise Budget v3.1

Some days ago, I had tried to measure the SRCL->MICH and PRCL->MICH cross couplings using broadband noise injected between 120-180 Hz - a frequency band chosen arbitrarily; in hindsight, I could have done a more broadband test. I've spent some time including the infrastructure to calculate "white-noise TFs" in the noise budgeting code, where a transfer function is estimated by injecting a "broadband" excitation into a channel of interest and looking at the resulting response in MICH. I figured this would be useful for estimating other couplings as well, e.g. laser intensity noise, oscillator noise etc.

I estimate the transfer function of the coupling using the relation (MICH is the median ASD of the MICH error signal in the below expression, and similarly for PRCL)

|H_{cpl}| = \sqrt{\frac{|\mathrm{MICH}^{2}_{\mathrm{exc}} - \mathrm{MICH}^{2}_{\mathrm{quiet}}|}{|\mathrm{PRCL}^{2}_{\mathrm{exc}} - \mathrm{PRCL}^{2}_{\mathrm{quiet}}|}}
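
In code, this estimate is just a couple of lines (a sketch; the inputs are median ASDs of the error signals on a common frequency vector):

import numpy as np

def whitenoise_tf(mich_exc, mich_quiet, aux_exc, aux_quiet):
    # aux_* = e.g. PRCL ASDs with and without the broadband injection
    num = np.abs(mich_exc**2 - mich_quiet**2)
    den = np.abs(aux_exc**2 - aux_quiet**2)
    return np.sqrt(num / den)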

Attachments #1 and #2 show the spectra of the MICH, PRCL and SRCL signals during 'quiet' times and during the injection, while Attachment #3 shows the calculated coupling TFs using the above relation. These are significantly different (more than 10dB lower) than the numbers I reported in elog 13367, where the measurement was made using swept sine. As can be seen in the attached plots, the injected broadband excitation is visible above the nominal noise level, and I calculated the white noise TFs using ~5mins of data which should be plenty, so I'm not sure atm what to make of the answers from swept-sine and broadband injections being so different.

Attachment #4 shows the noise budget from the October 8 DRMI lock with the updated SRCL->MICH and PRCL->MICH couplings (assumed flat, extrapolated from Attachment #2 in the 120-180Hz band). If these updated coupling numbers are to be believed, then there is still some unexplained noise around 100Hz before we hit the PD dark noise. To be investigated. But if Attachment #4 is to be believed, it is not surprising that there isn't significant coherence between SRCL/PRCL and MICH around 100Hz.

Nov 8 1600: Updating NB to include estimated Oplev A2L.

Quote:
 

AUX coupling

This is the other find.

  • While chatting with Gabriele, he suggested measuring the SRCL->MICH and PRCL->MICH cross couplings.
  • I injected a signal in SRCL servo EXC channel, and adjusted amplitude till coherence in MICH_IN1 was good.
  • The actual TF measured was MICH_IN1 / SRCL_IN1 (so units of cts/ct).
  • By multiplying the in-lock PRCL and SRCL IN1 signals by these coupling coefficients (assumed flat in frequency for now; note that the measurement was only made between 100Hz and 1kHz), I get the trace labelled "AUX coupling" in Attachment #1 (this is the quadrature sum of the SRCL and PRCL couplings).
  • Also repeated for PRCL -> MICH coupling in the same way.
  • Measurements of these TFs and coherence are shown in Attachment #5 (again png screenshot because of DTT).
  • However, there is no significant coherence in MICH/SRCL or MICH/PRCL in this frequency range.

This seems to be limiting us from saturating the dark noise once the coil de-whitening is engaged. But lack of coherence means the mechanism is not re-injection of SRCL/PRCL sensing noise? Need to think about what this means / how we can mitigate it.


Attachment 1: SRCL_MICH_whitenoise_tf.pdf
Attachment 2: PRCL_MICH_whitenoise_tf.pdf
Attachment 3: MICH_aux.pdf
Attachment 4: C1NB_disp_40m_MICH_NB_2017-10-08.pdf
  13411 | Mon Nov 6 18:22:48 2017 | jamie | Summary | LSC | current procedure for running c1dnn code

This is the current procedure to start the c1dnn model:

$ ssh c1lsc
$ sudo systemctl start rts-epics@c1dnn
$ sudo systemctl start rts-awgtpman@c1dnn
$ sudo /usr/bin/cset proc -s rts-c1dnn --exec /opt/rtcds/caltech/c1/target/c1dnn/bin/c1dnn -- -m c1dnn
...

Then to shutdown:

...
Ctrl-C
$ sudo systemctl stop rts-awgtpman@c1dnn
$ sudo systemctl stop rts-epics@c1dnn

The daqd already knows about this model, so nothing should need to be done to the daqd to make the dnn channels available.

  13410 | Mon Nov 6 11:15:43 2017 | gautam | Update | CDS | slow machine bootfest + IFO re-alignment

Eurocrate key-turning reboots this morning for c1susaux, c1auxex and c1auxey. Usual precautions were taken to minimize the risk of ITMX getting stuck.

The IFO hasn't been aligned in ~1week, so I recovered arm and PRM alignment by locking individual arms and also PRMI on carrier. I will try recovering DRMI locking in the evening.

As far as MC1 glitching is concerned, there hasn't been any major one (i.e. one in which MC1 is kicked by such a large amount that the autolocker can't lock the IMC) for the past 2 months - but the MC WFS offsets are an indication of when smaller glitches have taken place, and there were large DC offsets on the MC WFS servo outputs, which I offloaded to the DC MC suspension sliders using the MC WFS relief script.

I'd like for the save-restore routine that runs when the slow machines reboot to set the watchdog state default to OFF (currently, after a key-turning reboot, the watchdogs are enabled by default), but I'm not really sure how this whole system works. The relevant files seem to be in the directory /cvs/cds/caltech/target/c1susaux. There is a script in there called startup.cmd, which seems to be the initialization script that runs when the slow machine is rebooted. But looking at this file, it is not clear to me where the default values are loaded from? There are a few "saverestore" files in this directory as well:

  • saverestore.sav
  • saverestore.savB
  • saverestore.sav.bu
  • saverestore.req

Are the "default" channel values loaded from one of these?

  13409 | Mon Nov 6 09:09:43 2017 | Steve | Update | VAC | setting up new TP2 turbo

Our new Agilent Technology TwisTorr 84FS AG rack controller ( English Manual pages 195-297 )  RS232/485, product number X3508-64001, sn IT1737C383

This controller, turbo, and its drypump need to be set up in our existing vacuum system. The intake valve of this turbo (V4) has to have a hardwired interlock that closes V4 when the rotation speed is less than 20% of the preset RPM.

The unit has an analog 10 Vdc output that is proportional to rotation speed. This can be used with a comparator to drive the interlock, or there may be a software option in the controller to close the valve if the turbo slows down by more than 20%.

The last upgrade of the 40m vacuum system (1/2/2000) describes our vacuum system: LIGO-T000054-00-R

There the LabView / Metrabus controls were replaced by a VME processor and an EPICS interface.

We do not have schematics of the hardware wiring.

We need help with this.

 

  13408 | Mon Oct 30 11:15:02 2017 | gautam | Update | CDS | slow machine bootfest + vacuum snafu

Eurocrate key-turning reboots this morning for c1psl and c1aux. c1auxex and c1auxey are also down, but I didn't bother keying them for now. The PSL FSS slow loop is now active again (its inactivity was what prompted me to check the status of the slow machines).

Note that the EPICS channels for the PSL shutter are hosted on c1aux. But it looks like the slow machine became unresponsive at some point during the weekend, so plotting the trend data for the PSL shutter channel would have you believe that the PSL shutter was open all the time. But the MC_REFL DC channel tells a different story - it suggests that the PSL shutter was closed at ~4AM on Sunday, presumably by the vacuum interlock system. I wonder:

  1. How does the vacuum interlock close the PSL shutter? Is there a non-EPICS channel path? Because if the slow machine happens to be unresponsive when the interlock wants to close the PSL shutter via EPICS commands, it will be unable to. The fact that the PSL shutter did close suggests that there is indeed another path.
  2. We should add some feature to the vacuum interlock (if it doesn't already exist) such that the PSL shutter isn't accidentally re-opened until any vacuum related issues are resolved. Steve was immediately able to identify that the problem was vacuum related, but I think I would have just re-opened the PSL shutter thinking that the issue was slow computer related.
  13407 | Mon Oct 30 10:09:41 2017 | Steve | Update | VAC | TP2 failed

 IFO pressure 1.2e-5 Torr at 9:30am

Quote:

Valve configuration: Vacuum normal

Note: TP2, running at 75 krpm, 0.25 A, 26 C, has a loud high-pitched sound today. Its foreline pressure is 78 mTorr. Room temp 20 C.

 

Atm. 1: This was the vacuum condition this morning.

               IFO P1 9.7 mTorr, V1 open, V4 in closed position, ~37 C warm Maglev at normal 560 Hz rotation speed, with foreline pressure 3.9 Torr because V4 closed 2 days ago when TP2 failed ..... see Atm. 3

               The error message at the TP2 controller was: fault overtemp.

I did the following to restore IFO pumping: stopped pumping the annuluses with TP3, and configured valves so that TP3 can be the forepump of the Maglev.

closed VM1 to protect the RGA, closed PSL shutter ..... see Gautam's entry

aux fan on to cool down Maglev-TP1, room temp 20 C,

aux drypump turned on and opened to the TP3 foreline to gain pumping speed,

closed PAN to isolate annulus pumping,

opened V7 to pump the Maglev foreline with TP3 running at 50 krpm; it took 10 minutes to reach P2 1 mTorr from 3.9 Torr

aux drypump closed off at P2  1 mTorr, TP3 foreline pressure 362 mTorr.......see Atm.2

As we are running now:

IFO pressure 7e-6 Torr on the Hornet cold cathode gauge at 15:50. We have no IFO CC1 logging now. The annuluses are in the 3-5 mTorr range and are not pumped.

TP3 as foreline pump of TP1 at 50 krpm, 0.24 A, 24 C; its drypump foreline pressure 324 mTorr

V4 valve cable is disconnected.

I need help with wiring up the logging of the Hornet cold cathode gauge.

 

 

 

Attachment 1: tp2failed.png
Attachment 2: ifo_1.0E-5_Torrit.png
Attachment 3: tp2failed2dago.png
Attachment 4: 4days.png
  13406 | Mon Oct 30 08:08:06 2017 | Steve | Update | safety | safety training

Udit Khandelwal received 40m-specific basic safety training on Friday, Oct. 27

  13405 | Sun Oct 29 16:40:17 2017 | rana | Summary | Computers | disk cleanup

Backed up all the wikis. They're in wiki_backups/*.tar.xz (because xz -9e gives better compression than gzip or bzip2)

Moved old user directories into /users/OLD/

  13404 | Sat Oct 28 00:36:26 2017 | gautam | Update | CDS | 40m files backup situation - ddrescue

None of the 3 dd backups I made were bootable - at boot, selecting the drive put me into grub rescue mode, which seemed to suggest that the /boot partition did not exist on the backed up disk, despite the fact that I was able to mount this partition on a booted computer. Perhaps for the same reason, but maybe not.

After going through various StackOverflow posts / blogs / other googling, I decided to try cloning the drives using ddrescue instead of dd.

This seems to have worked for nodus - I was able to boot to console on the machine called rosalba which was lying around under my desk. I deliberately did not have this machine connected to the martian network during the boot process for fear of some issues because of having multiple "nodus"-es on the network, so it complained a bit about starting the elog and other network related issues, but seems like we have a plug-and-play version of the nodus root filesystem now.

chiara and fb1 rootfs backups (made using ddrescue) are still not bootable - I'm working on it.

Nov 6 2017: I am now able to boot the chiara backup as well - although mysteriously, I cannot boot it from the machine called rosalba, but can boot it from ottavia. Anyways, seems like we have usable backups of the rootfs of nodus and chiara now. FB1 is still a no-go, working on it.
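
For reference, the ddrescue invocation is of this general form (the device names and map file here are placeholders, not the actual ones used):

sudo ddrescue -f /dev/sdX /dev/sdY rescue.map

The map file lets an interrupted clone be resumed, which is the main practical advantage over plain dd.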

Quote:

Looks to have worked this time around.

controls@fb1:~ 0$ sudo dd if=/dev/sda of=/dev/sdc bs=64K conv=noerror,sync
33554416+0 records in
33554416+0 records out
2199022206976 bytes (2.2 TB) copied, 55910.3 s, 39.3 MB/s
You have new mail in /var/mail/controls

I was able to mount all the partitions on the cloned disk. Will now try booting from this disk on the spare machine I am testing in the office area now. That'd be a "real" test of if this backup is useful in the event of a disk failure.

 

 

Attachment 1: 415E2F09-3962-432C-B901-DBCB5CE1F6B6.jpeg
Attachment 2: BFF8F8B5-1836-4188-BDF1-DDC0F5B45B41.jpeg
  13403 | Fri Oct 27 10:14:11 2017 | Steve | Update | VAC | RGA scan at day 372

Valve configuration: Vacuum normal

Note: TP2, running at 75 krpm, 0.25 A, 26 C, has a loud high-pitched sound today. Its foreline pressure is 78 mTorr. Room temp 20 C.

 

Attachment 1: RGA_scan_d372.png
  13402 | Fri Oct 27 09:34:20 2017 | Steve | Update | PEM | earthquakes

Lompoc 4.3M and Avalon 3.7M

 

Attachment 1: recentEQs.png
  13401 | Wed Oct 25 09:32:14 2017 | Gabriele | Summary | LSC | further testing of c1dnn integration; plugged in to DAQ
Quote:

 

 

We'll need to set the phase rotation of the demodulated RF PD signals (REFL11, REFL55, AS55, POP22) to match them with what the NN expects...

Here are the demodulation phases and rotation matrices tuned for the network. For the matrices, I am assuming that the input is [I, Q] and the output is [I,Q].

POP22
phi = 153 degrees
[[-0.894, 0.447],
 [-0.447, -0.894]]

REFL11
phi = 93 degrees
[[-0.058, 0.998],
 [-0.998, -0.058]]

REFL55
phi = -90 degrees
[[0.000, -1.000],
 [1.000, 0.000]]

AS55
phi = 7 degrees
[[0.993, 0.122],
 [-0.122, 0.993]]
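
The quoted matrices appear to follow the convention sketched below (a python sketch; small differences from the numbers above would just be rounding of phi):

import numpy as np

def demod_rotation(phi_deg):
    # 2x2 rotation acting on [I, Q], returning the rotated [I', Q']
    phi = np.deg2rad(phi_deg)
    return np.array([[ np.cos(phi), np.sin(phi)],
                     [-np.sin(phi), np.cos(phi)]])

# e.g. demod_rotation(-90) -> [[0, -1], [1, 0]], matching REFL55 above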

  13400 | Tue Oct 24 20:14:21 2017 | jamie | Summary | LSC | further testing of c1dnn integration; plugged in to DAQ

In order to try to isolate CPU6 for the c1dnn neural network reconstruction model, I set CPUAffinity in /etc/systemd/system.conf to "0" for the front end machines.  This sets the CPU affinity of the init process, so that init and all child processes run on CPU0.  Unfortunately, this does not affect the kernel threads.  So after reboot all user-space processes were on CPU0, but the kernel threads were still spread around.  Will continue trying to isolate the kernel as well...

In any event, this amount of isolation was still good enough to get the c1dnn user space model running fairly stably.  It's been running for the last hour without issue.

I added the c1dnn channel and testpoint files to the daqd master file, and restarted daqd_dc on fb1, so now the c1dnn channels and test points are available through dataviewer etc.  We were then able to observe the reconstructed signals.

We'll need to set the phase rotation of the demodulated RF PD signals (REFL11, REFL55, AS55, POP22) to match them with what the NN expects...

  13399 | Tue Oct 24 16:43:11 2017 | Steve | Update | CDS | slow machine bootfest

[ Gautam , Steve ]

c1susaux & c1iscaux were rebooted manually.

Quote:

Had to reboot c1psl, c1susaux, c1auxex, c1auxey and c1iscaux today. PMC has been relocked. ITMX didn't get stuck. According to this thread, there have been two instances in the last 10 days in which c1psl and c1susaux have failed. Since we seem to be doing this often lately, I've made a little script that uses the netcat utility to check which slow machines respond to telnet, it is located at /opt/rtcds/caltech/c1/scripts/cds/testSlowMachines.bash.

The script can be executed by ./testSlowMachines.bash.

 

  13398 | Tue Oct 24 16:22:53 2017 | gautam | Update | CDS | Toy DARM model setup in c1tst

[alex, gautam]

Alex is going to have an undergrad work on a calibration optimization project on the 40m RTCDS system. For this purpose, we wanted to set up a "simulated DARM loop". Today, Alex and I set this up. I figured we could use the c1tst model for this purpose. We basically copied the topology from Figure 2 of the h(t) paper. Attached are screenshots of the MEDM screens of the system we set up, and the simulink block diagram - the main screen can be accessed from the "SIM PLANT" tab in the sitemap.

It remains to set up the appropriate filters in the filter banks, and an EPICS channel monitor for the single excitation testpoint in the model. We also did not set up any DQ channels for the time being, as it is not even clear to me which channels need to be DQ-ed.

Attachment 1: TOY_DARM.png
Attachment 2: TOY_DARM_SIMULINK.png
  13397 | Mon Oct 23 09:17:41 2017 | Steve | Update | PEM | ants alert

Do not leave organic trash or food boxes in the 40m to attract ants!

 

Attachment 1: ants.jpg
Attachment 2: ants2.jpg
  13396 | Fri Oct 20 16:30:17 2017 | gautam | Update | CDS | FB1 installed on shelves

[steve, jamie, gautam]

The machine that now serves as our frame builder, FB1, was sitting on top of megatron. I decided that this wasn't ideal, and asked Steve to get some alternative mounting solution. Today, he procured some shelves to put FB1 on. Jamie suggested looking for the slider rail that came with the machine and using that instead, as it would allow us to slide FB1 out of the rack as we do megatron and the old FB. But as luck would have it, the distance between the rack's vertical posts is 26 inches, while the rail is 27 inches. So we had to accept the less ideal solution of putting FB1 on two shelves, with no sliding option. Photo to be uploaded shortly.

For this work, I had to shutdown FB1 for about 1 hour between 3pm and 4pm. It seems to have come back up fine now.

  13395 | Thu Oct 19 15:42:03 2017 | jamie | Summary | LSC | MICH/PRCL reconstruction neural network running on c1lsc

Gabriele's PRCL/MICH reconstruction neural network is now running on c1lsc.  Summary:

  • front-end model is called c1dnn, and is running as an experimental user-space process
  • c1dnn is getting most of its needed inputs from existing SHMEM IPC outputs from c1lsc
  • none of the output signals from the network are being sent anywhere yet (grounded)
  • c1dnn has not been integrated in any way into the DAQ etc.  It is being run manually by hand, and will be completely shut down after this test

Simple MEDM screen I made to monitor the input/output signals:

The RTS process seems to run fine, but there is quite a bit of jitter in the CPU_METER, at the 50% level.

It's not running over the limit, but it is jumping around more than I think it should be.  Will look into that...

cpuset for cpu isolation for user-space model

The c1dnn model is running on CPU6 on c1lsc.  CPU6 was isolated from the rest of the system using cpuset.  The "cset" utility was used to create a "system" CPU set that was assigned to CPU0, and the kernel was instructed to move all running processes to that set:

controls@c1lsc:~ 2$ sudo cset set
cset:
         Name       CPUs-X    MEMs-X Tasks Subs Path
 ------------ ---------- - ------- - ----- ---- ----------
         root        0,6 y       0 y   343    0 /
controls@c1lsc:~ 0$ sudo cset set -c 0 -s system --cpu_exclusive
cset: --> created cpuset "system"
controls@c1lsc:~ 0$ sudo cset set
cset:
         Name       CPUs-X    MEMs-X Tasks Subs Path
 ------------ ---------- - ------- - ----- ---- ----------
         root        0,6 y       0 y   342    1 /
       system          0 y       0 n     0    0 /system
controls@c1lsc:~ 0$ sudo cset proc --move -f root -t system -k
cset: moving all tasks from root to /system
cset: moving 292 userspace tasks to /system
cset: moving 0 kernel threads to: /system
cset: --> not moving 50 threads (not unbound, use --force)
[==================================================]%
cset: done
controls@c1lsc:~ 0$ sudo cset set
cset:
         Name       CPUs-X    MEMs-X Tasks Subs Path
 ------------ ---------- - ------- - ----- ---- ----------
         root        0,6 y       0 y    50    1 /
       system          0 y       0 n   292    0 /system
controls@c1lsc:~ 0$ sudo cset proc --move -f root -t system -k --force
cset: moving all tasks from root to /system
cset: moving 50 kernel threads to: /system
[==================================================]%
cset: **> 29 tasks are not movable, impossible to move
cset: done
controls@c1lsc:~ 0$ sudo cset set
cset:
         Name       CPUs-X    MEMs-X Tasks Subs Path
 ------------ ---------- - ------- - ----- ---- ----------
         root        0,6 y       0 y    29    1 /
       system          0 y       0 n   313    0 /system
controls@c1lsc:~ 0$

I then created a set for the RTS process ("rts-c1dnn") on CPU6, and executed the c1dnn model in that set:

controls@c1lsc:~ 0$ sudo cset set -c 6 -s rts-c1dnn --cpu_exclusive
cset: --> created cpuset "rts-c1dnn"
controls@c1lsc:~ 0$ sudo cset set
cset:
         Name       CPUs-X    MEMs-X Tasks Subs Path
 ------------ ---------- - ------- - ----- ---- ----------
         root        0,6 y       0 y    24    2 /
    rts-c1dnn          6 y       0 n     0    0 /rts-c1dnn
       system          0 y       0 n   340    0 /system
controls@c1lsc:~ 0$ sudo cset proc -s rts-c1dnn --exec /opt/rtcds/caltech/c1/target/c1dnn/bin/c1dnn -- -m c1dnn
cset: --> last message, executed args into cpuset "/rts-c1dnn", new pid is: 27572
sysname = c1dnn
....

When done I just hit Ctrl-C.

I left the cpusets as they are, with all system processes in the "system" set.  This should not pose any problems, since it's the identical configuration to what we would have if a normal kernel-level model were running on CPU6.

The c1dnn process and its EPICS sequencer were shut down after this test.

  13394 | Wed Oct 18 23:11:53 2017 | gautam | Update | CDS | FEs unresponsive

This happened again just now - it was at roughly this time that it happened last night as well.

There was certainly an EPICS freeze of the kind we were used to seeing prior to replacing the martian wireless router sometime in late 2015 (or early 2016?). I was trying to run the dither alignment servos on the Y-arm at this time, and all the StripTool traces flatlined.

I took the opportunity to try accessing testpoints from the iscey ADCs - specifically C1:SUS-TRY_OUT, and it seemed to work just fine. However, I couldn't ssh into c1iscey.

Looking at dmesg once I was eventually able to ssh in (~2 minutes of deadtime tonight; I feel like it was longer yesterday but can't quantify), I see the following - not sure if there are any clues in here, or whether this is even the correct log to check. But there are many instances of the nfs-server-related message in the log. Note that the system timestamp corresponds to when this freeze happened.

[5461308.784018] nfs: server 192.168.113.201 not responding, still trying
[5461412.936284] nfs: server 192.168.113.201 OK
[5461412.937130] systemd[1]: Starting Journal Service...
[5461412.947947] systemd-journald[20281]: Received SIGTERM from PID 1 (systemd).
[5461412.996063] systemd[1]: Unit systemd-journald.service entered failed state.
[5461413.002627] systemd[1]: systemd-journald.service has no holdoff time, scheduling restart.
[5461413.008983] systemd[1]: Stopping Journal Service...
[5461413.014664] systemd[1]: Starting Journal Service...
[5461413.044262] systemd[1]: Started Journal Service.
[5461413.694838] systemd-journald[400]: Received request to flush runtime journal from PID 1

 
