ID | Date | Author | Type | Category | Subject
13413
|
Tue Nov 7 22:56:21 2017 |
gautam | Update | LSC | DRMI locking recovered | I hadn't re-locked the DRMI after the work on the AS55 demod board. Tonight, I was able to recover the DRMI locking with the old settings.
The feature in the PRCL spectrum (uncalibrated, y-axis is cts/rtHz) at ~1.6kHz is mysterious; I wonder what that's about.
Wasted some time tonight futzing around with various settings because I couldn't catch a DRMI lock, thinking I may have to re-tune demod phases etc., given that I've been mucking around at the LSC rack a fair bit. But fortunately, the problem turned out to be that the correct feedforward filters were not enabled in the angular feedforward path (seems like these are not SDF monitored). The clue was that there was more angular motion of the POP spot on the CCD than I'm used to seeing, even in the PRMI carrier lock.
After fixing this, lock was acquired within seconds, and the locks are as robust as I remember them - I just broke one after ~20mins locked because I went into the lab. I've been putting off looking at this angular feedforward stuff and trying out some ideas rana suggested, seems like it can be really useful.
As part of the pre-lock work, I dither-aligned the arms, and then ran the PRCL/MICH dithers as well, following which I re-centered the ITM, PRM and BS Oplev spots onto their respective QPDs - they have not been centered for a couple of months now.
I'm now going to try and measure some other couplings like PSL RIN->MICH, Marconi phase noise->MICH etc.
|
Attachment 1: DRMI_7Nov20178.png
|
|
13414
|
Wed Nov 8 00:28:16 2017 |
gautam | Update | LSC | Laser intensity coupling measurement attempt | I tried measuring the coupling of PSL intensity noise by driving some broadband noise bandpassed between 80-300Hz using the spare DAC channel at 1Y3 that I had set up for this purpose a couple of weeks ago (via a battery powered SR560 buffer set to low-noise operation mode because I'm not sure if the DAC output can drive a ~20m long cable). I was monitoring the MC2 TRANS QPD Sum channel spectrum while driving this broadband noise - the "nominal" spectrum isn't very clean, there are a bunch of notches from a 60Hz comb and a forest of peaks over a broad hump from 300Hz-1kHz (see Attachment #1).
I was able to increase the drive to the AOM till the RIN in the band being driven increased by ~10x, and saw no change in the MICH error signal spectrum [see Attachment #1] - during this measurement, the RFPD whitening was turned on for REFL11, REFL55 and AS55, and the ITM coil drivers were de-whitened, so as to get a MICH spectrum that is about as "low-noise" as I've gotten it so far.
I tried increasing the drive further, but at this point, started seeing frequent MC locklosses - I'm not convinced this is entirely correlated to my AOM activities, so I will try some more, but at the very least, this places an upper bound on the coupling from intensity noise to MICH. |
Attachment 1: PSL_RIN.pdf
|
|
13415
|
Wed Nov 8 09:37:45 2017 |
rana | Update | LSC | DRMI Noise Budget v3.1 | why no oplev trace in the NB?
Quote: |
Attachment #4 shows the noise budget from the October 8 DRMI lock with the updated SRCL->MICH and PRCL->MICH couplings (assumed flat, extrapolated from Attachment #2 in the 120-180Hz band). If these updated coupling numbers are to be believed, then there is still some unexplained noise around 100Hz before we hit the PD dark noise. To be investigated. But if Attachment #4 is to be believed, it is not surprising that there isn't significant coherence between SRCL/PRCL and MICH around 100Hz.
|
also, this method would work better if we had a median averaging python PSD instead of mean averaging as in Welch's method. |
13416
|
Wed Nov 8 09:59:12 2017 |
gautam | Update | LSC | DRMI Noise Budget v3.1 | The Oplev trace is missing for now, as I have not re-measured the A2L coupling since modifying the Oplev loop shape (specifically the low pass filter and overall gain) to allow engaging the coil de-whitening.
The averaging for the white noise TFs plotted is computed using median averaging - I have used a python transcription of Sujan's matlab code. I use scipy.signal.spectrogram to compute the fft bins (I've set some defaults like 8s fft length and a tukey window), and then take the median average using np.median(). I've also incorporated the ln(2) correction factor.
It seems like GwPy has some in-built capability to compute median (and indeed various other percentile) averages, but since we aren't using it, I just coded this up.
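For reference, a minimal sketch of such a median-averaged PSD (a rough stand-in for the actual NB code, assuming a Tukey window and the 8 s segment length mentioned above):

import numpy as np
from scipy import signal

def median_psd(x, fs, seg_len_s=8.0):
    # Welch-style segmenting, but take the median over segments instead of the mean.
    # For Gaussian noise the median periodogram is biased low by ln(2), so divide it out.
    nperseg = int(seg_len_s * fs)
    f, t, Sxx = signal.spectrogram(x, fs=fs, window=('tukey', 0.25),
                                   nperseg=nperseg, scaling='density')
    return f, np.median(Sxx, axis=-1) / np.log(2)

# usage: f, Pxx = median_psd(timeseries, fs=2048.0); asd = np.sqrt(Pxx)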
Quote: |
why no oplev trace in the NB ?
also, this method would work better if we had a median averaging python PSD instead of mean averaging as in Welch's method.
|
|
13417
|
Wed Nov 8 12:19:55 2017 |
gautam | Update | SUS | coil driver series resistance | We've been talking about increasing the series resistance for the coil driver path for the test masses. One consequence of this will be that we have reduced actuation range.
This may not be a big deal since, for almost all of the LSC loops, we currently operate with a limiter on the output of the control filter bank. The value of the limit varies, but to get an idea of what sort of "threshold" velocities we are looking at, I calculated this for our arm cavities (finesse ~400). The calculation is rather simplistic (see Attachment #1), but I think we can still draw some useful conclusions from it:
- In Attachment #1, I've indicated with dashed vertical lines some series resistances that are either currently in use, or are values we are considering.
- The table below tabulates the fraction of passages through a resonance we will be able to catch, assuming velocities sampled from a Gaussian with width ~3um/s, which a recent ALS study suggests describes our SOS optic velocity distribution pretty well (with local damping on). See the sketch at the end of this entry.
- I've assumed that the maximum DAC output voltage available for length control is 8V.
- Presumably, this Gaussian velocity distribution will be modified by the LSC actuation exerting impulses on the optic during failed attempts to catch lock. I don't have a good model right now for what this modification will look like, but I have some ideas.
- It would be interesting to compare the computed success rates below with what is actually observed.
- The implications of different series resistances on DAC noise are computed here (although the non-linear nature of the DAC noise has not been taken into account).
Series resistance [ohms] | Predicted Success Rate [%] | Optics with this resistance
100                      | >90                        | BS, PRM, SRM
400                      | 62                         | ITMX, ITMY, ETMX, ETMY
1000                     | 45                         | -
2000                     | 30                         | -
So, from this rough calculation, it seems like we would lose ~25% efficiency in locking the arm cavity if we up the series resistance from 400 ohm to 1 kohm. Doesn't seem like a big deal, because currently, the single arm locking |
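A minimal sketch of the success-rate estimate (the threshold velocities below are placeholders chosen only to illustrate the calculation; the real ones come from the actuator calculation in Attachment #1):

import numpy as np
from scipy.special import erf

sigma_v = 3e-6   # m/s, width of the assumed Gaussian velocity distribution

def catch_fraction(v_threshold):
    # fraction of resonance crossings with |v| below the threshold velocity
    return erf(v_threshold / (np.sqrt(2) * sigma_v))

# placeholder threshold velocities for each series resistance [ohm]
for R, v_th in [(100, 5e-6), (400, 2.6e-6), (1000, 1.8e-6), (2000, 1.3e-6)]:
    print(f"R = {R:5d} ohm : catch fraction ~ {100 * catch_fraction(v_th):.0f} %")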
Attachment 1: vthresh.pdf
|
|
13418
|
Wed Nov 8 14:28:35 2017 |
gautam | Update | General | MC1 glitches return | There hasn't been a big glitch that misaligns MC1 so much that the autolocker can't lock for at least 3 months, but it seems there was one about an hour ago.
I disabled autolocker and feedback to the PSL, manually aligned MC1 till the MC_REFL spot looked right on the CCD to me, and then re-engaged the autolocker, all seems to have gone smoothly.
|
Attachment 1: MC1_glitchy.png
|
|
Attachment 2: 6AFDA67D-79B1-469C-A58A-9EC5F8F01D32.jpeg
|
|
13419
|
Wed Nov 8 16:39:24 2017 |
Kira | Update | PEM | ADC noise measurement | Gautam and I measured the noise of the ADC for channels 17, 18, and 19. We plan to use those channels for measuring the noise of the temperature sensors, and we need to figure out whether or not we will need whitening and, if so, how much. The figure below shows the actual measurements (red, green and blue lines), and a rough fit. I used the same fit function as in Gautam's elog here (with units of nV/sqrt(Hz)) to fit our results, with parameters a = 1, b = 1e6, c = 2000. Since we are interested in measuring at lower frequencies, we must whiten the signal from the temperature sensors enough to have the ADC noise be negligible.
We want to be able to measure the temperature to a given accuracy at 1Hz, which translates to a corresponding current accuracy from the AD590 (its nominal output is 1uA/K). Since we read this current out across a 10K resistor and V=IR, that sets the voltage accuracy we need to resolve. We would need whitening at lower frequencies to see such small fluctuations.
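As a rough illustration of the requirement (the 1 mK target below is an assumed number for the sketch, not a measured spec; the AD590 scale factor is its nominal 1 uA/K):

dT = 1e-3           # K, assumed temperature resolution target at 1 Hz
ad590_scale = 1e-6  # A/K, nominal AD590 output
R = 10e3            # ohm, readout resistor

dI = ad590_scale * dT   # -> 1 nA
dV = dI * R             # -> 10 uV across the 10k resistor
print(f"required voltage resolution ~ {dV * 1e6:.0f} uV")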
To do the measurements, we put a BNC end cap on the channels we wanted to measure, then took measurements from 0-900Hz with a bandwidth of 0.001Hz. This setup is shown in the last two attachments. We used the ADC in 1X7. |
Attachment 1: ADC-fit.png
|
|
Attachment 2: IMG_20171108_162532.jpg
|
|
Attachment 3: IMG_20171108_162556.jpg
|
|
13420
|
Wed Nov 8 17:04:21 2017 |
gautam | Update | CDS | gds-2.17.15 [not] installed | I wanted to use the foton.py utility for my NB tool, and I remember Chris telling me that it was shipping as standard with the newer versions of gds. It wasn't available in the versions of gds available on our workstations - the default version is 2.15.1. So I downloaded gds-2.17.15 from http://software.ligo.org/lscsoft/source/, and installed it to /ligo/apps/linux-x86_64/gds-2.17.15/gds-2.17.15. In it, there is a file at GUI/foton/foton.py.in - this is the one I needed.
Turns out this was more complicated than I expected. Building the newer version of gds throws up a bunch of compilation errors. Chris had pointed me to some pre-built binaries for ubuntu12 on the llo cds wiki, but those versions of gds do not have foton.py. I am dropping this for now. |
13422
|
Thu Nov 9 15:33:08 2017 |
johannes | Update | CDS | revisiting Acromag |
Quote: |
We probably want to get a dedicated machine that will handle the EPICS channel serving for the Acromag system
|
http://www.supermicro.com/products/system/1U/5015/SYS-5015A-H.cfm?typ=H
This is the machine that Larry suggested when I asked him for his opinion on a low-workload rack-mount unit. It only has an Atom processor, but I don't think it needs anything particularly powerful under the hood. He said that he will likely be able to let us borrow one of his for a couple of days to see if it's up to the task. The dual ethernet is a nice touch, maybe we can keep the communication between the server and the DAQ units on their separate local network. |
13423
|
Fri Nov 10 08:52:21 2017 |
Steve | Update | VAC | TP3 drypump replaced | PSL shutter closed at 6e-6 Torr-IT.
The foreline pressure of the drypump is 850 mTorr at 8,446 hrs of seal life.
V1 will be closed for ~20 minutes for drypump replacement.
9:30am dry pump replaced, PSL shutter opened at 7.7E-6 Torr-IT.
Valve configuration: vacuum normal, as TP3 is the forepump of the Maglev & the annuli are not pumped.
Quote: |
TP3 drypump replaced at 655 mTorr, no load, tp3 0.3A
This seal lasted only for 33 days at 123,840 hrs
The replacement is performing well: TP3 foreline pressure is 55 mTorr, no load, tp3 0.15A at 15 min [ 13.1 mTorr at d5 ]
Valve configuration: Vacuum Normal, ITcc 8.5E-6 Torr
Quote: |
Dry pump of TP3 replaced after 9.5 months of operation.[ 45 mTorr d3 ]
The annuli are pumped.
Valve configuration: vac normal, IFO pressure 4.5E-5 Torr [1.6E-5 Torr d3 ] on new ITcc gauge, RGA is not installed yet.
Note how fast the pressure is dropping when the vent is short.
Quote: |
IFO pressure 1.7E-4 Torr on the new, not-yet-logged cold cathode gauge. P1 <7E-4 Torr
Valve configuration: vac normal with the annuli closed off.
TP3 was turned off with a failing drypump. It will be replaced tomorrow.
|
All time stamps are blank on the MEDM screens.
|
|
|
13424
|
Fri Nov 10 13:46:26 2017 |
Steve | Update | PEM | M3.1 local earthquake | BOOM SHAKA-LAKA |
Attachment 1: 3.1M_local_EQ.png
|
|
13426
|
Tue Nov 14 08:54:37 2017 |
Steve | Update | IOO | MC1 glitching | |
Attachment 1: MC1_glitching.png
|
|
13428
|
Wed Nov 15 01:37:07 2017 |
gautam | Update | LSC | DRMI low freq. noise improved | Pianosa just crashed and ate my elog, along with all the DTT/Foton windows I had open, so more details tomorrow... This workstation has been crashing ~once a month for the last 6 months.
Summary:
Below ~100Hz, the hypothesis is that the BS oplev A2L contribution dominates the MICH displacement noise. I wanted to see if I could mitigate this by modifying the BS Oplev loop shape.
Details:
- Swept sine TF measurements suggested that the BS A2L contribution is between 10-100x that of the ITM A2L
- The Oplev loop shape for BS is different from the ITMs' - specifically, there is a Res-gain centered at ~3.3 Hz. The low frequency ~0.6Hz boost filter present in the ITM Oplev loops was disengaged for the BS Oplevs.
- I turned off the BS OL loops and looked at error signal spectra - didn't seem that different from ITM OL error signals, so I decided to try turning off the res-gain and engage the 0.6Hz boost.
- This change also gave me much more phase at ~6Hz, which is roughly the UGF of the loop. So I put in another roll-off low pass filter with corner frequency 25Hz.
- This worked okay - RMS went down by ~5x (which is even better than the original config), and although the performance between ~3 and 10Hz is slightly worse than with the old combination, this region isn't the dominant contribution to the RMS. PM at the upper UGF is ~30 degrees in the new configuration.
- I wanted to give DRMI locking a shot with the new OL loop - expectations were that the noise between 30-100Hz would improve, and perhaps the engaging of de-whitening filters on BS would also be easier given the more severe roll-off at high-frequencies.
- Attachment #1 shows the NB for tonight's lock. All MICH optics had their coil drivers de-whitened, and all the LSC PDs were whitened for this measurement.
- I've edited the NB code to make the A2L calculation more straightforward: I now just make the coupling 1/f^2 and give the function a measured overall gain, so that this curve can now be easily added to all future NBs (a sketch is below). I've also transcribed the matlab function used for parsing Foton files into python; this allows me to convert the DQ-ed OL error signals to control signals. Will update git with changes.
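A minimal sketch of that simplified A2L estimate (the gains and spectra below are placeholders; the real inputs are the DQ-ed OL channels and the eye-fit gains from Attachment #2):

import numpy as np

def a2l_asd(f, oplev_asd, gain):
    # A2L displacement ASD: measured overall gain times a 1/f^2 coupling shape
    return (gain / f**2) * oplev_asd

f = np.logspace(0, 3, 500)
dummy_asds = [np.full_like(f, 1e-2) for _ in range(6)]  # urad/rtHz, placeholder
dummy_gains = [1e-9] * 6                                # placeholder couplings
total = np.sqrt(sum(a2l_asd(f, a, g)**2 for a, g in zip(dummy_asds, dummy_gains)))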
Remarks:
- MICH noise has improved by ~2x between 40-80Hz.
- Not sure what to make of the broad hump around 60Hz - scatter shelf?
- There is still unexplained noise below 100Hz, the A2L estimate is considerably lower than the measured noise.
- We are still more than an order of magnitude away from the estimated seismic noise floor at low frequencies (but getting closer!).
I've been banging my head against optimal loop shaping, with the OL loop as a test-case, without much success - as was the case with coating PSO, the magic is in smartly defining the cost function, but right now, my optimizer seems to be pushing most of the roots I'm making available for it to place to high frequencies. I've got a term in there that is supposed to guard against this, need to tweak further...
Attachment #2: Eye-fits of measured OL A2L coupling TFs to a 1/f^2 shape, with the gain being the parameter "fitted". I used these values, and the DQ-ed OL error signal in lock, to estimate the red curve labelled "A2L" in Attachment #1. The dots are the measurement, and the lines are the 1/f^2 estimates. |
Attachment 1: C1NB_disp_40m_MICH_NB_2017-11-15.pdf
|
|
Attachment 2: OL_A2L_couplings.pdf
|
|
13429
|
Thu Nov 16 00:14:47 2017 |
Udit Khandelwal | Update | SUS | SOS Sapphire Prism design | Summary:
- SOS solidworks model is nearly complete
- Having trouble with the design of the sensor/actuator head assembly and the lower clamps
- After Gautam's suggestion, installed Abaqus on computer. Teaching it to myself to eventually do FEM analysis and find resonant frequency of the system
- Goal is to replicate frequency listed in the SOS documents to confirm accuracy of computer model, then replace guide rods with sapphire prisms and change geometry to get same results
Questions:
- How accurately do the details (like fillets, chamfers, placement of the little vent holes) and the materials of the different SOS parts need to be represented in the model?
- If I could get pictures of the lower mirror clamp (document D960008), it would be helpful in making solidworks model. Document is unclear. Same for sensor/actuator head assembly.
|
13430
|
Thu Nov 16 00:45:39 2017 |
gautam | Update | SUS | SOS Sapphire Prism design |
Quote: |
- If I could get pictures of the lower mirror clamp (document D960008), it would be helpful in making solidworks model. Document is unclear. Same for sensor/actuator head assembly.
|
If you go through this thread of elogs, there are lots of pictures of the SOS assembly with the optic in it from the vent last year. I think there are many different perspectives, close ups of the standoffs, and of the OSEMs in their holders in that thread.
This elog has a measurement of the pendulum resonance frequencies with ruby standoffs - although the ruby standoff used was cylindrical, and the newer generation will be in the shape of a prism. There is also a link in there to a document that tells you how to calculate the suspension resonance frequencies using analytic equations. |
13431
|
Thu Nov 16 00:53:26 2017 |
gautam | Update | LSC | DRMI noise sub-budgets | I've incorporated the functionality to generate sub-budgets for the various grouped traces in the NBs (e.g. the "A2L" trace is really the quadrature sum of the A2L coupling from 6 different angular servos).
For now, I'm only doing this for the A2L coupling, and the AUX length loop coupling groups. But I've set up the machinery in such a way that doing so for more groups is easy.
Here are the sub-budget plots for last night's lock - for the OL plot, there are only 3 lines (instead of 6) because I group the PIT and YAW contributions in the function that pulls the data from the nds server, and don't ever store these data series individually. This should be rectified, because part of the point of making these sub-budgets is to see if there is a particularly bad offender in a given group.
I'll do a quick OL loop noise budget for the ITM loops tomorrow.
I also wonder whether it is necessary to measure the Oplev A2L coupling from lock to lock. This coupling will be dependent on the spot position on the optic, and though I run the dither alignment servos to minimize REFL_DC and AS_DC, I don't have any intuition for how the offset from the center of the optic varies from lock to lock, and whether it is at all significant. I've been using a number from a measurement made in May. Need to do some algebra... |
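The grouping itself is just a quadrature sum of the individual traces; a minimal sketch (names are illustrative, not the actual NB code):

import numpy as np

def quadrature_sum(asds):
    # Combine several ASDs (sharing one frequency vector) into a grouped trace,
    # e.g. the "A2L" curve built from the 6 angular servos.
    asds = np.asarray(asds)          # shape (n_traces, n_freq)
    return np.sqrt(np.sum(asds**2, axis=0))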
Attachment 1: C1NB_a2l_40m_MICH_NB_2017-11-15.pdf
|
|
Attachment 2: C1NB_aux_40m_MICH_NB_2017-11-15.pdf
|
|
13432
|
Thu Nov 16 13:57:01 2017 |
gautam | Update | Optical Levers | Optical lever noise | I disabled the OL loops for ITMX, ITMY and BS at GPStime 1194897655 to come up with an Oplev noise budget. OL spots were reasonably well centered - by that, I mean that the PIT/YAW error signals were less than 20urad in absolute value.
Attachment #1 is a first look at the DTT spectra - I wonder why the BS Oplev signals don't agree with the ITMs at ~1Hz? Perhaps the calibration factor is off? The sensing noise is not really flat above 100Hz - I wonder what all those peaky features are. Recall that the ITM OLs have analog whitening filters before the ADC, but the BS doesn't...
In Attachment #2, I show a comparison of the error signal spectra for ITMY and SRM - they're on the same stack, but the SRM channels don't have analog whitening before the ADC.
For some reason, DTT won't let me save plots with latex in the axes labels... |
Attachment 1: VertexOLnoise.pdf
|
|
Attachment 2: ITMYvsSRM.pdf
|
|
13433
|
Thu Nov 16 15:43:01 2017 |
rana | Update | Optical Levers | Optical lever noise | I bet the calibration is out of date; probably we replaced the OL laser for the BS and didn't fix the cal numbers. You can use the fringe contrast of the simple Michelson to calibrate the OLs for the ITMs and BS. |
13436
|
Tue Nov 21 11:21:26 2017 |
gautam | Update | CDS | RFM network down | I noticed yesterday evening that I wasn't able to engage the single arm locking servos - turned out that they weren't getting triggered, which in turn pointed me to the fact that the arm transmission channels seemed dead. Poking around a little, I found that there was a red light on the CDS overview screen for c1rfm.
- The error seems to be in the receiving model only, i.e. c1rfm, all the sending models (e.g. c1scx) don't report any errors, at least on the CDS overview screen.
- Judging by dataviewer trending of the c1rfm status word, seems like this happened on Sunday morning, around 11am.
- I tried restarting both sender and receiver models, but error persists.
- I got no useful information from the dmesg logs of either c1sus (which runs c1rfm), or c1iscex (which runs c1scx).
- There are no physical red lights in the expansion chassis that I could see - in the past, when we have had some timing errors, this would be a signature.
Not sure how to debug further...
* Fix seems to be to restart the sender RFM models (c1scx, c1scy, c1asx, c1asy). |
Attachment 1: RFMerrors.png
|
|
13437
|
Tue Nov 21 11:37:29 2017 |
gautam | Update | Optical Levers | BS OL calibration updated | I calibrated the BS oplev PIT and YAW error signals as follows:
- Locked X-arm, ran dither alignment servos to maximize transmission.
- Applied an offset to the ASC PIT/YAW filter banks. Set the ramp time to something long, I used 60 seconds.
- Monitored the X arm transmission while the offset was being ramped, and also the oplev error signal with its current calibration factor.
- Fit the data, oplev error signal vs arm transmission, with a Gaussian, and extracted the scaling factor (i.e. the number by which the current Oplev error signals have to be multiplied for the error signal to correspond to urad of angular misalignment, as per the overlap of the beam axis with the cavity axis); a fitting sketch follows the numbers below.
- Fits are shown in Attachment #1 and #2.
- I haven't done any error analysis yet, but the open loop OL spectra for the BS now line up better with the other optics, see Attachment #3 (although their calibration factors may need to be updated as well...). Need to double check against OSEM readout during the sweep.
- New numbers have been SDF-ed.
The numbers are:
BS Pitch 15 / 130 (old/new) urad/counts
BS Yaw 14 / 170 (old/new) urad/counts
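A minimal sketch of the fit described above (assuming scipy.optimize.curve_fit and synthetic stand-in data; the real inputs are the ramped Oplev error signal and the arm transmission):

import numpy as np
from scipy.optimize import curve_fit

def gaussian(x, amp, x0, w):
    # arm transmission vs Oplev error signal, modelled as a Gaussian
    return amp * np.exp(-0.5 * ((x - x0) / w)**2)

ol_err = np.linspace(-60, 60, 200)   # Oplev error signal, old calibration [urad]
trans = gaussian(ol_err, 1.0, 2.0, 25.0) + 0.02 * np.random.randn(ol_err.size)

(amp, x0, w), _ = curve_fit(gaussian, ol_err, trans, p0=[1.0, 0.0, 30.0])
# comparing the fitted width w with the known angular width of the cavity-mode
# overlap gives the factor by which the old calibration must be rescaled
print(f"fitted width = {w:.1f} (in the old Oplev units)")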
Quote: |
I bet the calibration is out of date; probably we replaced the OL laser for the BS and didn't fix the cal numbers. You can use the fringe contrast of the simple Michelson to calibrate the OLs for the ITMs and BS.
|
|
Attachment 1: OL_calib_BS_PERROR.pdf
|
|
Attachment 2: OL_calib_BS_YERROR.pdf
|
|
Attachment 3: VertexOLnoise_updated.pdf
|
|
13438
|
Tue Nov 21 16:00:05 2017 |
Kira | Update | PEM | seismometer can testing | I performed a test with the can last week with one layer of insulation to see how well it worked. First, I soldered two heaters together in series so that the total resistance was 48.6 ohms. I placed the heaters on the sides of the can and secured them. Then I wrapped the sides and top of the can in insulation and sealed the edges with tape, only leaving the handles open. I didn't insulate the bottom. I connected the two ends of the heater directly into the DC source and drove the current as high as possible (around 0.6A). I let the can heat up to a final value of 37.5C, turned off the current and manually measured the temperature, recording the time every half degree. I then plotted the results, along with a fit. The intersection of the red line with the data marks the time constant and the temperature at which we get the time constant. This came out to be about 1.6 hours, much longer than expected considering that only one layer instead of four was used. With only one layer, we would expect the time constant to be about 13 min, while for 4 layers it should be 53 min (the area A is 0.74 m^2 and not 2 m^2).
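A quick check of the expected time constants quoted above, using tau = mcd/(kA) from the model in the quote below (values as given there, with k taken in W/(m*K)):

m, c = 12.2, 500.0    # kg; J/(kg*K), stainless steel
k, A = 0.26, 0.74     # W/(m*K); m^2 of insulated surface
d_layer = 0.0254      # m, one 1-inch layer

for layers in (1, 4):
    tau = m * c * (layers * d_layer) / (k * A)
    print(f"{layers} layer(s): tau ~ {tau / 60:.0f} min")
# -> ~13 min for one layer and ~54 min for four, as quoted above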
Quote: |
I made a model for our seismometer can using actual data so that we know approximately what the time constant should be when we test it out. I used the appendix in Megan Kelley's report to make a relation for the temperature in terms of time.
dQ = mc\,dT, so \dot{Q} = mc\,\frac{dT}{dt} and T(t)=\frac{1}{mc}\int_{0}^{t}\dot{Q}(t')\,dt'.
In our case, we will heat the can to a certain temperature and wait for it to cool on its own, so the only heat flow is the leak through the insulation.
We know that \dot{Q}=\frac{kA}{d}\Delta T, where k is the k-factor of the insulation we are using, A is the area of the surface through which heat is flowing, \Delta T is the change in temperature, and d is the thickness of the insulation.
Therefore,
T(t)=\frac{1}{mc}\int_{0}^{t}\frac{kA}{d}[T_{lab}-T(t')]dt'=\frac{kA}{mcd}\left(T_{lab}t-\int_{0}^{t}T(t')dt'\right)
We can take the derivative of this to get
T'(t)=\frac{kA}{mcd}[T_{lab}-T(t)], or T'(t)=B-C\,T(t) with B=\frac{kA}{mcd}T_{lab} and C=\frac{kA}{mcd}.
We can guess the solution to be
T(t)=[T(0)-T_{lab}]\,e^{-t/\tau}+T_{lab}
where tau is the time constant, which we would like to find.
The boundary conditions are T(0)=40^{\circ}C and T(\infty)=T_{lab}=24^{\circ}C. I assumed we would heat up the can to 40 Celsius while the room temp is about 24. Plugging this into our equations,
T(0)-T_{lab}=16, so T(t)=16\,e^{-t/\tau}+24.
We can plug everything back into the derivative T'(t):
T'(t)=-\frac{16}{\tau}e^{-t/\tau}=B-C[16e^{-t/\tau}+24]
Equating the exponential terms on both sides, we can solve for tau:
\tau=\frac{1}{C}=\frac{mcd}{kA}
Plugging in the values that we have, m = 12.2 kg, c = 500 J/(kg*K) (stainless steel), d = 0.1 m, k = 0.26 W/(m*K), A = 2 m^2, we get that the time constant is 0.326 hr. I have attached the plot that I made using these values. I would expect to see something similar to this when I actually do the test.
To set up the experiment, I removed the can (with Steve's help) and will place a few heating pads on the outside and wrap the whole thing in a few layers of insulation to make the total thickness 0.1m. Then, we will attach the heaters to a DC source and heat the can up to 40 Celsius. We will wait for it to cool on its own and monitor the temperature to create a plot and find the experimental time constant. Later, we can use the heating circuit we used for the PSL lab and modify the parts as needed to drive a few amps through the circuit. I calculated that we'd need about 6A to get the can to 50 Celsius using the setup we used previously, but we could drive a smaller current by using a higher heater resistance.
|
|
Attachment 1: cooling_fit.png
|
|
Attachment 2: IMG_20171121_164835.jpg
|
|
13439
|
Tue Nov 21 16:28:23 2017 |
gautam | Update | Optical Levers | BS OL calibration updated | The numbers I have from the fitting don't agree very well with the OSEM readouts. Attachment #1 shows the Oplev pitch and yaw channels, and also the OSEM ones, while I swept the ASC_PIT offset. The output matrix is the "naive" one of (+1,+1,-1,-1). SUSPIT_IN1 reports ~30urad of motion, while SUSYAW_IN1 reports ~10urad of motion.
From the fits, the BS calibration factors were ~x8 for pitch and x12 for yaw - so according to the Oplev channels, the applied sweep was ~80urad in pitch, and ~7urad in yaw.
Seems like either (i) neither the Oplev channels nor the OSEMs are well diagonalized, and their calibrations are off by a factor of ~3, or (ii) there is some significant imbalance in the actuator gains of the BS coils...
Quote: |
Need to double check against OSEM readout during the sweep.
|
|
Attachment 1: BS_oplev_sweep.png
|
|
13441
|
Tue Nov 21 23:04:12 2017 |
gautam | Update | Optical Levers | Oplev "noise budget" | Per our discussions in the meetings over the last week, I've tried to put together a simple Oplev noise budget. The only two terms in this for now are the dark noise and a model for the seismic noise, and are plotted together with the measured open-loop error signal spectra.
- Dark noise
- Beam was taken off the OL QPD
- A small DC offset was added to all the oplev segment input filters to make the sum ~20-30 cts [call this testSum] (usually it varies from 4000-13000 for the BS/ITMs, call this nominalSum).
- I downloaded 20mins of dq-ed error signal data, and computed the ASD, dividing by a factor of nominalSum / testSum to account for the usual light intensity on the QPD.
- Seismic noise
- This is a very simplistic 1/f^2 pendulum TF with a pair of Q=2 poles at 1Hz (a sketch is shown after this list).
- I adjusted the overall gain such that the 1Hz peak roughly line up in measurement and model.
- The stack isn't modelled at all.
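A minimal sketch of that seismic model (a two-pole resonance at 1 Hz with Q = 2, falling as 1/f^2 above resonance; the overall gain stands in for the hand-tuned scale factor mentioned above):

import numpy as np
from scipy import signal

f0, Q, gain = 1.0, 2.0, 1.0
w0 = 2 * np.pi * f0

# H(s) = gain * w0^2 / (s^2 + (w0/Q) s + w0^2): flat below f0, 1/f^2 above
pend = signal.TransferFunction([gain * w0**2], [1.0, w0 / Q, w0**2])
f = np.logspace(-1, 2, 400)
_, mag_dB, _ = signal.bode(pend, w=2 * np.pi * f)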
Some remarks:
- The BS oplev doesn't have any whitening electronics, and so has a higher electronics noise floor compared to the ITMs. But it doesn't look like we are limited by this higher noise floor anywhere.
- I wonder what all those high frequency features in the ITM error signal spectra are. They are definitely above the dark noise floor, so I am inclined to believe this is real beam motion on the QPD, but surely this can't be test-mass motion - if it were, the measured A2L would be much higher than the level it is adjudged to be at now. Perhaps these are mechanical resonances of the steering optics?
- The seismic displacement @100Hz per the GWINC model is ~1e-19 m/rtHz. Assuming the model A2L = d_rms * theta(f), where d_rms is the rms offset of the beam spot from the optic center and theta(f) is the angular control signal to the optic, for a 5mm rms offset of the spot from the center, theta(f) must be below ~2e-17 rad/rtHz @100Hz (see the check below). This sets a requirement on the low pass needed - I will look into adding this to the global optimization cost.
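A quick arithmetic check of that number (using the 1e-19 m/rtHz seismic floor and the assumed 5 mm rms spot offset quoted above):

seismic_floor = 1e-19   # m/rtHz at 100 Hz, GWINC model
d_rms = 5e-3            # m, assumed rms spot offset from the optic center

theta_max = seismic_floor / d_rms   # allowed angular control noise, rad/rtHz
print(f"allowed angular noise at 100 Hz ~ {theta_max:.0e} rad/rtHz")   # ~2e-17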
|
Attachment 1: vertexOL_noises.pdf
|
|
13444
|
Wed Nov 22 05:41:32 2017 |
rana | Update | Optical Levers | Oplev "noise budget" | For the OL NB, probably don't have to fudge any seismic noise, since that's a thing we want to suppress. More important is "what the noise would be if the suspended mirrors were no moving w.r.t. inertial space".
For that, we need to look at the data from the OL test setup that Steve is putting on the SP table. |
13446
|
Wed Nov 22 12:13:15 2017 |
Kira | Update | PEM | seismometer can testing | Updated some values, most importantly, the k-factor. I had assumed that it was in the correct units already, but when converting it to 0.046 W/(m*K) from 0.26 BTU/(h*ft^2*F), I got the following plot. The time constant is still a bit larger than what we'd expect, but it's much better with these adjustments.
For our next steps, I will measure the time constant of the heater without any insulation and then decide how many layers of it we will need. I'll need to construct and calibrate a temperature sensor like the ones I've made before and use it to record the values more accurately.
Quote: |
I performed a test with the can last week with one layer of insulation to see how well it worked. First, I soldered two heaters together in series so that the total resistance was 48.6 ohms. I placed the heaters on the sides of the can and secured them. Then I wrapped the sides and top of the can in insulation and sealed the edges with tape, only leaving the handles open. I didn't insulate the bottom. I connected the two ends of the heater directly into the DC source and drove the current as high as possible (around 0.6A). I let the can heat up to a final value of 37.5C, turned off the current and manually measured the temperature, recording the time every half degree. I then plotted the results, along with a fit. The intersection of the red line with the data marks the time constant and the temperature at which we get the time constant. This came out to be about 1.6 hours, much longer than expected considering that only one layer instead of four was used. With only one layer, we would expect the time constant to be about 13 min, while for 4 layers it should be 53 min (the area A is 0.74 m^2 and not 2 m^2).
|
|
Attachment 1: cooling_fit_1.png
|
|
13447
|
Wed Nov 22 14:47:03 2017 |
Kira | Update | PEM | seismometer can testing | For the insulation, I have decided to use this one (Buna-N/PVC Foam Insulation Sheets). We will need 3 of the 1 inch plain backing ones (9349K4) to wrap a few layers around it. I'll try two layers for now, since the insulation seems to be doing quite well according to initial testing.
Quote: |
Updated some values, most importantly, the k-factor. I had assumed that it was in the correct units already, but when converting it to 0.046 W/(m^2*K) from 0.26 BTU/(h*ft^2*F), I got the following plot. The time constant is still a bit larger than what we'd expect, but it's much better with these adjustments.
For our next steps, I will measure the time constant of the heater without any insulation and then decide how many layers of it we will need. I'll need to construct and calibrate a temperature sensor like the ones I've made before and use it to record the values more accurately.
|
|
13448
|
Wed Nov 22 15:29:23 2017 |
gautam | Update | Optical Levers | Oplev "noise budget" | [steve, gautam]
What is the best way to set this test up?
I think we need a QPD to monitor the spot rather than a single element PD, to answer this question about the sensor noise. Ideally, we want to shoot the HeNe beam straight at the QPD - but at the very least, we need a lens to size the beam down to the same size as we have for the return beam on the Oplevs. Then there is the power - Steve tells me we should expect ~2mW at the output of these HeNes. Assuming 100kohm transimpedance gain for each quadrant and Si responsivity of 0.4A/W at 632nm, this corresponds to 10V (ADC limit) for 250uW of power - so it would seem that we need to add some attenuating optics in the way.
Also, does anyone know of spare QPDs we can use for this test? We considered temporarily borrowing one of the vertex OL QPDs (mark out its current location on the optics table, and move it over to the SP table), but decided against it as the cabling arrangement would be too complicated. I'd like to use the same DAQ electronics to acquire the data from this test as that would give us the most direct estimate of the sensor noise for supposedly no motion of the spot, although by adding 3 optics between the HeNe and the QPD, we are introducing possible additional jitter couplings...
Quote: |
For the OL NB, probably don't have to fudge any seismic noise, since that's a thing we want to suppress. More important is "what the noise would be if the suspended mirrors were no moving w.r.t. inertial space".
For that, we need to look at the data from the OL test setup that Steve is putting on the SP table.
|
|
Attachment 1: OplevTest.jpg
|
|
13449
|
Wed Nov 22 16:40:00 2017 |
Koji | Update | Optical Levers | Oplev "noise budget" | You may want to consult with the cryo Q people (Brittany, Aaron) for a Si QPD. If you want the same QPD architecture, I can look at my QPD circuit stock. |
13450
|
Wed Nov 22 17:52:25 2017 |
gautam | Update | Optimal Control | Visualizing cost functions | I've attempted to visualize the various components of the cost function in the way I've defined it for the current iteration of the Oplev optimal control loop design code. For each term in the cost function, the way the cost is computed depends on the ratio of the abscissa value to some threshold value (set by hand for now) - if this ratio is >1, the cost is the logarithm of the ratio, whereas if the ratio is <1, the cost is the square of the ratio. Continuity is enforced at the point at which this transition happens. I've plotted the cost function for some of the terms entering the code right now - indicated in dashed red lines are the approximate value of each of these costs for our current Oplev loop - the weights were chosen so that each of the costs were O(10) for the current controller, and the idea was that the optimizer could drive these down to hopefully O(1), but I've not yet gotten that to happen.
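A minimal sketch of that piecewise cost (the +1 offset is my guess at how the continuity at the transition is enforced; the threshold value is illustrative):

import numpy as np

def scalar_cost(x, threshold):
    # quadratic below the threshold, logarithmic above, matched at the transition
    r = np.asarray(x, dtype=float) / threshold
    return np.where(r > 1.0, 1.0 + np.log(r), r**2)

print(scalar_cost([2.0, 10.0, 50.0], threshold=10.0))   # -> [0.04, 1.0, ~2.61]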
Based on the meeting yesterday, some possible ideas:
- For minimizing the control noise injection - we know the transfer function from the Oplev control signal coupling to MICH from measurements, and we also have a model for the seismic noise. So one term could be a weighted integral of (coupling - seismic) - the weight can give more importance to f>30Hz, and even more importance to f>100Hz. Right now, I don't have any such frequency-dependent requirement on the control signal.
- Try a simpler problem first - pendulum local damping. The position damping controller for example has fewer roots in the complex plane. Although it too has some B/R notches, which account for 16 complex roots, and hence, 32 parameters, so maybe this isn't really a simpler problem?
- How do we pick the number of excess poles compared to zeros in the overall transfer function? The OL loop low-pass filters are elliptic filters, which achieve the fastest transition between the passband and stopband, but for the Oplev loop roll-off, perhaps it's better to just have some poles to roll off the HF response?
|
Attachment 1: globalCosts.pdf
|
|
13451
|
Wed Nov 22 19:20:01 2017 |
rana | Update | Optical Levers | Oplev "noise budget" | too complex; just shoot straight from the HeNe to the QPD. We lower the gain of the QPD by changing the resistors; there's no sane reason to keep the existing 100k resistors for a 2 mW beam. The specular reflection of the QPD must be dumped on a black glass V dump (not some flimsy anodized aluminum or dirty razor stack) |
13452
|
Wed Nov 22 23:56:14 2017 |
gautam | Update | Optical Levers | Oplev "noise budget" | Do not turn on BS/ITMY/SRM/PRM Oplev servos without reading this elog and correcting the needful!
I've setup a test setup on the ITMY Oplev table. Details + pics to follow, but for now, be aware that
- I've turned off the HeNe that is used for the SRM and ITMY Oplev.
- Moved one of the HeNe's Steve setup on the SP table to the ITMY Oplev table.
- Output power was 2.5mW, whereas normal power incident on this PD was ~250uW.
- So I changed all transimpedance gains on the ITMY Oplev QPD from 100k to 10k thin film - these should be changed back when we want to use this QPD for Oplev purposes again. Note that I did not change the compensation capacitors C3-C6, as with 10k transimpedance, and assuming they are 2.2nF, we get a corner frequency of 6.7kHz. The original schematic recommends 0.1uF. In hindsight, I should have changed these to 22nF to keep roughly the same corner frequency of ~700Hz.
I've implemented this change as of ~5pm Nov 23 2017 - C3-C6 are now 22nF, so the corner frequency is 676Hz, as opposed to 723Hz before... This should also be undone when we use this QPD as an Oplev QPD again...
- I marked the position of the ITMY Oplev QPD with sharpie and also took pics so it should be easy enough to restore when we are done with this test.
- I couldn't get the HeNe to turn on with any of the power supplies I found in the cabinet, so I borrowed the one used to power the BS/PRM. So these oplevs are out of commission until this test is done.
- There is a single steering mirror in a Polaris mount which I used to center the spot on the QPD.
- The specular reflection (~250uW, i.e. 10% of the power incident on the QPD) is dumped onto a clean razor beam stack. Steve can put in a glass beam dump on Monday.
- Just in case someone accidentally turns on some servos - I've disabled the inputs to the BS, PRM and SRM oplevs, and set the limiter on the ITMY servo to 0.
Here are some pics of the setup: https://photos.app.goo.gl/DHMINAV7aVgayYcf1. None of the existing Oplev input/output steering optics were touched. Steve can make modifications as necessary; perhaps we can make similar mods to the SRM Oplev QPD and the BS one to run the HeNe test for a few days...
Quote: |
too complex; just shoot straight from the HeNe to the QPD. We lower the gain of the QPD by changing the resistors; there's no sane reason to keep the existing 100k resistors for a 2 mW beam. The specular reflection of the QPD must be dumped on a black glass V dump (not some flimsy anodized aluminum or dirty razor stack)
|
|
13453
|
Thu Nov 23 18:03:52 2017 |
gautam | Update | Optical Levers | Oplev "noise budget" | Here are a couple of preliminary plots of the noise from a 20minute stretch of data - the new curve is the orange one, labelled sensing, which is the spectrum of the PIT/YAW error signal from the HeNe beam single bounce off a single steering mirror onto the QPD, normalized to account for the difference in QPD sum. The peaky features that were absent in the dark noise are present here.
I am a bit confused about the total sum though - there is ~2.5mW of light incident on the PD, and the transimpedance gain is 10.7kohm. So I would expect 2.5e-3 mW * 0.4A/W * 10.7 kV/A ~ 10.7V over 4 quadrants. The ADC is 16 bit and has a range +/- 10V, so 10.7 V should be ~35,000 cts. But the observed QPD sum is ~14,000 counts. The reflected power was measured to be ~250uW, so ~10% of the total input power. Not sure if this is factored into the photodiode efficiency value of 0.4A/W. I guess there is some fraction of the QPD that doesn't generate any photocurrent (i.e. the grooves defining the quadrants), but is it reasonable that when the Oplev beam is well centered, ~50% of the power is not measured? I couldn't find any sneaky digital gains between the quadrant channels to the sum channel either... But in the Oplev setup, the QPD had ~250uW of power incident on it, and was reporting a sum of ~13,000 counts with a transimpedance gain of 100kohm, so at least the scaling seems to hold...
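A quick sketch of the expected-counts estimate (the 2^16 counts over +/-10 V ADC scale is the assumption behind the ~35,000 number):

P = 2.5e-3       # W incident on the QPD
resp = 0.4       # A/W, Si responsivity at 632 nm
Z = 10.7e3       # ohm, transimpedance per quadrant
cts_per_V = 2**16 / 20.0   # 16-bit ADC spanning +/-10 V

V_sum = P * resp * Z
print(f"expected sum ~ {V_sum:.1f} V ~ {V_sum * cts_per_V:.0f} cts")
# the observed sum is ~14,000 cts, hence the factor-of-~2.5 puzzle above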
I guess we want to monitor this over a few days and see how stationary the noise profile is, etc. I didn't look at the spectrum of the intensity noise during this time.
Quote: |
I've setup a test setup on the ITMY Oplev table. Details + pics to follow, but for now, be aware that
Here are some pics of the setup: https://photos.app.goo.gl/DHMINAV7aVgayYcf1.None of the existing Oplev input/output steering optics were touched. Steve can make modifications as necessary, perhaps we can make similar mods to the SRM Oplev QPD and the BS one to run the HeNe test for a few days...
|
|
Attachment 1: ITMY_P_noise.pdf
|
|
Attachment 2: ITMY_Y_noise.pdf
|
|
13454
|
Sun Nov 26 19:38:40 2017 |
Steve | Update | VAC | TP3 drypump replaced again | The TP3 foreline pressure was 4.8 Torr at 50k rpm, 0.54 A and 31C. Maglev rotation normal at 560 Hz. IFO pressure was 7.2e-6 Torr-IT and was not affected.
V1 closed, drypump replaced, V1 opened.
IFO 6.9e-6 Torr-IT at 19:55; TP3 foreline 18 mTorr, 50k rpm, 0.15 A, 24C.
VM1 is still closed
|
Attachment 1: after_replacement.png
|
|
13455
|
Tue Nov 28 16:02:32 2017 |
rana | Update | PEM | seismometer can testing | I've ordered 4 of these from McMaster. Should be delivered to the 40m by noon tomorrow.
Quote: |
For the insulation, I have decided to use this one (Buna-N/PVC Foam Insulation Sheets). We will need 3 of the 1 inch plain backing ones (9349K4) to wrap a few layers around it. I'll try two layers for now, since the insulation seems to be doing quite well according to initial testing.
|
Kira and I also discussed the issue. It would be good if someone can hunt around on the web and get some free samples of non-shedding foam with R~4. |
13457
|
Wed Nov 29 15:33:16 2017 |
rana | Update | PSL | PMC locking | PMC wasn't locking. Had to power down c1psl. Did burt restore. Still not great.
I think many of the readbacks on the PMC MEDM screen are now bogus and misleading since the PMC RF upgrade that Gautam did a while ago. We ought to fix the screen and clearly label which readbacks and actuators are no longer valid.
Also, the locking procedure is not so nice. The output V adjust doesn't work anymore with BLANK enabled. Would be good to make an autolocker script if we find a visitor wanting to do something fun. |
13459
|
Thu Nov 30 10:38:39 2017 |
Steve | Update | VAC | annuli are not pumped | The annuli have not been pumped for 30 days, since TP2 failed. IFO pressure 7e-6 Torr-IT, RGA 2.6e-6 Torr.
Valve configuration: Vacuum Normal as TP3 is the forepump of the Maglev; the annuli are not pumped and are at 1.1 Torr.
TP3 50K rpm, 0.15A 24C, foreline pressure 16.1 mTorr
Quote: |
The TP3 foreline pressure was 4.8 Torr at 50k rpm, 0.54 A and 31C. Maglev rotation normal at 560 Hz. IFO pressure was 7.2e-6 Torr-IT and was not affected.
V1 closed, drypump replaced, V1 opened.
IFO 6.9e-6 Torr-IT at 19:55; TP3 foreline 18 mTorr, 50k rpm, 0.15 A, 24C.
VM1 is still closed
|
|
Attachment 1: AnnulosesNotPumped.png
|
|
13467
|
Thu Dec 7 16:28:06 2017 |
Koji | Update | IOO | Lots of red on the FE status screen | Once the RT machines were back, we launched only the five IOPs. They had a bunch of red lights, but we continued to run the essential models for the IFO. Some of the lights were fixed by "global diag reset" and "mxstream restart".
The suspensions were damped. We could restore the IMC lock. The locking became OK and the IMC was aligned. The REFL spot came back.
At least, I could confirm that the WFS ASC signals were not transmitted to c1mcs. There must be some disconnected links of IPC. |
13471
|
Wed Dec 13 09:49:23 2017 |
johannes | Update | ASS | wiring diagram | I attached a wiring schematic from the slow DAQ to the eurocrate modules. Of these, pins 1-32 (or 1A-16C) and pins 33-64 (17A-32C) are on separate DSub connectors. Therefore the easiest solution is to splice the slow DIO channels into the existing breakouts so we can proceed with the transition. This will still remove a lot of the current cable salad. For the YEND we can start thinking about a more elegant solution (For example a connector on the front panel of the Acromag chassis for the fast DIO) now that the problem is better defined. |
Attachment 1: 1Y9.pdf
|
|
13473
|
Thu Dec 14 00:32:56 2017 |
johannes | Update | ASS | Acromag new crate; c1auxex2 configured as gateway server for acromag | The splicing-in of fast binary channels that we discussed at yesterday's and today's meetings is getting messy with the current chassis. Cleaning up the cable mess was a key point, so I got a 4U-height DEEP chassis from Rich and drew up a front panel for a modular approach that we can use at the other 40m locations as well. The front panel will have slots for smaller slot panels to which we can mount the breakout boards as before, so all the wiring that I've done can be transferred to this design. If some new connector standard is required, it will be easy to draw a new slot panel from a template; for now I'll make some with two DSub37 and IDC50. Since this chassis is so huge, it will have ample space for cross-connects.
I also moved the communication of c1auxex2 with the Acromag units off the martian network, connecting them with a direct cable connection out of the second ethernet port. To test if this works I configured the second ethernet port of c1auxex2 to have the IP address 192.168.114.1 and one of the acromag units to have 192.168.114.11, and initialized an IOC with some test channels. Much to my surprise this actually worked straight out of the box, and the test channels can be accessed from the control room computers without having a direct ethernet link to the acromag modules. huzzah!
Steve: it would be nice to have all plugs/connectors lockable.
|
Attachment 1: fp_mod_4U.pdf
|
|
Attachment 2: IMG_20171213_171541850_HDR.jpg
|
|
13474
|
Thu Dec 14 07:07:09 2017 |
rana | Update | IOO | Lots of red on the FE status screen | I had to key the c1psl crate to get the PMC locking again. Without this, it would still sort of lock, but it was very hard to turn on the loop; it would push itself off the fringe. So probably it was stuck in some state with the gain wrong. Since the RF stuff is now done in a separate electronics chain, I don't think the RF phase can be changed by this. Probably the sliders are just not effective until power cycling.
Quote: |
Once the RT machines were back, we launched only the five IOPs. They had bunch of red lights, but we continued to run essential models for the IFO. SOme of the lights were fixed by "global diag reset" and "mxstream restart".
The suspension were damped. We could restore the IMC lock. The locking became OK and the IMC was aligned. The REFL spot came back.
At least, I could confirm that the WFS ASC signals were not transmitted to c1mcs. There must be some disconneted links of IPC.
|
I then tried to get the MC WFS back, but running rtcds restart --all would make some of the computers hang. For c1ioo I had to push the reset button on the computer and then did 'rtcds start --all' after it came up. Still missing IPC connections.
I'm going to get in touch with Rolf. |
13475
|
Thu Dec 14 08:59:17 2017 |
Steve | Update | General | we are here | |
Attachment 1: 8_days.png
|
|
13477
|
Thu Dec 14 19:41:00 2017 |
gautam | Update | CDS | CDS recovery, NFS woes | [Koji, Jamie(remote), gautam]
Summary: The CDS system seems to be back up and functioning. But there seems to be some pending problems with the NFS that should be looked into.
We locked the Y-arm and hand-aligned until the transmission was 1. There are some pending problems with the ASS model (possibly symptomatic of something more general). Didn't touch the X-arm because we don't know exactly what the status of ETMX is.
Problems raised in elogs in the thread of 13474 and also 13436 seem to be solved.
I would make a detailed post with how the problems were fixed, but unfortunately, most of what we did was not scientific/systematic/repeatable. Instead, I note here some general points (Jamie/Koji can add to/correct me):
- There is a "known" problem with unloading models on c1lsc. Sometimes, running rtcds stop <model> will kill the c1lsc frontend.
- Sometimes, when one machine on the dolphin network goes down, all 3 go down.
- The new FB/RCG means that some of the old commands now no longer work. Specifically, instead of telnet fb 8087 followed by shutdown (to fix DC errors) no longer works. Instead, ssh into fb1, and run sudo systemctl restart daqd_*.
- Timing error on c1sus machine was linked to the mx_stream processes somehow not being automatically started. The "!mxstream restart" button on the CDS overview MEDM screen should run the necessary commands to restart it. However, today, I had to manually run sudo systemctl start mx_stream on c1sus to fix this error. It is a mystery why the automatic startup of this process was disabled in the first place. Jamie has now rectified this problem, so keep an eye out.
- c1oaf persistently reported DC errors (0x2bad) that couldn't be addressed by running mxstream restart or restarting the daqd processes on FB1. Restarting the model itself (i.e. rtcds restart c1oaf) fixed this issue (though of course I took the risk of having to go into the lab and hard-reboot 3 machines).
- At some point, we thought we had all the CDS lights green - but at that point, the END FEs crashed, necessitating Koji->EX and Gautam->EY hard reboots. This is a new phenomenon. Note that the vertex machines were unaffected.
- At some point, all the DC lights on the CDS overview screen went white - at the same time, we couldn't ssh into FB1, although it was responding to ping. After ~2mins, the green lights came back and we were able to connect to FB1. Not sure what to make of this.
While trying to run the dither alignment scripts for the Y-arm, we noticed some strange behaviour:
- Even when there was no signal (looking at EPICS channels) at the input of the ASS servos, the output was fluctuating wildly by ~20cts-pp.
- This is not simply an EPICS artefact, as we could see corresponding motion of the suspension on the CCD.
- A possible clue is that when I run the "Start Dither" script from the MEDM screen, I get a bunch of error messages (see Attachment #2).
- Similar error messages show up when running the LSC offset script for example. Seems like there are multiple ports open somehow on the same machine?
- There are no indicator lights on the CDS overview screen suggesting where the problem lies.
Will continue investigating tomorrow.
Some other general remarks:
- ETMX watchdog remains shutdown.
- ITMY and BS oplevs have been hijacked for HeNe RIN / Oplev sensing noise measurement, and so are not enabled.
- Y arm trans QPD (Thorlabs) has large 60Hz harmonics. These can be mitigated by turning on a 60Hz comb filter, but we should check if this is some kind of ground loop. The feature is much less evident when looking at the TRANS signal on the QPD.
UPDATE 8:20pm:
Koji suggested trying to simply restart the ASS model to see if that fixes the weird errors shown in Attachment #2. This did the trick. But we are now faced with more confusion - during the restart process, the various indicators on the CDS overview MEDM screen froze up, which is usually symptomatic of the machines being unresponsive and requiring a hard reboot. But we waited for a few minutes, and everything mysteriously came back. Over repeated observations and looking at the dmesg of the frontend, the problem seems to be connected with an unresponsive NFS connection. Jamie had noted some time ago that the NFS seems unusually slow. How can we fix this problem? Is it feasible to have a dedicated machine that is not FB1 do the NFS serving for the FEs? |
Attachment 1: CDS_14Dec2017.png
|
|
Attachment 2: CDS_errors.png
|
|
13478
|
Thu Dec 14 23:27:46 2017 |
johannes | Update | DAQ | aux chassis design | Made a front and back panel and slot panels for DSub and IDC breakouts. I want to send this out soon; are there any comments? Preferences for color schemes? |
Attachment 1: auxdaq_40m_4U_front.pdf
|
|
Attachment 2: auxdaq_40m_4U_rear.pdf
|
|
Attachment 3: auxdaq_40m_4U_DSub37x2.pdf
|
|
Attachment 4: auxdaq_40m_4U_IDC50.pdf
|
|
13479
|
Fri Dec 15 00:26:40 2017 |
johannes | Update | CDS | Re: CDS recovery, NFS woes |
Quote: |
Didn't touch Xarm because we don't know what exactly the status of ETMX is.
|
The Xarm is currently in its original state, all cables are connected and c1auxex is hosting the slow channels. |
13480
|
Fri Dec 15 01:53:37 2017 |
jamie | Update | CDS | CDS recovery, NFS woes |
Quote: |
I would make a detailed post with how the problems were fixed, but unfortunately, most of what we did was not scientific/systematic/repeatable. Instead, I note here some general points (Jamie/Koji can add to/correct me):
- There is a "known" problem with unloading models on c1lsc. Sometimes, running rtcds stop <model> will kill the c1lsc frontend.
- Sometimes, when one machine on the dolphin network goes down, all 3 go down.
- The new FB/RCG means that some of the old commands now no longer work. Specifically, instead of telnet fb 8087 followed by shutdown (to fix DC errors) no longer works. Instead, ssh into fb1, and run sudo systemctl restart daqd_*.
|
This should still work, but the port has changed. The daqd was split up into three separate binaries to get around the issue with the monolithic build that we could never figure out. The port for the data concentrator (DC) (which is the thing that needs to be restarted) is now 8083.
Quote: |
UPDATE 8:20pm:
Koji suggested simply restarting the ASS model to see if that fixes the weird errors shown in Attachment #2. This did the trick. But we are now faced with more confusion - during the restart, the various indicators on the CDS overview MEDM screen froze up, which is usually symptomatic of the machines being unresponsive and requiring a hard reboot. But we waited a few minutes, and everything mysteriously came back. Over repeated observations, and looking at dmesg on the frontend, the problem seems to be connected with an unresponsive NFS mount. Jamie had noted some time ago that the NFS seems unusually slow. How can we fix this problem? Is it feasible to have a dedicated machine other than FB1 do the NFS serving for the FEs?
|
I don't think the problem is fb1. The fb1 NFS is mostly only used during front-end boot. It's the rtcds mount, served from chiara, that sees all the action. |
13481
|
Fri Dec 15 11:19:11 2017 |
gautam | Update | CDS | CDS recovery, NFS woes | Looking at dmesg on c1iscex, for example, at least part of the problem seems to be associated with FB1 (192.168.113.201, see Attachment #1). The "server" can be unresponsive for O(100) seconds, which is consistent with the duration for which we see the MEDM status lights go blank and the EPICS records freeze. Note that the error timestamped ~4000 was from last night, which means there have been at least 2 more instances of this kind of freeze-up overnight.
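As an aside, here is a minimal sketch of how one could tally the freeze-up durations from a saved dmesg log. Hedged: the exact kernel message strings and the regex below are assumptions based on typical Linux NFS client output, not a verified tool.

import re
import sys

# Typical Linux NFS client messages look something like:
#   [ 4012.345678] nfs: server 192.168.113.201 not responding, still trying
#   [ 4123.456789] nfs: server 192.168.113.201 OK
PAT = re.compile(r'\[\s*(\d+\.\d+)\]\s+nfs: server (\S+) (not responding|OK)')

def nfs_outages(dmesg_lines):
    """Return a list of (server, start_timestamp, duration_in_seconds)."""
    pending = {}   # server -> timestamp of the first 'not responding' message
    outages = []
    for line in dmesg_lines:
        m = PAT.search(line)
        if not m:
            continue
        ts, server, state = float(m.group(1)), m.group(2), m.group(3)
        if state == 'not responding':
            pending.setdefault(server, ts)
        elif server in pending:                      # state == 'OK'
            t0 = pending.pop(server)
            outages.append((server, t0, ts - t0))
    return outages

if __name__ == '__main__':
    # usage: python nfs_outages.py nfs.log   (where nfs.log was made with `dmesg > nfs.log`)
    with open(sys.argv[1]) as f:
        for server, t0, dt in nfs_outages(f):
            print('%s unresponsive for ~%.0f s (dmesg t=%.0f)' % (server, dt, t0))

Counting and timing the outages this way would also tell us whether the freeze-ups cluster around particular activities (model restarts, framewriting, etc.).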
I don't know if this is symptomatic of some more widespread problem with the 40m networking infrastructure. In any case, all the CDS overview screen lights were green today morning, and MC autolocker seems to have worked fine overnight.
I have also updated the wiki page with the updated daqd restart commands.
Unrelated to this work - Koji fixed up the MC overview screen so that the MC autolocker button is now visible again. The problem seems to have been caused by my migrating some of the c1ioo EPICS channels from the slow machine to the fast system, as a result of which the EPICS variable type changed from "ENUM" to something that was not "ENUM". In any case, the button exists now, and the MC autolocker blinky light is responsive to its state.
Quote: |
I don't think the problem is fb1. The fb1 NFS is mostly only used during front end boot. It's the rtcds mount that's the one that sees all the action, which is being served from chiara.
|
|
Attachment 1: NFS.png
|
|
Attachment 2: MCautolocker.png
|
|
13482
|
Fri Dec 15 17:05:55 2017 |
gautam | Update | PEM | Trillium seismometer DC offset | Yesterday, while we were bringing the CDS system back online, we noticed that the control room wall StripTool traces for the seismic BLRMS signals did not come back to the levels we are used to seeing, even after restarting the PEM model. There are no red lights on the CDS overview screen indicative of DAQ problems. Trending the DQ-ed seismometer signals (these are the calibrated (?) seismometer signals, not the BLRMS) over the last 30 days, it looks like:
- On ~1 December, the signals all went to 0 - this is consistent with signals in the other models; I think this is when the DAQ processes crashed.
- On ~8 December, all the signals picked up a DC offset of a few hundred (counts? or um/s? this number is after a cts2vel calibration filter). I couldn't find anything in the elog on 8 December that might be related to this problem.
I poked around at the electronics rack (1X5/1X6), which houses the 1U interface box for these signals - on its front panel, there is a switch with selectable positions "UVW" and "XYZ". It is currently set to the latter. I am assuming the former refers to velocities in the xyz directions, and the latter to displacements in these directions. Is this the nominal state? I didn't spend too much time debugging the signal further for now.
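For completeness, a rough sketch of how one might pull the 30-day minute trend of such a channel to look for the offset. Hedged: the NDS server/port and the channel name below are placeholders, this assumes the nds2-client python bindings are available, and it is not necessarily the procedure actually used above.

import numpy as np
import nds2   # nds2-client python bindings (assumed available)

SERVER, PORT = 'fb1', 8088                    # placeholder NDS server/port
CHAN = 'C1:PEM-SEIS_BS_X_OUT_DQ'              # hypothetical seismometer DQ channel
GPS_STOP = 1197500000                         # approximately mid-December 2017
GPS_START = GPS_STOP - 30 * 86400             # 30 days earlier

conn = nds2.connection(SERVER, PORT)
# '.mean,m-trend' is the usual NDS2 syntax for the minute-trend mean of a channel.
# Note: fetch() will complain if there are gaps (e.g. the ~1 Dec DAQ crash), in which
# case one would loop over shorter, gap-free stretches instead.
buf = conn.fetch(GPS_START, GPS_STOP, [CHAN + '.mean,m-trend'])[0]
data = np.asarray(buf.data)                   # one sample per minute

print('mean over the first day: %8.1f cts' % data[:1440].mean())
print('mean over the last day:  %8.1f cts' % data[-1440:].mean())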
|
Attachment 1: Trillium.png
|
|
13483
|
Fri Dec 15 18:23:03 2017 |
rana | Update | PEM | Trillium seismometer DC offset | UVW refers to the 3 internal, orthogonal velocity sensors which are not aligned with the vertical or horizontal directions. XYZ refers to the linear combinations of UVW which correspond to north, east, and up. |
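For the record, the Galperin-mount relation between the inclined UVW axes and the XYZ outputs is conventionally written as follows. Hedged: the exact signs and the assignment of U, V, W to the cardinal directions should be checked against the Trillium manual.

X = \frac{1}{\sqrt{6}}\,(2U - V - W), \qquad
Y = \frac{1}{\sqrt{2}}\,(V - W), \qquad
Z = \frac{1}{\sqrt{3}}\,(U + V + W),

with each of the U, V, W axes inclined at \arccos(1/\sqrt{3}) \approx 54.7^\circ from the vertical.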
13485
|
Fri Dec 15 19:09:49 2017 |
gautam | Update | IOO | IMC lockloss correlated with PRM alignment? | Motivation:
To test the hypothesis that the IMC lock duty cycle is affected by the PRM alignment. Rana pointed out today that the input Faraday has not been tuned to maximize the output->input isolation in a while, so the idea is that perhaps, when the PRM is aligned, some of the reflected light comes back towards the PSL through the Faraday and messes with the IMC lock.
A script to test this hypothesis is running over the weekend (in case anyone was thinking of doing anything with the IFO over the weekend).
Methodology:
I've made a simple script - the pseudocode is the following:
- Align PRM
- For the next half hour, look for downward transitions in the EPICS record for MC TRANS > 5000 cts - this is a proxy for an MC lockloss
- At the end of 30 minutes, record number of locklosses in the last 30 minutes
- Misalign PRM, repeat the above 3 bullets
The idea is to keep looping the above over the weekend, so we can expect ~100 datapoints, 50 each with the PRM misaligned/aligned. The times at which the PRM was aligned/misaligned are also being logged, so we can make some spectrograms of the PC drive RMS (for example) with the PRM aligned/misaligned. The script lives at /opt/rtcds/caltech/c1/scripts/SUS/FaradayIsolationTest/FaradayIsolCheck.py. It is being run inside a tmux session on pianosa; hopefully the machine doesn't crash over the weekend and MC1/CDS stays happy.
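For concreteness, here is a stripped-down sketch of the counting/toggling logic. Hedged: this is not the actual FaradayIsolCheck.py; the EPICS channel names, the crude offset-based PRM misalignment, and the use of pyepics are illustrative assumptions only.

import time
import epics   # pyepics (assumed available on the control room workstations)

MC_TRANS = 'C1:IOO-MC_TRANS_SUM'          # hypothetical MC transmission EPICS record
PRM_MISALIGN = 'C1:SUS-PRM_PIT_OFFSET'    # hypothetical handle for (mis)aligning the PRM

def count_locklosses(duration_s=1800, drop_cts=5000, lookback_s=3):
    """Count downward steps > drop_cts in MC trans over duration_s seconds."""
    history = []          # (time, value) samples from the last lookback_s seconds
    n = 0
    t_end = time.time() + duration_s
    while time.time() < t_end:
        now, val = time.time(), epics.caget(MC_TRANS)
        if val is None:                     # channel unavailable - skip this sample
            time.sleep(1)
            continue
        history = [(t, v) for (t, v) in history if now - t <= lookback_s]
        if history and (history[0][1] - val) > drop_cts:
            n += 1
            history = []                    # avoid double-counting the same lockloss
        history.append((now, val))
        time.sleep(1)
    return n

if __name__ == '__main__':
    with open('PRM_stepping.txt', 'a') as log:
        while True:                         # loop over the weekend
            for state, offset in [('aligned', 0), ('misaligned', 500)]:
                epics.caput(PRM_MISALIGN, offset)   # crude (mis)alignment, illustration only
                t0 = int(time.time())
                log.write('%d %s %d\n' % (t0, state, count_locklosses()))
                log.flush()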
A more direct measurement of the input Faraday isolation can be made by putting a photodiode in place of the beam dump shown in Attachment #1 (borrowed from this elog). I measured ~100uW of power leaking through this mirror with the PRM misaligned (but IMC locked). I'm not sure what kind of SNR we can expect for a DC measurement, but if we have a chopper handy, we could chop the leaked beam just before the PD (so that the IMC can stay locked) and demodulate at the chop frequency for a cleaner measurement. This way, we could also measure the contribution from prompt reflections (up to the input side of the Faraday) by simply blocking the beam going into the vacuum. The window itself is wedged, so that shouldn't be a big contributor. |
Attachment 1: PSL_layout.JPG
|
|
13486
|
Mon Dec 18 16:45:44 2017 |
gautam | Update | IOO | IMC lockloss correlated with PRM alignment? | I stopped the test earlier this morning, around 11:30am. The log file is located at /opt/rtcds/caltech/c1/scripts/SUS/FaradayIsolationTest/PRM_stepping.txt. It contains the times at which the PRM was aligned/misaligned for lookback, and also the number of MC unlocks during each 30-minute stretch of the PRM alignment toggling. The latter was computed by:
- continuously reading the current value of the EPICS record for MC Trans.
- comparing its current value to its value 3 seconds ago.
- incrementing a counter by 1 if this comparison shows a downward step greater than 5000 counts.
- resetting the counter at the end of each 30-minute period.
I think this method is a pretty reliable proxy, because the MC autolocker certainly takes >3 seconds to re-acquire the lock (it has to run mcdown, wait for the next cavity flash, and then run mcup).
Preliminary analysis suggests no obvious correlation between MC lock duty cycle and PRM alignment.
I leave further analysis to those who are well versed in the science/art of PRM/IMC statistical correlations. |
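In case it is useful, here is a hedged sketch of one simple comparison, assuming the log can be reduced to per-half-hour lockloss counts for each PRM state. The three-column format parsed below is a guess at the file layout, and the naive Poisson z-score is just one possible statistic.

import numpy as np

# Guessed log format: one line per 30-minute stretch,
#   <time>  <aligned|misaligned>  <n_locklosses>
aligned, misaligned = [], []
with open('PRM_stepping.txt') as f:
    for line in f:
        fields = line.split()
        if len(fields) != 3:
            continue
        (aligned if fields[1] == 'aligned' else misaligned).append(int(fields[2]))

a, m = np.array(aligned, dtype=float), np.array(misaligned, dtype=float)
print('aligned:    %2d stretches, %.2f locklosses per 30 min' % (len(a), a.mean()))
print('misaligned: %2d stretches, %.2f locklosses per 30 min' % (len(m), m.mean()))

# Treat the counts as Poisson: the variance of each mean rate is (total counts)/N^2,
# so a crude z-score for the difference of the two rates is
# (this blows up if there were no locklosses at all).
z = (a.mean() - m.mean()) / np.sqrt(a.sum() / len(a)**2 + m.sum() / len(m)**2)
print('naive z-score for the rate difference: %.2f' % z)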
|