  4470   Wed Mar 30 21:21:15 2011   Bryan   Configuration   Green Locking   The wonderful world of mode-matching

Step 4: Matching into the oven

 

 

Now that the astigmatism is substantially reduced, we can work out a lens solution to obtain a 50 um waist *anywhere* on the bench, as long as there's enough room to work with the beam afterwards. The waist after the Faraday and lens is at position 22.5 on the bench. A 50 mm lens placed 18 cm after this position (position 14.92 on the bench) should give a waist of 50 um at 24.57 cm after the original waist (position 12.83 on the bench). This doesn't give much room to measure the beam waist in though - the Beamscan head has a fairly large finite size… wonder if there's a slightly less strong lens I could use…

OK. With a 66 mm lens placed 23 cm after the waist (position 13.45 on the bench), we get a 50 um waist 31.37 cm after the original waist (position 10.15 on the bench).
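
(For reference, here's a minimal q-parameter / thin-lens sketch of the sort of calculation behind these lens solutions, in MATLAB. The input waist size used here is just a placeholder, not the measured value, so the numbers are only illustrative:)

% minimal Gaussian-beam / thin-lens propagation sketch (input waist is a placeholder)
lambda = 1064e-9;                    % wavelength [m]
w0_in  = 200e-6;                     % assumed waist after the Faraday [m] (placeholder)
f      = 66e-3;                      % lens focal length [m]
d1     = 0.23;                       % waist-to-lens distance [m]
zR     = pi*w0_in^2/lambda;          % Rayleigh range of the input beam
q      = d1 + 1i*zR;                 % q-parameter at the lens (origin at the input waist)
q      = q/(1 - q/f);                % thin lens: q' = q/(1 - q/f)
d2     = -real(q);                   % distance from the lens to the new waist [m]
w0_out = sqrt(imag(q)*lambda/pi);    % new waist size [m]
fprintf('new waist %.1f um, %.1f cm after the lens\n', w0_out*1e6, d2*1e2);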

 

Oven_Lens_Solution_66mm.png

 

The closest lens I found was 62.9 mm, which will put the 50 um point a bit further towards the wall, but on the X-arm the oven is at position 8.75-ish, so anything around there is fine.

 

Using this lens, and after a bit of manual fiddling and checking with the Beamscan, I figured we needed a close-in, fine-grained measurement, so I set the Beamscan head up on a micrometer stage and took a whole bunch of data around position 9 on the bench:

 

 

Position   A1_13.5%_width   A2_13.5%_width
(mm)       (um mean)        (um mean)
-15        226.8            221.9
-14        210.9            208.3
-13        195.5            196.7
-12        181.0            183.2
-11        166.0            168.4
-10        154.0            153.1
-9         139.5            141.0
-8         127.5            130.0
-7         118.0            121.7
-6         110.2            111.6
-5         105.0            104.8
-4         103.1            103.0
-3         105.2            104.7
-2         110.9            110.8
-1         116.8            117.0
0          125.6            125.6
0          125.6            125.1
1          134.8            135.3
2          145.1            145.6
3          155.7            157.2
4          168.0            168.1
5          180.5            180.6
6          197.7            198.6
7          211.4            209.7
8          224.0            222.7
9          238.5            233.7
10         250.9            245.8
11         261.5            256.4
12         274.0            270.4
13         291.3            283.6
14         304.2            296.5
15         317.9            309.5

 

Matching_Into_Green_Oven_zoomed_out.png
Matching_Into_Green_Oven_zoomed_in.png

 

And at this point the maximum power available at the oven waist is 298 mW, with 663 mW available from the laser at a requested setting of 700 mW on the supply. We should make sure we understand where the power is being lost. The beam coming through the FI looks clean and unclipped, but there is some stray light around.

 

Position   A1_13.5%_width   A2_13.5%_width
(bench)    (um mean)        (um mean)
7          868.5            739.9
6          1324             1130
5          1765             1492
4          2214             1862

 

The plot looks pretty good, but again, there looks to be an offset on the 'fitted' curve. I took a couple of additional points further along, at the suggestion of Kiwamu and Koji, to make sure it all works out as the beam propagates - see the zoomed-out plot. The zoomed-in plot has by-eye fit lines - again, because to get the right shape to fit the points there appears to be an offset. Where is that coming from? My suspicion is that the Beamscan doesn't take account of any background zero offset when calculating the 13.5% widths, and we've been using low power when doing these measurements - these are very small focussed beams and I didn't want to risk damage to the profiler head.

 

Decided to take a few measurements to test this theory: trying different power settings and seeing whether that gives a different offset and/or a changed width.

 

Position   A1_13.5%_width   A2_13.5%_width   Notes
(bench)    (um mean)        (um mean)
7          984.9            824.0            very low power
7          931.9            730.3            low power
7          821.6            730.6            higher power
7          816.4            729.5            as high as I'm comfortable going

 

Trying this near the waist…

 

Position   A1_13.5%_width   A2_13.5%_width   Notes
(bench)    (um mean)        (um mean)
8.75       130.09           132.04           low power
8.75       106.58           105.46           higher power
8.75       102.44           103.20           as high as it can go without saturating

 

So it looks like the offset *is* significant, and the Beamscan measurements are more accurate at higher power, where the offset matters less. Additionally, if this is the case then we can fit the previous data (which was all taken with the same power setting) and simply allow the offset to be a free parameter without affecting the accuracy of the waist calculation. That fit and data are coming to an elog near you soon.

 

Of course, it looks from the plots above (well... the code that produces the plots above) that the waist is actually a little bit small (around 46um) so some adjustment of the last lens back along the beam by about half a cm or so might be required.

 
  4473   Thu Mar 31 02:59:49 2011   Koji   Configuration   Green Locking   The wonderful world of mode-matching

 I went through the entries.

1. Give us a photo of the day. i.e. Faraday, tilted lens, etc...

2. After all, where did you put the faraday in the plot of the entry 4466?

3. Zoomed-in plot for the SHG crystal show no astigmatism. However, the zoomed out plot shows some astigmatism.
How consistent are they? ==> Interested in seeing the fit including the zoomed out measurements.

  4476   Thu Mar 31 14:10:00 2011   Bryan   Configuration   Green Locking   The wonderful world of mode-matching

Quote:

 I went through the entries.

1. Give us a photo of the day. i.e. Faraday, tilted lens, etc...

2. After all, where did you put the faraday in the plot of the entry 4466?

3. Zoomed-in plot for the SHG crystal show no astigmatism. However, the zoomed out plot shows some astigmatism.
How consistent are they? ==> Interested in seeing the fit including the zoomed out measurements.

OK. Taking these completely out of order, easiest first...

2. The FI is between positions 27.75 and 32 on the bench - i.e. this is where the input and output apertures are (this corresponds to between 0.58 and 0.46 on the scale of those two plots, and just before both the vertical and horizontal waists). At these points the beam radius is around 400 um and below, and the aperture of the Faraday is 4.8 mm (diameter).
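
(As a sanity check on clipping at the FI - a rough estimate assuming an ideal Gaussian beam - the fraction of power falling outside a circular aperture of radius a is exp(-2*a^2/w^2), which for these numbers is utterly negligible:)

w    = 400e-6;              % beam radius at the Faraday [m]
a    = 4.8e-3/2;            % Faraday aperture radius [m]
loss = exp(-2*a^2/w^2);     % fraction of power clipped (~5e-32, i.e. negligible)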

1. Photos...

Laser set up - note the odd angles of the mirrors. This is where we're losing a goodly chunk of the light. If need be we could set it up with an extra mirror and send the light round a square to provide alignment control AND reduce optical power loss...

P3310028.JPG

 

Faraday and angled lens - note that the lens angle is close to 45 degrees. In principle this could be replaced with an appropriate cylindrical lens, but as long as there's enough light passing through to the oven I think we're OK.

P3310029.JPG

3. Fitting... coming soon once I work out what it's actually telling me. Though I hasten to point out that the latter points were taken with a different laser power setting and might well be larger than the actual beam width, which would lead to apparent astigmatic behaviour.

  4477   Thu Mar 31 15:23:14 2011   Bryan   Configuration   Green Locking   The wonderful world of mode-matching

Quote:

3. Zoomed-in plot for the SHG crystal show no astigmatism. However, the zoomed out plot shows some astigmatism.

How consistent are they? ==> Interested in seeing the fit including the zoomed out measurements.

Right. Fitting to the data. Zoomed-out plots first. I used the general equation f(x) = w_o.*sqrt(1 + (((x-z_o)*1064e-9)./(pi*w_o.^2)).^2)+c for each fit, which is basically just the Gaussian beam width calculation but with an extra offset parameter 'c'.
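
(For the record, a minimal version of this fit in MATLAB's Curve Fitting Toolbox; the variable names z and w_meas and the start points are placeholders - position and width data are assumed to be in metres:)

ft = fittype('w_o.*sqrt(1 + (((x-z_o)*1064e-9)./(pi*w_o.^2)).^2)+c', ...
             'independent', 'x', 'coefficients', {'w_o','z_o','c'});
cf = fit(z(:), w_meas(:), ft, 'StartPoint', [50e-6, 1.05, 0]);   % [w_o, z_o, c] guesses
disp(cf); disp(confint(cf));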

Vertical fit for zoomed out data:

Coefficients (with 95% confidence bounds):

       c =   7.542e-06  (5.161e-06, 9.923e-06)

       w_o =   3.831e-05  (3.797e-05, 3.866e-05)

       z_o =       1.045  (1.045, 1.046)

 

Goodness of fit:

  SSE: 1.236e-09

  R-square: 0.9994

 
Horizontal fit for zoomed out data:
 

Coefficients (with 95% confidence bounds):

       c =   1.083e-05  (9.701e-06, 1.195e-05)

       w_o =   4.523e-05  (4.5e-05, 4.546e-05)

       z_o =       1.046  (1.046, 1.046)

 

Goodness of fit:

  SSE: 2.884e-10

  R-square: 0.9998

  Adjusted R-square: 0.9998

  RMSE: 2.956e-06

 

Zoomed_out_fitting01.png

-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-

 

OK. Looking at the plots and residuals for this, the deviation of the fit around the waist position, and in fact all over, looks to be of the order of 10 um. A bit large, but is it real? Both w_o values are a bit lower than the 50 um we'd like, but… let's check using only the zoomed-in data - hopefully more consistent, since it was all taken with the same power setting.

 

 

Vertical data fit using only the zoomed in data:

 

Coefficients (with 95% confidence bounds):

       c =   1.023e-05  (9.487e-06, 1.098e-05)

       w_o =   4.313e-05  (4.252e-05, 4.374e-05)

       z_o =       1.046  (1.046, 1.046)

 

Goodness of fit:

  SSE: 9.583e-11

  R-square: 0.997

 

Horizontal data fit using only the zoomed in data:

 

Coefficients (with 95% confidence bounds):

       c =   1.031e-05  (9.418e-06, 1.121e-05)

       w_o =    4.41e-05  (4.332e-05, 4.489e-05)

       z_o =       1.046  (1.046, 1.046)

 

Goodness of fit:

  SSE: 1.434e-10

  R-square: 0.9951

 

-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-

Zoomed_in_fitting01.png

 

The waists are both fairly similar this time, 43.13 um and 44.1 um, and the offsets are similar too - the residuals are only spread by about 4 um this time.

 

I'm inclined to trust the zoomed-in measurement more, since all of that data was obtained under the same conditions, but either way the fitted waist is a bit smaller than the 50 um we'd like to see. I think it's worthwhile moving the 62.9 mm lens back along the bench by about 3/4 to 1 cm to increase the waist size.

 

 

 

 

 

  4485   Mon Apr 4 14:20:32 2011   Bryan   Configuration   Green Locking   The wonderful world of mode-matching

Last bit of oven matching for now.

 

I moved the lens before the oven back along the beam path by about 1 cm - the waist should be just above position 9 in this case. Note - given the power findings from last time, I'm maximising the power into the head to reduce the effect of offsets.

 

From position 9:

Position   A1_13.5%_width   A2_13.5%_width
(mm)       (um mean)        (um mean)
-1         121.1            123.6
0          112.5            113.8
1          106.4            106.1
2          102.9            103.4
3          103.6            103.6
4          106.6            107.4
5          111.8            112.5
6          118.2            120.1
7          126.3            128.8
8          134.4            137.1
9          143.8            146.5
10         152.8            156.1
11         163.8            167.1
12         175.1            176.4
13         186.5            187.0
14         197.1            198.4
15         210.3            208.9
16         223.5            218.7
17         237.3            231.0
18         250.2            243.9
19         262.8            255.4
20         274.7            269.0
21         290.4            282.3
22         304.3            295.5
23         316.7            303.1

 

Note - I had to reduce the power due to peak saturation at 15 mm - I don't think the scale changed, but be aware just in case. It saturated again at 11, and again at 7; a little power adjustment each time made sure the Beamscan head wasn't saturating. Running the fit gives...

 

Waist_Fits_from_laser.png
Waist_Fits_Bench_Position.png

 

OK. The fit is reasonably good. The residuals around the area of interest are (with one exception) < +/- 2 um, and the waists are 47.5 um (vertical) and 50.0 um (horizontal) at a position of 9.09 on the bench. The details of the fitting output are given below.

 

-=-=-=-=-=-=-=-=-=-=-=-

Vertical Fit

 

cf_ =

 

     General model:

       cf_(x) = w_o.*sqrt(1 + (((x-z_o)*1064e-9)./(pi*w_o.^2)).^2)+c

     Coefficients (with 95% confidence bounds):

       c =   5.137e-06  (4.578e-06, 5.696e-06)

       w_o =   4.752e-05  (4.711e-05, 4.793e-05)

       z_o =        1.04  (1.039, 1.04)

 

 

cfgood_ = 

 

           sse: 1.0699e-11

       rsquare: 0.9996

           dfe: 22

    adjrsquare: 0.9996

          rmse: 6.9738e-07

 

-=-=-=-=-=-=-=-=-=-=-=-

Horizontal Fit

 

cf_ =

 

     General model:

       cf_(x) = w_o.*sqrt(1 + (((x-z_o)*1064e-9)./(pi*w_o.^2)).^2)+c

     Coefficients (with 95% confidence bounds):

       c =    3.81e-06  (2.452e-06, 5.168e-06)

       w_o =   5.006e-05  (4.909e-05, 5.102e-05)

       z_o =        1.04  (1.04, 1.04)

 

 

cfgood_ = 

 

           sse: 4.6073e-11

       rsquare: 0.9983

           dfe: 22

    adjrsquare: 0.9981

          rmse: 1.4471e-06

 

 

 

  9476   Sun Dec 15 20:37:41 2013   rana   Summary   Treasure   There is a Wagonga in the container that Steve does not believe in

From Linda and Bram:

  10272   Thu Jul 24 19:28:43 2014   Akhil   Update   General   Thermal Actuator Transfer Functions

As part of the temperature actuator characterization, today Eric Q and I made some measurements of the open-loop TF of both the X-arm and Y-arm thermal actuators.

For this, we applied a random excitation to the temperature offset input (since we faced some serious issues when we used a swept sine yesterday) and observed the PZT actuation signal, keeping the arm locked for the whole measurement and making sure that the PZT signal didn't saturate.

The  channels used for the measurement were  C1:ALS-X_SLOW_SERVO2_EXC as the input and C1:ALS-X_SLOW_SERVO1_IN1  as the output.

The random noise used for the measurement:

Y-ARM: gain = 6000; filter = first-order Butterworth band-pass, start frequency = 1 Hz, stop frequency = 5 Hz.

X-ARM: gain = 3000; filter = first-order Butterworth band-pass, start frequency = 3 Hz, stop frequency = 30 Hz, plus notch(1,10,20).

The Y-arm measurement was stable, but for the X-arm the PZT was saturating too often, so Eric Q went into the lab and placed a 20 dB attenuator in the path of the X-arm PZT signal readout so that we could carry out stable measurements.

The units of these TF measurements are not calibrated and are in counts/count. I will have to calibrate them by measuring the PZT counts while changing the cavity length, so that I can get a standard conversion into Hz/count. I will post the calibrated TFs in my next elog, after I take the cavity length and PZT TFs.

Attached are the Bode plots for both the X-arm and Y-arm thermal actuators (uncalibrated). I will work on finding the poles and zeros of this system once I finish calibrating the TF measurements.
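
(For reference, a sketch of how a TF like this can be estimated offline from the excitation and response time series in MATLAB; the variable names, sample rate and averaging parameters below are placeholders, not the actual settings:)

% exc = temperature-offset excitation, rsp = PZT actuation signal (placeholder names)
fs   = 16;                                   % assumed slow-channel sample rate [Hz]
nfft = 256;                                  % FFT length (placeholder)
[txy, f] = tfestimate(exc, rsp, hanning(nfft), nfft/2, nfft, fs);
cxy      = mscohere(exc, rsp, hanning(nfft), nfft/2, nfft, fs);
loglog(f, abs(txy)); xlabel('Frequency [Hz]'); ylabel('Magnitude [cts/ct]');
% only trust the points where the coherence cxy is close to 1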

  10275   Sat Jul 26 13:10:14 2014   Akhil   Update   General   Thermal Actuator Transfer Functions

Koji said that the method we used for the X-arm thermal actuator TF measurement was not correct, and suggested that we make measurements separately at high and low frequencies (ensuring that the coherence at those frequencies is high).

(Edit by KA: The previous measurements for the X/Y arm thermal actuators were done with each arm individually locked. This imposes the MC stability on the arm motion. The MC stability is worse than the arm stability because of its shorter length and larger number of mirrors, so the arm motions were actually amplified rather than stabilized. The correct configuration is to stabilize the MC using the other arm and control the measurement arm using the arm cavity length.)

So Eric Q and I took some improved TF measurements for the X-arm last night. The input excitation and the filters used were similar to those of the previous measurement. Attached are the TF plots showing the two different frequency measurements; the data was saved and will be used to generate a complete TF. The attachment (TFX_new.pdf) shows the independent TF measurement for the X-arm temperature actuator: the black trace shows the TF at high frequencies (>1 Hz) and the red at low frequencies (<1 Hz). The final TF plots (from the data) will be posted in my next elog.

We also made the measurements needed to calibrate these actuator transfer functions. For this we excited the arm length (separately for the X arm and Y arm) and measured the PZT response. I will elog the details of the measurement and results shortly.

  2094   Thu Oct 15 01:21:31 2009   rana   Summary   COC   Thermal Lensing in the ITM

Thermal lensing formula:

Untitled.png

from T090018 by A. Abramovici (which references another doc).

In the above equation:

w        1/e^2 beam radius

k        thermal conductivity (not the wave vector) = 1.3 W / m/ K

alpha    absorption coefficient (~10 ppm/cm for our glass)

NP       power in the glass (alpha*NP = absorbed power)

dn/dT    index of refraction change per deg  (12 ppm/K)

d        mirror thickness (25 mm for all of our SOS)

I'm attaching a plot showing the focal length as a function of recycling cavity power for both our current MOS and future SOS designs.

I've assumed a 10 ppm/cm absorption here. It may actually be less for our current ITMs which are made of Heraeus low absorption glass - our new ITMs are Corning 7980-A (measured to have an absorption of 13 ppm/cm ala the iLIGO COC FDD). I expect that our thermal lens focal length will always be longer than 1 km and so I guess this isn't an issue.
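
(For a quick scaling check - a rough sketch only, using the usual thin thermal-lens approximation with order-unity geometric factors dropped and the absorbed power taken as alpha*d*P, not the exact expression from T090018:)

k     = 1.3;       % thermal conductivity [W/m/K]
alpha = 1e-3;      % bulk absorption, 10 ppm/cm = 1e-3 [1/m]
dndT  = 12e-6;     % dn/dT [1/K]
d     = 25e-3;     % substrate thickness [m]
w     = 3e-3;      % beam radius on the ITM [m]
P     = 10;        % power through the ITM bulk [W] (example value)
Pabs  = alpha*d*P;               % power absorbed in the bulk [W]
f     = pi*k*w^2/(dndT*Pabs);    % thermal-lens focal length [m] (~12 km for these numbers)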

  470   Thu May 8 02:06:13 2008   rana   Summary   COC   Thermal Lensing in the ITMs and BS may be a problem
The iLIGO interferometers start to see thermal lensing effects with ~2 W into the MC, a recycling gain of ~50, and a beam waist on the ITMs of ~3.5 cm.

At the 40m, the laser power into the MC is 1/2 as much, the recycling gain is 4-5x less, but the beam on the ITM has a 3 mm waist. So the power in the ITM bulk is 10x less but the power density is 100x more. Seems like the induced lens in the ITM bulk might be larger, and that if there's significant absorption on the ITM face (remember our Finesse is 4-5x higher) the beam size in the arm cavity may also change enough to measure.

Someone (like Andrey) should calculate how much the beam sizes change with absorbed power.
  14041   Fri Jul 6 12:12:09 2018   Annalisa   Configuration   Thermal Compensation   Thermal compensation setup

I tried to put together a rudimentary heater setup. 

As a heating element, I used the soldering iron tip heated up to ~800°C.

To make a reflector, I used the small basket which holds the cork of a champagne bottle (see figure 1), and I covered it with aluminum foil. Of course, it cannot really be considered a parabolic reflector, but it's something close (see figure 2).

Then I put a 1 inch ZnSe lens, 3.5 inch FL (borrowed from the TCS lab), right after the reflector, in order to collect as much of the radiation as possible and focus it onto an image (figure 3). In principle, if the heat is collimated by the reflector, the lens should focus it to a pretty small image. Finally, in order to see the image, I put up a screen made from a small piece of packaging sponge (because it shouldn't diffuse too much), and I tried to see the projected pattern with a thermal camera (also borrowed from Aidan). However, putting the screen in the lens focal plane didn't really give a sharp image, maybe because the reflector is not exactly parabolic and the heater is not at its focus. Light is still concentrated at the focal plane, although the image appears blurred. Perhaps I should find a better material (one that diffuses less) to project the thermal image onto (figure 4).

Finally, I measured the transmitted power with a broadband power meter; it came out to around 10 mW in the focal plane.

  14071   Fri Jul 13 23:39:46 2018   Annalisa   Configuration   Thermal Compensation   Thermal compensation setup - power supply

[Annalisa, Rana]

In order to power the heater setup to be installed in the ETMY chamber, we took the Sorensen DSC33-33E power supply from the Xend rack which was supposed to power the heater for the seismometer setup.

We modified the J3 connector on the back in such a way as to allow remote control (unsoldered pins 9 and 8).

Now pins 9 and 12 need to be connected to a BNC cable running to the EPICS.


RXA update: the Sorensens have the capability to be controlled by an external current source, voltage source, or resistive load. We have configured it so that 0-5 V moves the output from 0 to 33 V. There is also the possibility of making it a current source and having the output current (rather than voltage) follow the control voltage. This might be useful since our heater resistance changes with temperature.

  3189   Fri Jul 9 20:16:19 2010   rana   Summary   PSL   Things I did to the PSL today: Refcav, PMC, cameras, etc.

I re-aligned the beam into the PMC. I got basically no improvement. So I instead changed the .LOW setting so that PMCTRANS would no longer go yellow and make the donkey sound.

I did the same for the MOPA's AMPMON because its decayed state is now nominal.

 

Steve and I removed the thermal insulation from around the reference cavity vacuum chamber. It wasn't really any good anyways.

Here are the denuded photos:

 

Steve and I are now planning to replace the foam with some good foam, but before that we will wrap the RC chamber with copper sheets like you would wrap a filet mignon with applewood bacon.

This should reduce the thermal gradients across the can. We will then mount the sensors directly to the copper sheet using thermal epoxy. We will also use copper to cover most of this hugely oversized window flange - we only need a ~1" hole to get the 0.3 mm beam out of there.

 

My hope is that all of this will improve the temperature stability of this cavity. Right now the daily frequency fluctuations of the NPRO (locked to the RC) are ~100 MHz. This implies that the cavity dT = (100 MHz) / (299792458 / 1064e-9) / (5e-7) = 1 deg. That's sad....
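
(The same arithmetic spelled out, taking the 5e-7/K above as the effective fractional frequency shift per kelvin:)

df = 100e6;                % daily frequency wander [Hz]
nu = 299792458/1064e-9;    % laser frequency [Hz]
dT = (df/nu)/5e-7;         % ~0.7 K, i.e. of order 1 deg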

 

I also changed the RC_REFL cam to manual gain from AGC. I cranked it to max gain so that we can see the REFL image better.

  3196   Mon Jul 12 14:22:36 2010   Jenne   Summary   PSL   Things I did to the PSL today: Refcav, PMC, cameras, etc.

Quote:

I re-aligned the beam into the PMC. I got basically no improvement. So I instead changed the .LOW setting so that PMCTRANS would no longer go yellow and make the donkey sound.

I did the same for the MOPA's AMPMON because its decayed state is now nominal.

[Jenne, Chip]

The alarm was still going, because the LOLO setting was higher than the LOW, which is a little bit silly.  So we changed the .LOLO setting to 0.80 (the LOW was set to 0.82)

We also changed psl.db to reflect these values, so that they'll be in there the next time c1psl gets rebooted.

  12225   Wed Jun 29 00:09:36 2016   Aakash   Update   General   Things from past | SURF 2016

I have taken out the heaters and temperature sensors from the enclosure which was made by Megan last summer. Soon I will test and configure those heaters.

  3639   Fri Oct 1 18:53:33 2010   josephb, kiwamu   Update   CDS   Things needing to be done next week

We realized we cannot build code with the current RCG compiler on c1ioo or c1iscex, since these are not Gentoo machines.  We need either to get a backwards-compatible code generator, or change the boot priority (removing the hard drives also probably works) for c1ioo and c1iscex so they do the diskless Gentoo thing.  This would involve adding their MAC addresses to the framebuilder dhcpd.conf file in /etc/dhcp along with the computer IPs, and then modifying /diskless/root/etc/rtsystab with the right machine names and models to start.
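
(For reference, the dhcpd.conf addition would be a standard ISC host block like the sketch below - the MAC and IP shown are placeholders, not the real values - plus a matching hostname/model line in /diskless/root/etc/rtsystab:)

host c1ioo {
    hardware ethernet 00:00:00:00:00:00;   # MAC address of c1ioo (placeholder)
    fixed-address 192.168.0.10;            # c1ioo's IP (placeholder)
}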

I also need to bring some of the older, neglected models up to current build standards. I.e. use cdsIPCx_RFM instead of cdsIPCx and so forth. 

Need to fix the binary outputs for c1sus/c1mcs.  Need to actually get the RFM running, since Kiwamu was having some issues with his green RFM test model.  We have the latest checkout from Rolf, but we have no proof that it actually works.

  680   Wed Jul 16 11:26:47 2008   Max Jones   Update   This Week
Baffles.

I got a battery for the magnetometer today which is slightly too large (~2 mm) in one dimension. Not sure what I'm going to do.

I'm attempting to calibrate the magnetometer, but I'm having a hard time calibrating the axis that I cannot simply put through a coil parallel to the coil's length. I have attempted to use the end fields of the solenoid, but the measurements from the magnetometer are significantly different from the theoretical calculations.

I would appreciate suggestions. - Max.
  16308   Thu Sep 2 19:28:02 2021   Koji   Update   This week's FB1 GPS Timing Issue Solved

After the disk system trouble, we could not get the RTS running in its nominal state. As part of the troubleshooting, FB1 was rebooted, but then we found that the GPS time was a year off from the current time:

controls@fb1:/diskless/root/etc 0$ cat /proc/gps 
1283046156.91
controls@fb1:/diskless/root/etc 0$ date
Thu Sep  2 18:43:02 PDT 2021
controls@fb1:/diskless/root/etc 0$ timedatectl 
      Local time: Thu 2021-09-02 18:43:08 PDT
  Universal time: Fri 2021-09-03 01:43:08 UTC
        RTC time: Fri 2021-09-03 01:43:08
       Time zone: America/Los_Angeles (PDT, -0700)
     NTP enabled: no
NTP synchronized: yes
 RTC in local TZ: no
      DST active: yes
 Last DST change: DST began at
                  Sun 2021-03-14 01:59:59 PST
                  Sun 2021-03-14 03:00:00 PDT
 Next DST change: DST ends (the clock jumps one hour backwards) at
                  Sun 2021-11-07 01:59:59 PDT
                  Sun 2021-11-07 01:00:00 PST


Paco went through the process described in Jamie's elog [40m ELOG 16299] (except for the installation part) and it actually made the GPS time even stranger:

controls@fb1:~ 0$ cat /proc/gps
967861610.89

I decided to remove the gpstime module and then load it again. This brought the GPS time back to normal:

controls@fb1:~ 0$ sudo modprobe -r gpstime
controls@fb1:~ 0$ cat /proc/gps
cat: /proc/gps: No such file or directory
controls@fb1:~ 1$ sudo modprobe gpstime
controls@fb1:~ 0$ cat /proc/gps
1314671254.11

 

  16309   Thu Sep 2 19:47:38 2021   Koji   Update   CDS   This week's FB1 GPS Timing Issue Solved

After the reboot, daqd_dc was not working, but manually starting the open-mx / mx services solved the issue:

sudo systemctl start open-mx.service
sudo systemctl start mx.service
sudo systemctl start daqd_*

 

  9338   Mon Nov 4 15:46:17 2013   Jenne   Update   LSC   Thoughts and Conclusions from last week's PRMI+2arms attempt

5:31pm - This is still a work in progress, but I'm going to submit so that I save my writing so far. I think I'm done writing now.


First, a transcription of some of the notes that I took last Tuesday night, then a few looks at the data, and finally some thoughts on things to investigate.


MICH and PRCL Transfer Functions while arms brought in to resonance (both arms locked to ALS beatnotes):

This is summarized in elog 9317, which I made as we were finishing up Tuesday night.  Here's the full story though.  Note that I didn't save the data for these, I just took notes (and screenshots for the 1st TF).

POP22I was ~140 counts, POP110I was ~100 counts.

MICH gain = -2.0, PRCL gain = 0.070. 

First TF (used as reference for 2-10), PRMI locked on REFL165, Xarm transmission = 0.03, Yarm transmission = 0.05 (both arms off resonance).  MICH UGF~40Hz, PRCL UGF~80Hz.

MICH_40Hz.png
PRCL_80Hz.png

2: X=off-res (xarm not moved), Y=0.13, no change in TF

3: X=off-res (xarm not moved), Y=0.35, no change in TF

4: X=off-res (xarm not moved), Y=0.60, MICH high freq gain went up a little, otherwise no change (no change in either UGF)

5:  X=off-res (xarm not moved), Y=0.95, same as TF#4.

6: X=0.20, Y=1.10 (yarm not moved), same as TF#4

7:  X=0.40, Y=1.30 (yarm not moved), same as TF#4

8:  X=0.70, Y=1.55 (yarm not moved), same as TF#4

9: X=1.40, Y=2.20 (yarm not moved), same as TF#4

10: X=4.0, Y=4.0 (yarm not moved), PRCL UGF is 10Hz higher than TF#4, MICH UGF is 20Hz lower than TF#4.

11: (No TF taken), Xarm and Yarm transmission both around 20!  To get this, MICH FMs that were triggered, are no longer triggered to turn on.  Also, MICH gain was lowered to -0.15 and PRCL gain was increased to 0.1

12: (No TF taken), Xarm and Yarm transmissions both around 40!  The peaks could be higher, but we don't have the QPD ready yet.

After that, we started moving away from resonance, but we didn't take any more transfer functions.


OpLev spectra for different arm resonance values:

We were concerned that the ETMs and ITMs might be moving more, when the arms are resonating high power, due to some optical spring / radiation pressure effects, so I took spectra of oplevs at various arm transmissions.

I titled the first file "no lock", and unfortunately I don't remember what wasn't locked.  I think, however, that nothing at all was locked.  No PRMI, no arm ALS, no nothing.  Anyhow, here's the spectrum:

ALS_noLock.pdf

I have a measurement when the Yarm's transmission was 1, and the Xarm's transmission was 1.75.  This was a PRMI lock, with ALS holding the arms partially on resonance:

ALS_X1pt75Y1.pdf

Next up, I have a measurement when Yarm was 0.8, Xarm was 2.  Again, PRMI with the arms held by ALS:

ALS_X2Y0pt8.pdf

And finally, a measurement when Xarm was 5, Yarm was 4:

ALS_X5Y4.pdf

Just so we have a "real" reference, I have just now taken a set of oplev spectra, with the ITMs, ETMs and PRM restored, but I shut the PSL shutter, so there was no light flashing around pushing on things.  I noticed, when taking this data, that if the PSL shutter was open, so the PRFPMI is flashing (but LSC is off), the PRM oplev looks much like the original "no Lock" spectra, but when I closed the shutter, the oplev looks like the others.  So, perhaps when we're getting to really high powers, the PRM is getting pushed around a bit?

ALS_noLock_noLaser.pdf

Conclusions from OpLev Spectra:  At least up to these resonances (which is, admittedly, not that much), I do not see any difference in the oplev spectra at the different buildup power levels.  What I need to do is make sure to take oplev spectra next time we do the PRMI+2arms test when the arms are resonating a lot. 


Time series while bringing arms into resonance:

PRMI_2arms_29Oct2013_POPrin.png

I had wondered if, since the POP 22 and 110 values looked so shakey, we were increasing the PRCL RIN while we brought the arms into resonance.  You can see in the above time series that that's not true.  The left side of the plot is PRMI locked, arms held out of resonance using ALS.  First the Yarm is brought close to resonance, then the Xarm follows.  The RIN of the arms is maybe increasing a little bit as we get closer to resonance, but not by that much.  But there seems to be no correlation between arm power and RIN of the power recycling cavity.

Alternatively, here is some time series when the arm powers got pretty high:

PRMI_2arms_29Oct2013_POPrin_highArmPowers.png


Possible Saturation of Signals:

One possibility for our locklosses of PRMI is that some signal somewhere is saturating, so here are some plots showing that that's not true for the error and control signals for the PRMI:

PRMI_2arms_29Oct2013_LSCcontrolSignals.png

Here, for the exact same time, is a set of time series for every optic except the SRM.  We can see that none of the signals are saturating, and I don't see any big differences for the ITMs or ETMs in the times that the PRMI is locked with high arm powers (center of the x-axis on the plot) and times that the PRMI is not locked, so we don't have high arm powers (edges of the plot - first half second, and last full second).  You can definitely see that the PRM moves much more when the PRMI is locked though, in both pitch and yaw. 

PRMI_2arms_29Oct2013_OpLevs_highArmPowers.png

DCPD signals at the same time:

PRMI_2arms_29Oct2013_DCPDs.png

NB:  These latest 3 plots were created with the getdata script, with arguments "-s 1067163405 -d 7".  It may be a good idea to take some spectra starting at, say 1067163406, 1 second in, and going for ~2 seconds. (It turns out that this is kind of a pain, and I can't convince DTT to give me a sensible spectrum of very short duration....we'll just need to do this live next time around).


Things to think about and investigate:

Why are we losing lock? 

On paper, is the (will the) optical spring a problem once we get high resonance in the arms? 

Spectra of oplevs when we're resonating high arm power.

What is the coupling between 110MHz and 165MHz on the REFL165 PD?  Do we need a stronger bandpass? 

Why are things so shakey when the arm power builds up?

Why do PRCL and MICH have different UGFs when the arms are controlled by ALS vs. ETMs misaligned?

Does QPD for arm transmissions switching work?  Can we then start using TRX and TRY for control?

What is the meaning of the similar features in both transmission signals, and the power recycling cavity?  Power fluctuation in the PRC due to PRM motion? 

  9339   Mon Nov 4 17:08:23 2013   Jenne   Update   LSC   Thoughts on Transition to IR

Gabriele and I talked for a while on Wednesday afternoon about ideas for transitioning to IR control, from ALS. 

I think one of the baseline ideas was to use the sqrt(transmission) as an error signal.  Gabriele pointed out to me that to have a linear signal, really what we need is sqrt( [max transmission] - [current transmission] ), and this requires good knowledge of the maximum transmission that we expect.  However, we can't really measure this max transmission, since we aren't yet able to hold the arms that close to resonance.  If we get this number wrong, the error signal close to the resonance won't be very good.
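
(A toy illustration of the point, treating the arm transmission as a simple Lorentzian in the offset - MATLAB, everything normalized and purely illustrative. Near resonance sqrt(Tmax - T) grows linearly with the offset, but if Tmax is even slightly underestimated the error signal dies in a band around resonance:)

delta  = linspace(-3, 3, 601);          % offset in units of the half-linewidth
Tmax   = 1;                             % assumed true peak transmission
T      = Tmax./(1 + delta.^2);          % Lorentzian resonance
err    = sqrt(Tmax - T);                % ~ sqrt(Tmax)*|delta| near resonance
errBad = sqrt(max(0.9*Tmax - T, 0));    % same signal with Tmax underestimated by 10%
plot(delta, err, delta, errBad); xlabel('offset / half-linewidth');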

Gabriele suggested maybe using just the raw transmission signal.  When we're near the half-resonance point, the transmission gives us an approximately linear signal, although it becomes totally non-linear as we get close to resonance.  Using this technique, however, requires lowering the finesse of PRCL by putting in a medium-large MICH offset, so that the PRC is lossy.  This lowering of the PRC finesse prevents the coupled-cavity linewidth of the arm from getting too tiny.  Apparently this trick was very handy for Virgo when locking the PRFPMI, but it's not so clear that it will work for the DRFPMI, because the signal recycling cavity complicates things.

I need to look at, and meditate over, some Optickle simulations before I say much else about this stuff.

  9340   Mon Nov 4 18:24:15 2013   Koji   Update   LSC   Thoughts on Transition to IR

 You have the data. Why don't you just calculate 1/SQRT(TRX)?

...yeah, you can calculate it, but of course you don't have any reference for the true displacement...

  9344   Tue Nov 5 16:39:54 2013   Gabriele   Update   LSC   Thoughts on Transition to IR

Quote:

Gabriele and I talked for a while on Wednesday afternoon about ideas for transitioning to IR control, from ALS. 

I think one of the baseline ideas was to use the sqrt(transmission) as an error signal.  Gabriele pointed out to me that to have a linear signal, really what we need is sqrt( [max transmission] - [current transmission] ), and this requires good knowledge of the maximum transmission that we expect.  However, we can't really measure this max transmission, since we aren't yet able to hold the arms that close to resonance.  If we get this number wrong, the error signal close to the resonance won't be very good.

Gabriele suggested maybe using just the raw transmission signal.  When we're near the half-resonance point, the transmission gives us an approximately linear signal, although it becomes totally non-linear as we get close to resonance.  Using this technique, however, requires lowering the finesse of PRCL by putting in a medium-large MICH offset, so that the PRC is lossy.  This lowering of the PRC finesse prevents the coupled-cavity linewidth of the arm to get too tiny.  Apparently this trick was very handy for Virgo when locking the PRFPMI, but it's not so clear that it will work for the DRFPMI, because the signal recycling cavity complicates things.

I need to look at, and meditate over, some Optickle simulations before I say much else about this stuff.

The idea of introducing a large MICH offset to reduce the PRC finesse might help us get rid of the transmitted power signal. We might be able to increase the linewidth of the double cavity enough to make it larger than the ALS length fluctuations. Then we can switch from ALS to the IR demodulated signal without transitioning through the power signal.

  9636   Fri Feb 14 00:58:41 2014   Jenne   Update   LSC   Thoughts on Transition to IR

[Koji, Jenne, EricQ, Manasa]

We had a short discussion this evening about what our game plan should be for transitioning from using the ALS system to IR-generated error signals. 


The most fundamental piece is that we want to, instead of having a completely separate ALS locking system, integrate the ALS into the LSC.  Some time ago, Koji did most of the structural changes to the LSC model (elog 9430), and exposed those changes on the LSC screen (elog 9449).  Tonight, I have thrown together a new ALS screen, which should eventually replace our current ALS screen.  My goal is to retain all the functionality of the old screen, but instead use the LSC-version of the error signals, so that it's smoother for our transition to IR.  Here is a screenshot of my new screen:

Screenshot-Untitled_Window-1.png

You will notice that there are several white blocks in the center of the screen. From our discussion this evening, it sounds like we may want to add 4 more locking servo paths to the LSC (ALS for each individual arm, and then ALS for CARM and DARM signals).  The reason these should be separate is that the ALS and the "regular" PDH signals have different noise characteristics, so we will want different servo shapes.  I am proposing to add these 4 new servo blocks to the c1lsc model.  If I don't hear an objection, I'll do this on Monday during the day, unless someone else beats me to it.  The names for these filter modules should be C1:LSC-ALS_XARM, C1:LSC-ALS_YARM, C1:LSC-ALS_DARM and C1:LSC_ALS_CARM.  This will add new rows to the input matrix, and new columns to the output matrix, so the LSC screen will need to be modified to reflect all of these changes.  The new ALS screen should automatically work, although the icons for the input and output matrices will need to be updated. 

The other major difference between this new paradigm and the old, is the place of the offset in the path.  Formerly, we had auxiliary filter banks, and the summation was done by entering multiple values in the ALS input matrix.  Now, since there is a filter bank in the c1lsc model for each of the ALS signals precisely where we want to add our offsets, and I don't expect us to need to put any filters into those filter modules, I have used the offset and TRAMP of those filter banks for the offsets.  Also, you can access the offset value, and the ramp time, as well as the "clear history" button for the phase tracker, all from the main screen, which should help reduce the number of different screens we need to have open at once when locking with ALS.  Anyhow, the actual point where the offset is added has not changed, just the way it happens has. 

When we make the move to using the ALS in the LSC, we'll also need to make sure our "watch arm" and "scan arm" scripts are updated appropriately.

As an intermediary locking step, we want to try to use the ALS system to actuate in a CARM and DARM way, not XARM and YARM.  We will transition from using each ALS signal to feed back to its own ETM, to having DARM feed back to the ETMs, and CARM feed back to MC2.  We may want to break this into smaller steps, first lock the arms to the beatnotes, then find the IR resonance points.  Transition to CARM and DARM feedback, but only using the ETMs.  After we've done that, then we can switch to actuating on MC2.  If we do this, then we'll be using the MC to reduce the CARM offset.

Once we can do this, and are able to reduce the CARM offset, we want to switch CARM over to a combination of the 1/sqrt(transmission) signals.  The CARM loop has a tighter noise requirement, so we can do this, but leave DARM locked to the beatnotes for a while.
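
(As a toy illustration of why 1/sqrt(transmission) works as a CARM error signal while still well off resonance - Lorentzian arm response assumed, everything normalized:)

delta = linspace(0.2, 5, 200);    % CARM offset in half-linewidths, off resonance
T     = 1./(1 + delta.^2);        % normalized arm transmission
sig   = 1./sqrt(T);               % = sqrt(1 + delta.^2), ~ delta for delta >> 1
plot(delta, sig, delta, delta, '--');   % nearly linear in the offset away from resonance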

After continuing to reduce the CARM offset, we will switch CARM over to one of the RF PDs, for its final low-noise state. 

We'll then do a quick swap of the DARM error signal to the AS port (maybe around the same time as CARM goes over to a PDH signal, before the CARM offset is zero?). 

During all of this, we hope that the vertex has stayed locked. If our 3f sensing matrix elements are totally degenerate when the arms are out of resonance, then we may need to acquire lock using REFL 1f signals, and as we approach the delicate point in the CARM offset reduction, move to 3f signals, and then move back to 1f signals after the arm reflection has done its phase flip.  Either way, we'll have to move from 3f to 1f for the final state.

  10900   Wed Jan 14 03:42:31 2015   Jenne   Update   LSC   Thoughts on going forward with variable finesse

[Jenne, Rana]

We tried locking with the variable finesse MICH offset technique again today. 

A daytime task tomorrow will be to figure out where we are in MICH and CARM offset spaces.  This will require some thinking, and perhaps some modelling.

We were using the UGF servos and checking out their step responses, and had the realization that we don't want the gain multiplication to happen before the offsets are applied, in the case of MICH and CARM.  Otherwise, as the UGF servo adjusts the gain, the offset is changed.  I think this is what ChrisW and I saw earlier on in the evening, when it seemed like the CARM offset spontaneously zoomed toward zero even though I didn't think I was touching any buttons or parameters.  Anyhow, we no longer used the MICH and CARM UGF servos for the rest of the night.  We need to think about where we want the offset to happen, and where we want the UGF servo multiplication to happen (maybe at the control point, with a very low bandwidth?) such that this is not an issue.

Also, I'm no longer sure that the sqrt(I^2 + Q^2) instead of the usual demodulation is going to work for the UGF servos (Q made this change the other day, after we had talked about it).  When the numbers going into the I and Q servo banks are small (around 1e-5), the total UGF servo gets the answer wrong by a factor of 10 or so.  If I made the "sin gain" and "cos gain" 1000 instead of the usual 1, the numbers were of the order 1e-2, and the servo worked like normal.  So, I think we were perhaps running into some kind of numerical error somehow.  We first noticed this when we lowered the DARM excitation by a factor of 10, and the servo no longer functioned.  We should take out this non-linear math and go back to linear math tomorrow.

During the evening tomorrow, we should try locking the PRMI with a large MICH offset, and then leaving CARM and DARM on ALS, and seeing how far we can get.  Is it possible to just jump over to RF signals, since we won't have to worry about the detuned cavity pole?

Tonight, the locking procedure was the same as usual, but stopping the carm_up script before it starts to lower the CARM offset at all.  Only difference was that MICH triggered FMs were 2,3,7 rather than the usual 2,6,8. 

So, assuming you have the IFO with CARM and DARM on ALS held at +3 CARM offset counts (which we think is about 3nm), and the PRMI is locked on REFL33I&Q with no offsets, here's what we did:

  • PRCL UGF servo on
  • MICH offset goes to -20
  • MICH transitions to ASDC (0.27*ASDC, then normalize by POPDC)
  • DARM UGF servo on
  • CARM offset to 1 (arms about 0.25)
  • CARM transition to SqrtInvTrans
  • Lower CARM gain to 4
  • CARM offset to 0.6 (arms about 1)
  • DARM transition to DC transmission
  • Increase MICH offset to about -650 or -670
  • Lower CARM offset, see what happens

Something else to think about:  Should we normalize our DC transmission signals by POPDC, so that the arm powers will change when we change the MICH offset (for a constant CARM offset)?

The best we got was holding for a few minutes at arm powers of 7.5, but since the MICH offset was large and the power recycling was low, this was perhaps pretty far.  This is why we need some calibration action.

Also, earlier today I copied the CARM and DARM "slide" filter module screens so that we have the same thing for MICH.  Now all 3 of these degrees of freedom have slider versions of the filter module screens, which are called from the ctrl_compact screen.

  10903   Thu Jan 15 04:41:01 2015   Jenne   Update   LSC   Thoughts on going forward with variable finesse

[Jenne, Diego]

Life would be easier with the UGF servos working.  As Diego already elogged, we aren't sure why the demod phases are changing, but that is certainly causing the I-signals to dip below zero, which the log function can't handle (there is a limiter before the log, so that the signal can't go below 1e-3).  Anyhow, this is causing the UGF servos to freak out, so we have not been using them for tonight's locking.

Our goal tonight was to see if we could introduce a nice big MICH offset, and then lower the CARM offset while keeping the arms locked on ALS.  We hope to see some kind of sign of a PDH signal in some RF PD. 

In the end, the highest we got to was -460 MICH offset counts, which we think is about 29nm (if our rough calibration is accurate). The MICH half fringe should be 188nm. With this offset, we got down to 0.3 CARM offset counts while locked on ALS.  We think that this is around 300pm, plus or minus a lot.  Note that while yesterday I had a pretty easy time getting to -660 counts of MICH offset, tonight I struggled to get past -200.  The only way we ended up getting farther was by lowering the CARM offset.  Although, as I type this, I realized that last night's work already had a lower CARM offset, so maybe that's key to being able to increase the MICH offset. 

We watched REFL11I and REFL11I/(TRX+TRY) on striptool, but we didn't see any evidence of a PDH signal.  We lost lock when I tried to transition CARM over to REFLDC, but I wasn't careful about my offset-setting, so I am not convinced that REFLDC is hopeless.

So.  Tonight, we didn't make any major locking progress (the MC started being fussy for about an hour, right after I ran the LSC offsets script, just before we started locking in earnest).  However, we have some ideas from talking with Rana about directions to go:

* Can we transition CARM over to REFL11I, and then engage the AO path?

* Then, while the MICH offset is still large, can we transition DARM over to POX or POY, actuating on a single arm?  If CARM is totally suppressed, this is DARM-y.  If CARM doesn't have the AO path yet, this is halfsy-halfsy, but maybe we don't care.

* Then, can we lower the MICH offset and transition back to a REFLQ signal?

* Separately, it seemed like we kept losing PRC lock due to PRC motion.  If the MICH offset is very large, are we sideband-limited at the POP port, such that we can use the POP DC QPD?  Is it even worth it?

 

MICH calibration:

A single mirror (ITM) moving by lambda/4, in the MICH-only situation, covers the full range from bright to dark fringe.  So, the half fringe should be at lambda/8, or about 133 nm.  If we are thinking about pushing on the BS, there's an extra factor of sqrt(2), so I think the half fringe should be at 188 nm of BS motion.

When we had MICH locked on ASDC/POPDC, we put in a line at 143.125Hz, at 20 counts to (0.5*BS-0.2625*PRM), so a total of 10 counts to the BS at 143Hz.  Given the BS calibration in http://nodus.ligo.caltech.edu:8080/40m/8242, this is 10.1pm of actuation.  We saw a line in the error signal of 0.1 counts, so we infer that the MICH error signal of ASDC/POPDC has a calibration of 94pm/count. This number was invariant over a few different MICH offsets, although the ones I measured at were all below about 100 counts of MICH offset, so maybe this number changes as we start to get farther from the MICH dark fringe.

 

IFO left flashing (all mirrors aligned except SRM) in case anyone wants fresh data for that.

  22   Sun Oct 28 03:03:42 2007   rana   Configuration   IOO   Three Way Excitement
We've been trying to measure the MC mirror internal mode frequencies so that we can measure their absorption before and after drag wiping.

It looked nearly impossible to see these modes as driven by their thermal excitation level; we're looking at the "MC_F" or 'servo' output directly on the MC servo board.

Today, I set up a band-limited noise drive into the 'Fast POS' inputs of the 3 MC coil driver boards (turns out you can do this with either the old HP or the SR785).

Frequencies:
MC1     28.21625 kHz
MC2     28.036   kHz
MC3     28.21637 kHz

I don't really have this kind of absolute accuracy. These are just numbers read off of the SR785.

The other side of the setup is that the same "MC_F" signal is going into the SR830 Lock-In, which is set to 'lock in' at 27.8 kHz. The resulting demodulated 'R' signal (magnitude) is going into our MC_AO channel (110B ADC).

As you can see from the above table, MC1 and MC3 are astonishingly and annoyingly close in frequency. I identified which mirror goes with which peak by driving one at a time and measuring on the spectrum analyzer. I repeated it several times to make sure I wasn't fooling myself; it seems like they really are very close but distinct peaks. I really wish we had chipped one of these mirrors before installing them.

Because of the closeness of these drumhead modes, we will have to measure the absorption by making long measurements of this channel.
  13665   Mon Mar 5 11:58:24 2018   gautam   Update   Electronics   Three opamps walked onto an AA board

For testing the new IR ALS noise, we had decided that we would like to use the differential output of the demodulated ALS beat signal, as opposed to a single-ended output, as measurements suggested the former to be a lower-noise configuration than the latter. For this purpose, Koji and I acquired a couple of old AA boards from the WB electronics shop. These are, however, rev2 of the board, whereas the latest version is v6. The main difference between v2 and v6 is that (i) the THS4131 fully differential amplifier has the Vocm pin grounded in v6 but floating in v2, and (ii) the buffer opamps are AD8622 in v6 but AD8672 in v2. In fact, though, the boards we have are stuffed with AD8682.

I talked to Rich on Friday, and he seemed to think the AD8672 didn't have any issues noise-wise; the main reason they changed it was that its power consumption was high, and was causing overheating when several of these 1U chassis were packed closely together in an electronics rack. But the AD8682, which is what we have, has comparable power consumption to the AD8622. It is however a JFET opamp, and the voltage noise is a bit higher than the AD8622.

I am sure there is a way to LISO model a differential output opamp like the THS4131, but I thought I'd simulate the noise in LTSPICE instead. But I couldn't get that to work. So instead, I just measured the transfer function and noise of a single channel, for which Koji had expertly hacked together a custom shorting of the THS4131 Vocm pin to ground. Attachments #1 and #2 show the measurement. All looks good. Note that the phase is 180 at DC because I had hooked up the input signal opposite to what it should have been. The voltage noise of the differential outputs (each measured w.r.t. ground, with both inputs shorted to ground by a short patch cable) at 10 Hz is <100nV/rtHz, and the ADC noise is expected to be ~1uV/rtHz, so I think this is fine.

Conclusion: I think for the ALS test, we can just use the AA board in this config without worrying too much about replacing the buffer stage opamps, even though we've ordered 100pcs of AD8622.


Addendum 7 Mar 2018 11am: As per this document, the output noise of the AA board should be <75nV/rtHz from 10 Hz-50 kHz. So maybe the AD8682 noise is a little high after all. I've gotten the LTSpice model working now, will post the comparison of modelled output noise for various combinations here shortly.

  13667   Wed Mar 7 12:04:14 2018   gautam   Update   Electronics   Three opamps walked onto an AA board

Here are the plots. Comments:

  1. Measurement and model agree quite well.
  2. Of the 3 OpAmps, the ones installed seem to be the noisiest (per model)
  3. Despite #2, I don't think it is critical to replace the buffer opamps as we only win by ~10nV/rtHz in the 300-10kHz range.
  4. I don't understand the spec given in T070146. It says the noise everywhere between 10Hz-50kHz should be <75nV/rtHz. But even the model suggests that at 10Hz, the noise is ~250nV/rtHz for any choice of buffer opamp, so that's a factor of 3 difference which seems large. Maybe I made a mistake in the model but the agreement between measurement and model for the AD8682 choice gives me confidence in the simulation. LTSpice files used are in Attachment #3. Could also be an artefact of the way I made the measurement - between an output and ground instead of differentially...

I like LTspice for such modeling - the GUI is nice to have (though I personally think that typing out a nodal file a la LISO is faster), and compared to LISO, I think that the LTspice infrastructure is a bit more versatile in terms of effects that can be modeled. We can also easily download SPICE models for OpAmps from manufacturers and simply add them to the library, rather than manually type out parameters in opamp.lib for LISO. But the version available for Mac is somewhat pared down in terms of the UI, so I had to struggle a bit to find the correct syntax for the various simulation commands. The format of the exported data is also not as amenable to python plotting as LISO output files, but i'm nitpicking...

Quote:

I've gotten the LTSpice model working now, will post the comparison of modelled output noise for various combinations here shortly.

 

  15442   Tue Jun 30 10:59:16 2020   gautam   Update   LSC   Three sensing matrices

Summary:

I injected some sensing lines and measured their responses in the various photodiodes, with the interferometer in a few different configurations. The results are summarized in Attachments #1 - #3. Even with the PRMI (no arm cavities) locked on 1f error signals, the MICH and PRCL signals show up in nearly the same quadrature in the REFL port photodiodes, except REFL165. I am now thinking if the output (actuation) matrix has something to do with this - part of the MICH control signal is fed back to the PRM in order to minimize the appearance of the MICH dither in the PRCL error signal, but maybe this matrix element is somehow horribly mistuned?

Details:

Attachment #1:

  • ETMs were misaligned and the PRMI was locked with the carrier resonant in the cavity (i.e. sidebands reflected).
  • The locking scheme was AS55_Q --> MICH and REFL11_I --> PRCL.

Attachment #2:

  • The PRFPMI was locked. The vertex DoFs were still under control using 3f error signals (REFL165_I for PRCL and REFL165_Q for MICH).
  • Still, the MICH/PRCL degeneracy in all photodiodes except REFL165 persists.

Attachment #3:

  • Nearly identical configuration to Attachment #2.
  • The main difference here is that I applied some offsets to the MICH and PRCL error points.
  • The offsets were chosen so that the appearance of a ~300 Hz dither in the length of MICH/PRCL was nulled in the AS110_Q / POP22_I signals respectively.
  • For the latter, the appearance of this peak in the POP110_I signal was also nulled, as it should be if our macroscopic PRC length is set correctly.
  • The offsets that best nulled the peak were 110 cts for PRCL, 25 cts for MICH. The measured sensing response is 1e12 cts/m for PRCL in REFL165_I and 9.2e11 cts/m for MICH in REFL165_Q. So these offsets, in physical units, are: 110 pm for PRCL and 27 pm for MICH. They seem like reasonable numbers to me - the PRC linewidth is ~7.5 nm, so the detuning without any digital offset applied is only 1.5% of the linewidth.
  • Note that I changed the POP22/POP110 demod phases to maximize the signal in the I quadrature. The final numbers were -124 degrees / -10 degrees respectively.
  • Yet another piece of evidence suggesting these were the correct offsets is that the DC value of POX and POY were zero on average after these offsets were applied.
  • However, the MICH/PRCL responses in the 1f REFL port photodiodes remain nearly degenerate.

Some other mysteries that I will investigate further:

  1. While POP22 indicated stable buildup of 11 MHz power in the PRC, I couldn't make any sense of the AS110 signals at the dark port - there was large variation of the signal content in the two quadratures, so unlike the POP signals, I couldn't find a digital demod phase that consistently had all the signal in one of the two quadratures. This is all due to angular fluctuations?
  2. My ASC simulations suggest that the POP QPD is a poor sensor of PRM motion when the PRFPMI is locked. However, I find that turning on a feedback loop with the POP QPD as a sensor and the PRM as the actuator dramatically reduces the low-frequency fluctuations of the arm cavity carrier buildup. 🤔

I blew the long lock last night because I forgot to not clear the ASS offsets when trying to find the right settings for running the ASS system at high power. Will try again tonight...

Quote:

Lock the PRMI on carrier and measure the sensing matrix, see if the MICH and PRCL signals look sensible in 1f and 3f photodiodes.

  1493   Fri Apr 17 11:05:22 2009   Yoichi   Update   Locking   Thursday night locking status
Last night, it was sort of robust going up to arm power = 26.
The REFL_DC gain seems to change a lot around this region. So I did fine adjustments of the gain with small incremental steps of the arm power.
This work will continue.
The AutoDTT shows that the lock loss happens with an oscillation of CARM at around 100Hz. This indicates that the cross-over is the culprit.
I was also able to increase the CM UGF up to 10kHz.
  249   Fri Jan 18 15:31:47 2008 robUpdateLSCThursday's locking

rob, johnnie, andrey

On Thursday night we got the interferometer fully locked in a power-recycled FPMI state. The obstacles included the REFL166 phase being wrong by 180 deg (because that's the correct phase for DRMI locking) and getting confused (again) by the "manual" mode dewhite switching at the ETMs. After turning on the dewhites and the MICH correction, we took the noise spectrum below.
  5927   Thu Nov 17 15:19:06 2011 steveUpdateSUSTi spring plunger to hold OSEM is not affordable

Our existing 300 series SS plungers from McMaster-Carr (#8476A43) are silver plated, as Atm2 shows.

Problems:

  1. They become magnetized after years of being close to the magnets.

  2. They oxidize over time, so it is hard to turn them.

I looked around to replace them.

Titanium body and nose, with a beryllium copper spring. Non-magnetic, for the UHV environment.

Can be made in 7 weeks at an UNREASONABLE $169.00 ea at quantity of 50

  2213   Mon Nov 9 13:26:19 2009 AlbertoOmnistructureEnvironmentTidying up BNC cables rack around the lab

We have thousands of miles of BNC cables in the lab, but we still can't find one when we need it. I decided to solve the problem.

This morning I tried to tidy up the several cable racks that we have in the lab.

I tried to dedicate each rack to a specific type of cable: special cables, hand-made cables, BNCs, LEMOs, etc.

In particular I tried to concentrate BNC cables of several lengths on the rack near the ITMX chamber.

People are invited to preserve the organization.

 

  2214   Mon Nov 9 14:53:47 2009 AlbertoOmnistructureEnvironmentTidying up BNC cables rack around the lab

This would be a good trial once you put the label "BNC only" on the wall.

Quote:

We have thousands of miles of BNC cables in the lab, but we still can't find one when we need it. I decided to solve the problem.

This morning I tried to tidy up the several cable racks that we have in the lab.

I tried to dedicate each rack to a specific type of cable: special cables, hand-made cables, BNCs, LEMOs, etc.

In particular I tried to concentrate BNC cables of several lengths on the rack near the ITMX chamber.

People are invited to preserve the organization.

 

  2216   Mon Nov 9 15:08:29 2009 KojiOmnistructureEnvironmentTidying up BNC cables rack around the lab

Quote:

This would be a good trial once you put the label "BNC only" on the wall.

Quote:

We have thousands of miles of BNC cables in the lab, but we still can't find one when we need it. I decided to solve the problem.

This morning I tried to tidy up the several cable racks that we have in the lab.

I tried to dedicate each rack to a specific type of cable: special cables, hand-made cables, BNCs, LEMOs, etc.

In particular I tried to concentrate BNC cables of several lengths on the rack near the ITMX chamber.

People are invited to preserve the organization.

 

 

Done! Check it out.

  11392   Tue Jul 7 17:22:16 2015 JessicaSummary Time Delay in ALS Cables

I measured the transfer functions of the delay line cables and then calculated the time delay from them.

The first cable had a time delay of 1272 ns and the second had a time delay of 1264 ns. 

Below are the plots I created to calculate this. There does, however, seem to be a pattern in the residual plots, which was not expected. 

The R-squared parameter was very close to 1 for both fits, indicating that the fits were good. 
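
For reference, the delay estimate boils down to a linear fit of the unwrapped transfer function phase versus frequency (phase = -2*pi*f*Td for a pure delay). A minimal Python sketch, assuming the measured transfer function is available as a complex array (an illustration only, not the actual fitting code):

import numpy as np

def delay_from_tf(freq_hz, tf_complex):
    # For an ideal delay e^(-s*Td), phase(f) = -2*pi*f*Td, so the slope of the
    # unwrapped phase gives the delay.
    phase = np.unwrap(np.angle(tf_complex))
    slope, _ = np.polyfit(freq_hz, phase, 1)
    return -slope / (2.0 * np.pi)

# Quick check with synthetic data for a 1272 ns delay:
f = np.linspace(1e5, 1e7, 500)
tf = np.exp(-2j * np.pi * f * 1272e-9)
print(delay_from_tf(f, tf) * 1e9, "ns")   # ~1272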

  10265   Wed Jul 23 18:53:11 2014 NichinUpdateElectronicsTime delay in RG405 coaxial cables

A time delay can be modeled as the exponential transfer function e^(-s*Td), as seen HERE. Therefore the slope of the phase gives us the time delay.

An RG405 coaxial cable, exactly 5.5 meters in length, was fit to an ideal delay function e^(-s*Td), with Td = 150 ns.

The plots show the actual data, the fit, and the data after correction using the ideal model stated above.

Conclusion:

Delay in RG405 cables is approximately 27.27 ns per meter. This value can be used to correct the phase in measurements of transimpedance for each PD by dividing out the ideal transfer function for time delay.

[EDIT: This corresponds to a signal speed of about 12% of the speed of light inside the RF cables. Too small to be true. I will check tomorrow whether the network analyzer itself has some delay and update this value.]

The varying attenuation of about 1 dB due to the cable is not compensated by this; we need to include it separately.
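
As a minimal sketch of the correction described above (illustration only, not the actual transimpedance analysis script; the 27.27 ns/m figure is the one quoted in this entry and may be revised per the EDIT note):

import numpy as np

DELAY_PER_METER = 27.27e-9   # s/m, from the 5.5 m cable fit above

def remove_cable_delay(freq_hz, tf_measured, cable_length_m):
    # Dividing by the ideal delay e^(-s*Td) is the same as multiplying by
    # e^(+j*2*pi*f*Td), which undoes the phase lag of the cable.
    td = DELAY_PER_METER * cable_length_m
    return tf_measured * np.exp(2j * np.pi * freq_hz * td)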

Things to do:

1) Get the length of RF cables that is being used by each PD, so that the compensation can be made.

2) Calculate the attenuation and delay caused by RF multiplexer and Demodulator boards. Include these in the correction factor for transimpedance measurements. 


  10266   Wed Jul 23 19:30:34 2014 NichinUpdateElectronicsTime delay in the RF multiplexer (Rack 1Y1)

A time delay can be modeled as the exponential transfer function e^(-s*Td), as seen HERE. Therefore the slope of the phase gives us the time delay.

The transfer function of the RF multiplexer in rack 1Y1 (NI PXI-2547) was fit to an ideal delay function e^(-s*Td), with Td = 59 ns.

The plots show the actual data, the fit, and the data after correction using the ideal model stated above.

Conclusion:

The delay of the RF multiplexer is approximately 59 ns. This value can be used to correct the phase in measurements of transimpedance for each PD by dividing out the ideal transfer function for time delay.

 

  1031   Tue Oct 7 12:17:57 2008 AlbertoConfigurationComputersTime reset on MEDM
Yoichi, Alberto

I noticed the MEDM screen time was about 7 minutes ahead of the right time. The time on MEDM is read from channel C0:TIM-PACIFIC_STRING, which takes it from the C1VCU-EPICS computer. Yoichi found that that computer did not have the right time because one of the startup scripts in /etc/init.d/, ntpd, for some reason did not start. Restarting it by typing ./ntpd start updated the time on that computer and fixed the problem.
  16283   Thu Aug 19 03:23:00 2021 AnchalUpdateCDSTime synchronization not running

I tried to read a bit and understand the NTP synchronization implementation in the FE computers. I'm quite sure that 'NTP synchronized' should read 'yes' in the output of timedatectl on these computers if timesyncd is running correctly. As Koji reported in 15791, this is not the case. I logged into c1lsc, c1sus and c1ioo and saw that the RTC has drifted from the software clocks too, which would not happen if NTP synchronization were active. This would mean that, almost certainly, if the computers are rebooted the synchronization will be lost and the models will fail to come online.

My current findings are the following (this should be documented in wiki once we setup everything):

  • nodus is running an NTP server using chronyd. One can check the configuration of this NTP server in /etc/chronyd.conf
  • fb1 is running an NTP server using ntpd that follows nodus and an IP address 131.215.239.14. This can be seen in /etc/ntp.conf.
  • There are no comments to describe what this other server (131.215.239.14) is. Does the GC network have an NTP server too?
  • c1lsc, c1sus and c1ioo all have systemd-timesyncd.service running with configuration file in /etc/systemd/timesyncd.conf.
  • The configuration file sets Servers=ntpserver, but echo $ntpserver produces nothing (blank) on these computers, and I've been unable to find any place where ntpserver is defined.
  • In chiara (our name server), the name server file /etc/hosts does not have any entry for ntpserver either.
  • I think the problem might be that these computers are unable to find the ntpserver as it is not defined anywhere.

The solution to this issue could be as simple as just defining ntpserver in the name server list. But I'm not sure if my understanding of this issue is correct. Comments/suggestions are welcome for future steps.

 

  16284   Thu Aug 19 14:14:49 2021 KojiUpdateCDSTime synchronization not running

131.215.239.14 looks like Caltech's NTP server (ntp-02.caltech.edu)
https://webmagellan.com/explore/caltech.edu/28415b58-837f-4b46-a134-54f4b81bee53

I can't say whether it is correct or not, as I did not make the survey at your level. I think you need a few tests of reconfiguring and restarting the NTP clients to see if time synchronization starts. Because the local time is not regulated right now anyway, this operation is safe, I think.

 

  16285   Fri Aug 20 00:28:55 2021 AnchalUpdateCDSTime synchronization not running

I added ntpserver as a known host name for address 192.168.113.201 (fb1's address where ntp server is running) in the martian host list in the following files in Chiara:

/var/lib/bind/martian.hosts
/var/lib/bind/rev.113.168.192.in-addr.arpa

Note: a host name called ntp was already defined at 192.168.113.11 but I don't know what computer this is.
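
For reference, the added records would look roughly like this (illustrative only - the actual origin/domain names used in the martian zone files may differ):

; in /var/lib/bind/martian.hosts (forward zone)
ntpserver    IN    A      192.168.113.201

; in /var/lib/bind/rev.113.168.192.in-addr.arpa (reverse zone)
201          IN    PTR    ntpserver.martian.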

Then, I restarted the DNS on chiara by doing:

sudo service bind9 restart

Then I logged into c1lsc and c1ioo and ran following:

controls@c1ioo:~ 0$ sudo systemctl restart systemd-timesyncd.service

controls@c1ioo:~ 0$ sudo systemctl status systemd-timesyncd.service -l
● systemd-timesyncd.service - Network Time Synchronization
   Loaded: loaded (/lib/systemd/system/systemd-timesyncd.service; enabled)
   Active: active (running) since Fri 2021-08-20 07:24:03 UTC; 53s ago
     Docs: man:systemd-timesyncd.service(8)
 Main PID: 23965 (systemd-timesyn)
   Status: "Idle."
   CGroup: /system.slice/systemd-timesyncd.service
           └─23965 /lib/systemd/systemd-timesyncd

Aug 20 07:24:03 c1ioo systemd[1]: Starting Network Time Synchronization...
Aug 20 07:24:03 c1ioo systemd[1]: Started Network Time Synchronization.
Aug 20 07:24:03 c1ioo systemd-timesyncd[23965]: Using NTP server 192.168.113.201:123 (ntpserver).
Aug 20 07:24:35 c1ioo systemd-timesyncd[23965]: Using NTP server 192.168.113.201:123 (ntpserver).
controls@c1ioo:~ 0$ timedatectl
      Local time: Fri 2021-08-20 07:25:28 UTC
  Universal time: Fri 2021-08-20 07:25:28 UTC
        RTC time: Fri 2021-08-20 07:25:31
       Time zone: Etc/UTC (UTC, +0000)
     NTP enabled: yes
NTP synchronized: no
 RTC in local TZ: no
      DST active: n/a

The same output is shown in c1lsc too. The NTP synchronized flag in output of timedatectl command did not change to yes and the RTC is still 3 seconds ahead of the local clock.

Then I went to c1sus to see what the status output was before restarting the timesyncd service. I got the following output:

controls@c1sus:~ 0$ sudo systemctl status systemd-timesyncd.service -l
● systemd-timesyncd.service - Network Time Synchronization
   Loaded: loaded (/lib/systemd/system/systemd-timesyncd.service; enabled)
   Active: active (running) since Tue 2021-08-17 04:38:03 UTC; 3 days ago
     Docs: man:systemd-timesyncd.service(8)
 Main PID: 243 (systemd-timesyn)
   Status: "Idle."
   CGroup: /system.slice/systemd-timesyncd.service
           └─243 /lib/systemd/systemd-timesyncd

Aug 20 02:02:18 c1sus systemd-timesyncd[243]: Using NTP server 192.168.113.201:123 (ntpserver).
Aug 20 02:36:27 c1sus systemd-timesyncd[243]: Using NTP server 192.168.113.201:123 (ntpserver).
Aug 20 03:10:35 c1sus systemd-timesyncd[243]: Using NTP server 192.168.113.201:123 (ntpserver).
Aug 20 03:44:43 c1sus systemd-timesyncd[243]: Using NTP server 192.168.113.201:123 (ntpserver).
Aug 20 04:18:51 c1sus systemd-timesyncd[243]: Using NTP server 192.168.113.201:123 (ntpserver).
Aug 20 04:53:00 c1sus systemd-timesyncd[243]: Using NTP server 192.168.113.201:123 (ntpserver).
Aug 20 05:27:08 c1sus systemd-timesyncd[243]: Using NTP server 192.168.113.201:123 (ntpserver).
Aug 20 06:01:16 c1sus systemd-timesyncd[243]: Using NTP server 192.168.113.201:123 (ntpserver).
Aug 20 06:35:24 c1sus systemd-timesyncd[243]: Using NTP server 192.168.113.201:123 (ntpserver).
Aug 20 07:09:33 c1sus systemd-timesyncd[243]: Using NTP server 192.168.113.201:123 (ntpserver).

This actually shows that the service was able to find ntpserver correctly at 192.168.113.201 even before I changed the name server file in chiara. So I'm retracting the changes made to name server. They are probably not required.

The configuration file timesyncd.conf is read-only even with sudo. I tried changing permissions but that did not work either. Maybe these files are not correctly configured. The man page of timesyncd says to use the field 'NTP' to give the ntp servers; our files are using the field 'Servers'. But since we are not getting any error message, I don't think this is the issue here.
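
For reference, the form the man page describes would be something like the following (a sketch only; depending on the systemd version shipped with Jessie, the older Servers= spelling that our files currently use may in fact be the accepted one):

[Time]
NTP=ntpserver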

I'll look more into this problem.

  16286   Fri Aug 20 06:24:18 2021 AnchalUpdateCDSTime synchronization not running

I read on some Stack Exchange post that the 'NTP synchronized' indicator turns 'yes' in the output of the timedatectl command only when the RTC clock has been adjusted at some point. I also read that timesyncd does not make the change if the time difference is too large, roughly more than 3 seconds.

So I logged into all FE machines and ran sudo hwclock -w to synchronize the RTCs to the system clocks, and then waited to see if timesyncd would make any correction to the RTC. It did not. A few hours later, I found the RTC clocks drifting away from the system clocks again. So even if the timesyncd service is running as it should, it is not performing time correction for whatever reason.

Maybe we should try to use some other service?

Quote:
 

The NTP synchronized flag in output of timedatectl command did not change to yes and the RTC is still 3 seconds ahead of the local clock.

 

  16291   Mon Aug 23 22:51:44 2021 AnchalUpdateGeneralTime synchronization efforts

Related elog thread: 16286


I didn't really achieve anything but I'm listing what I've tried.

  • I know now that the timesyncd isn't working because systemd-timesyncd is known to have issues when running on a read-only file system. In particular, the service does not have privileges to change the clock or drift settings at /run/systemd/clock or /etc/adjtime.
  • The workarounds to these problems are poorly rated/reviewed on Stack Exchange and require me to change the /etc/systemd/timesyncd.conf file, but I'm unable to edit this file.
  • I know that Paco was able to change these files earlier, as the files are now changed and configured to follow a Debian ntp pool server, which won't work since the FEs do not have internet access. So the conf file needs to be restored to use ntpserver as the ntp server.
  • From the system messages, the ntpserver is recognized by the service, as shown in the second part of 16285. I really think the issue is file permissions; the file /etc/adjtime has not been updated since 2017.
  • I got help from Paco on how to edit files for the FE machines. The FE machines' directories are exported from fb1:/diskless/root.jessie/
  • I restored the /etc/systemd/timesyncd.conf file to how it was before, with just the Servers=ntpserver line, and restarted the timesyncd service on all FEs, but the synchronization did not happen.
  • I tried a few suggestions from Stack Exchange, but none of them worked. The only rated solution creates a tmpfs directory outside of the read-only filesystem and uses that to run timesyncd. So, in my opinion, timesyncd would never work on our diskless, read-only-filesystem FE machines.
  • One issue in an Arch Linux discussion ended with the questioner resorting to openntpd from the OpenBSD distribution. The user claimed that openntpd is simple enough that it can run ntp synchronization on a read-only file system.
  • Somewhat painfully, I 'kind of' installed the openntpd tool in the fb1:/diskless/root.jessie directory following directions from here. I had to manually add the user and group for the FEs (which I might not have done correctly). I was not able to get the openntpd daemon to start properly after some tries.
  • I restored everything back to how it was and restarted timesyncd in c1sus even though it would not do anything really.
Quote:

This time no matter how we try to set the time, the IOPs do not run with "DC status" green. (We kept having 0x4000)

 

  16293   Tue Aug 24 18:11:27 2021 PacoUpdateGeneralTime synchronization not really working

tl;dr: NTP servers and clients were never synchronized, are not synchronizing even with ntp... nodus is synchronized but uses chronyd; should we use chronyd everywhere?


Spent some time investigating the ntp synchronization. In the morning, after Anchal set up all the ntp servers / FE clients, I tried restarting the rts IOPs with no success. Later, with Tega, we tried the usual manual matching of the date between the c1iscex and fb1 machines, iterating over different n-second offsets from -10 to +10, also without success.

This afternoon, I tried debugging the FE and fb1 timing differences. For this I inspected the ntp configuration file under /etc/ntp.conf in both fb1 and /diskless/root.jessie/etc/ntp.conf (for the FE machines) and tried different combinations with and without nodus, with and without restrict lines, all while looking at the output of sudo journalctl -f on c1iscey. Every time I changed the ntp config file, I restarted the service using sudo systemctl restart ntp.service.

Looking through some online forums, people suggested basic pinging to see if the ntp servers were up (and broadcasting their times over the local network), but this failed to run (read-only filesystem), so I went into fb1 and ran sudo chroot /diskless/root.jessie/ /bin/bash to allow me to change file permissions. The test was first done with /bin/ping, which couldn't even open a socket (root access needed): after running chmod 4755 /bin/ping, I ssh-ed into c1iscey and pinged the fb1 machine successfully. After this, I ran chmod 4755 /usr/sbin/ntpd so that the ntp daemon would have no problem reaching the server, in case this was what was blocking the synchronization. I exited the chroot shell and restarted the ntp daemon on c1iscey, but ntpstat still showed an unsynchronised status.

I also learned that when running an ntp query with ntpq -p, if a client has succeeded in synchronizing its time to the server time, an asterisk should appear next to the selected server (see the short command summary at the end of this entry). This was not the case on any FE machine... and looking at fb1, this was also not true. Although the fb1 peers are correctly listed as nodus, the Caltech ntp server, and a broadcast (.BCST.) server from local time (meant to serve the FE machines), none appears to have synchronized... Going one level further, on nodus I checked the time synchronization servers by running chronyc sources; the output shows

controls@nodus|~> chronyc sources
210 Number of sources = 4
MS Name/IP address         Stratum Poll Reach LastRx Last sample
===============================================================================
^* testntp1.superonline.net      1  10   377   280  +1511us[+1403us] +/-   92ms
^+ 38.229.59.9                   2  10   377   206  +8219us[+8219us] +/-  117ms
^+ tms04.deltatelesystems.ru     2  10   377   23m    -17ms[  -17ms] +/-  183ms
^+ ntp.gnc.am                    3  10   377   914  -8294us[-8401us] +/-  168ms

I then ran chronyc clients to see if fb1 was listed (as I would have expected), but the output shows this --

Hostname                   Client    Peer CmdAuth CmdNorm  CmdBad  LstN  LstC
=========================  ======  ======  ======  ======  ======  ====  ====
501 Not authorised

So clearly chronyd succeeded in synchronizing nodus's time to whatever servers it was pointed at, but downstream from there, neither fb1 nor any FE machine seems to be synchronizing properly. It may be as simple as figuring out the correct ntp configuration file, or switching to chronyd on all machines (for the sake of homogeneity?).
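
For reference, the checks used above boil down to a handful of commands (run on the relevant machine; this is just a summary of what was used, not a new procedure):

ntpq -p           # on machines running ntpd: the selected sync peer is marked with an asterisk
ntpstat           # quick synchronised/unsynchronised status for ntpd
timedatectl       # on FEs running systemd-timesyncd: check the 'NTP synchronized' flag
chronyc sources   # on nodus: list chrony's upstream servers and their status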

  16295   Tue Aug 24 22:37:40 2021 AnchalUpdateGeneralTime synchronization not really working

I attempted to install chrony and run it on one of the FE machines. It didn't work and in doing so, I lost the working NTP client service on the FE computers as well. Following are some details:

  • I added the following two mirrors in the apt source list of root.jessie at /etc/apt/sources.list
    deb http://ftp.us.debian.org/debian/ jessie main contrib non-free
    deb-src http://ftp.us.debian.org/debian/ jessie main contrib non-free
  • Then I installed chrony in the root.jessie using
    sudo apt-get install chrony
    • I was getting an error E: Can not write log (Is /dev/pts mounted?) - posix_openpt (2: No such file or directory) . To fix this, I had to run:
      sudo mount -t devpts none "$rootpath/dev/pts" -o ptmxmode=0666,newinstance
      sudo ln -fs "pts/ptmx" "$rootpath/dev/ptmx"
    • Then, I had another error to resolve.
      Failed to read /proc/cmdline. Ignoring: No such file or directory
      start-stop-daemon: nothing in /proc - not mounted?
      To fix this, I had to exit to fb1 and run:
      sudo mount --bind /proc /diskless/root.jessie/proc
    • With these steps, chrony was finally installed, but I immediately saw an error message saying:
      Starting /usr/sbin/chronyd...
      Could not open NTP sockets
  • I figured this must be due to ntp running in the FE machines.  I logged into c1iscex and stopped and disabled the ntp service:
    sudo systemctl stop ntp
    sudo systemctl disable ntp
    • I saw some error messages from the above commands, as the FEs are read-only file systems:
      Synchronizing state for ntp.service with sysvinit using update-rc.d...
      Executing /usr/sbin/update-rc.d ntp defaults
      insserv: fopen(.depend.stop): Read-only file system
      Executing /usr/sbin/update-rc.d ntp disable
      update-rc.d: error: Read-only file system
    • So I went back to the chroot in fb1 and ran the two commands above that failed:
      /usr/sbin/update-rc.d ntp defaults
      /usr/sbin/update-rc.d ntp disable
    • The last line gave the output:
      insserv: warning: current start runlevel(s) (empty) of script `ntp' overrides LSB defaults (2 3 4 5).
      insserv: warning: current stop runlevel(s) (2 3 4 5) of script `ntp' overrides LSB defaults (empty).
    • I ignored this and moved forward.
  • I copied the chronyd.service from nodus to the chroot in fb1 and configured it to use nodus as the server. Then I started the chronyd.service

    sudo systemctl status chronyd.service
    but got the same issue of NTP sockets.

    ● chronyd.service - NTP client/server
       Loaded: loaded (/usr/lib/systemd/system/chronyd.service; disabled)
       Active: failed (Result: exit-code) since Tue 2021-08-24 21:52:30 PDT; 5s ago
      Process: 790 ExecStart=/usr/sbin/chronyd $OPTIONS (code=exited, status=1/FAILURE)

    Aug 24 21:52:29 c1iscex systemd[1]: Starting NTP client/server...
    Aug 24 21:52:30 c1iscex chronyd[790]: Could not open NTP sockets
    Aug 24 21:52:30 c1iscex systemd[1]: chronyd.service: control process exited, code=exited status=1
    Aug 24 21:52:30 c1iscex systemd[1]: Failed to start NTP client/server.
    Aug 24 21:52:30 c1iscex systemd[1]: Unit chronyd.service entered failed state.

  • I tried a few things to resolve this, but couldn't get it to work. So I gave up on using chrony and decided to go back to the ntp service at least.

  • I stopped, disabled and checked status of chrony:
    sudo systemctl stop chronyd
    sudo systemctl disable chronyd
    sudo systemctl status chronyd
    This gave the output:

    ● chronyd.service - NTP client/server
       Loaded: loaded (/usr/lib/systemd/system/chronyd.service; disabled)
       Active: failed (Result: exit-code) since Tue 2021-08-24 22:09:07 PDT; 25s ago

    Aug 24 22:09:07 c1iscex systemd[1]: Starting NTP client/server...
    Aug 24 22:09:07 c1iscex chronyd[2490]: Could not open NTP sockets
    Aug 24 22:09:07 c1iscex systemd[1]: chronyd.service: control process exited, code=exited status=1
    Aug 24 22:09:07 c1iscex systemd[1]: Failed to start NTP client/server.
    Aug 24 22:09:07 c1iscex systemd[1]: Unit chronyd.service entered failed state.
    Aug 24 22:09:15 c1iscex systemd[1]: Stopped NTP client/server.

  • I went back to fb1 chroot and removed chrony package and deleted the configuration files and systemd service files:
    sudo apt-get remove chrony

  • But when I started ntp daemon service back in c1iscex, it gave error:
    sudo systemctl restart ntp
    Job for ntp.service failed. See 'systemctl status ntp.service' and 'journalctl -xn' for details.

  • Status shows:

    sudo systemctl status ntp
    ● ntp.service - LSB: Start NTP daemon
       Loaded: loaded (/etc/init.d/ntp)
       Active: failed (Result: exit-code) since Tue 2021-08-24 22:09:56 PDT; 9s ago
      Process: 2597 ExecStart=/etc/init.d/ntp start (code=exited, status=5)

    Aug 24 22:09:55 c1iscex systemd[1]: Starting LSB: Start NTP daemon...
    Aug 24 22:09:56 c1iscex systemd[1]: ntp.service: control process exited, code=exited status=5
    Aug 24 22:09:56 c1iscex systemd[1]: Failed to start LSB: Start NTP daemon.
    Aug 24 22:09:56 c1iscex systemd[1]: Unit ntp.service entered failed state.

  • I tried to re-enable the ntp service with sudo systemctl enable ntp. I got similar read-only filesystem error messages as earlier.
    Synchronizing state for ntp.service with sysvinit using update-rc.d...
    Executing /usr/sbin/update-rc.d ntp defaults
    insserv: warning: current start runlevel(s) (empty) of script `ntp' overrides LSB defaults (2 3 4 5).
    insserv: warning: current stop runlevel(s) (2 3 4 5) of script `ntp' overrides LSB defaults (empty).
    insserv: fopen(.depend.stop): Read-only file system
    Executing /usr/sbin/update-rc.d ntp enable
    update-rc.d: error: Read-only file system

    • I went back to chroot in fb1 and ran:
      /usr/sbin/update-rc.d ntp defaults
      insserv: warning: current start runlevel(s) (empty) of script `ntp' overrides LSB defaults (2 3 4 5).
      insserv: warning: current stop runlevel(s) (2 3 4 5) of script `ntp' overrides LSB defaults (empty).
      and
      /usr/sbin/update-rc.d ntp enable

  • I came back to c1iscex and tried restarting the ntp service but got same error messages as above with exit code 5.

  • I checked c1sus; ntp was running there. I tested the configuration by restarting the ntp service, and then it failed with the same error message. So the remaining three FEs, c1lsc, c1ioo and c1iscey, have a running ntp service, but they won't be able to restart it.

  • As a last try, I rebooted c1iscex to see if ntp comes back online nicely, but it doesn't.

Bottom line: I went to try chrony on the FEs, and I ended up breaking the ntp client services on those computers as well. We have no NTP synchronization in any of the FEs.

Even though Paco and I are learning about the ntp and cds stuff, I think it's time we get help from someone with real experience. The lab has been in a bad state for far too long.

Quote:

tl;dr: NTP servers and clients were never synchronized, are not synchronizing even with ntp... nodus is synchronized but uses chronyd; should we use chronyd everywhere?

 

  16292   Tue Aug 24 09:22:48 2021 AnchalUpdateGeneralTime synchronization working now

Jamie told me to use chroot to log into the chroot jail of the Debian OS that is exported to the FEs and install ntp there. I took the following steps, at the end of which all FEs now have NTP synchronized.

  • I logged into fb1 through nodus.
  • chroot /diskless/root.jessie /bin/bash took me to the bash terminal for the Debian OS that is exported to all FEs.
  • Here, I ran sudo apt-get install ntp which ran without any errors.
  • I then edited the file /etc/ntp.conf: I removed the default servers and added the following lines for servers (fb1 and nodus IP addresses):
    server 192.168.113.201
    server 192.168.113.201
  • I logged into each FE machine and ran following commands:
    sudo systemctl stop systemd-timesyncd.service; sudo systemctl status systemd-timesyncd.service;
    timedatectl; sleep 2;sudo systemctl daemon-reload;  sudo systemctl start ntp; sleep 2; sudo systemctl status ntp; timedatectl
    sudo hwclock -s
    • The first line ensures that systemd-timesyncd.service is not running anymore. I did not uninstall timesyncd and left its configuration file as it is.
    • The second line first shows the times of the local and RTC clocks, then reloads the daemon services to get ntp registered, then starts ntp.service and shows its status. Finally, the timedatectl command shows the synchronized clocks and that NTP synchronization has occurred.
    • The last line sets the local clock to the RTC clock. Even though this wasn't required, as I saw that the clocks already agreed to within a second, I just wanted a point where all the local clocks are synchronized to the ntp server.
  • Hopefully, this will resolve our issue of restarting the models any time some glitch happens or when we need to update something in one of them.

Edit Tue Aug 24 10:19:11 2021:

I also disabled timesyncd on all FEs using sudo systemctl disable systemd-timesyncd.service

I've added this wiki page for summarizing the NTP synchronization knowledge.

  8098   Mon Feb 18 11:54:15 2013 Max HortonUpdateSummary PagesTiming Issues and Calendars

Crontab: The bug where data is only plotted until 5PM is being investigated.  The crontab's final call to the summary page generator was at 5PM.  This means that the data plots were not being generated after 5PM, so clearly they never contained data from after 5PM.  In fact, the original crontab reads:

0 11,5,11,17 * * * /users/public_html/40m-summary/bin/c1_summary_page.sh 2>&1

I'm not exactly sure what inspired these entries.  The 11,5,11,17 entries are supposed to be the hours at which the program is run.  Why is it run twice at 11?  I assume it was just a typo or something.

The final call time was changed to 11:59PM in an attempt to plot the entire day's data, but this method didn't appear to work because the program would still be running past midnight, which was apparently inhibiting its functionality (most likely, the day change was affecting how the data is fetched).  The best solution is probably to just wait until the next day, then call the summary page generator on the previous day's data.  This will be implemented soon.
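
For reference, the crontab fields are minute, hour, day-of-month, month, and day-of-week, so a schedule along the lines suggested above might look like this (illustrative only - the duplicate 11 is dropped, and the after-midnight entry assumes the generator is modified to process the previous day's data, which is the part still to be implemented):

0 5,11,17 * * * /users/public_html/40m-summary/bin/c1_summary_page.sh 2>&1
30 0 * * * /users/public_html/40m-summary/bin/c1_summary_page.sh 2>&1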

Calendars: Although the calendar tabs on the left side of the page were fixed, the calendars displayed at: https://nodus.ligo.caltech.edu:30889/40m-summary/calendar/ appear to still have squished together text.  The calendar is being fetched from https://nodus.ligo.caltech.edu:30889/40m-summary/calendar/calendar.html and displayed in the page.  This error is peculiar because the URL from which the calendar is being fetched does NOT have squished together text, but the resulting calendar at 40m-summary/calendar/ will not display spaces between the text.  This issue is still being investigated.

  10193   Mon Jul 14 13:03:23 2014 AkhilSummaryElectronicsTiming Issues of Mini Circuits UFC-6000: Solved

Main Problem:

The frequency counter (FC) takes in an analog RF input (signal) and outputs the frequency of the signal (ranging from 1 MHz to 6000 MHz) in the digital domain (into a processor). The FC samples the data with a user-defined sample rate, which ranges from 0.1 s to 1 s (I faced problems in fixing this initially). For data acquisition, we have been using a Raspberry Pi (as the processor), which is connected to the martian network and can communicate with the computers inside the 40m. The ultimate challenge that I faced (and have been knocking my head against for the past two-three weeks) is the synchronization of clocks between the Raspberry Pi and the FC, i.e. the clock which the FC uses to sample and dump data (every 'x' s) and the clock inside the Raspberry Pi (used in the loop to wait for the particular amount of time the frequency counter takes to dump successive data).

 

Steps Taken:

  • To address this problem, first I added an external clock circuit to keep the Raspberry Pi and the FC dumping and reading data at a particular rate (equal to the sampling rate of the FC). In detail: http://nodus.ligo.caltech.edu:8080/40m/10129. 
  • While doing so, at first a level-trigger algorithm was used, which means the external clock frequency was half the reciprocal of the sampling rate and a trigger was seen every time the level shifted from +DC to -DC (of the external square wave).
  • But this did not completely mitigate the problem, and there were still a few issues with how quickly the ADC reads the signal and the R Pi processes it.
  • To minimize these issues, an edge-trigger algorithm which detects a positive (rising) edge of the clock was used. The clock frequency is now equal to the reciprocal of the sampling rate. This algorithm showed better results and greatly reduced the drift of the sampling time.

Pseudocode (code attached; a rough Python sketch of the same loop follows below):

open device: FC via USB-HID;
open device: ADC via I2C;

always (for t = recording time):
    read data from ADC (external clock);
    if pos edge detected:
        read data from FC and store it in a register;
    else:
        read data from ADC;
end

write data stored in the register to a file (can be an EPICS channel or a text file);
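
As a rough illustration of the same loop in Python (a sketch only - read_adc() and read_frequency_counter() are hypothetical stand-ins for the actual I2C / USB-HID calls in the attached code):

import time

def read_adc():
    """Return the instantaneous external-clock voltage (hypothetical placeholder)."""
    raise NotImplementedError

def read_frequency_counter():
    """Return the latest frequency reading from the FC (hypothetical placeholder)."""
    raise NotImplementedError

def acquire(recording_time_s, threshold=0.5):
    # Edge-triggered acquisition: read the FC once per rising edge of the
    # external clock, so the read-out rate tracks the FC sampling rate.
    samples = []
    prev = read_adc()
    t_end = time.time() + recording_time_s
    while time.time() < t_end:
        level = read_adc()
        if prev < threshold <= level:   # positive (rising) edge detected
            samples.append((time.time(), read_frequency_counter()))
        prev = level
    return samples

# The stored (timestamp, frequency) pairs can then be written to a text file
# or pushed to an EPICS channel, as in the attached code.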

 

Results:

Attached are plots showing the time between samples for a large number of samples, taken for different sampling times of the FC. The percentage error is the standard error of the time between two samples, expressed as a percentage, over the entire measurement. It can be inferred that this error has been cut down to the order of ms.

 

To do next:

  • I have started taking phase measurements (analysis and plots will follow this elog) and also PSD plots with the improved timing characteristics.
