ID Date Author Type Category Subject
  16423   Fri Oct 22 17:35:08 2021   Ian MacMillan   Summary   PEM   Particle counter setup near BS Chamber

I have done some reading about where the best place to put the particle counter would be. The ISO cleanroom standard (14644-1:2015) calls for roughly one sampling location per 1000 m^2, i.e. about one per 30 m x 30 m area. We should have the particle counter reasonably close to the open chamber, and all the manufacturers I read about suggest a little more than one per 30 m x 30 m. We will have it much closer than this, so it is nice to know that it should still get a good reading. They also suggest keeping it in the open and not tucked away, which is a little obvious. I think the best spot is attached to the cable tray right above the door to the control room. This should put it out of the way and within about 5 m of where we are working. I ordered some cables last night to route it over there, so when they come in I can put it up.

  16422   Thu Oct 21 15:24:35 2021   rana   Summary   PEM   Particle counter setup near BS Chamber

Rethinking what I said on Wednesday: it's not a good idea to put the particle counter on a vacuum chamber with optics inside. The rumble from the air pump shows up in the acoustic noise of the interferometer. Let's look for a way to mount it near the BS chamber, but attached to something other than the vacuum chambers and optical tables.

Quote:

I have placed a GT321 particle counter on top of the MC1/MC3 chamber next to the BS chamber.

 

  16421   Thu Oct 21 15:22:35 2021   rana   Summary   PEM   Particle counter setup near BS Chamber

SVG doesn't work in my browser(s). Can we use PDF as our standard for all graphics other than photos (PNG/JPG)?

  16420   Thu Oct 21 11:41:31 2021   Anchal   Summary   PEM   Particle counter setup near BS Chamber

The particle count channel names were changed yesterday to follow the naming conventions used at the sites. The new names are:

C1:PEM-BS_DUST_300NM
C1:PEM-BS_DUST_500NM
C1:PEM-BS_DUST_1000NM
C1:PEM-BS_DUST_2000NM
C1:PEM-BS_DUST_5000NM
 

The legacy count channels are kept alive, with C1:PEM-count_full copying the C1:PEM-BS_DUST_1000NM channel and C1:PEM-count_half copying the C1:PEM-BS_DUST_500NM channel.

Attachment 1 is the particle counter trend since 8:30 am this morning, when the HVAC work started. It seems there was a peak in particle presence around 11 am. The particle counter even registered 8 counts of particles larger than 5 um!

 

Attachment 1: ParticleCountData20211021.pdf
  16419   Thu Oct 21 11:38:43 2021   Jordan   Update   SUS   Standoffs for Side Magnet on 3" Adapter Ring SOS Assembly

I had 8 standoffs made at the Caltech chemistry machine shop to be used as spacers for the side magnets on the 3" Ring assembly. This is to create enough clearance between the magnet and the cap screws directly above on the wire clamp.

These are 0.075" diameter by 0.10" long. Putting them through clean and bake now.

Attachment 1: Magnet_Standoffs.jpg
  16418   Wed Oct 20 15:58:27 2021   Koji   Update   VAC   How to vent TP1

Probably the hard disk of c0rga is dead. I'll follow up in this elog later today.

Looking at the log in /opt/rtcds/caltech/c1/scripts/RGA/logs, it seems that the last RGA scan was Sept 2, 2021, the day we had the disk-full issue on chiara.
I could not log in to c0rga from the control machines.
I was not aware of the existence of c0rga until today, but I located it along the X arm.
The machine was not responding; it was rebooted but could not restart, and it made a knocking sound. I am afraid the HDD has failed.

I think we can
- prepare a replacement linux machine for the python scripts
or
- integrate it with c1vac

  16417   Wed Oct 20 11:48:27 2021   Anchal   Summary   CDS   Power supply configured correctly.

This was horrible! That's my bad, I should have checked the configuration before assuming it was right.

I fixed the power supply configuration. Now the strip has two rails of +/-18V and the GND is referenced to the power supply's earth ground.

Ian should redo the tests.

  16416   Wed Oct 20 11:16:21 2021   Anchal   Summary   PEM   Particle counter setup near BS Chamber

I have placed a GT321 particle counter on top of the MC1/MC3 chamber, next to the BS chamber. The serial cable is connected to the c1psl computer on 1X2 using two USB extenders (the blue ones), routed over the PSL enclosure and the 1X1 rack.

The main serial communication script for this counter by Radhika is present in 40m/labutils/serial_com/gt321.py.

A 40m-specific application script lives in the new git repo for 40m scripts, at 40m/scripts/PEM/particleCounter.py. Our plan is to slowly migrate the legacy scripts directory to this repo over time. I've cloned this repo into the NFS-shared directory /opt/rtcds/caltech/c1/Git/40m/scripts, which makes the scripts available on all computers and keeps them up to date everywhere.
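
For reference, a minimal sketch of the kind of readout loop the application script performs (the serial port, data framing, and parsing below are illustrative placeholders; the real protocol handling lives in gt321.py):

import time
import serial            # pyserial
from epics import caput  # pyepics

PORT = "/dev/ttyUSB0"    # assumed device node for the USB-serial extender
CHANNELS = {             # size bin -> EPICS soft channel
    "0.3um": "C1:PEM-BS_DUST_300NM",
    "0.5um": "C1:PEM-BS_DUST_500NM",
}

def parse_counts(line):
    # Hypothetical parser: assumes 'size:count' pairs separated by commas.
    counts = {}
    for field in line.decode(errors="ignore").strip().split(","):
        if ":" in field:
            size, value = field.split(":", 1)
            counts[size.strip()] = float(value)
    return counts

with serial.Serial(PORT, baudrate=9600, timeout=5) as ser:
    while True:
        counts = parse_counts(ser.readline())
        for size, chan in CHANNELS.items():
            if size in counts:
                caput(chan, counts[size])
        time.sleep(60)   # 0.3 um bin is sampled roughly once per minute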

The particle counter script is running on c1psl through a systemd service, using the service file 40m/scripts/PEM/particleCounter.service. Locally on c1psl, /etc/systemd/system/particleCounter.service is symbolically linked to the file in the repo.

The following particle counter channels needed to be created, as I could not find any existing ones:

[C1:PEM-BS_PAR_CTS_0p3_UM]
[C1:PEM-BS_PAR_CTS_0p5_UM]
[C1:PEM-BS_PAR_CTS_1_UM]
[C1:PEM-BS_PAR_CTS_2_UM]
[C1:PEM-BS_PAR_CTS_5_UM]

These are created from the 40m/softChansModbus/particleCountChans.db database file. The computer optimus runs a docker container that serves as the EPICS server for such soft channels. To add or edit channels, one just needs to add or edit database files in this repo and then, on optimus, run:

controls@optimus|~> sudo docker container restart softchansmodbus_SoftChans_1
softchansmodbus_SoftChans_1

that's it.
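
A quick way to confirm that the restarted container is serving the new channels (a sketch assuming pyepics is available on a workstation):

from epics import caget

for chan in ["C1:PEM-BS_DUST_300NM", "C1:PEM-BS_DUST_500NM",
             "C1:PEM-BS_DUST_1000NM", "C1:PEM-BS_DUST_2000NM",
             "C1:PEM-BS_DUST_5000NM"]:
    value = caget(chan, timeout=2.0)
    print(chan, "->", value if value is not None else "NOT CONNECTED")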

I've added the above channels to /opt/rtcds/caltech/c1/chans/daq/C0EDCU.ini to record them in the framebuilder. Starting from 11:20 am Oct 20, 2021 PDT, the data on these channels is from the BS chamber area. Currently the script is running continuously, which means 0.3 um particles are sampled every minute, 0.5 um twice in 5 minutes, and 1 um, 2 um, and 5 um particles once in 5 minutes. We can reduce the sampling rate if this seems unnecessary to us.

Attachment 1: PXL_20211020_183728734.jpg
  16415   Tue Oct 19 23:43:09 2021   Koji   Summary   CDS   c1sus2 DAC to ADC test

(For a totally unrelated reason) I was checking the electronics units for the upgrade, and I realized that the electronics units at the test stand have not been properly powered.

I found that the AA/AI stack at the test stand (Attachment 1) has an unusual powering configuration (Attachment 2).
- Only the positive power supply was used
- The supply voltage is only +15V
- The GND reference is not connected to anything

For confirmation, I checked the voltage across the DC power strip (Attachments 3/4). The positive was +5.3V and the negative was -9.4V. This is subject to change depending on the earth potential.

This is not a good condition at all. The asymmetric powering of the circuit may cause damage to the opamps, so I turned off the switches of the units.

The power configuration should be immediately corrected.

  1. Use both positive and negative supply (2 power supply channels) to produce the positive and the negative voltage potentials. Connect the reference potential to the earth post of the power supply.
    https://www.youtube.com/watch?v=9_6ecyf6K40   [Dual Power Supply Connection / Serial plus minus electronics laboratory PS with center tap]
  2. These units have a DC power regulator which produces +/-15V out of +/-18V, so the DC power supplies are supposed to be set to 18V.

 

Attachment 1: P_20211019_224433.jpg
Attachment 2: P_20211019_224122.jpg
Attachment 3: P_20211019_224400.jpg
Attachment 4: P_20211019_224411.jpg
  16414   Tue Oct 19 18:20:33 2021   Ian MacMillan   Summary   CDS   c1sus2 DAC to ADC test

I ran a DAC to ADC test on c1sus2 channels where I hooked up the outputs on the DAC to the input channels on the ADC. We used different combinations of ADCs and DACs to make sure that there were no errors that cancel each other out in the end. I took a transfer function across these channel combinations to reproduce figure 1 in T2000188.

As seen in the two attached PDFs, the channels seem to be working properly: they have a flat response with a gain of 0.5 (-6 dB). This is the expected response; it is the result of the DAC sending a single-ended signal while the ADC receives a differential input, which yields a recorded signal at 0.5 times the amplitude of the actual output signal.

The drop-off at the high-frequency end is the result of the anti-aliasing and anti-imaging filters. Both are 8-pole elliptic filters, so when combined we expect a roll-off of 320 dB per decade. I measured the slope over the last few points of each curve and the average value was around 347 dB per decade. This is slightly steeper than expected, but since the filters only cut off higher frequencies it shouldn't affect the operation of the system, and it is very close to the expected value.
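
For comparison against the measured slope, here is a small scipy sketch that cascades two 8-pole elliptic low-pass filters and estimates the slope in a band just above the corner (the corner frequency and ripple/stopband values are placeholders, not the actual AA/AI chassis specs):

import numpy as np
from scipy import signal

# 8th-order analog elliptic low-pass: 1 dB passband ripple, 80 dB stopband,
# corner at 7 kHz (placeholder values only).
z, p, k = signal.ellip(8, 1, 80, 2 * np.pi * 7e3, btype="low",
                       analog=True, output="zpk")

f = np.logspace(3, 5, 2000)                       # 1 kHz - 100 kHz
w, h = signal.freqs_zpk(z, p, k, worN=2 * np.pi * f)
mag_db = 2 * 20 * np.log10(np.abs(h))             # factor 2: AA and AI cascaded

# Fit the local slope just above the corner frequency.
band = (f > 8e3) & (f < 12e3)
slope = np.polyfit(np.log10(f[band]), mag_db[band], 1)[0]
print(f"slope just above the corner ~ {slope:.0f} dB/decade")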

The ripples seen before the drop off are also an effect of the elliptical filters and are seen in T2000188.

Note: the transfer function that doesn't seem to match the others is the heartbeat timing signal.

Attachment 1: data3_Plots.pdf
Attachment 2: data2_Plots.pdf
  16413   Tue Oct 19 11:30:39 2021   Koji   Update   VAC   How to vent TP1

I learned that TP1 was vented through the RGA room in the past. This can be done by opening VM2 and a manual valve ("needle valve").
I checked the setup and realized that this will vent the RGA, but that is OK as long as we turn the RGA off during the vent and bake it once TP1 is back.

Additional note:

- It'd be nice to take a scan of the current background level before the work.
- Turn the RGA EM and filament off, and let it cool down overnight.
- Vent with clean N2 or clean air. (The normal operating temp of ~80C is to minimize accumulation of hydrocarbon contamination.)
- There is a manual switch and indicators on top of the RGA amp. It has auto-protection to turn the filament off if the pressure increases above ~1e-5 Torr.

Attachment 1: Screen_Shot_2021-10-18_at_14.52.34.png
  16412   Tue Oct 19 10:59:09 2021   Koji   Update   VAC   Vent Started / Completed

[Chub, Jordan, Yehonathan, Anchal, Koji]

North door of the BS chamber opened

 

  16411   Mon Oct 18 16:48:32 2021   Tega   Update   Electronics   Sat Amp modifications

[S2100738, S2100745, S2100751] Completed modification of three more Sat Amp units; seven remain.

 

Attachment 1: IMG_20211018_162918574.jpg
  16410   Mon Oct 18 10:02:17 2021   Koji   Update   VAC   Vent Started / Completed

[Chub, Jordan, Anchal, Koji]

- Checked the main volume is isolated.
- TP1 and TP2 were isolated from the other volumes. Stopped TP1. Closed V4 to isolate TP1 from TP2.
- TP3 was isolated and then stopped.
- We wanted to vent the annuli, but this was not allowed as VA6 was open. We closed VA6 and vented the annuli with VAVEE.
- We wanted to vent the volume between VA6, V5, VM3, V7 together with TP1, so V7 was opened. This did not change the TP1 pressure (P2 = 1.7 mTorr).
- We wanted to connect the TP1 volume with the main volume, but this was not allowed as TP1 was not rotating. We will vent TP1 through TP2 once the vent of the main volume is done.

- Started venting the main volume @ Oct 18, 2021 9:45 AM PDT

- We started from 10mTorr/min, and increased the vent speed to 200mTorr/min, 700mTorr/min, and now it is 1Torr/min @ 20Torr
- 280Torr @11:50AM
- 1atm  @~2PM


We wanted to vent TP1. We restarted TP2 and tried to slowly introduce air via TP2, but the interlock prevented the action.

Right now the magenta volume in the attachment is still ~1mTorr. Do we want to open the gate valves manually? Or stop the interlock process so that we can bypass it?

Attachment 1: Screen_Shot_2021-10-18_at_14.52.34.png
Attachment 2: Screenshot_2021-10-18_15-08-59.png
  16409   Fri Oct 15 20:53:49 2021   Koji   Summary   General   Vent Prep

From the IFO point of view, all looks good and we are ready to vent from Mon Oct 18, 9 AM.

  16408   Fri Oct 15 17:17:51 2021   Koji   Summary   General   Vent Prep

I took over the vent prep: I'm going through the lists in [ELOG 15649] and [ELOG 15651]. I will also look at [ELOG 15652] on the day of venting.

  1. IFO alignment: Two arms are already locking. The dark port beam is well overlapped. We will move PRM/SRM etc. So we don't need to worry about them. [Attachment 1]
    scripts>z read C1:SUS-BS_PIT_BIAS C1:SUS-BS_YAW_BIAS
    -304.7661529521767
    -109.23924626857811
    scripts>z read C1:SUS-ITMX_PIT_BIAS C1:SUS-ITMX_YAW_BIAS
    15.534616817500943
    -503.4536332290159
    scripts>z read C1:SUS-ITMY_PIT_BIAS C1:SUS-ITMY_YAW_BIAS
    653.0100945988496
    -478.16260735781225
    scripts>z read C1:SUS-ETMX_PIT_BIAS C1:SUS-ETMX_YAW_BIAS
    -136.17863332517527
    181.09285307121306
    scripts>z read C1:SUS-ETMY_PIT_BIAS C1:SUS-ETMY_YAW_BIAS
    -196.6200333695437
    -85.40819256078339

     
  2. IMC alignment: Locking nicely. I ran WFS relief to move the WFS output onto the alignment sliders. All the WFS feedback values are now <10. Here are the slider snapshots. [Attachment 2]
     
  3. PMC alignment: The PMC looked quite misaligned -> aligned. IMC/PMC locking snapshot [Attachment 3]
    Arm transmissions:
    scripts>z avg 10  C1:LSC-TRX_OUT C1:LSC-TRY_OUT
    C1:LSC-TRX_OUT 0.9825591325759888
    C1:LSC-TRY_OUT 0.9488834202289581

     
  4. Suspension Status Snapshot [Attachment 4]
     
  5. Anchal aligned the OPLEV beams [ELOG 16407]
    I also checked the 100-day trend of the OPLEV sum power. The trend of the max values looks flat and fine. [Attachment 5] For this purpose, the PRM and SRM were aligned and the SRM oplev was also aligned. The SRM sum was 23580 when aligned, which is just fine (this is not very visible in the trend plot).
     
  6. The X and Y green beams were aligned for the cavity TEM00s. Y end green PZT values were nulled. The transmission I could reach was as follows.
    >z read C1:ALS-TRX_OUTPUT C1:ALS-TRY_OUTPUT
    0.42343354488901286
    0.24739624058377277

    It seems that GTRX and GTRY have some crosstalk. When each green shutter was closed, the transmission and the dark offset were measured to be
    >z read C1:ALS-TRX_OUTPUT C1:ALS-TRY_OUTPUT
    0.41822833190834546
    0.025039383697636856
    >z read C1:ALS-TRX_OUTPUT C1:ALS-TRY_OUTPUT
    0.00021112720155274818
    0.2249448773499293

    Note that the Y green seemed to have a significant (~0.1) first-order HOM content. I don't know why I could not transfer this power into the TEM00. I could not find any significant clipping of the TR beams on the PSL table PDs.
     
  7. IMC Power reduction
    Now we have nice motorized HWP. sitemap -> PSL -> Power control
    == Initial condition == [Attachment 6]
    C1:IOO-HWP_POS 38.83
    Measured input power = 0.99W
    C1:IOO-MC_RFPD_DCMON = 5.38
    == Power reduction == [Attachment 7]
    - The motor was enabled upon rotation on the screen

    C1:IOO-HWP_POS 74.23
    Measured input power = 98mW
    C1:IOO-MC_RFPD_DCMON = 0.537
    - Then, the motor was disabled
     
  8. Went to the detection table and swapped the 10% reflector with the 98% reflector stored on the same table. [Attachments 8/9]
    After the beam alignment, the MC REFL PD received about the same amount of light as before.
    C1:IOO-MC_RFPD_DCMON = 5.6
    There is no beam delivered to the WFS paths.
    CAUTION: IF THE POWER IS INCREASED TO THE NOMINAL WITH THIS CONFIGURATION, MC REFL PD WILL BE DESTROYED.
  9. The IMC can already be locked with this configuration. But for the MC Autolocker, the MCTRANS threshold for the autolocker needs to be reduced as well.
    This was done by swapping a line in  /opt/rtcds/caltech/c1/scripts/MC/AutoLockMC.init
    # BEFORE
    /bin/csh ./AutoLockMC.csh >> $LOGFILE
    #/bin/csh ./AutoLockMC_LowPower.csh >> $LOGFILE
    --->
    # AFTER
    #/bin/csh ./AutoLockMC.csh >> $LOGFILE
    /bin/csh ./AutoLockMC_LowPower.csh >> $LOGFILE

    Confirmed that the autolocker works a few times by toggling the PSL shutter. The PSL shutter was closed upon the completion of the test
     
  10. Walked around the lab and checked all the bellows - the jam nuts are all tight, and I couldn't move them with my hands. So this is okay according to the ancient tale by Steve.
Attachment 1: Screenshot_2021-10-15_17-36-00.png
Attachment 2: Screenshot_2021-10-15_17-39-58.png
Attachment 3: Screenshot_2021-10-15_17-42-20.png
Attachment 4: Screenshot_2021-10-15_17-46-13.png
Attachment 5: Screenshot_2021-10-15_18-05-54.png
Attachment 6: Screen_Shot_2021-10-15_at_19.45.05.png
Attachment 7: Screen_Shot_2021-10-15_at_19.47.10.png
  16407   Fri Oct 15 16:46:27 2021   Anchal   Summary   Optical Levers   Vent Prep

I centered all the optical levers on ITMX, ITMY, ETMX, ETMY, and BS at the positions where the single-arm locks were best aligned. Unfortunately, we are seeing TRX at 0.78 and TRY at 0.76 at the best-aligned positions. It seems less power has been coming out of the PMC since last month (Attachment 1).

Then I tried to lock PRMI on carrier with no luck, but I was able to see flashes of up to 4000 counts in POP_DC. At this position, I centered the PRM optical lever too (Attachment 2).

Attachment 1: Screen_Shot_2021-10-15_at_4.34.45_PM.png
Attachment 2: Screen_Shot_2021-10-15_at_4.45.31_PM.png
  16406   Fri Oct 15 12:14:27 2021   Ian MacMillan   Update   General   Kicking optics in freeSwing measurement

[Ian, Anchal]

We ran the free-swinging test last night and the results match the previous values to within 1/10th of a Hz. We found the peaks using the getPeakFreqs2 script, and they are close to the values from 2016.

In attachment 1 you will see the results of the test for each optic.

The peak values are as follows:

Optic  POS (Hz)  PIT (Hz)  YAW (Hz)  SIDE (Hz)
PRM    0.94      0.96      0.99      0.99
MC2    0.97      0.75      0.82      0.99
ETMY   0.98      0.98      0.95      0.95
MC1    0.97      0.68      0.80      1.00
ITMX   0.95      0.68      0.68      0.98
ETMX   0.96      0.73      0.85      1.00
BS     0.99      0.74      0.80      0.96
ITMY   0.98      0.72      0.72      0.98
MC3    0.98      0.77      0.84      0.97

The results from 2016 can be found at: /cvs/cds/rtcdt/caltech/c1/scripts/SUS/PeakFit/parameters2.m
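
For reference, a minimal sketch of the kind of peak identification the script performs (this is not the actual getPeakFreqs2 code; the synthetic data below just stands in for a real kick time series):

import numpy as np
from scipy import signal

def find_suspension_peaks(data, fs, fmin=0.3, fmax=3.0):
    # Long Welch segments give a few-mHz resolution around the ~1 Hz modes.
    f, pxx = signal.welch(data, fs=fs, nperseg=int(256 * fs))
    band = (f > fmin) & (f < fmax)
    fb, pb = f[band], pxx[band]
    peaks, _ = signal.find_peaks(pb,
                                 height=100 * np.median(pb),
                                 distance=max(1, int(0.03 / (fb[1] - fb[0]))))
    return np.round(fb[peaks], 2)

# Example with synthetic data (two fake resonances plus noise):
fs = 256.0
t = np.arange(0, 2000, 1 / fs)
data = (np.sin(2 * np.pi * 0.94 * t) + 0.5 * np.sin(2 * np.pi * 0.99 * t)
        + 0.1 * np.random.randn(t.size))
print(find_suspension_peaks(data, fs))   # expect peaks near 0.94 and 0.99 Hz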

Attachment 1: 20211015_Kicktest_plot.pdf
  16405   Thu Oct 14 20:16:22 2021   Yehonathan   Update   General   PRMI free swinging

{Yehonathan, Raj}

We aligned the IFO in the PRMI state and let it swing freely.

  16404   Thu Oct 14 18:30:23 2021   Koji   Summary   VAC   Flange/Cable Stand Configuration

Flange Configuration for BHD

We will need a total of 5 new cable stands, so Qty. 6 is the number to be ordered.


Looking at the Accuglass drawing, the in-vacuum cables are standard 25-pin D-sub cables with just two standard fixing threads.

https://www.accuglassproducts.com/sites/default/files/PDF/Partpdf/110070_3.pdf

For SOSs, the standard 40m style cable bracket works fine. https://dcc.ligo.org/D010194-x0

However, for the OMCs, we need to make the thread holes available so that we can mate DB25 male cables to these cables.
One possibility is to improvise this cable bracket to suspend the cables using clean Cu wires or something. I think we can deal with this issue in situ.


Ha! The male side has the 4-40 standoff (jack) screws. So we can hold the male side on the bracket using the standoff (jack) screws and plug in the female cables. OK! The issue is solved!

https://www.accuglassproducts.com/sites/default/files/PDF/Partpdf/110029_3.pdf

Attachment 1: 40m_flange_layout_20211014.pdf
  16403   Thu Oct 14 16:38:26 2021   Ian MacMillan   Update   General   Kicking optics in freeSwing measurement

[Ian, Anchal]

We are going to kick the optics tonight at 2am.

The optics we will kick are PRM, BS, ITMX, ITMY, ETMX, and ETMY.

We will kick each one once and record for 2000 seconds; the log files will be placed in users/ian/20211015_FreeSwingTest/logs.

  16402   Thu Oct 14 13:40:49 2021   Yehonathan   Summary   SUS   PRM and BS Angular Actuation transfer function magnitude measurements

Here is a side-by-side comparison of the PRMI sensing matrix using PRM/BS actuation (attachment 1) and ITM actuation (attachment 2). The situation looks similar in both cases: good orthogonality on REFL55 and bad separation in the rest of the RFPDs.

Quote:

should compare side by side with the ITM PRMI radar plots to see if there is a difference. How do your new plots compare with Gautam's plots of PRMI?

 

Attachment 1: BSPRM_Actuation_Radar_plots.png
Attachment 2: ITM_Actuation_Radar_plots.png
  16401   Thu Oct 14 11:25:49 2021   Yehonathan   Update   PSL   PMC unlocked

{Yehonathan, Anchal}

I went to get a sandwich around 10:20 AM, and when I came back the BS was moving like crazy. We shut down the watchdog.

We looked at the spectra of the OSEMs (attachment 1). Clearly, the UR sensor is bad.

We took the BS satellite box out. Anchal opened the box and nothing seemed visually wrong. We returned the box and connected it to the fake OSEM box. The sensor spectra seemed normal.

We connected the box to the vacuum chamber and the spectra were still normal (attachment 2).

We turned on the coils and the motion got damped very quickly (RMS < 0.5 mV).

Either the problem was solved by disconnecting and reconnecting the cables, or it will come back to haunt us.

 

 

 

Attachment 1: BS_OSEM_Sensor_PSD.pdf
Attachment 2: BS_OSEM_Sensor_PSD_AfterReconnectingCables.pdf
  16400   Thu Oct 14 09:28:46 2021   Yehonathan   Update   PSL   PMC unlocked

PMC has been unlocked since ~2:30 AM. Seems like the PZT got saturated. I moved the DC output adjuster and the PMC locked immediately, although with a low transmission of 0.62 V (>0.7 V is the usual value) and high REFL.

The IMC locked immediately but the IFO seems to be completely misaligned. The beams on the AS monitor are moving around quite a lot, synchronously. The BS watchdog tripped. I enabled the coil outputs. Waiting for the RMS motion to relax...

It's not relaxing. The RMS motion is still high. I disabled the coils again and re-enabled them. This seems to have worked. The arms were locked quite easily, but the ETM oplevs were way off and the ASS couldn't get TRX and TRY above 0.7. I aligned the ETMs to center the oplevs, realigned everything else, and locked the arms. The maximum TR is still < 0.8.

 

 

  16399   Wed Oct 13 15:36:38 2021   Hang   Update   Calibration   XARM OLTF

We did a few quick XARM OLTF measurements. We excited C1:LSC-ETMX_EXC with broadband white noise up to 4 kHz. The timestamps for the measurements are: 1318199043 (start) - 1318199427 (end).

We will process the measurement to compute the cavity pole and analog filter poles & zeros later.
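
For reference, a sketch of how the archived data could be turned into a transfer-function estimate (the NDS server and channel names are assumed placeholders; the OLTF then follows from the usual closed-loop algebra):

import numpy as np
import nds2
from scipy import signal

start, stop = 1318199043, 1318199427
chans = ["C1:LSC-ETMX_EXC", "C1:LSC-XARM_IN1_DQ"]        # illustrative channels

conn = nds2.connection("nds40.ligo.caltech.edu", 31200)  # assumed NDS server
exc_buf, err_buf = conn.fetch(start, stop, chans)
exc, err = np.asarray(exc_buf.data), np.asarray(err_buf.data)
fs = len(exc) / float(stop - start)

nperseg = int(16 * fs)
f, Pxx = signal.welch(exc, fs=fs, nperseg=nperseg)
_, Pxy = signal.csd(exc, err, fs=fs, nperseg=nperseg)
_, coh = signal.coherence(exc, err, fs=fs, nperseg=nperseg)

H = Pxy / Pxx                  # TF estimate from excitation to the error point
good = coh > 0.9               # keep only bins with high coherence
print(f[good][:5], np.abs(H[good][:5]))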

Attachment 1: Screenshot_2021-10-13_15-32-16.png
  16398   Wed Oct 13 11:25:14 2021   Anchal   Summary   CDS   Ran c1sus2 models in martian CDS. All good!

Three extra steps (when adding new models, new FE):

  • Chris pointed out that the sudo command on c1sus2 was giving an error:
    sudo: unable to resolve host c1sus2
    
    This error occurs when the computer cannot figure out its own hostname. Since the FEs are network-booted off fb1, we need to update /etc/hosts in /diskless/root every time we add a new FE.
    controls@fb1:~ 0$ sudo chroot /diskless/root
    fb1:/ 0# sudo nano /etc/hosts
    fb1:/ 0# exit
    
    I added the following line in /etc/hosts file above:
    192.168.113.92  c1sus2 c1sus2.martian
    
    This resolved the sudo error. Now the rtcds make and install steps show no errors in their outputs.
  • Another thing that needs to be done, as Koji pointed out, is to add the host and models in /etc/rtsystab in /diskless/root of fb:
    controls@fb1:~ 0$ sudo chroot /diskless/root
    fb1:/ 0# sudo nano /etc/rtsystab
    fb1:/ 0# exit
    
    I added the following line to the /etc/rtsystab file above:
    c1sus2   c1x07  c1su2
    
    This told rtcds what models would be available on c1sus2. Now rtcds list is displaying the right models:
    controls@c1sus2:~ 0$ rtcds list
    c1x07
    c1su2
  • The above steps are still not sufficient for the daqd_ processes to know about the new models. This part is supposed to happen automatically, but apparently does not in our CDS. So every time there is a new model, we need to edit the file /opt/rtcds/caltech/c1/chans/daq/master and add the following lines to it:
    # Fast Data Channel lists
    # c1sus2
    /opt/rtcds/caltech/c1/chans/daq/C1X07.ini
    /opt/rtcds/caltech/c1/chans/daq/C1SU2.ini
    
    # test point lists
    # c1sus2
    /opt/rtcds/caltech/c1/target/gds/param/tpchn_c1x07.par
    /opt/rtcds/caltech/c1/target/gds/param/tpchn_c1su2.par
    
    I needed to restart the daqd_ processes on fb1 for them to notice these changes:
    controls@fb1:~ 0$ sudo systemctl restart daqd_*
    
    This finally lit up the DC status channels in C1X07_GDS_TP.adl and C1SU2_GDS_TP.adl. However, the channels C1:DAQ-DC0_C1X07_STATUS and C1:DAQ-DC0_C1SU2_STATUS both had the value 0x2bad, which persisted on restarting the models. I then simply restarted mx_stream on c1sus2 and boom, it worked! (See the attached all-green screen, never seen before!)

So now Ian can work on testing the I/O chassis, and we would be good to move the c1sus2 FE and I/O chassis to 1Y3 after that. I've also made the following extra changes:

  • Updated CDS_FE_STATUS medm screen to show the new c1sus2 host.
  • Updated the global diag reset script to act on c1x07 and c1su2 as well.
  • Updated mxstream restart script to act on c1sus2 as well.
Attachment 1: CDS_screens_running.png
  16397   Tue Oct 12 23:42:56 2021   Koji   Summary   CDS   Connected c1sus2 to martian network

Don't you need to add the new hosts to /diskless/root/etc/rtsystab at fb1? --> There seem to be many elogs talking about editing "rtsystab".

controls@fb1:/diskless/root/etc 0$ cat rtsystab
#
# host    list of control systems to run, starting with IOP
#
c1iscex  c1x01  c1scx c1asx
c1sus     c1x02  c1sus c1mcs c1rfm c1pem
c1ioo     c1x03  c1ioo c1als c1omc
c1lsc    c1x04  c1lsc c1ass c1oaf c1cal c1dnn c1daf
c1iscey  c1x05 c1scy c1asy
#c1test   c1x10  c1tst2

 

  16396   Tue Oct 12 17:20:12 2021   Anchal   Summary   CDS   Connected c1sus2 to martian network

I connected c1sus2 to the martian network by splitting the c1sim connection with a 5-way switch. I also ran another ethernet cable from the second port of c1sus2 to the DAQ network switch on 1X7.

Then I logged into chiara and added the following in chiara:/etc/dhcp/dhcpd.conf :

host c1sus2 {
  hardware ethernet 00:25:90:06:69:C2;
  fixed-address 192.168.113.92;
}

And following line in chiara:/var/lib/bind/martian.hosts :

c1sus2          A    192.168.113.92

Note that entries for c1bhd were already added to these files, probably during some earlier testing by Gautam or Jon. Then I ran the following to restart the DHCP server and nameserver:

~> sudo service bind9 reload
[sudo] password for controls:
 * Reloading domain name service... bind9                                                 [ OK ]
~> sudo service isc-dhcp-server restart
isc-dhcp-server stop/waiting
isc-dhcp-server start/running, process 25764

Now, as I switched on c1sus2 from the front panel, it booted over the network from fb1 like the other FE machines, and I was able to log in to it by first logging into fb1 and then sshing to c1sus2.

Next, I copied the simulink models and the medm screens of c1x06, c1x07, c1bhd, and c1sus2 from the paths mentioned on this wiki page. I also copied the medm screens from chiara(clone):/opt/rtcds/caltech/c1/medm to the martian-network chiara in the appropriate places. I have placed the file /opt/rtcds/caltech/c1/medm/teststand_sitemap.adl which can be used to open the sitemap for the c1bhd and c1sus2 IOP and user models.

Then I logged into c1sus2 (via fb1) and did make, install, start procedure:

controls@c1sus2:~ 0$ rtcds make c1x07
buildd: /opt/rtcds/caltech/c1/rtbuild/release
### building c1x07...
Cleaning c1x07...
Done
Parsing the model c1x07...
Done
Building EPICS sequencers...
Done
Building front-end Linux kernel module c1x07...
Done
RCG source code directory:
/opt/rtcds/rtscore/branches/branch-3.4
The following files were used for this build:
/opt/rtcds/userapps/release/cds/c1/models/c1x07.mdl

Successfully compiled c1x07
***********************************************
Compile Warnings, found in c1x07_warnings.log:
***********************************************
***********************************************
controls@c1sus2:~ 0$ rtcds install c1x07
buildd: /opt/rtcds/caltech/c1/rtbuild/release
### installing c1x07...
Installing system=c1x07 site=caltech ifo=C1,c1
Installing /opt/rtcds/caltech/c1/chans/C1X07.txt
Installing /opt/rtcds/caltech/c1/target/c1x07/c1x07epics
Installing /opt/rtcds/caltech/c1/target/c1x07
Installing start and stop scripts
/opt/rtcds/caltech/c1/scripts/killc1x07
/opt/rtcds/caltech/c1/scripts/startc1x07
sudo: unable to resolve host c1sus2
Performing install-daq
Updating testpoint.par config file
/opt/rtcds/caltech/c1/target/gds/param/testpoint.par
/opt/rtcds/rtscore/branches/branch-3.4/src/epics/util/updateTestpointPar.pl -par_file=/opt/rtcds/caltech/c1/target/gds/param/archive/testpoint_211012_174226.par -gds_node=24 -site_letter=C -system=c1x07 -host=c1sus2
Installing GDS node 24 configuration file
/opt/rtcds/caltech/c1/target/gds/param/tpchn_c1x07.par
Installing auto-generated DAQ configuration file
/opt/rtcds/caltech/c1/chans/daq/C1X07.ini
Installing Epics MEDM screens
Running post-build script

safe.snap exists 
controls@c1sus2:~ 0$ rtcds start c1x07
Cannot start/stop model 'c1x07' on host c1sus2.
controls@c1sus2:~ 4$ rtcds list

controls@c1sus2:~ 0$ 

One can see that even after making and installing, the model c1x07 is not listed among the available models by rtcds list. The same is the case for c1su2 as well. So I could not proceed with testing.

The good news is that nothing I did affected the current CDS functioning, so we can probably do this testing safely from the main CDS setup.

  16395   Tue Oct 12 17:10:56 2021   Anchal   Summary   CDS   Some more information

Chris pointed out some information-displaying scripts that show whether the DAQ network is working or not. I thought it would be nice to log this information here as well.

controls@fb1:/opt/mx/bin 0$ ./mx_info
MX Version: 1.2.16
MX Build: controls@fb1:/opt/src/mx-1.2.16 Mon Aug 14 11:06:09 PDT 2017
1 Myrinet board installed.
The MX driver is configured to support a maximum of:
    8 endpoints per NIC, 1024 NICs on the network, 32 NICs per host
===================================================================
Instance #0:  364.4 MHz LANai, PCI-E x8, 2 MB SRAM, on NUMA node 0
    Status:        Running, P0: Link Up
    Network:    Ethernet 10G

    MAC Address:    00:60:dd:45:37:86
    Product code:    10G-PCIE-8B-S
    Part number:    09-04228
    Serial number:    423340
    Mapper:        00:60:dd:45:37:86, version = 0x00000000, configured
    Mapped hosts:    3

                                                        ROUTE COUNT
INDEX    MAC ADDRESS     HOST NAME                        P0
-----    -----------     ---------                        ---
   0) 00:60:dd:45:37:86 fb1:0                             1,0
   1) 00:25:90:05:ab:47 c1bhd:0                           1,0
   2) 00:25:90:06:69:c3 c1sus2:0                          1,0

 

controls@c1bhd:~ 1$ /opt/open-mx/bin/omx_info
Open-MX version 1.5.4
 build: root@fb1:/opt/src/open-mx-1.5.4 Tue Aug 15 23:48:03 UTC 2017

Found 1 boards (32 max) supporting 32 endpoints each:
 c1bhd:0 (board #0 name eth1 addr 00:25:90:05:ab:47)
   managed by driver 'igb'

Peer table is ready, mapper is 00:60:dd:45:37:86
================================================
  0) 00:25:90:05:ab:47 c1bhd:0
  1) 00:60:dd:45:37:86 fb1:0
  2) 00:25:90:06:69:c3 c1sus2:0

 

controls@c1sus2:~ 0$ /opt/open-mx/bin/omx_info
Open-MX version 1.5.4
 build: root@fb1:/opt/src/open-mx-1.5.4 Tue Aug 15 23:48:03 UTC 2017

Found 1 boards (32 max) supporting 32 endpoints each:
 c1sus2:0 (board #0 name eth1 addr 00:25:90:06:69:c3)
   managed by driver 'igb'

Peer table is ready, mapper is 00:60:dd:45:37:86
================================================
  0) 00:25:90:06:69:c3 c1sus2:0
  1) 00:60:dd:45:37:86 fb1:0
  2) 00:25:90:05:ab:47 c1bhd:0

These outputs prove that the framebuilder and the FEs are able to see each other on the DAQ network.


Further, here is the error that we see when the IOP model is started, which crashes the mx_stream service on the FE machines (see 40m/16391):

isendxxx failed with status Remote Endpoint Unreachable

This was seen earlier when Jamie was troubleshooting the current fb1 on the martian network in 40m/11655 in Oct 2015. Unfortunately, I could not find what Jamie did over the following year to fix this issue.

  16394   Tue Oct 12 16:39:52 2021   rana   Summary   SUS   PRM and BS Angular Actuation transfer function magnitude measurements

should compare side by side with the ITM PRMI radar plots to see if there is a difference. How do your new plots compare with Gautam's plots of PRMI?

  16393   Tue Oct 12 11:32:54 2021   Yehonathan   Summary   SUS   PRM and BS Angular Actuation transfer function magnitude measurements

Late submission (From Thursday 10/07):

I measured the PRMI sensing matrix to see if the BS and PRMI output matrices tweaking had any effect.

While doing so, I noticed I made a mistake in the analysis of the previous sensing matrix measurement. It seems that I used the radar plot function with radians where degrees should have been used (the reason being that the azimuthal uncertainty looked crazy when I used degrees; I still don't know why that is the case with this measurement).

In any case, attachments 1 and 2 show the PRMI radar plots with the modified output matrices and in the normal state, respectively.

It seems like the output matrix modification didn't do anything but REFL55 has good orthogonality. Problem gone??

Attachment 1: modified_output_matrices_radar_plots.png
Attachment 2: normal_output_matrices_radar_plots.png
  16392   Mon Oct 11 18:29:35 2021   Anchal   Summary   CDS   Moving forward?

The teststand has some non-trivial issue with the Myrinet card (either software or hardware) which even the experts say they don't remember how to fix. CDS with mx was in use more than a decade ago, so it is hard to find support for issues with it now, and it will be the same in the future. We need to wrap up this test procedure one way or another now, so I see the following two options moving forward:


Direct integration with main CDS and testing

  • We can just connect the c1sus2 and c1bhd FE computers to martian network directly.
  • We'll have to connect c1sus2 and c1bhd to the optical fiber subnetwork as well.
  • On booting, they would get booted through the existing fb1 boot server, which seems to work fine for the other 5 FE machines.
  • We can update the DHCP in chiara and reload it so that we can ssh into these FEs with host names.
  • Hopefully, the presence of these computers won't tank the existing CDS even if they themselves have any issues, as they have no shared memory with other models.
  • If this works, we can do the loop-back testing of the I/O chassis using the main DAQ network and move on with our upgrade.
  • If this does not work and causes any harm to the existing CDS network, we can disconnect these computers and go back to the existing CDS. Recently, our confidence in rebooting the CDS has increased, given its robust performance since some legacy issues were fixed.
  • We would, however, continue to use a CDS version that is no longer supported by the current LIGO CDS group.

Testing CDS upgrade on teststand

  • From what I could gather, most of the hardware in the I/O chassis that I could find is still used in the CDS at LLO and LHO, with their recent tests and documents using the same cards and PCBs.
  • There might be some differences in the DAQ network setup that I need to confirm.
  • I've summarised the current c1teststand hardware on this wiki page.
  • If the latest CDS is backwards compatible with our hardware, we can test the new CDS in the c1teststand setup without disrupting our main CDS. We'll have ample help and support for this upgrade from the current LIGO CDS group.
  • We can do the loop-back testing of the I/O chassis as well.
  • If the upgrade is successful in the teststand without many hardware changes, we can upgrade the main CDS of the 40m as well, as it has the same hardware as our teststand.
  • The biggest plus would be that our CDS will be up to date and we will be able to get help from the CDS group if any trouble occurs.

So these are the two options we have. We should discuss which one to take in the Mattermost chat or in an upcoming meeting.

  16391   Mon Oct 11 17:31:25 2021   Anchal   Summary   CDS   Fixed mounting of mx devices in fb. daqd_dc is running now.
 
 

However, lspci | grep 'Myri' shows the following output on both computers:

controls@fb1:/dev 0$ lspci | grep 'Myri'
02:00.0 Ethernet controller: MYRICOM Inc. Myri-10G Dual-Protocol NIC (rev 01)

This means that the computer detects the card in the PCIe slot.

 

I tried to add this to /etc/rc.local to run this script at every boot, but it did not work. So for now, I'll just do this step manually every time. Once the devices are loaded, we get:

controls@fb1:/etc 0$ ls /dev/*mx*
/dev/mx0  /dev/mx4  /dev/mxctl   /dev/mxp2  /dev/mxp6         /dev/ptmx
/dev/mx1  /dev/mx5  /dev/mxctlp  /dev/mxp3  /dev/mxp7
/dev/mx2  /dev/mx6  /dev/mxp0    /dev/mxp4  /dev/open-mx
/dev/mx3  /dev/mx7  /dev/mxp1    /dev/mxp5  /dev/open-mx-raw

Then, after restarting all the daqd_ processes, I found that daqd_dc was now running successfully. Here is the status:

controls@fb1:/etc 0$ sudo systemctl status daqd_* -l
● daqd_dc.service - Advanced LIGO RTS daqd data concentrator
   Loaded: loaded (/etc/systemd/system/daqd_dc.service; enabled)
   Active: active (running) since Mon 2021-10-11 17:48:00 PDT; 23min ago
 Main PID: 2308 (daqd_dc_mx)
   CGroup: /daqd.slice/daqd_dc.service
           ├─2308 /usr/bin/daqd_dc_mx -c /opt/rtcds/caltech/c1/target/daqd/daqdrc.dc
           └─2370 caRepeater

Oct 11 17:48:07 fb1 daqd_dc_mx[2308]: mx receiver 006 thread priority error Operation not permitted[Mon Oct 11 17:48:06 2021]
Oct 11 17:48:07 fb1 daqd_dc_mx[2308]: mx receiver 005 thread put on CPU 0
Oct 11 17:48:07 fb1 daqd_dc_mx[2308]: [Mon Oct 11 17:48:06 2021] [Mon Oct 11 17:48:06 2021] mx receiver 006 thread put on CPU 0
Oct 11 17:48:07 fb1 daqd_dc_mx[2308]: mx receiver 007 thread put on CPU 0
Oct 11 17:48:07 fb1 daqd_dc_mx[2308]: [Mon Oct 11 17:48:06 2021] mx receiver 003 thread - label dqmx003 pid=2362
Oct 11 17:48:07 fb1 daqd_dc_mx[2308]: [Mon Oct 11 17:48:06 2021] mx receiver 003 thread priority error Operation not permitted
Oct 11 17:48:07 fb1 daqd_dc_mx[2308]: [Mon Oct 11 17:48:06 2021] mx receiver 003 thread put on CPU 0
Oct 11 17:48:07 fb1 daqd_dc_mx[2308]: warning:regcache incompatible with malloc
Oct 11 17:48:07 fb1 daqd_dc_mx[2308]: [Mon Oct 11 17:48:06 2021] EDCU has 410 channels configured; first=0
Oct 11 17:49:06 fb1 daqd_dc_mx[2308]: [Mon Oct 11 17:49:06 2021] ->4: clear crc

● daqd_fw.service - Advanced LIGO RTS daqd frame writer
   Loaded: loaded (/etc/systemd/system/daqd_fw.service; enabled)
   Active: active (running) since Mon 2021-10-11 17:48:01 PDT; 23min ago
 Main PID: 2318 (daqd_fw)
   CGroup: /daqd.slice/daqd_fw.service
           └─2318 /usr/bin/daqd_fw -c /opt/rtcds/caltech/c1/target/daqd/daqdrc.fw

Oct 11 17:48:09 fb1 daqd_fw[2318]: [Mon Oct 11 17:48:09 2021] [Mon Oct 11 17:48:09 2021] Producer thread - label dqproddbg pid=2440
Oct 11 17:48:09 fb1 daqd_fw[2318]: Producer crc thread priority error Operation not permitted
Oct 11 17:48:09 fb1 daqd_fw[2318]: [Mon Oct 11 17:48:09 2021] [Mon Oct 11 17:48:09 2021] Producer crc thread put on CPU 0
Oct 11 17:48:09 fb1 daqd_fw[2318]: Producer thread priority error Operation not permitted
Oct 11 17:48:09 fb1 daqd_fw[2318]: [Mon Oct 11 17:48:09 2021] Producer thread put on CPU 0
Oct 11 17:48:09 fb1 daqd_fw[2318]: [Mon Oct 11 17:48:09 2021] Producer thread - label dqprod pid=2434
Oct 11 17:48:09 fb1 daqd_fw[2318]: [Mon Oct 11 17:48:09 2021] Producer thread priority error Operation not permitted
Oct 11 17:48:09 fb1 daqd_fw[2318]: [Mon Oct 11 17:48:09 2021] Producer thread put on CPU 0
Oct 11 17:48:10 fb1 daqd_fw[2318]: [Mon Oct 11 17:48:10 2021] Minute trender made GPS time correction; gps=1318034906; gps%60=26
Oct 11 17:49:09 fb1 daqd_fw[2318]: [Mon Oct 11 17:49:09 2021] ->3: clear crc

● daqd_rcv.service - Advanced LIGO RTS daqd testpoint receiver
   Loaded: loaded (/etc/systemd/system/daqd_rcv.service; enabled)
   Active: active (running) since Mon 2021-10-11 17:48:00 PDT; 23min ago
 Main PID: 2311 (daqd_rcv)
   CGroup: /daqd.slice/daqd_rcv.service
           └─2311 /usr/bin/daqd_rcv -c /opt/rtcds/caltech/c1/target/daqd/daqdrc.rcv

Oct 11 17:50:21 fb1 daqd_rcv[2311]: Creating C1:DAQ-NDS0_C1X07_CRC_SUM
Oct 11 17:50:21 fb1 daqd_rcv[2311]: Creating C1:DAQ-NDS0_C1BHD_STATUS
Oct 11 17:50:21 fb1 daqd_rcv[2311]: Creating C1:DAQ-NDS0_C1BHD_CRC_CPS
Oct 11 17:50:21 fb1 daqd_rcv[2311]: Creating C1:DAQ-NDS0_C1BHD_CRC_SUM
Oct 11 17:50:21 fb1 daqd_rcv[2311]: Creating C1:DAQ-NDS0_C1SU2_STATUS
Oct 11 17:50:21 fb1 daqd_rcv[2311]: Creating C1:DAQ-NDS0_C1SU2_CRC_CPS
Oct 11 17:50:21 fb1 daqd_rcv[2311]: Creating C1:DAQ-NDS0_C1SU2_CRC_SUM
Oct 11 17:50:21 fb1 daqd_rcv[2311]: Creating C1:DAQ-NDS0_C1OM[Mon Oct 11 17:50:21 2021] Epics server started
Oct 11 17:50:24 fb1 daqd_rcv[2311]: [Mon Oct 11 17:50:24 2021] Minute trender made GPS time correction; gps=1318035040; gps%120=40
Oct 11 17:51:21 fb1 daqd_rcv[2311]: [Mon Oct 11 17:51:21 2021] ->3: clear crc

Now, even before starting the FE models, I see the DC status as 0x2bad on the CDS screens of the IOP and user models. The mx_stream service remains in a failed state on the FE machines, and it remains the same even after restarting the service.

controls@c1sus2:~ 0$ sudo systemctl status mx_stream -l
● mx_stream.service - Advanced LIGO RTS front end mx stream
   Loaded: loaded (/etc/systemd/system/mx_stream.service; enabled)
   Active: failed (Result: exit-code) since Mon 2021-10-11 17:50:26 PDT; 15min ago
  Process: 382 ExecStart=/etc/mx_stream_exec (code=exited, status=1/FAILURE)
 Main PID: 382 (code=exited, status=1/FAILURE)

Oct 11 17:50:25 c1sus2 systemd[1]: Starting Advanced LIGO RTS front end mx stream...
Oct 11 17:50:25 c1sus2 systemd[1]: Started Advanced LIGO RTS front end mx stream.
Oct 11 17:50:25 c1sus2 mx_stream_exec[382]: Failed to open endpoint Not initialized
Oct 11 17:50:26 c1sus2 systemd[1]: mx_stream.service: main process exited, code=exited, status=1/FAILURE
Oct 11 17:50:26 c1sus2 systemd[1]: Unit mx_stream.service entered failed state.

But if I restart the mx_stream service before starting the rtcds models, the mx_stream service starts successfully:

controls@c1sus2:~ 0$ sudo systemctl restart mx_stream
controls@c1sus2:~ 0$ sudo systemctl status mx_stream -l
● mx_stream.service - Advanced LIGO RTS front end mx stream
   Loaded: loaded (/etc/systemd/system/mx_stream.service; enabled)
   Active: active (running) since Mon 2021-10-11 18:14:13 PDT; 25s ago
 Main PID: 1337 (mx_stream)
   CGroup: /system.slice/mx_stream.service
           └─1337 /usr/bin/mx_stream -e 0 -r 0 -w 0 -W 0 -s c1x07 c1su2 -d fb1:0

Oct 11 18:14:13 c1sus2 systemd[1]: Starting Advanced LIGO RTS front end mx stream...
Oct 11 18:14:13 c1sus2 systemd[1]: Started Advanced LIGO RTS front end mx stream.
Oct 11 18:14:13 c1sus2 mx_stream_exec[1337]: send len = 263596
Oct 11 18:14:13 c1sus2 mx_stream_exec[1337]: Connection Made

However, the DC status on the CDS screens still shows 0x2bad. As soon as I start the rtcds model c1x07 (the IOP model for c1sus2), the mx_stream service fails:

controls@c1sus2:~ 0$ sudo systemctl status mx_stream -l
● mx_stream.service - Advanced LIGO RTS front end mx stream
   Loaded: loaded (/etc/systemd/system/mx_stream.service; enabled)
   Active: failed (Result: exit-code) since Mon 2021-10-11 18:18:03 PDT; 27s ago
  Process: 1337 ExecStart=/etc/mx_stream_exec (code=exited, status=1/FAILURE)
 Main PID: 1337 (code=exited, status=1/FAILURE)

Oct 11 18:14:13 c1sus2 systemd[1]: Starting Advanced LIGO RTS front end mx stream...
Oct 11 18:14:13 c1sus2 systemd[1]: Started Advanced LIGO RTS front end mx stream.
Oct 11 18:14:13 c1sus2 mx_stream_exec[1337]: send len = 263596
Oct 11 18:14:13 c1sus2 mx_stream_exec[1337]: Connection Made
Oct 11 18:18:03 c1sus2 mx_stream_exec[1337]: isendxxx failed with status Remote Endpoint Unreachable
Oct 11 18:18:03 c1sus2 mx_stream_exec[1337]: disconnected from the sender
Oct 11 18:18:03 c1sus2 mx_stream_exec[1337]: c1x07_daq mmapped address is 0x7fe3620c3000
Oct 11 18:18:03 c1sus2 mx_stream_exec[1337]: c1su2_daq mmapped address is 0x7fe35e0c3000
Oct 11 18:18:03 c1sus2 systemd[1]: mx_stream.service: main process exited, code=exited, status=1/FAILURE
Oct 11 18:18:03 c1sus2 systemd[1]: Unit mx_stream.service entered failed state.

This shows that starting the rtcds model causes mx_stream to fail, possibly because it cannot find the endpoint on fb1. I've again reached the edge of my knowledge here. Maybe the fiber optic connection between fb1 and the network switch that connects to the FEs is bad, or the connection between the switch and the FEs is bad.

But we are just one step away from making this work.

 

 

  16390   Mon Oct 11 13:59:47 2021   Hang   Update   SUS   More PRM L2P measurements

We report here the analysis results for the measurements done in elog:16388

Figs. 1 & 2 are respectively measurements with the white noise excitation and the optimized excitation. The shaded region corresponds to the 1-sigma uncertainty at each frequency bin. By eye, one can already see that the constraints on the phase in the 0.6-1 Hz band are much tighter in the optimized case than in the white noise case.

We found the transfer function was best described by two real poles + one pair of complex poles (i.e., a resonance) + one pair of complex zeros in the right-half plane (non-minimum phase delay). The measurement in fact suggested a right-half-plane pole somewhere between 0.05-0.1 Hz, which cannot be right. For now, I just manually flipped the sign of this lowest-frequency pole to the left-hand side. However, this introduced some systematic deviation in the phase in the 0.3-0.5 Hz band where our coherence was still good. Therefore, a caveat is that our model with 7 free parameters (4 poles + 2 zeros + 1 gain, as one would expect for an ideal single-stage L2P TF) might not sufficiently capture the entire physics.

In Fig. 3 we show the comparison of the two sets of measurements together with the predictions based on the Fisher matrix. Here the color gray is for the white-noise excitation and olive is for the optimized excitation. The solid and dotted contours are respectively the 1-sigma and 3-sigma regions from the Fisher calculation, and the crosses are maximum likelihood estimates from each measurement (though the scipy optimizer might not find the true maximum).

Note that the mean values don't match between the two sets of measurements, suggesting that potential bias or other systematics exist in the current measurement. Moreover, there could be multiple local maxima in the likelihood in this high-dimensional parameter space (not surprising). For example, one could reduce the resonant Q but enhance the overall gain to keep the shoulder of a resonance at the same amplitude. However, this correlation is not explicit in the Fisher matrix (first-order derivatives of the TF, i.e., local gradients) as it does not show up in the error ellipse.
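
For reference, the quantity behind these error ellipses is the standard Gaussian-noise Fisher matrix for the transfer-function parameters theta (written schematically here; the normalization used in the actual analysis code may differ):

F_{ij} \propto \sum_k \frac{1}{\sigma_k^2}\,
  \mathrm{Re}\!\left[ \frac{\partial H(f_k;\theta)}{\partial\theta_i}\,
                      \frac{\partial H^*(f_k;\theta)}{\partial\theta_j} \right],
\qquad \mathrm{cov}(\hat{\theta}) \gtrsim F^{-1},

where H(f_k;\theta) is the model TF, \sigma_k is the measurement uncertainty in each frequency bin, and the last relation is the Cramer-Rao bound that gives the quoted 1-sigma and 3-sigma contours.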

In Fig. 4 we show the further optimized excitation for the next round of measurements. Here the cyan and olive traces are obtained assuming different values of the "true" physical parameters, yet the overall shapes of the two are quite similar, and are close to the optimized excitation spectrum we already used in elog:16388.

 

Attachment 1: prm_l2p_tf_meas_white.pdf
Attachment 2: prm_l2p_tf_meas_opt.pdf
Attachment 3: prm_l2p_fisher_vs_data_white_vs_opt.pdf
Attachment 4: prm_l2p_Pxx_evol_v2.pdf
  16389   Mon Oct 11 11:13:04 2021   rana   Update   SUS   More PRM L2P measurements

For the oplev, there are DQ channels you can use so that it's possible to look back in the past for long measurements. They have names like PERROR.

  16388   Fri Oct 8 17:33:13 2021   Hang   Update   SUS   More PRM L2P measurements

[Raj, Hang]

We did some more measurements on the PRM L2P TF. 

We tried to compare the parameter estimation uncertainties of white vs. optimal excitation. We drove C1:SUS-PRM_LSC_EXC with "Normal" excitation and digital gain of 700.

For the white noise excitation, we simply put a butter("LowPass",4,10) filter to select out the <10 Hz band.

For the optimal excitation, we used butter("BandPass",6,0.3,1.6) gain(3) notch(1,20,8) to approximate the spectral shape reported in elog:16384. We tried to use awg.ArbitraryLoop, yet this function seems to have some bugs and didn't run correctly; an issue has been submitted to the gitlab repo with more details. We also noticed that in elog:16384 the pitch motion should be read out from C1:SUS-PRM_OL_PIT_IN1 instead of the OUT channel, as there are some extra filters between IN1 and OUT. Consequently, the exact optimal excitation should be revisited, yet we think the main result should not be altered significantly.

While a more detailed analysis will be done later offline, we post in the attached plot a comparison between the white (blue) vs optimal (red) excitation. Note that in this case we kept the total force applied to the PRM the same (as the RMS levels match).

Under this simple case, the optimal excitation appears reasonable in two ways.

First, the optimization tries to concentrate the power around the resonance. We would naturally expect that near the resonance, we would get more Fisher information, as the phase changes the fastest there (i.e., large derivatives in the TF).

Second, while we move the power in the >2 Hz band to the 0.3-2 Hz band, from the coherence plot we see that we don't lose any information in the > 2 Hz region. Indeed, even with the original white excitation, the coherence is low and the > 2 Hz region would not be informative. Therefore, it seems reasonable to give up this band so that we can gain more information from locations where we have meaningful coherence.

Attachment 1: Screenshot_2021-10-08_17-30-52.png
  16387   Thu Oct 7 02:04:19 2021   Koji   Update   Electronics   Satellite amp adapter chassis

The 4 units of Satellite Amp Adapter were done:
- The ears were fixed with the screws
- The handles were attached (The stock of the handles is low)
- The boards are now supported by plastic stand-offs. (The chassis were drilled)
- The front and rear panels were fixed to the chassis
- The front and rear connectors were fixed with the low profile 4-40 stand-off screws (3M 3341-1S)
 

Attachment 1: P_20211006_205044.jpg
  16386   Wed Oct 6 16:31:02 2021   Tega   Update   Electronics   Sat Amp modifications

[Tega, Koji]

(S2100737) - Debugging showed that the opamp (AD822ARZ) for the PD2 circuit was not working as expected, so we replaced it with a spare and this fixed the problem. Somehow, the PD1 circuit no longer presents any issues, so everything is now fine with this unit.

(S2100741) - All good.

Quote:

Trying to finish 2 more Sat Amp units so that we have the 7 units needed for the X-arm install. 

S2100736 - All good

S2100737 - This unit presented with an issue on the PD1 circuit of channel 1-4 PCB where the voltage reading on TP6, TP7 and TP8 are -15.1V,  -14.2V, and +14.7V respectively, instead of ~0V.  The unit also has an issue on the PD2 circuit of channel 1-4 PCB because the voltage reading on TP7 and TP8 are  -14.2V, and +14.25V respectively, instead of ~0V.

 

 

  16385   Wed Oct 6 15:39:29 2021   Anchal   Summary   SUS   PRM and BS Angular Actuation transfer function magnitude measurements

Note that your tests were done with the output matrix for BS and PRM in the compensated state as done in 40m/16374. The changes made there were supposed to clear out any coil actuation imbalance in the angular degrees of freedom.

  16384   Wed Oct 6 15:04:36 2021   Hang   Update   SUS   PRM L2P TF measurement & Fisher matrix analysis

[Paco, Hang]

Yesterday afternoon Paco and I measured the PRM L2P transfer function. We drove C1:SUS-PRM_LSC_EXC with a white noise in the 0-10 Hz band (effectively a white, longitudinal force applied to the suspension) and read out the pitch response in C1:SUS-PRM_OL_PIT_OUT. The local damping was left on during the measurement. Each FFT segment in our measurement is 32 sec and we used 8 non-overlapping segments for each measurement. The empirically determined results are also compared with the Fisher matrix estimation (similar to elog:16373).

Results:

Fig. 1 shows one example of the measured L2P transfer function. The gray traces are measurement data and shaded region the corresponding uncertainty. The olive trace is the best fit model. 

Note that for a single-stage suspension, the ideal L2P TF should have two zeros at DC and two pairs of complex poles, for the length and pitch resonances respectively. We found the two resonances at around 1 Hz from the fitting, as expected. However, the zeros were not at DC as the ideal, theoretical model suggests. Instead, we found that a pair of right-half-plane zeros was needed to explain the measurement results. If we cast such a pair of right-half-plane zeros into an (f, Q) pair, it corresponds to a negative value of Q. This means the system is not minimum-phase and suggests some dirty cross-coupling exists, which might not be surprising.
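
As a reference point, a quick sketch of the ideal single-stage L2P model described above (the frequencies, Qs, and gain are placeholders, not fitted values):

import numpy as np
from scipy import signal

def resonance_poles(f0, Q):
    # Complex-conjugate pole pair for a resonance at f0 with quality factor Q.
    w0 = 2 * np.pi * f0
    re = -w0 / (2 * Q)
    im = w0 * np.sqrt(1 - 1 / (4 * Q**2))
    return [re + 1j * im, re - 1j * im]

zeros = [0, 0]                                        # two zeros at DC
poles = resonance_poles(0.99, 5) + resonance_poles(1.02, 5)
gain = 1.0                                            # overall gain (placeholder)

f = np.logspace(-1, 1, 500)
w, h = signal.freqs_zpk(zeros, poles, gain, worN=2 * np.pi * f)
print("magnitude at 1 Hz:", np.abs(h[np.argmin(np.abs(f - 1.0))]))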

Fig. 2 compares the distribution of the fitting results for 4 different measurements (4 red crosses) and the analytical error estimation obtained using the Fisher matrix (the gray contours; the inner one is the 1-sigma region and the outer one the 3-sigma region). The Fisher matrix appears to underestimate the scattering from this experiment, yet it does capture the correlation between different parameters (the frequencies and quality factors of the two resonances).

One caveat, though, is that the fitting routine is not especially robust. We used the vectfit routine with human intervention to get some initial guesses for the model. We then used a standard scipy least-squares routine to find the maximum likelihood estimator of the restricted model (with a fixed number of zeros and poles; here 2 complex zeros and 4 complex poles). The initial guess for the scipy routine was obtained from the vectfit model.

Fig. 3 shows how we may shape our excitation PSD to maximize the Fisher information while keeping the RMS force applied to the PRM suspension fixed. In this case the result is very intuitive: we simply concentrate our drive around the resonance at ~1 Hz, focusing on locations where we initially have good SNR. So at least the code is not suggesting something crazy...

Fig. 4 then shows how the new uncertainty (3-sigma contours) should change as we optimize our excitation. Basically one iteration (from gray to olive) is sufficient here. 

We will find a time soon to repeat the measurement with the optimized injection spectrum.

Attachment 1: prm_l2p_tf_meas.pdf
Attachment 2: prm_l2p_fisher_vs_data.pdf
Attachment 3: prm_l2p_Pxx_evol.pdf
Attachment 4: prm_l2p_fisher_evol.pdf
  16383   Tue Oct 5 20:04:22 2021 PacoSummarySUSPRM and BS Angular Actuation transfer function magnitude measurements

[Paco, Rana]

We had a look at the BS actuation. Along the way we created a couple of issues that we fixed. A summary is below.

  1. First, we locked MICH. While doing this, we used the /users/Templates/ndscope/LSC/MICH.yml ndscope template to monitor some channels. I edited the yaml file to look at C1:LSC-ASDC_OUT_DQ instead of the REFL_DC. Rana pointed out that the C1:LSC-MICH_OUT_DQ (MICH control point) had a large range (~5000 counts rms), and it should not be that large.
  2. We tried to investigate the aforementioned issue by looking at the whitening / unwhitening filters, but all the slow epics channels were "white" on the medm screen. Looking under CDS/slow channel monitors, we realized that both c1iscaux and c1auxey were weird, so we tried to telnet to c1iscaux without success. Therefore, we followed the recommended wiki procedure of hard rebooting this machine. While inside the lab and looking for this machine, we touched things around the 'rfpd' rack, and once we were back in the control room, we couldn't see any light on the AS port camera. But the whitening filter medm screens were back up.
  3. While rana ssh'd into c1auxey to investigate its status and burtrestored the c1iscaux channels, we looked at trends to figure out if anything had changed (for example TT1 or TT2), but this wasn't the case. We decided to go back inside to check the actual REFL beams and noticed the beam was grossly misaligned (clipping)... so we blamed it on the TTs and, again, went around and moved some stuff around the 'rfpd' rack. We didn't really connect or disconnect anything, but once we were back in the control room, light was coming from the AS port again. This is a weird mystery and we should systematically try to repeat this and fix the actual issue.
  4. We restored MICH and returned to the BS actuation problems. Here, we essentially devised a scheme to inject noise at 310.97 Hz and 313.74 Hz. The choice is twofold: first, these frequencies lie outside the MICH loop UGF (~150 Hz); second, they match the sensing matrix OSC frequencies, so they are more appropriate for a comparison.
  5. We injected two lines using the BS SUS LOCKIN1 and LOCKIN2 oscillators so we can probe two coils at once, with the LSC loop closed, and read back using the C1:LSC-MICH_IN1_DQ channel. We excited with amplitudes of 1234.0 counts and 1254 counts respectively (to match the ~2% difference in frequency; see the scaling check below), and noted that the magnitude response of UR was 10% larger than those of UL, LL, and LR, which were close to each other at the 2% level.
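The amplitude ratio quoted in item 5 appears consistent with compensating the ~f^-2 force-to-displacement response of the suspension above its resonance; this reading is an assumption, since the exact rationale is not spelled out above:

\frac{A_{313.74}}{A_{310.97}} \simeq \left(\frac{313.74\ \mathrm{Hz}}{310.97\ \mathrm{Hz}}\right)^{2} \approx 1.018,
\qquad 1234 \times 1.018 \approx 1256 \approx 1254 .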

[Paco]

After rana left, I did a second pass at the BS actuation. I took TF measurements at the oscillator frequencies noted above using diaggui, and summarize the results below:

TF              UL (310.97 Hz)   UR (313.74 Hz)   LL (310.97 Hz)   LR (313.74 Hz)
Magnitude (dB)       93.20            92.20            94.27            93.85
Phase (deg)         -128.3           -127.9           -128.4           -127.5

This procedure should be done for PRM as well, using PRCL instead of MICH.

  16382   Tue Oct 5 18:00:53 2021 AnchalSummaryCDSc1teststand time synchronization working now

Today I got a new router that I used to connect c1teststand, fb1, and chiara. I was able to get internet access on c1teststand and fb1, but not on chiara; I'm not sure why that is the case.

The good news is that the NTP server on fb1(clone) is working fine now, and both FE computers, c1bhd and c1sus2, are successfully synchronized to the fb1(clone) NTP server. This resolves any possible timing issues in this DAQ network.
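As a quick sanity check of the synchronization (a sketch only; it assumes the third-party ntplib Python package is installed on the front ends and that the hostname 'fb1' resolves to the fb1 clone on the teststand subnet), one can query the NTP server directly and print the local clock offset:

import ntplib   # third-party package; assumed available on the front ends

# Query the fb1(clone) NTP server and report this machine's clock offset.
client = ntplib.NTPClient()
resp = client.request('fb1', version=3)   # 'fb1' hostname is an assumption
print('offset from fb1 NTP server: %.3f ms' % (1e3 * resp.offset))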

On running the IOP and user models, however, I see the same errors as mentioned in 40m/16372. Something to do with:

Oct 06 00:47:56 c1sus2 mx_stream_exec[21796]: OMX: Failed to find peer index of board 00:00:00:00:00:00 (Peer Not Found in the Table)
Oct 06 00:47:56 c1sus2 mx_stream_exec[21796]: mx_connect failed Nic ID not Found in Peer Table
Oct 06 00:47:56 c1sus2 mx_stream_exec[21796]: c1x07_daq mmapped address is 0x7fa4819cc000
Oct 06 00:47:56 c1sus2 mx_stream_exec[21796]: c1su2_daq mmapped address is 0x7fa47d9cc000


Thu Oct 7 17:04:31 2021

I fixed the issue of chiara not getting internet. Now c1teststand, fb1, and chiara all have internet connections. The issue was with the default gateway, the interface, and finding the DNS server; I have found the correct settings now.

  16381   Tue Oct 5 17:58:52 2021 AnchalSummaryCDSc1teststand problems summary

open-mx service is running successfully on the fb1(clone), c1bhd and c1sus.

Quote:

I don't know anything about mx/open-mx, but you also need open-mx, don't you?


  16380   Tue Oct 5 17:01:20 2021 KojiUpdateElectronicsSat Amp modifications

Make sure the inputs for the PD amps are open. This is a current amplifier, and we want to leave the input pins open for the test of this circuit.

TP6 is the first stage of the amps (TIA), so this stage has the issue. Do the usual checks: whether the power is properly supplied, whether the pins are properly connected/isolated, and whether the opamp is alive.

For TP8: if TP8 gets railed, TP5 and TP7 are going to be railed too. Is that the case? If so, check this whitening stage in the same way as above.
If the problem is only at TP5 and/or TP7, it is a differential driver issue. Check the final stage as above. Replacing the opamp could help.

 

  16379   Mon Oct 4 21:58:17 2021 TegaUpdateElectronicsSat Amp modifications

Trying to finish 2 more Sat Amp units so that we have the 7 units needed for the X-arm install. 

S2100736 - All good

S2100737 - This unit presented with an issue on the PD1 circuit of the channel 1-4 PCB, where the voltage readings on TP6, TP7, and TP8 are -15.1V, -14.2V, and +14.7V respectively, instead of ~0V. The unit also has an issue on the PD2 circuit of the channel 1-4 PCB, where the voltage readings on TP7 and TP8 are -14.2V and +14.25V respectively, instead of ~0V.

 

  16378   Mon Oct 4 20:46:08 2021 KojiUpdateElectronicsSatellite amp box adapters

Thanks. You should be able to find the chassis-related hardware on the left side of the benchtop drawers at the middle workbench.

Hardware: The special low profile 4-40 standoff screw / 1U handles / screws and washers for the chassis / flat-top screws for chassis panels and lids

  16377   Mon Oct 4 18:35:12 2021 PacoUpdateElectronicsSatellite amp box adapters

[Paco]

I have finished assembling the 1U adapters from 8 to 5 DB9 connectors for the satellite amp boxes. One thing I had to "hack" was the corners of the front-panel end of the PCB. Because the PCB was a bit too wide, it wasn't really flush against the front panel (see Attachment #1), so I filed the corners down by ~3 mm and covered them with kapton tape to prevent contact between the ground planes and the chassis. After this, I made DB9 cables, connected everything in place, and attached the assembly to the rear panel (Attachment #2). Four units are resting near the CAD machine (next to the bench area); see Attachment #3.

Attachment 1: pcb_no_flush.jpg
pcb_no_flush.jpg
Attachment 2: 1U_assembly.jpg
1U_assembly.jpg
Attachment 3: fourunits.jpg
fourunits.jpg
  16376   Mon Oct 4 18:00:16 2021 KojiSummaryCDSc1teststand problems summary

I don't know anything about mx/open-mx, but you also need open-mx, don't you?


controls@c1ioo:~ 0$ systemctl status *mx*
● open-mx.service - LSB: starts Open-MX driver
   Loaded: loaded (/etc/init.d/open-mx)
   Active: active (running) since Wed 2021-09-22 11:54:39 PDT; 1 weeks 5 days ago
  Process: 470 ExecStart=/etc/init.d/open-mx start (code=exited, status=0/SUCCESS)
   CGroup: /system.slice/open-mx.service
           └─620 /opt/3.2.88-csp/open-mx-1.5.4/bin/fma -d

● mx_stream.service - Advanced LIGO RTS front end mx stream
   Loaded: loaded (/etc/systemd/system/mx_stream.service; enabled)
   Active: active (running) since Wed 2021-09-22 12:08:00 PDT; 1 weeks 5 days ago
 Main PID: 5785 (mx_stream)
   CGroup: /system.slice/mx_stream.service
           └─5785 /usr/bin/mx_stream -e 0 -r 0 -w 0 -W 0 -s c1x03 c1ioo c1als c1omc -d fb1:0

 

  16375   Mon Oct 4 16:10:09 2021 ranaSummarySUSPRM and BS Angular Actuation transfer function magnitude measurements

not sure that this is necessary. If you look at the previous entries Gautam made on this topic, it is clear that the BS/PRM PRMI matrix is snafu, whereas the ITM PRMI matrix is not.

Is it possible that the ~5% coil imbalance of the BS/PRM can explain the observed sensing matrix? If not, then there is no need to balance these coils.

  16374   Mon Oct 4 16:00:57 2021 YehonathanSummarySUSPRM and BS Angular Actuation transfer function magnitude measurements

[Yehonathan, Anchal]

In an attempt to fix the actuation of the PRMI DOFs, we set out to modify the output matrices of BS and PRM such that the responses of the coils are as similar to each other as possible.

To do so, we used the responses at a single frequency from the previous measurement to infer the output matrix coefficients that equalize the OpLev responses (arbitrarily taking the LL coil as the reference). This corrected the imbalance in BS almost completely, while it didn't really work for PRM (see Attachment 1).

The new output matrices are shown in Attachments 2-3.
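A minimal sketch of this equalization step is below, assuming the single-frequency OpLev response of each face coil has been extracted from the measurement. The numbers are stand-ins, the overall normalization of the attached matrices may differ slightly, and the POS and SIDE columns are left untouched.

# Single-frequency OpLev responses of the four face coils (stand-in numbers;
# in practice these come from the transfer functions measured in 40m/16383).
resp_pit = {'UL': 1.05, 'UR': 0.98, 'LL': 1.00, 'LR': 1.03}
resp_yaw = {'UL': 0.99, 'UR': 1.04, 'LL': 1.00, 'LR': 0.96}

# Ideal signs of the PIT and YAW columns of the SOS output matrix
sign_pit = {'UL': +1, 'UR': +1, 'LL': -1, 'LR': -1}
sign_yaw = {'UL': +1, 'UR': -1, 'LL': +1, 'LR': -1}

# Scale each coil's entry so that (coil response) x (matrix entry) matches
# the LL reference coil for both angular DOFs.
out_pit = {k: sign_pit[k] * resp_pit['LL'] / resp_pit[k] for k in resp_pit}
out_yaw = {k: sign_yaw[k] * resp_yaw['LL'] / resp_yaw[k] for k in resp_yaw}

for k in ('UL', 'UR', 'LL', 'LR'):
    print('%s:  PIT %+0.4f   YAW %+0.4f' % (k, out_pit[k], out_yaw[k]))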

Attachment 1: BS_PRM_ANG_ACT_TF_20211004.pdf
BS_PRM_ANG_ACT_TF_20211004.pdf
Attachment 2: BS_out_mat_20211004.txt
9.839999999999999858e-01 8.965770586285104482e-01 9.486710352885977526e-01 3.099999999999999978e-01
1.016000000000000014e+00 9.750242104232501594e-01 -9.291967546765563801e-01 3.099999999999999978e-01
9.839999999999999858e-01 -1.086765190351774768e+00 1.009798093279114628e+00 3.099999999999999978e-01
1.016000000000000014e+00 -1.031706735496689786e+00 -1.103142995587099939e+00 3.099999999999999978e-01
0.000000000000000000e+00 0.000000000000000000e+00 0.000000000000000000e+00 1.000000000000000000e+00
Attachment 3: PRM_out_mat_20211004.txt
1.000000000000000000e+00 1.033455230230304611e+00 9.844796282226820905e-01 0.000000000000000000e+00
1.000000000000000000e+00 9.342329554807877745e-01 -1.021296201828568506e+00 0.000000000000000000e+00
1.000000000000000000e+00 -1.009214777246558503e+00 9.965113815550634691e-01 0.000000000000000000e+00
1.000000000000000000e+00 -1.020129700278567197e+00 -9.973560027273553619e-01 0.000000000000000000e+00
0.000000000000000000e+00 0.000000000000000000e+00 0.000000000000000000e+00 1.000000000000000000e+00