40m Log, Page 118 of 341
  16070   Thu Apr 22 01:42:38 2021   Koji   Summary   Electronics   HV Supply Comparison

A new HV power supply from Company 'M' has been delivered, so I decided to compare the noise levels of some of the HV supplies in the lab. There are three models, from companies 'H', 'K', and 'M'.

The noise level was measured with an SR785 through Gautam's high-pass (HP) filter with protection diodes.

'H' is a fully analog HV supply with analog meter indicators.
'K' is a model with an LCD digital display and a numerical keypad.
'M' is a model with a knob and digital displays.

All the models showed noise levels that increased with the output voltage.

Among these three, H showed the lowest noise (<~1 uV/rtHz @ 10 Hz and <50 nV/rtHz @ 100 Hz). (Attachment 1)

K is quite noisy over the entire measured frequency range, with a level of <50 uV/rtHz, and the PSD has lots of 5 Hz harmonics. (Attachment 2)

M has a modest noise level (<~30 uV/rtHz @ 10 Hz and <1 uV/rtHz @ 100 Hz) except for the noticeable line noise (ripple). (Attachment 3)

Attachment 4 compares the three models at 300V. The other day Gautam and I checked the power spectrum of the HV coil driver with the KEPCO supply, and the output noise level of the coil driver was acceptable. So I expect that we will be able to use the HV supply from Company M. The next step is to check the HV driver noise with the Company M model used as the supply.

Attachment 1: HV_Supply_PSD_H.pdf
Attachment 2: HV_Supply_PSD_K.pdf
Attachment 3: HV_Supply_PSD_M.pdf
Attachment 4: HV_Supply_PSD.pdf
  16083   Fri Apr 23 19:26:58 2021   Koji   Update   PSL   HEPA speed lowered

I believe there is an internal setting for the minimum flow, so the flow is not linear ("0%" is not zero). We should mark this flow speed once you find it is also sufficiently low for locking.

  16099   Thu Apr 29 17:43:16 2021   Koji   Update   CDS   RFM

The other day I felt it was hot at the X end. I wondered if the X-end A/C was off, but the switch right next to the SP table was ON (green light).
I could not confirm whether the A/C was actually blowing or not.

  16115   Mon May 3 23:28:56 2021   Koji   Summary   General   Weird gas leakage kind of noise in 40m control room

I also noticed some sound in the control room. (didn't open the MP3 yet)

I'm afraid that the hard disk in the control room iMac is dying.

 

  16131   Tue May 11 17:43:09 2021   Koji   Update   CDS   I/O Chassis Assembly

Did you match the local PC time with the GPS time?

  16136   Wed May 12 16:53:59 2021   Koji   Update   SUS   Mass Properties of SOS Assembly with 3"->2" Optic sleeve, in SI units

No, this is the property of the suspension assembly; the listed mass is 10 kg.

Could you do the same for the testmass assembly (only the suspended part)? The units are good, but I expect that the values will be small. I want to keep at least three significant digits.

  16140   Fri May 14 03:29:50 2021   Koji   Update   Electronics   HV Driver noise test with the new HV power supply from Matsusada

I believe I did a test identical to the one in [40m ELOG 15786]. The + input of the PA95 was shorted to ground to exclude noise from the bias input. The voltage noise at TP6 was measured with a +/-300 V supply, first from two HP6209s and then from two Matsusada R4G360s.

With the R4G360s, the floor level was identical and the 60 Hz line peaks were smaller. It looks like the R4G360 is cheap, easier and more precise to handle, and sufficiently low noise.

Attachment 1: HV_Driver_PSD.pdf
  16146   Wed May 19 18:29:41 2021   Koji   Update   SUS   Mass Properties of SOS Assembly with 3"->2" Optic sleeve, in SI units

Calculation for the SOS POS/PIT/YAW resonant frequencies

- The nominal height gap between the CoM and the wire clamping point is 0.9 mm (cf. T970135).

- To have a similar resonant frequency for the optic with the 3" metal sleeve, the gap should be 1.0~1.1 mm.
As the previous elog does not specify this number for the current configuration, we need to assess this value and then make the adjustment of the CoM height (a rough scaling sketch follows below).
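A rough scaling check (not the calculation in the attached notebook): in the usual single-loop approximation the pitch restoring torque comes from the wire tension acting over the CoM-to-clamp gap d, so f_pitch ~ (1/2pi) sqrt(m g d / I_pitch), i.e. f_pitch scales as sqrt(d). The optic mass and moment of inertia below are assumed values for a 3" x 1" fused-silica optic, not numbers from this entry.

import numpy as np

# Assumed optic parameters (3" dia x 1" thick fused silica); wire bending stiffness ignored.
m = 0.25                      # kg
r, t = 0.0381, 0.0254         # m, radius and thickness
I_pitch = m * (3 * r**2 + t**2) / 12.0   # kg m^2, cylinder about a diameter through the CoM
g = 9.81                      # m/s^2

for d in (0.9e-3, 1.0e-3, 1.1e-3):       # CoM-to-clamp height gap
    f_pitch = np.sqrt(m * g * d / I_pitch) / (2 * np.pi)
    print(f"d = {d*1e3:.1f} mm  ->  f_pitch ~ {f_pitch:.2f} Hz")

# Since f scales as sqrt(d), going from 0.9 mm to 1.0-1.1 mm raises the pitch frequency by ~5-10%.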

Attachment 1: SOS_resonant_freq.pdf
Attachment 2: SOS_resonant_freq.nb.zip
  16148   Thu May 20 16:56:21 2021   Koji   Update   Electronics   Production version of the HV coil driver tested with KEPCO HV supplies

The HP HV power supplies (HP6209) were returned to Downs.

Attachment 1: P_20210520_154523_copy.jpg
  16149   Fri May 21 00:05:45 2021   Koji   Update   SUS   New electronics: Sat Amp / Coil Drivers

11 new Satellite Amps were picked up from Downs. 7 more are coming from there. I have one spare unit I made. 1 sat amp has already been used at MC1.

We had 8 HAM-A coil drivers delivered from the assembling company. We also have two coil drivers delivered from Downs (Anchal tested)

Attachment 1: F3CDEF8D-4B1E-42CF-8EFC-EA1278C128EB_1_105_c.jpeg
  16150   Fri May 21 00:15:33 2021   Koji   Update   Electronics   DC Power Strip delivered / stored

DC Power Strip Assemblies delivered and stored behind the Y arm tube (Attachment 1)

  • 7x 18V Power Strip (Attachment 2)
  • 7x 24V Power Strip (Attachment 2)
  • 7x 18V/24V Sequencer / 14x Mounting Panel (Attachment 3)
  • DC Power Cables 3ft, 6ft, 10ft (Attachments 4/5)
  • DC Power Cables AWG12 Orange / Yellow (Attachments 6/7)

I also moved the spare 1U Chassis to the same place.

  • 5+7+9 = 21x 1U Chassis (Attachments 8/9)

 

Attachment 1: P_20210520_233112.jpeg
Attachment 2: P_20210520_233123.jpg
Attachment 3: P_20210520_233207.jpg
Attachment 4: P_20210520_231542.jpg
Attachment 5: P_20210520_231815.jpg
Attachment 6: P_20210520_195318.jpg
Attachment 7: P_20210520_231644.jpg
Attachment 8: P_20210520_233203.jpg
Attachment 9: P_20210520_195204.jpg
  16158   Mon May 24 20:55:00 2021   Koji   Summary   BHD   How to align two OMCs on the BHD platform?

Differential misalignment of the OMCs

40m BHD will employ two OMCs on the BHD platform. We will have two SOSs for each of the LO and AS beams. The challenge here is that the input beam must couple optimally to both OMCs simultaneously. This is not easy, as we won't have independent actuators for each OMC: e.g., the alignment of the LO beam can be optimally adjusted to OMC1, but this in general does not mean the beam is optimally aligned to OMC2.

Requirement

When a beam that is mode-matched to an optical cavity is misaligned, the power coupling C is reduced from unity as

C = 1 - \left(\frac{a}{\omega_0}\right)^2 - \left(\frac{\alpha}{\theta_0}\right)^2

where \omega_0 is the waist radius, \theta_0 is the divergence angle defined as \theta_0 \equiv \lambda/(\pi \omega_0), and a and \alpha are the beam's lateral translation and rotation at the waist position.

The waist size of the OMC mode is 500 um, so \omega_0 = 500 um and \theta_0 = 0.68 mrad. We require C to be better than 0.995 according to the design requirement document (T1900761). This corresponds to a (alone) of 35 um or \alpha (alone) of 48 urad. These numbers are quite tough to realize without post-installation adjustment. Moreover, the OMCs themselves have individual differences in their beam axes, so no matter how tightly we set the mechanical precision of the OMC installation, we will introduce up to ~1 mm and ~5 mrad of uncertainty in the optical axis.
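A quick numerical check of the numbers above (inputs are the values quoted in this entry and the T1900761 requirement; this is just arithmetic, not a new requirement):

import numpy as np

lam = 1064e-9                  # m
w0  = 500e-6                   # m, OMC waist radius
theta0 = lam / (np.pi * w0)    # divergence angle, ~0.68 mrad

C_req = 0.995
a_max     = w0 * np.sqrt(1 - C_req)       # translation alone -> ~35 um
alpha_max = theta0 * np.sqrt(1 - C_req)   # tilt alone        -> ~48 urad

print(f"theta0    = {theta0*1e3:.2f} mrad")
print(f"a_max     = {a_max*1e6:.0f} um")
print(f"alpha_max = {alpha_max*1e6:.0f} urad")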

Adjustment

Suppose we adjust the incident beam to the OMC placed at the transmission side of the BHD BS. The reflected beam at the BS can be steered by picomotors. The distance from the BS to the OMC waist is 12.7" (322 mm) according to the drawing.
So we can absorb the misalignment mode (a, \alpha) = (0.322 \theta, \theta). This is a bit unfortunate: 0.322 m is about 1/2 of the Rayleigh range, so this actuation is still angle-dominated, but some translation is coupled in.
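A small check of the lever-arm statement, using the same waist size as above:

import numpy as np

lam, w0 = 1064e-9, 500e-6
zR = np.pi * w0**2 / lam                                  # Rayleigh range, ~0.74 m
print(f"zR = {zR:.2f} m ; 0.322 m = {0.322/zR:.2f} zR")   # ~0.44 zR, i.e. roughly half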

If we can also use the third picomotor on the BHD BS mount, we can introduce a translation of the beam in the horizontal direction. This range is not huge, so we still want to prepare a method to align the OMCs in the horizontal direction.

The difficult problem is the vertical alignment. This requires vertically displacing an OMC, and we will not have the option to lower one. Therefore, if OMC2 is too high we have to raise OMC1 so that the resulting beam is aligned to OMC2; i.e., we need to keep a method to raise both OMCs (... or swap the OMCs). From the images of the OMC beam spots, we'll probably be able to analyze the intracavity axes of the OMCs, so we can always place the OMC with the higher optical axis at the transmission side of the BHD BS.

 

 

  16172   Wed Jun 2 01:03:19 2021   Koji   Update   BHD   SOS assembly

Can you just cut the Viton tips smaller? If you cut them with some wedge (or, say, a taper), they can get stuck in the vent hole.

 

  16173   Wed Jun 2 01:08:57 2021   Koji   Update   SUS   CoM to Clamping Point Measurement for 3" Adapter Ring

How about using the non-Ag-coated threaded shaft plus the end SS masses with helicoils inserted? Does this keep the masses from getting stuck?

 

  16181   Thu Jun 3 22:08:00 2021   Koji   Update   CDS   Opto-isolator for c1auxey

- Could you explain what the blue thing in Attachment 1 is?

- To check the validity of the signal chain, can you make a diagram summarizing the path fast BO -> BO I/F -> Acromag -> this opto-isolator -> the coil driver relay? (Cutting and pasting the existing schematics is fine.)

 

  16184   Sun Jun 6 03:02:14 2021   Koji   Update   CDS   Opto-isolator for c1auxey

This RTS also uses the BO interface with an opto-isolator: https://dcc.ligo.org/LIGO-D1002593

Could you also include the pull-up/pull-down situations?

  16198   Fri Jun 11 20:19:50 2021   Koji   Summary   BHD   BHD OMC invacuum wiring

Stephen and I discussed the in-vacuum OMC wiring.

- One of the OMCs has already been completed. (Blue)
- The other OMC is still being built. It means that these cables need to be built. (Pink)
- However, the cables for the former OMC should also be replaced, because the cable harness needs to be changed from the metal one to the PEEK one.
- The replacement of the harness can be done by releasing the Glenair Mighty Mouse connectors from the harness. (This probably requires a special tool)
- The link to the harness photo is here: https://photos.app.goo.gl/3XsUKaDePbxbmWdY7

- We want to combine the signals for the two OMCs into three DB25s. (Green)
- These cables are custom and need to be designed.

- The three standard aLIGO-style cables are going to be used. (Yellow)

- The cable stand here should be the aLIGO style.

Attachment 1: 40mBHD_OMC_wiring.pdf
  16203   Tue Jun 15 21:48:55 2021   Koji   Update   CDS   Opto-isolator for c1auxey

If my understanding is correct, the (photo-receiving) NPN transistor of the optocoupler is energized through the Acromag. The LED side should be driven by the coil driver circuit. This is properly done for the "enable mon" outputs through 750 Ohm and +V. However, "Run/Acquire" is a relay switch and there is nothing to drive the line. I propose to add a pull-up network to the Run/Acquire outputs. This way all 8 outputs become identical and symmetric.

We should test whether this configuration works properly. This can be done with just a manual switch, R = 750 Ohm, and a +V supply (+18 V, I guess).
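A rough sanity check of the proposed pull-up (the optocoupler LED forward drop is an assumed value; check the datasheet of the actual part):

V_supply = 18.0      # V, proposed +V
R_pullup = 750.0     # Ohm, same value as the "enable mon" path
for Vf in (1.2, 1.5):                        # assumed LED forward-voltage range
    I_led = (V_supply - Vf) / R_pullup
    print(f"Vf = {Vf:.1f} V -> I_LED ~ {I_led*1e3:.1f} mA")
# ~22 mA either way, a reasonable drive level for a typical optocoupler LED.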

Attachment 1: Acromag_RTS_BI_config.jpg
  16206   Wed Jun 16 19:34:18 2021   Koji   Update   General   HVAC

I made a flow sensor with a stick and tissue paper to check the airflow.

- The HVAC indicator was not lit, but it was just a bulb problem. The replacement bulb is inside the gray box.

- I went to the south arm. There are two big vent ducts, for the outlet and the intake. Neither was flowing air.
  The current temperature at 7 pm was ~30 degC. The max and min were 31 degC and 18 degC.

- Then I went to the vertex and the east arm. The outlets and intakes there are flowing.

Attachment 1: HVAC_Power.jpeg
Attachment 2: South_Arm.jpeg
Attachment 3: South_End_Tenperature.jpeg
Attachment 4: Vertex.jpeg
Attachment 5: East_Arm.jpeg
  16211   Thu Jun 17 22:19:12 2021   Koji   Update   Electronics   25 HAM-A coil driver units delivered

25 HAM-A coil driver units were fabricated by Todd, and I've transported them to the 40m.
We had already received 2 units earlier.
The last unit has been completed, but Luis wants to use it for some A+ testing, so 1 more unit is coming.

Attachment 1: P_20210617_195811.jpg
  16212   Thu Jun 17 22:25:38 2021   Koji   Update   SUS   New electronics: Sat Amp / Coil Drivers

This is a belated report: we received 5 more sat amps on June 4th. (I said 7 more were coming, but it was 6 more.) So we still have one more sat amp coming from Todd.

- 1 already delivered long ago
- 8 received from Todd -> DeLeone -> Chub. They are in the lab.
- 11 units on May 21st
- 5 units on Jun 4th
Total 1+8+11+5 = 25
1 more unit is coming

 

Quote:

11 new Satellite Amps were picked up from Downs. 7 more are coming from there. I have one spare unit I made. 1 sat amp has already been used at MC1.

We had 8 HAM-A coil drivers delivered from the assembling company. We also have two coil drivers delivered from Downs (Anchal tested)

 

Attachment 1: P_20210604_231028.jpeg
  16216   Fri Jun 18 23:53:08 2021   Koji   Update   BHD   SOS assembly

Then, can we replace the four small EQ stops at the bottom (barrel surface) with two 1/4-20 EQ stops? This will require drilling the bottom EQ stop holders (two per SOS).

 

  16223   Thu Jun 24 16:40:37 2021   Koji   Update   SUS   MC lock acquired back again

[Koji, Anchal]

The issue with the PD output was that the whitened PD outputs of the sat amp (D080276) are differential, while the following circuit (D000210 PD whitening unit) has single-ended inputs. This means that the negative outputs (D080276 U2) have always been shorted to GND with no output resistor. This forced the AD8672 to work hard at its output current limit. Maybe there was a heat problem due to this current saturation, as Anchal reported that the unit came back sane after some power-cycling or opening the lid. But I believe the heat issue and the forced differential voltage at the input stage of the chip eventually caused it to fail.

Anchal came up with a brilliant idea to bypass this issue: the sat amp box has PD mon channels, which are single-ended, so we simply shifted the output cables to the mon connectors. The MC1 suspension was nicely damped and the IMC was locked as usual. Anchal will keep checking whether the circuit keeps working over the next few days.

Attachment 1: P_20210624_163641_1.jpg
  16240   Tue Jul 6 17:40:32 2021   Koji   Summary   General   Lab cleaning

We held a lab cleaning for the first time since the campus reopening (Attachment 1).
Now some of the desks can actually be used by people again! Thanks for the cooperation.

We relocated a lot of items into the lab.

  • The entrance area was cleaned up. We believe that there is no 40m lab stuff left.
    • BHD BS optics was moved to the south optics cabinet. (Attachment 2)
    • DSUB feedthrough flanges were moved to the vacuum area (Attachment 3)
  • Some instruments were moved into the lab.
    • The Zurich instrument box
    • KEPCO HV supplies
    • Matsusada HV supplies
  • We moved the large pile of SUPERMICRO boxes within the lab. They are now around MC2, while the PPE boxes that were there were moved behind the tube in the MC2 area. (Attachment 4)
  • We moved PPE boxes behind the beam tube on the X arm, behind the SUPERMICRO computer boxes. (Attachment 7)
  • Leftover ISC/WFS components were moved to the pile of BHD electronics.
    • Front panels (Attachment 5)
    • Components in the boxes (Attachment 6)

We still want to do some more cleaning:

- Electronics workbenches
- Stray setup (cart/wagon in the lab)
- Some leftover on the desks
- Instruments scattered all over the lab
- Ewaste removal

Attachment 1: P_20210706_163456.jpg
Attachment 2: P_20210706_161725.jpg
Attachment 3: P_20210706_145210.jpg
Attachment 4: P_20210706_161255.jpg
Attachment 5: P_20210706_145815.jpg
Attachment 6: P_20210706_145805.jpg
Attachment 7: PXL_20210707_005717772.jpg
  16246   Wed Jul 14 19:21:44 2021   Koji   Update   General   Brrr

Jordan reported on Jun 18, 2021:
"HVAC tech came today, and replaced the thermostat and a coolant tube in the AC unit. It is working now and he left the thermostat set to 68F, which was what the old one was set to."

  16250   Sat Jul 17 00:52:33 2021   Koji   Update   General   Canon camera / small silver tripod / macro zoom lens / LED ring light borrowed -> QIL

Canon camera / small silver tripod / macro zoom lens / LED ring light borrowed -> QIL

Attachment 1: P_20210716_213850.jpg
  16252   Wed Jul 21 14:50:23 2021   Koji   Update   SUS   New electronics

Received:

Jun 29, 2021 BIO I/F 6 units
Jul 19, 2021 PZT Drivers x2 / QPD Transimpedance amp x2

 

Attachment 1: P_20210629_183950.jpeg
Attachment 2: P_20210719_135938.jpeg
  16255   Sun Jul 25 18:21:10 2021   Koji   Update   General   Canon camera / small silver tripod / macro zoom lens / LED ring light returned / Electronics borrowed

Camera and accessories returned.

One HAM-A coil driver and one sat amp borrowed -> QIL

https://nodus.ligo.caltech.edu:8081/QIL/2616

 

  16260   Tue Jul 27 20:12:53 2021   Koji   Update   BHD   SOS assembly

1 or 2. The stained ones are just fine. If you find the vented 1/4-20 screws in the clean room, you can use them.

For the 28 screws, yeah find some spares in the clean room (faster), otherwise just order.

  16278   Thu Aug 12 14:59:25 2021   Koji   Update   General   PSL shutter was closed this morning

What I was afraid of was the vacuum interlock. And indeed there was a pressure surge this morning. Is this real? Why didn't we receive the alert?

Attachment 1: Screen_Shot_2021-08-12_at_14.58.59.png
  16279   Thu Aug 12 20:52:04 2021   Koji   Update   General   PSL shutter was closed this morning

I did a bit more investigation on this.

- I checked P1~P4, PTP2/3, N2, TP2, and TP3, but found that only P1a and P2 were affected.

- Looking at the min/mean/max of P1a and P2 (Attachment 1), the signal had a large fluctuation. It is impossible for P1a to go from 0.004 to 0 instantaneously.

- Looking at the raw data of P1a and P2 (Attachment 2), the value was not steadily large; instead it looks like fluctuating noise.

So my conclusion is that, for an unknown reason, noise coupled only into P1a and P2 and tripped the PSL shutter. I still don't know the status of the mail alert.

Attachment 1: Screen_Shot_2021-08-12_at_20.51.19.png
Attachment 2: Screen_Shot_2021-08-12_at_20.51.34.png
  16281   Tue Aug 17 04:30:35 2021   Koji   Update   SUS   New electronics

Received:

Aug 17, 2021 2x ISC Whitening

Delivered 2x Sat Amp board to Todd

 

Attachment 1: P_20210816_234136.jpg
Attachment 2: P_20210816_235106.jpg
Attachment 3: P_20210816_234220.jpg
  16284   Thu Aug 19 14:14:49 2021   Koji   Update   CDS   Time synchronization not running

131.215.239.14 looks like Caltech's NTP server (ntp-02.caltech.edu)
https://webmagellan.com/explore/caltech.edu/28415b58-837f-4b46-a134-54f4b81bee53

I can't say whether it is correct or not, as I did not survey it at your level. I think you need a few tests of reconfiguring and restarting the NTP clients to see whether time synchronization starts. Because the local time is not regulated right now anyway, this operation is safe, I think.

 

  16288   Mon Aug 23 11:51:26 2021   Koji   Update   General   Campus Wide Power Glitch Reported: Monday, 8/23/21 at 9:30am

Campus Wide Power Glitch Reported: Monday, 8/23/21 at 9:30am (more like 9:34am according to nodus log)

nodus: rebooted. ELOG/apache/svn is running. (looks like Anchal worked on it)

chiara: survived the glitch thanks to UPS

fb1: not responding -> @1pm open to login / seemed rebooted only at 9:34am (network path recovered???)

megatron: not responding

optimus: no route to host

c1aux: ping ok, ssh not responding -> needed to use telnet (vme / vxworks)
c1auxex: ssh ok
c1auxey: ping ok, ssh not responding -> needed to use telnet (vme / vxworks)
c1psl: ping NG, power cycled the switch on 1X2 -> ssh OK now
c1iscaux: ping NG -> rebooted the machine -> ssh recovered

c1iscaux2: does not exist any more
c1susaux: ping NG -> responds after 1X2 switch reboot

c1pem1: telnet ok (vme / vxworks)
c1iool0: does not exist any more

c1vac1: ethernet service restarted locally -> responding
ottavia: does not exist?
c1teststand: ping ok, ssh not responding

3:20 PM: we started restarting the RTS.
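For reference, a minimal sketch (not an existing 40m script) of the kind of reachability sweep done above; the host list is copied from this entry:

import subprocess

HOSTS = ["nodus", "chiara", "fb1", "megatron", "optimus", "c1aux", "c1auxex",
         "c1auxey", "c1psl", "c1iscaux", "c1susaux", "c1pem1", "c1vac1", "c1teststand"]

def ping_ok(host, timeout_s=2):
    """Return True if a single ICMP ping to `host` succeeds."""
    r = subprocess.run(["ping", "-c", "1", "-W", str(timeout_s), host],
                       stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL)
    return r.returncode == 0

for h in HOSTS:
    print(f"{h:12s} {'ping OK' if ping_ok(h) else 'NOT responding'}")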

  16290   Mon Aug 23 19:00:05 2021   Koji   Update   General   Campus Wide Power Glitch Reported: Monday, 8/23/21 at 9:30am

Restarting the RTS was unsuccessful because of the timing discrepancy error between the RT machines and the FB. This time, no matter how we tried to set the time, the IOPs would not run with "DC status" green. (We kept getting 0x4000.)

We then decided to work on the recovery without recording data. After some burt restores, the IMC was locked and the spot appeared at the AS port. However, IPC seemed down and no WFS could run.

  16294   Tue Aug 24 18:44:03 2021   Koji   Update   CDS   FB is writing the frames with a year-old date

Dan Kozak pointed out that the new 40m frame files are not being written with 2021 GPS times but with 2020 GPS times.

The current GPS time is 1313890914 (or something like that), but the new files are written as C-R-1282268576-16.gwf.

I don't know how this can happen, but it may explain why we can't get agreement between the FB GPS time and the RTS GPS time.

(dataviewer seems to depend on the FB GPS time and indicates a 2020 date. DTT/diaggui does not.)


This is the way to check the gpstime on fb1. It's apparently a year off.

controls@fb1:~ 0$ cat /proc/gps
1282269402.89
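A sketch of this check done by hand: convert the GPS seconds to a calendar date (the GPS epoch is 1980-01-06 00:00:00 UTC and the GPS-UTC leap-second offset is 18 s as of 2021) to see that the frame-file stamp is about a year behind:

from datetime import datetime, timedelta, timezone

GPS_EPOCH = datetime(1980, 1, 6, tzinfo=timezone.utc)
LEAP_OFFSET = 18            # s, GPS ahead of UTC as of 2021

def gps_to_utc(gps_seconds):
    return GPS_EPOCH + timedelta(seconds=gps_seconds - LEAP_OFFSET)

print(gps_to_utc(1282268576))   # frame-file GPS stamp    -> August 2020
print(gps_to_utc(1313890914))   # quoted current GPS time -> August 2021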

Attachment 1: Screen_Shot_2021-08-24_at_18.46.24.png
  16306   Wed Sep 1 21:55:14 2021   Koji   Summary   General   Towards the end upgrade

- Sat amp mod and test: ongoing (Tega)
- Coil driver mod and test: ongoing (Tega)

- Acromag: almost ready (Yehonathan)

- IDC10-DB9 cable / D2100641 / IDC10F for ribbon in hand / Dsub9M ribbon brought from Downs / QTY 2 for two ends -> Made 2 (stored in the DSUB connector plastic box)
- IDC40-DB9 cable / D2100640 / IDC40F for ribbon in hand / DB9F solder brought from Downs  / QTY 4 for two ends -> Made 4 0.5m cables (stored in the DSUB connector plastic box)

- DB15-DB9 reducer cable / ETMX2+ETMY2+VERTEX16+NewSOS14 = 34 / to be ordered

- End DAC signal adapter with Dewhitening (with DIFF/SE converter) / to be designed & built
- End ADC adapter (with SE/DIFF converter) / to be designed & built


MISC Ordering

  • 3.5 x Sat Amp Adapter made (order more DSUB25 conns)
    • -> Gave 2 to Tega, 1.5 in the DSUB box
    • 5747842-4 A32100-ND -> ‎5747842-3‎ A32099-ND‎ Qty40
    • 5747846-3 A32125-ND -> ‎747846-3‎ A23311-ND‎ Qty40
  • Tega's sat amp components
    • 499Ω P499BCCT-ND 78 -> Backorder -> ‎RG32P499BCT-ND‎ Qty 100
    • 4.99KΩ TNPW12064K99BEEA 56 -> Qty 100
    • 75Ω YAG5096CT-ND 180 -> Qty 200
    • 1.82KΩ P18391CT-ND 103 -> Qty 120
    • 68 nF P10965-ND 209
  • Order more DB9s for Tega's sat amp adapter 4 units (look at the AA IO BOM) 
    • 4x 8x 5747840-4 DB9M PCB A32092-ND -> 6-747840-9‎ A123182-ND‎ Qty 35
    • 4x 5x 5747844-4 A32117-ND -> Qty 25
    • 4x 5x DB9M ribbon MMR09K-ND -> 8209-8000‎ 8209-8000-ND‎ Qty 25
    • 4x 5x 5746861-4 DB9F ribbon 5746861-4-ND -> 400F0-09-1-00 ‎LFR09H-ND‎ Qty 35
  • Order 18bit DAC AI -> 16bit DAC AI components 4 units
    • 4x 4x 5747150-8 DSUB9F PCB A34072-ND -> ‎D09S24A4PX00LF‎609-6357-ND‎ Qty 20
    • 4x 1x 787082-7 CONN D-TYPE RCPT 68POS R/A SLDR (SCSI Female) A3321-ND -> ‎5787082-7‎ A31814-ND‎ Qty 5
    • 4x 1x 22-23-2021 Connector Header Through Hole 2 position 0.100" (2.54mm)    WM4200-ND -> Qty5

 

 

  16308   Thu Sep 2 19:28:02 2021   Koji   Update   This week's FB1 GPS Timing Issue Solved

After the disk system trouble, we could not get the RTS running in the nominal state. As part of the troubleshooting, FB1 was rebooted, but then we found that the GPS time was a year off from the current time:

controls@fb1:/diskless/root/etc 0$ cat /proc/gps 
1283046156.91
controls@fb1:/diskless/root/etc 0$ date
Thu Sep  2 18:43:02 PDT 2021
controls@fb1:/diskless/root/etc 0$ timedatectl 
      Local time: Thu 2021-09-02 18:43:08 PDT
  Universal time: Fri 2021-09-03 01:43:08 UTC
        RTC time: Fri 2021-09-03 01:43:08
       Time zone: America/Los_Angeles (PDT, -0700)
     NTP enabled: no
NTP synchronized: yes
 RTC in local TZ: no
      DST active: yes
 Last DST change: DST began at
                  Sun 2021-03-14 01:59:59 PST
                  Sun 2021-03-14 03:00:00 PDT
 Next DST change: DST ends (the clock jumps one hour backwards) at
                  Sun 2021-11-07 01:59:59 PDT
                  Sun 2021-11-07 01:00:00 PST


Paco went through the process described in Jamie's elog [40m ELOG 16299] (except for the installation part), and it actually made the GPS time even stranger:

controls@fb1:~ 0$ cat /proc/gps
967861610.89

I decided to remove the gpstime kernel module and then load it again. This brought the GPS time back to normal:

controls@fb1:~ 0$ sudo modprobe -r gpstime
controls@fb1:~ 0$ cat /proc/gps
cat: /proc/gps: No such file or directory
controls@fb1:~ 1$ sudo modprobe gpstime
controls@fb1:~ 0$ cat /proc/gps
1314671254.11

 

  16309   Thu Sep 2 19:47:38 2021   Koji   Update   CDS   This week's FB1 GPS Timing Issue Solved

After the reboot, daqd_dc was not working, but manually starting the open-mx / mx services solved the issue:

sudo systemctl start open-mx.service
sudo systemctl start mx.service
sudo systemctl start daqd_*

 

  16310   Thu Sep 2 20:44:18 2021   Koji   Update   CDS   Chiara DHCP restarted

We had an issue rebooting the RT machines. Once we hooked up a display on c1iscex, it turned out that no IP address was being assigned at boot.

I went to chiara and confirmed that the DHCP service was not running:

~>sudo service isc-dhcp-server status
[sudo] password for controls:
isc-dhcp-server stop/waiting

So the DHCP service was manually restarted

~>sudo service isc-dhcp-server start
isc-dhcp-server start/running, process 24502
~>sudo service isc-dhcp-server status
isc-dhcp-server start/running, process 24502

 

 

  16311   Thu Sep 2 20:47:19 2021   Koji   Update   CDS   Chiara DHCP restarted

[Paco, Tega, Koji]

Once chiara's DHCP was back, things got much more straightforward.
c1iscex and c1iscey were rebooted and the IOPs launched without any hesitation.

Paco ran rebootC1LSC.sh, and for the first time this year the processes launched without any issue.

  16312   Thu Sep 2 21:21:14 2021   Koji   Summary   Computers   Vacuum recovery 2

Attachment 1:
We are pumping the main volume with TP2. Once P1a reached ~2.2 mtorr, we could open the PSL shutter. The TP2 voltage went up once but came back down to ~20 V; it's close to nominal now.
We wondered whether we should use TP3 or not. I checked the vacuum pressure trends and found that the annulus pressures were going up, so we decided to open the annulus valves.

Attachment 2:
The current vacuum status is as shown in the MEDM screenshot.

There is no trend data for the valve status (sad).

Attachment 1: Screenshot_2021-09-02_21-20-24.png
Attachment 2: Screenshot_2021-09-02_21-20-48.png
  16316   Wed Sep 8 18:00:01 2021   Koji   Update   VAC   cronjobs & N2 pressure alert

In the weekly meeting, Jordan pointed out that we didn't receive the alert for the low N2 pressure.

To check the situation, I went around the machines and summarized the cronjob situation.
[40m wiki: cronjob summary]
Note that this list does not include the vacuum watchdog and mailer, as they are not run from cron.

Now, I found that there are two N2 scripts running:

1. /opt/rtcds/caltech/c1/scripts/Admin/n2Check.sh on megatron, running every minute (!)
2. /opt/rtcds/caltech/c1/scripts/Admin/N2check/pyN2check.sh on c1vac, running every 3 hours.

Then, the N2 log file was checked: /opt/rtcds/caltech/c1/scripts/Admin/n2Check.log

Wed Sep 1 12:38:01 PDT 2021 : N2 Pressure: 76.3621
Wed Sep 1 12:38:01 PDT 2021 : T1 Pressure: 112.4
Wed Sep 1 12:38:01 PDT 2021 : T2 Pressure: 349.2
Wed Sep 1 12:39:02 PDT 2021 : N2 Pressure: 76.0241
Wed Sep 1 12:39:02 PDT 2021 : N2 pressure has fallen to 76.0241 PSI !

Tank pressures are 94.6 and 98.6 PSI!

This email was sent from Nodus.  The script is at /opt/rtcds/caltech/c1/scripts/Admin/n2Check.sh

Wed Sep 1 12:40:02 PDT 2021 : N2 Pressure: 75.5322
Wed Sep 1 12:40:02 PDT 2021 : N2 pressure has fallen to 75.5322 PSI !

Tank pressures are 93.6 and 97.6 PSI!

This email was sent from Nodus.  The script is at /opt/rtcds/caltech/c1/scripts/Admin/n2Check.sh

...

The error messages started at 12:39 and repeated every minute until 13:01. So this was coming from the script on megatron. We were supposed to have received ~20 alert emails (but got none).
So what happened to the mails? I tested the script with my own mail address and the test mail came to me. Then I sent a test mail to the 40m mailing list; it did not get through.
-> I decided to put the sender address (specified in /etc/mailname, I believe) on the whitelist so that the mailing list accepts it.
I ran the test again and it was successful, so I suppose the system can now send us the alerts again.
Also, alerting every minute is excessive, so I changed the check frequency to every ten minutes.
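For illustration, a minimal Python sketch of this kind of threshold check (analogous to, but not, the actual n2Check.sh / pyN2check.sh; the channel name, trip level, and recipient below are assumptions):

import smtplib
from email.message import EmailMessage
from epics import caget               # pyepics

N2_CHANNEL    = "C1:Vac-N2_pressure"  # hypothetical channel name
THRESHOLD_PSI = 65.0                  # hypothetical trip level
RECIPIENT     = "40m@example.org"     # hypothetical mailing-list address

def check_n2():
    p = caget(N2_CHANNEL)
    if p is not None and p < THRESHOLD_PSI:
        msg = EmailMessage()
        msg["Subject"] = f"N2 pressure has fallen to {p:.1f} PSI!"
        msg["From"] = "n2check@nodus"
        msg["To"] = RECIPIENT
        msg.set_content(f"N2 line pressure is {p:.1f} PSI; check/swap the bottles.")
        with smtplib.SMTP("localhost") as s:
            s.send_message(msg)

check_n2()
# Run from cron every ten minutes, e.g.:  */10 * * * *  /usr/bin/python3 /path/to/n2_check.py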

What has happened to the Python version running on c1vac?
1) The script is running, spitting out some errors in the cron report (email on c1vac), but it seems to be working.
2) This script checks the pressures of the bottles rather than the N2 pressure downstream, so it is complementary.
3) During the incident on Sept 1, this checker did not trip because the pressure drop happened between the cron runs and the script did not notice it.
4) On top of that, the alert was set to send mail only to one of our former grad students. I changed it to deliver to the 40m mailing list. As the "From" address is set to some ligox...@gmail.com, which is a member of the mailing list (why?), we are supposed to receive the alerts. (And we do for other vacuum alerts from this address.)

  16317   Wed Sep 8 19:06:14 2021   Koji   Update   General   Backup situation

Tega mentioned in the meeting that it could be safer to separate some of nodus's functions from the martian file system.
That's an interesting thought. The summary pages and other web services are linked to the user directory. This has high traffic and can cause issues for the internal network once the disk crashes.
Also, if the internal system crashes, we still want to use the elog as the source of recovery info, and currently we have no backup of the elog. This is dangerous.

We can reduce some of the risks by adding two identical 2 TB disks to nodus to accommodate svn/elog/web and their daily backup.
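For illustration, a minimal sketch of the kind of daily backup job the second disk would allow (the source paths and mount point are hypothetical, not the actual nodus layout):

import datetime
import subprocess

SOURCES     = ["/home/elog", "/home/svn", "/home/web"]   # assumed source directories
BACKUP_ROOT = "/mnt/backup2tb"                           # assumed mount point of the backup disk

def backup():
    stamp = datetime.date.today().isoformat()
    for src in SOURCES:
        # a --link-dest against yesterday's copy would make this incremental; kept simple here
        subprocess.run(["rsync", "-a", "--delete", src, f"{BACKUP_ROOT}/{stamp}/"], check=True)

backup()   # intended to be run from a daily cron job on nodus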

host     | file system / contents      | backup condition       | note
---------|-----------------------------|------------------------|--------------------------------------------------
nodus    | root                        | none or unknown        |
nodus    | home (svn, elog)            | none                   |
nodus    | web (incl. summary pages)   | backed up              | linked to /cvs/cds
chiara   | root                        | maybe                  | need to check with Jon/Anchal
chiara   | /home/cds                   | local copy             | the backup disk is smaller than the main disk
chiara   | /home/cds                   | remote copy - stalled  | we used to have it, but stalled since 2017/11/17
fb1      | root                        | maybe                  | need to check with Jon/Anchal
fb1      | frame                       | rsync pulled from LDAS | according to Tega

 

  16328   Tue Sep 14 17:14:46 2021   Koji   Update   SUS   SOS Tower Hardware

Yup this is OK. No problem.

 

  16331   Tue Sep 14 19:12:03 2021   Koji   Summary   PEM   Excess seismic noise in 0.1 - 0.3 Hz band

Looks like this increase is correlated for BS/EX/EY. So it is likely to be real.

Comparison between 9/15 (UTC) (Attachment 1) and 9/10 (UTC) (Attachment 2)

Attachment 1: C1-ALL_393F21_SPECTRUM-1315699218-86400.png
Attachment 2: C1-ALL_393F21_SPECTRUM-1315267218-86400.png
  16333   Wed Sep 15 23:38:32 2021   Koji   Update   ALS   ALS ASX PZT HV was off -> restored

It was known that the Y end ALS PZTs are not working, but Anchal reported in the meeting that the X end PZTs are not working either.

We went down to the X arm in the afternoon and checked the status. The HV supply (KEPCO) was off at its mechanical switch. I don't know whether this KEPCO has a function to shut off the switch at a power glitch or not.
In any case, the power switch was engaged again. We also saw a large misalignment of the X end green beam. The alignment was manually adjusted. Anchal was able to reach ~0.4 in green TRX, but no more; he claimed it used to be ~0.8.

We tried tweaking the SHG temperature from 36.4 degC. We found that TRX had a (local) maximum of ~0.48 at 37.1 degC. This is the new setpoint now.

Attachment 1: P_20210915_151333.jpg
  16334   Wed Sep 15 23:53:54 2021   Koji   Summary   General   Towards the end upgrade

The ordered components are in.

- Made 36 more Sat Amp internal boards (Attachment 1). Now we can install the adapters in all 19 sat amp units.

- Gave Tega the components for the sat amp adapter units. (Attachment 2)

- Gave Tega the components for the sat amp / coil driver modifications.

- Made 5 PCBs for the 16-bit DAC AI rear-panel interface (Attachment 3).

Attachment 1: P_20210915_231308.jpg
Attachment 2: P_20210915_225039.jpg
Attachment 3: P_20210915_224341.jpg
  16335   Thu Sep 16 00:00:20 2021   Koji   Update   General   RIO Planex 1064 Lasers in the south cabinet

RIO Planex 1064 Lasers in the south cabinet

Property Number C30684/C30685/C30686/C30687

Attachment 1: P_20210915_232426.jpg
  16336   Thu Sep 16 01:16:48 2021   Koji   Update   General   Frozen 2

It happened again. Defrosting required.

Attachment 1: P_20210916_003406_1.jpg