ID   Date   Author   Type   Category   Subject
  16340   Thu Sep 16 20:18:13 2021   Anchal   Update   General   Reset

Fridge brought back inside.

Quote:

Put outside.

Quote:

It happened again. Defrosting required.

 

 

Attachment 1: PXL_20210917_031633702.jpg
  16351   Tue Sep 21 11:09:34 2021   Anchal   Summary   CDS   XARM YARM UGF Servo and Oscillators added

I've updated the c1LSC simulink model to add the so-called UGF servos in the XARM and YARM single-arm loops as well. These were earlier present only in the DARM, CARM, MICH, and PRCL loops. The UGF servos themselves serve a larger purpose, but we won't be using that. What we have access to now is the ability to add an oscillator in the single-arm loop and get the real-time demodulated signal before and after the point where the oscillator is added. This lets us measure the open-loop transfer function and its uncertainty at particular frequencies (set by the oscillator), and build a noise budget for the calibration error of these transfer functions.
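Concretely, if the oscillator injects a line $x$ at one point in the loop and $A$ and $B$ are the demodulated signals just before and just after the injection point, the open-loop transfer function at the oscillator frequency follows from (a sketch of the standard single-frequency relation; the overall sign depends on the loop convention):

\[ B = A + x, \qquad A = -G\,B \;\;\Longrightarrow\;\; G(f_{\mathrm{osc}}) = -\frac{A(f_{\mathrm{osc}})}{B(f_{\mathrm{osc}})} \]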

 

The new model has been committed locally in the 40m/RTCDSmodels git repo. I do not have rights to push to the remote in git.ligo. The model builds, installs and starts correctly.

  16354   Wed Sep 22 12:40:04 2021   Anchal   Summary   CDS   XARM YARM UGF Servo and Oscillators shifted to OAF

To reduce the burden on c1lsc, I've shifted the added UGF blocks to the c1oaf model. c1lsc had to be modified to allow the addition of an oscillator in the XARM and YARM control loops and to send test points from before and after the addition point to c1oaf through shared-memory IPC, so that the real-time demodulation happens in the c1oaf model.

The new models built and installed successfully and I've been able to recover both single arm locks after restarting the computers.

 

  16365   Wed Sep 29 17:10:09 2021   Anchal   Summary   CDS   c1teststand problems summary

[anchal, ian]

We went and collected some information for the overlords to fix the c1teststand DAQ network issue.


  • from c1teststand, c1bhd and c1sus2 computers were not accessible through ssh. (No route to host). So we restarted both the computers (the I/O chassis were ON).
  • After the computers restarted, we were able to ssh into c1bhd and c1sus2, and we ran rtcds start c1x06 and rtcds start c1x07.
  • The first page in attachment shows the screenshot of GDS_TP screens of the IOP models after this step.
  • Then we started the user models by running rtcds start c1bhd and rtcds start c1su2.
  • The second page shows the screenshot of GDS_TP screens. You can notice that DAQ status is red in all the screens and the DC statuses are blank.
  • So we checked if daqd_ services are running in the fb computer. They were not. So we started them all by sudo systemctl start daqd_*.
  • The third page shows the status of all services after this step. The daqd_dc.service remained in a failed state.
  • open-mx_stream.service was not even loaded in fb. We started it by running sudo systemctl start open-mx_stream.service.
  • The fourth page shows the status of this service. It started without any errors.
  • However, when we went to check the status of mx_stream.service on c1bhd and c1sus2, they were not loaded, and when we tried to start them, they showed a failed state and kept trying to start every 3 seconds without success. (See pages 5 and 6.)
  • Finally, we also took a screenshot of timedatectl command output on the three computers fb, c1bhd, and c1sus2 to show that their times were not synced at all.
  • The ntp service is running on fb but it probably does not have access to any of the servers it is following.
  • The timesyncd on c1bhd and c1sus2 (FE machines) is also running but shows status 'Idle', which suggests they are unable to find the ntp signal from fb.
  • I believe this issue is similar to what Jamie fixed on fb1 on the martian network in 40m/16302. Since the fb on the c1teststand network was cloned before this fix, it might have this dysfunctional ntp as well.
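For reference, here is the sequence of commands from the steps above collected in one place (a sketch; sudo will ask for a password, and the model/service names are as listed above):

ssh c1bhd "rtcds start c1x06 && rtcds start c1bhd"
ssh c1sus2 "rtcds start c1x07 && rtcds start c1su2"
ssh fb "sudo systemctl start daqd_* && sudo systemctl start open-mx_stream.service"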

We will try to get internet access to c1teststand soon. Meanwhile, someone with more experience and knowledge should look into this situation and try to fix it. We need to test the c1teststand within a few weeks now.

Attachment 1: c1teststand_issues_summary.pdf
  16367   Thu Sep 30 14:09:37 2021   Anchal   Summary   CDS   New way to ssh into c1teststand

Late elog, original time Wed Sep 29 14:09:59 2021

We opened a new port (22220) on the router to the martian subnetwork, which is forwarded to port 22 on c1teststand (192.168.113.245), allowing direct ssh access to the c1teststand computer from the outside world.

Check out this wiki page for the unredacted info.
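The exact command is redacted here, but it has the generic form below, with the public hostname left as a placeholder (the real one is on the wiki page):

ssh -p 22220 controls@<public-hostname>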

  16368   Thu Sep 30 14:13:18 2021   Anchal   Update   LSC   HV supply to Xend Green laser injection mirrors M1 and M2 PZT restored

Late elog, original date Sep 15th

We found that the power switch of the HV supply that powers the PZT drivers for M1 and M2 on the Xend green laser injection alignment was tripped off. We could not find any log of someone doing it; it is a physical switch. Our only explanation is that this supply might have a solenoid mechanism to shut off during power glitches, and it probably did so on Aug 23 (see 40m/16287). We were able to align the green laser using the PZTs again; however, the maximum power at green transmission from the X arm cavity is now about half of what it used to be before the glitch. Maybe the seed laser on the X end died a little.

  16372   Mon Oct 4 11:05:44 2021   Anchal   Summary   CDS   c1teststand problems summary

[Anchal, Paco]

We tried to fix the ntp synchronization in c1teststand today by repeating the steps listed in 40m/16302. Even though the cloned fb1 now has the exact same package version, conf & service files, and status, the FE machines (c1bhd and c1sus2) fail to sync to the time. timedatectl shows the same status 'Idle'. We also dug a bit deeper into the error messages of daqd_dc on the cloned fb1 and mx_stream on the FE machines, and have some error messages to report here.


Attempt on fixing the ntp

  • We copied the ntp package version 1:4.2.6 deb file from /var/cache/apt/archives/ntp_1%3a4.2.6.p5+dfsg-7+deb8u3_amd64.deb on the martian fb1 to the cloned fb1 and ran:
    controls@fb1:~ 0$ sudo dpkg -i ntp_1%3a4.2.6.p5+dfsg-7+deb8u3_amd64.deb
  • We got error messages about missing dependencies of libopts25 and libssl1.1. We downloaded oldoldstable jessie versions of these packages from here and here. We ensured that these versions are higher than the required versions for ntp. We installed them with:
    controls@fb1:~ 0$ sudo dpkg -i libopts25_5.18.12-3_amd64.deb 
    controls@fb1:~ 0$ sudo dpkg -i libssl1.1_1.1.0l-1~deb9u4_amd64.deb
  • Then we installed the ntp package as described above. It asked us if we want to keep the configuration file, we pressed Y.
  • However, we decided to make the configuration and service files exactly the same as on martian fb1. We copied /etc/ntp.conf and /etc/systemd/system/ntp.service from martian fb1 to the same locations on the cloned fb1. Then we enabled ntp, reloaded the daemon, and restarted the ntp service:
    controls@fb1:~ 0$ sudo systemctl enable ntp
    controls@fb1:~ 0$ sudo systemctl daemon-reload
    controls@fb1:~ 0$ sudo systemctl restart ntp
  • But of course, since fb1 doesn't have internet access, we got some errors in the status of ntp.service:
    controls@fb1:~ 0$ sudo systemctl status ntp
    ● ntp.service - NTP daemon (custom service)
       Loaded: loaded (/etc/systemd/system/ntp.service; enabled)
       Active: active (running) since Mon 2021-10-04 17:12:58 UTC; 1h 15min ago
     Main PID: 26807 (code=exited, status=0/SUCCESS)
       CGroup: /system.slice/ntp.service
               ├─30408 /usr/sbin/ntpd -p /var/run/ntpd.pid -g -u 105:107
               └─30525 /usr/sbin/ntpd -p /var/run/ntpd.pid -g -u 105:107
    
    Oct 04 17:48:42 fb1 ntpd_intres[30525]: host name not found: 2.debian.pool.ntp.org
    Oct 04 17:48:52 fb1 ntpd_intres[30525]: host name not found: 3.debian.pool.ntp.org
    Oct 04 18:05:05 fb1 ntpd_intres[30525]: host name not found: 0.debian.pool.ntp.org
    Oct 04 18:05:15 fb1 ntpd_intres[30525]: host name not found: 1.debian.pool.ntp.org
    Oct 04 18:05:25 fb1 ntpd_intres[30525]: host name not found: 2.debian.pool.ntp.org
    Oct 04 18:05:35 fb1 ntpd_intres[30525]: host name not found: 3.debian.pool.ntp.org
    Oct 04 18:21:48 fb1 ntpd_intres[30525]: host name not found: 0.debian.pool.ntp.org
    Oct 04 18:21:58 fb1 ntpd_intres[30525]: host name not found: 1.debian.pool.ntp.org
    Oct 04 18:22:08 fb1 ntpd_intres[30525]: host name not found: 2.debian.pool.ntp.org
    Oct 04 18:22:18 fb1 ntpd_intres[30525]: host name not found: 3.debian.pool.ntp.org
  • But the ntpq command gives the same output as the ntpq command on martian fb1 (except for the source servers), i.e. the broadcasting is happening in the same manner:
    controls@fb1:~ 0$ ntpq -p
         remote           refid      st t when poll reach   delay   offset  jitter
    ==============================================================================
     192.168.123.255 .BCST.          16 u    -   64    0    0.000    0.000   0.000
    
  • On the FE machines side though, the systemd-timesyncd are still unable to read the time signal from fb1 and show the status as idle:
    controls@c1bhd:~ 3$ timedatectl
          Local time: Mon 2021-10-04 18:34:38 UTC
      Universal time: Mon 2021-10-04 18:34:38 UTC
            RTC time: Mon 2021-10-04 18:34:38
           Time zone: Etc/UTC (UTC, +0000)
         NTP enabled: yes
    NTP synchronized: no
     RTC in local TZ: no
          DST active: n/a
    controls@c1bhd:~ 0$ systemctl status systemd-timesyncd -l
    ● systemd-timesyncd.service - Network Time Synchronization
       Loaded: loaded (/lib/systemd/system/systemd-timesyncd.service; enabled)
       Active: active (running) since Mon 2021-10-04 17:21:29 UTC; 1h 13min ago
         Docs: man:systemd-timesyncd.service(8)
     Main PID: 244 (systemd-timesyn)
       Status: "Idle."
       CGroup: /system.slice/systemd-timesyncd.service
               └─244 /lib/systemd/systemd-timesyncd
  • So the time synchronization is still not working. We expected the FE machines to just synchronize to fb1 even though it doesn't have any upstream ntp server to synchronize to. But that didn't happen.
  • I'm (Anchal) working on getting internet access to c1teststand computers.

Digging into mx_stream/daqd_dc errors:

  • We went and changed the Restart field in /etc/systemd/system/daqd_dc.service on the cloned fb1 to 2. This allows the service to fail and stop restarting after two attempts, so that we see the real error message instead of the systemd message that the service is restarting too often. We got the following:
    controls@fb1:~ 3$ sudo systemctl status daqd_dc -l
    ● daqd_dc.service - Advanced LIGO RTS daqd data concentrator
       Loaded: loaded (/etc/systemd/system/daqd_dc.service; enabled)
       Active: failed (Result: exit-code) since Mon 2021-10-04 17:50:25 UTC; 22s ago
      Process: 715 ExecStart=/usr/bin/daqd_dc_mx -c /opt/rtcds/caltech/c1/target/daqd/daqdrc.dc (code=exited, status=1/FAILURE)
     Main PID: 715 (code=exited, status=1/FAILURE)
    
    Oct 04 17:50:24 fb1 systemd[1]: Started Advanced LIGO RTS daqd data concentrator.
    Oct 04 17:50:25 fb1 daqd_dc_mx[715]: [Mon Oct  4 17:50:25 2021] Unable to set to nice = -20 -error Unknown error -1
    Oct 04 17:50:25 fb1 daqd_dc_mx[715]: Failed to do mx_get_info: MX not initialized.
    Oct 04 17:50:25 fb1 daqd_dc_mx[715]: 263596
    Oct 04 17:50:25 fb1 systemd[1]: daqd_dc.service: main process exited, code=exited, status=1/FAILURE
    Oct 04 17:50:25 fb1 systemd[1]: Unit daqd_dc.service entered failed state.
    
  • It seemed like the only thing the daqd_dc process doesn't like is that the mx_stream services are in a failed state on the FE computers. So we did the same thing on the FE machines to get the real error messages:
    controls@fb1:~ 0$ sudo chroot /diskless/root
    fb1:/ 0#
    fb1:/ 0# sudo nano /etc/systemd/system/mx_stream.service
    fb1:/ 0#
    fb1:/ 0# exit
  • Then I ssh'ed into c1bhd to see the error message on mx_stream service properly.
    controls@c1bhd:~ 0$ sudo systemctl daemon-reload
    controls@c1bhd:~ 0$ sudo systemctl restart mx_stream
    controls@c1bhd:~ 0$ sudo systemctl status mx_stream -l
    ● mx_stream.service - Advanced LIGO RTS front end mx stream
       Loaded: loaded (/etc/systemd/system/mx_stream.service; enabled)
       Active: failed (Result: exit-code) since Mon 2021-10-04 17:57:20 UTC; 24s ago
      Process: 11832 ExecStart=/etc/mx_stream_exec (code=exited, status=1/FAILURE)
     Main PID: 11832 (code=exited, status=1/FAILURE)
    
    Oct 04 17:57:20 c1bhd systemd[1]: Starting Advanced LIGO RTS front end mx stream...
    Oct 04 17:57:20 c1bhd systemd[1]: Started Advanced LIGO RTS front end mx stream.
    Oct 04 17:57:20 c1bhd mx_stream_exec[11832]: send len = 263596
    Oct 04 17:57:20 c1bhd mx_stream_exec[11832]: OMX: Failed to find peer index of board 00:00:00:00:00:00 (Peer Not Found in the Table)
    Oct 04 17:57:20 c1bhd mx_stream_exec[11832]: mx_connect failed Nic ID not Found in Peer Table
    Oct 04 17:57:20 c1bhd mx_stream_exec[11832]: c1x06_daq mmapped address is 0x7f516a97a000
    Oct 04 17:57:20 c1bhd mx_stream_exec[11832]: c1bhd_daq mmapped address is 0x7f516697a000
    Oct 04 17:57:20 c1bhd systemd[1]: mx_stream.service: main process exited, code=exited, status=1/FAILURE
    Oct 04 17:57:20 c1bhd systemd[1]: Unit mx_stream.service entered failed state.
    
  • c1sus2 shows the same error. I'm not sure I understand these errors at all, but they seem to have nothing to do with timing issues. Surprise!

As usual, some help would be helpful

  16381   Tue Oct 5 17:58:52 2021   Anchal   Summary   CDS   c1teststand problems summary

The open-mx service is running successfully on fb1 (clone), c1bhd, and c1sus2.

Quote:

I don't know anything about mx/open-mx, but you also need open-mx, don't you?


  16382   Tue Oct 5 18:00:53 2021   Anchal   Summary   CDS   c1teststand time synchronization working now

Today I got a new router that I used to connect the c1teststand, fb1 and chiara. I was able to see internet access in c1teststand and fb1, but not in chiara. I'm not sure why that is the case.

The good news is that the ntp server on fb1 (clone) is working fine now, and both FE computers, c1bhd and c1sus2, are successfully synchronized to the fb1 (clone) ntp server. This resolves any possible timing issues in this DAQ network.

On running the IOP and user models however, I see the same errors as mentioned in 40m/16372. Something to do with:

Oct 06 00:47:56 c1sus2 mx_stream_exec[21796]: OMX: Failed to find peer index of board 00:00:00:00:00:00 (Peer Not Found in the Table)
Oct 06 00:47:56 c1sus2 mx_stream_exec[21796]: mx_connect failed Nic ID not Found in Peer Table
Oct 06 00:47:56 c1sus2 mx_stream_exec[21796]: c1x07_daq mmapped address is 0x7fa4819cc000
Oct 06 00:47:56 c1sus2 mx_stream_exec[21796]: c1su2_daq mmapped address is 0x7fa47d9cc000


Thu Oct 7 17:04:31 2021

I fixed the issue of chiara not getting internet. Now c1teststand, fb1 and chiara all have internet connections. It was an issue with the default gateway, the interface, and finding the DNS. I have found the correct settings now.
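For the record, the settings involved can be checked on any of these machines with standard tools (a sketch; the actual values are not logged here):

ip route                 # shows the default gateway and interface
cat /etc/resolv.conf     # shows the DNS nameserver in use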

  16385   Wed Oct 6 15:39:29 2021   Anchal   Summary   SUS   PRM and BS Angular Actuation transfer function magnitude measurements

Note that your tests were done with the output matrix for BS and PRM in the compensated state as done in 40m/16374. The changes made there were supposed to clear out any coil actuation imbalance in the angular degrees of freedom.

  16391   Mon Oct 11 17:31:25 2021   Anchal   Summary   CDS   Fixed mounting of mx devices in fb. daqd_dc is running now.
 
 

However, lspci | grep 'Myri' shows the following output on both computers:

controls@fb1:/dev 0$ lspci | grep 'Myri'
02:00.0 Ethernet controller: MYRICOM Inc. Myri-10G Dual-Protocol NIC (rev 01)

This means that the computer detects the card in the PCIe slot.

 

I tried to add this to /etc/rc.local to run this script at every boot, but it did not work. So for now, I'll just do this step manually every time. Once the devices are loaded, we get:

controls@fb1:/etc 0$ ls /dev/*mx*
/dev/mx0  /dev/mx4  /dev/mxctl   /dev/mxp2  /dev/mxp6         /dev/ptmx
/dev/mx1  /dev/mx5  /dev/mxctlp  /dev/mxp3  /dev/mxp7
/dev/mx2  /dev/mx6  /dev/mxp0    /dev/mxp4  /dev/open-mx
/dev/mx3  /dev/mx7  /dev/mxp1    /dev/mxp5  /dev/open-mx-raw

Then, restarting all daqd_ processes, I found that daqd_dc was running successfully now. Here is the status:

controls@fb1:/etc 0$ sudo systemctl status daqd_* -l
● daqd_dc.service - Advanced LIGO RTS daqd data concentrator
   Loaded: loaded (/etc/systemd/system/daqd_dc.service; enabled)
   Active: active (running) since Mon 2021-10-11 17:48:00 PDT; 23min ago
 Main PID: 2308 (daqd_dc_mx)
   CGroup: /daqd.slice/daqd_dc.service
           ├─2308 /usr/bin/daqd_dc_mx -c /opt/rtcds/caltech/c1/target/daqd/daqdrc.dc
           └─2370 caRepeater

Oct 11 17:48:07 fb1 daqd_dc_mx[2308]: mx receiver 006 thread priority error Operation not permitted[Mon Oct 11 17:48:06 2021]
Oct 11 17:48:07 fb1 daqd_dc_mx[2308]: mx receiver 005 thread put on CPU 0
Oct 11 17:48:07 fb1 daqd_dc_mx[2308]: [Mon Oct 11 17:48:06 2021] [Mon Oct 11 17:48:06 2021] mx receiver 006 thread put on CPU 0
Oct 11 17:48:07 fb1 daqd_dc_mx[2308]: mx receiver 007 thread put on CPU 0
Oct 11 17:48:07 fb1 daqd_dc_mx[2308]: [Mon Oct 11 17:48:06 2021] mx receiver 003 thread - label dqmx003 pid=2362
Oct 11 17:48:07 fb1 daqd_dc_mx[2308]: [Mon Oct 11 17:48:06 2021] mx receiver 003 thread priority error Operation not permitted
Oct 11 17:48:07 fb1 daqd_dc_mx[2308]: [Mon Oct 11 17:48:06 2021] mx receiver 003 thread put on CPU 0
Oct 11 17:48:07 fb1 daqd_dc_mx[2308]: warning:regcache incompatible with malloc
Oct 11 17:48:07 fb1 daqd_dc_mx[2308]: [Mon Oct 11 17:48:06 2021] EDCU has 410 channels configured; first=0
Oct 11 17:49:06 fb1 daqd_dc_mx[2308]: [Mon Oct 11 17:49:06 2021] ->4: clear crc

● daqd_fw.service - Advanced LIGO RTS daqd frame writer
   Loaded: loaded (/etc/systemd/system/daqd_fw.service; enabled)
   Active: active (running) since Mon 2021-10-11 17:48:01 PDT; 23min ago
 Main PID: 2318 (daqd_fw)
   CGroup: /daqd.slice/daqd_fw.service
           └─2318 /usr/bin/daqd_fw -c /opt/rtcds/caltech/c1/target/daqd/daqdrc.fw

Oct 11 17:48:09 fb1 daqd_fw[2318]: [Mon Oct 11 17:48:09 2021] [Mon Oct 11 17:48:09 2021] Producer thread - label dqproddbg pid=2440
Oct 11 17:48:09 fb1 daqd_fw[2318]: Producer crc thread priority error Operation not permitted
Oct 11 17:48:09 fb1 daqd_fw[2318]: [Mon Oct 11 17:48:09 2021] [Mon Oct 11 17:48:09 2021] Producer crc thread put on CPU 0
Oct 11 17:48:09 fb1 daqd_fw[2318]: Producer thread priority error Operation not permitted
Oct 11 17:48:09 fb1 daqd_fw[2318]: [Mon Oct 11 17:48:09 2021] Producer thread put on CPU 0
Oct 11 17:48:09 fb1 daqd_fw[2318]: [Mon Oct 11 17:48:09 2021] Producer thread - label dqprod pid=2434
Oct 11 17:48:09 fb1 daqd_fw[2318]: [Mon Oct 11 17:48:09 2021] Producer thread priority error Operation not permitted
Oct 11 17:48:09 fb1 daqd_fw[2318]: [Mon Oct 11 17:48:09 2021] Producer thread put on CPU 0
Oct 11 17:48:10 fb1 daqd_fw[2318]: [Mon Oct 11 17:48:10 2021] Minute trender made GPS time correction; gps=1318034906; gps%60=26
Oct 11 17:49:09 fb1 daqd_fw[2318]: [Mon Oct 11 17:49:09 2021] ->3: clear crc

● daqd_rcv.service - Advanced LIGO RTS daqd testpoint receiver
   Loaded: loaded (/etc/systemd/system/daqd_rcv.service; enabled)
   Active: active (running) since Mon 2021-10-11 17:48:00 PDT; 23min ago
 Main PID: 2311 (daqd_rcv)
   CGroup: /daqd.slice/daqd_rcv.service
           └─2311 /usr/bin/daqd_rcv -c /opt/rtcds/caltech/c1/target/daqd/daqdrc.rcv

Oct 11 17:50:21 fb1 daqd_rcv[2311]: Creating C1:DAQ-NDS0_C1X07_CRC_SUM
Oct 11 17:50:21 fb1 daqd_rcv[2311]: Creating C1:DAQ-NDS0_C1BHD_STATUS
Oct 11 17:50:21 fb1 daqd_rcv[2311]: Creating C1:DAQ-NDS0_C1BHD_CRC_CPS
Oct 11 17:50:21 fb1 daqd_rcv[2311]: Creating C1:DAQ-NDS0_C1BHD_CRC_SUM
Oct 11 17:50:21 fb1 daqd_rcv[2311]: Creating C1:DAQ-NDS0_C1SU2_STATUS
Oct 11 17:50:21 fb1 daqd_rcv[2311]: Creating C1:DAQ-NDS0_C1SU2_CRC_CPS
Oct 11 17:50:21 fb1 daqd_rcv[2311]: Creating C1:DAQ-NDS0_C1SU2_CRC_SUM
Oct 11 17:50:21 fb1 daqd_rcv[2311]: Creating C1:DAQ-NDS0_C1OM[Mon Oct 11 17:50:21 2021] Epics server started
Oct 11 17:50:24 fb1 daqd_rcv[2311]: [Mon Oct 11 17:50:24 2021] Minute trender made GPS time correction; gps=1318035040; gps%120=40
Oct 11 17:51:21 fb1 daqd_rcv[2311]: [Mon Oct 11 17:51:21 2021] ->3: clear crc

Now, even before starting the FE models, I see the DC status as 0x2bad on the CDS screens of the IOP and user models. The mx_stream service remains in a failed state on the FE machines and remains the same even after restarting the service.

controls@c1sus2:~ 0$ sudo systemctl status mx_stream -l
● mx_stream.service - Advanced LIGO RTS front end mx stream
   Loaded: loaded (/etc/systemd/system/mx_stream.service; enabled)
   Active: failed (Result: exit-code) since Mon 2021-10-11 17:50:26 PDT; 15min ago
  Process: 382 ExecStart=/etc/mx_stream_exec (code=exited, status=1/FAILURE)
 Main PID: 382 (code=exited, status=1/FAILURE)

Oct 11 17:50:25 c1sus2 systemd[1]: Starting Advanced LIGO RTS front end mx stream...
Oct 11 17:50:25 c1sus2 systemd[1]: Started Advanced LIGO RTS front end mx stream.
Oct 11 17:50:25 c1sus2 mx_stream_exec[382]: Failed to open endpoint Not initialized
Oct 11 17:50:26 c1sus2 systemd[1]: mx_stream.service: main process exited, code=exited, status=1/FAILURE
Oct 11 17:50:26 c1sus2 systemd[1]: Unit mx_stream.service entered failed state.

But if I restart the mx_stream service before starting the rtcds models, the mx_stream service starts successfully:

controls@c1sus2:~ 0$ sudo systemctl restart mx_stream
controls@c1sus2:~ 0$ sudo systemctl status mx_stream -l
● mx_stream.service - Advanced LIGO RTS front end mx stream
   Loaded: loaded (/etc/systemd/system/mx_stream.service; enabled)
   Active: active (running) since Mon 2021-10-11 18:14:13 PDT; 25s ago
 Main PID: 1337 (mx_stream)
   CGroup: /system.slice/mx_stream.service
           └─1337 /usr/bin/mx_stream -e 0 -r 0 -w 0 -W 0 -s c1x07 c1su2 -d fb1:0

Oct 11 18:14:13 c1sus2 systemd[1]: Starting Advanced LIGO RTS front end mx stream...
Oct 11 18:14:13 c1sus2 systemd[1]: Started Advanced LIGO RTS front end mx stream.
Oct 11 18:14:13 c1sus2 mx_stream_exec[1337]: send len = 263596
Oct 11 18:14:13 c1sus2 mx_stream_exec[1337]: Connection Made

However, the DC status on the CDS screens still shows 0x2bad. As soon as I start the rtcds model c1x07 (the IOP model for c1sus2), the mx_stream service fails:

controls@c1sus2:~ 0$ sudo systemctl status mx_stream -l
● mx_stream.service - Advanced LIGO RTS front end mx stream
   Loaded: loaded (/etc/systemd/system/mx_stream.service; enabled)
   Active: failed (Result: exit-code) since Mon 2021-10-11 18:18:03 PDT; 27s ago
  Process: 1337 ExecStart=/etc/mx_stream_exec (code=exited, status=1/FAILURE)
 Main PID: 1337 (code=exited, status=1/FAILURE)

Oct 11 18:14:13 c1sus2 systemd[1]: Starting Advanced LIGO RTS front end mx stream...
Oct 11 18:14:13 c1sus2 systemd[1]: Started Advanced LIGO RTS front end mx stream.
Oct 11 18:14:13 c1sus2 mx_stream_exec[1337]: send len = 263596
Oct 11 18:14:13 c1sus2 mx_stream_exec[1337]: Connection Made
Oct 11 18:18:03 c1sus2 mx_stream_exec[1337]: isendxxx failed with status Remote Endpoint Unreachable
Oct 11 18:18:03 c1sus2 mx_stream_exec[1337]: disconnected from the sender
Oct 11 18:18:03 c1sus2 mx_stream_exec[1337]: c1x07_daq mmapped address is 0x7fe3620c3000
Oct 11 18:18:03 c1sus2 mx_stream_exec[1337]: c1su2_daq mmapped address is 0x7fe35e0c3000
Oct 11 18:18:03 c1sus2 systemd[1]: mx_stream.service: main process exited, code=exited, status=1/FAILURE
Oct 11 18:18:03 c1sus2 systemd[1]: Unit mx_stream.service entered failed state.

This shows that starting the rtcds model causes mx_stream to fail, possibly due to an inability to find the endpoint on fb1. I've again reached the edge of my knowledge here. Maybe the fiber optic connection between fb and the network switch that connects to the FEs is bad, or the connection between the switch and the FEs is bad.

But we are just one step away from making this work.

 

 

  16392   Mon Oct 11 18:29:35 2021   Anchal   Summary   CDS   Moving forward?

The teststand has some non-trivial issue with the Myrinet card (either software or hardware) which even the experts say they don't remember how to fix. CDS with mx was in use more than a decade ago, so it is hard to find support for issues with it now, and it will be the same in the future. We need to wrap up this test procedure one way or another now, so I have the following two options moving forward:


Direct integration with main CDS and testing

  • We can just connect the c1sus2 and c1bhd FE computers to martian network directly.
  • We'll have to connect c1sus2 and c1bhd to the optical fiber subnetwork as well.
  • On booting, they would get booted through the existing fb1 boot server, which seems to work fine for the other 5 FE machines.
  • We can update the DHCP in chiara and reload it so that we can ssh into these FEs with host names.
  • Hopefully, presence of these computers won't tank the existing CDS even if they  themselves have any issues, as they have no shared memory with other models.
  • If this works, we can do the loop back testing of I/O chassis using the main DAQ network and move on with our upgrade.
  • If this does not work and causes any harm to the existing CDS network, we can disconnect these computers and go back to the existing CDS. Recently, our confidence in rebooting the CDS has increased with its robust performance, as some legacy issues were fixed.
  • We'll, however, continue to use a CDS which is no longer supported by the current LIGO CDS group.

Testing CDS upgrade on teststand

  • From what I could gather, most of the hardware in I/O chassis that I could find, is still used in CDS of LLO and LHO, with their recent tests and documents using the same cards and PCBs.
  • There might be some difference in the DAQ network setup that I need to confirm.
  • I've summarised the current c1teststand hardware on this wiki page.
  • If the latest CDS is backwards compatible with our hardware, we can test the new CDS in the c1teststand setup without disrupting our main CDS. We'll have ample help and support for this upgrade from the current LIGO CDS group.
  • We can do the loop back testing of the I/O chassis as well.
  • If the upgrade is successful in the teststand without many hardware changes, we can upgrade the main CDS of the 40m as well, as it has the same hardware as our teststand.
  • The biggest plus point would be that our CDS will be up-to-date and we will be able to get help from the CDS group if any trouble occurs.

So these are the two options we have. We should discuss which one to take in the mattermost chat or in upcoming meeting.

  16395   Tue Oct 12 17:10:56 2021   Anchal   Summary   CDS   Some more information

Chris pointed out some information displaying scripts, that show if the DAQ network is working or not. I thought it would be nice to log this information here as well.

controls@fb1:/opt/mx/bin 0$ ./mx_info
MX Version: 1.2.16
MX Build: controls@fb1:/opt/src/mx-1.2.16 Mon Aug 14 11:06:09 PDT 2017
1 Myrinet board installed.
The MX driver is configured to support a maximum of:
    8 endpoints per NIC, 1024 NICs on the network, 32 NICs per host
===================================================================
Instance #0:  364.4 MHz LANai, PCI-E x8, 2 MB SRAM, on NUMA node 0
    Status:        Running, P0: Link Up
    Network:    Ethernet 10G

    MAC Address:    00:60:dd:45:37:86
    Product code:    10G-PCIE-8B-S
    Part number:    09-04228
    Serial number:    423340
    Mapper:        00:60:dd:45:37:86, version = 0x00000000, configured
    Mapped hosts:    3

                                                        ROUTE COUNT
INDEX    MAC ADDRESS     HOST NAME                        P0
-----    -----------     ---------                        ---
   0) 00:60:dd:45:37:86 fb1:0                             1,0
   1) 00:25:90:05:ab:47 c1bhd:0                           1,0
   2) 00:25:90:06:69:c3 c1sus2:0                          1,0

 

controls@c1bhd:~ 1$ /opt/open-mx/bin/omx_info
Open-MX version 1.5.4
 build: root@fb1:/opt/src/open-mx-1.5.4 Tue Aug 15 23:48:03 UTC 2017

Found 1 boards (32 max) supporting 32 endpoints each:
 c1bhd:0 (board #0 name eth1 addr 00:25:90:05:ab:47)
   managed by driver 'igb'

Peer table is ready, mapper is 00:60:dd:45:37:86
================================================
  0) 00:25:90:05:ab:47 c1bhd:0
  1) 00:60:dd:45:37:86 fb1:0
  2) 00:25:90:06:69:c3 c1sus2:0

 

controls@c1sus2:~ 0$ /opt/open-mx/bin/omx_info
Open-MX version 1.5.4
 build: root@fb1:/opt/src/open-mx-1.5.4 Tue Aug 15 23:48:03 UTC 2017

Found 1 boards (32 max) supporting 32 endpoints each:
 c1sus2:0 (board #0 name eth1 addr 00:25:90:06:69:c3)
   managed by driver 'igb'

Peer table is ready, mapper is 00:60:dd:45:37:86
================================================
  0) 00:25:90:06:69:c3 c1sus2:0
  1) 00:60:dd:45:37:86 fb1:0
  2) 00:25:90:05:ab:47 c1bhd:0

These outputs prove that the framebuilder and the FEs are able to see each other on the DAQ network.


Further, the error that we see when IOP model is started which crashes the mx_stream service on the FE machines (see 40m/16391) :

isendxxx failed with status Remote Endpoint Unreachable

This was seen earlier when Jamie was troubleshooting the current fb1 on the martian network in 40m/11655 in Oct 2015. Unfortunately, I could not find what Jamie did over the following year to fix this issue.

  16396   Tue Oct 12 17:20:12 2021   Anchal   Summary   CDS   Connected c1sus2 to martian network

I connected c1sus2 to the martian network by splitting the c1sim connection with a 5-way switch. I also ran another ethernet cable from the second port of c1sus2 to the DAQ network switch on 1X7.

Then I logged into chiara and added the following in chiara:/etc/dhcp/dhcpd.conf :

host c1sus2 {
  hardware ethernet 00:25:90:06:69:C2;
  fixed-address 192.168.113.92;
}

And following line in chiara:/var/lib/bind/martian.hosts :

c1sus2          A    192.168.113.92

Note that entries for c1bhd are already added in these files, probably during some earlier testing by Gautam or Jon. Then I ran the following to restart the dhcp server and the nameserver:

~> sudo service bind9 reload
[sudo] password for controls:
 * Reloading domain name service... bind9                                                 [ OK ]
~> sudo service isc-dhcp-server restart
isc-dhcp-server stop/waiting
isc-dhcp-server start/running, process 25764
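A quick way to confirm that the new DHCP and DNS entries took effect, from any machine on the martian network (a sketch; assumes the martian search domain is set as on the other workstations):

host c1sus2.martian    # should return 192.168.113.92, served by chiara
ping -c 3 c1sus2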

Now, as I switched on c1sus2 from the front panel, it booted over the network from fb1 like the other FE machines, and I was able to log in to it by first logging into fb1 and then sshing to c1sus2.

Next, I copied the simulink models and the medm screens of c1x06, c1x07, c1bhd, and c1sus2 from the paths mentioned on this wiki page. I also copied the medm screens from chiara(clone):/opt/rtcds/caltech/c1/medm to the martian network chiara in the appropriate places. I have placed the file /opt/rtcds/caltech/c1/medm/teststand_sitemap.adl which can be used to open the sitemap for the c1bhd and c1sus2 IOP and user models.

Then I logged into c1sus2 (via fb1) and did make, install, start procedure:

controls@c1sus2:~ 0$ rtcds make c1x07
buildd: /opt/rtcds/caltech/c1/rtbuild/release
### building c1x07...
Cleaning c1x07...
Done
Parsing the model c1x07...
Done
Building EPICS sequencers...
Done
Building front-end Linux kernel module c1x07...
Done
RCG source code directory:
/opt/rtcds/rtscore/branches/branch-3.4
The following files were used for this build:
/opt/rtcds/userapps/release/cds/c1/models/c1x07.mdl

Successfully compiled c1x07
***********************************************
Compile Warnings, found in c1x07_warnings.log:
***********************************************
***********************************************
controls@c1sus2:~ 0$ rtcds install c1x07
buildd: /opt/rtcds/caltech/c1/rtbuild/release
### installing c1x07...
Installing system=c1x07 site=caltech ifo=C1,c1
Installing /opt/rtcds/caltech/c1/chans/C1X07.txt
Installing /opt/rtcds/caltech/c1/target/c1x07/c1x07epics
Installing /opt/rtcds/caltech/c1/target/c1x07
Installing start and stop scripts
/opt/rtcds/caltech/c1/scripts/killc1x07
/opt/rtcds/caltech/c1/scripts/startc1x07
sudo: unable to resolve host c1sus2
Performing install-daq
Updating testpoint.par config file
/opt/rtcds/caltech/c1/target/gds/param/testpoint.par
/opt/rtcds/rtscore/branches/branch-3.4/src/epics/util/updateTestpointPar.pl -par_file=/opt/rtcds/caltech/c1/target/gds/param/archive/testpoint_211012_174226.par -gds_node=24 -site_letter=C -system=c1x07 -host=c1sus2
Installing GDS node 24 configuration file
/opt/rtcds/caltech/c1/target/gds/param/tpchn_c1x07.par
Installing auto-generated DAQ configuration file
/opt/rtcds/caltech/c1/chans/daq/C1X07.ini
Installing Epics MEDM screens
Running post-build script

safe.snap exists 
controls@c1sus2:~ 0$ rtcds start c1x07
Cannot start/stop model 'c1x07' on host c1sus2.
controls@c1sus2:~ 4$ rtcds list

controls@c1sus2:~ 0$ 

One can see that even after making and installing, the model c1x07 is not listed among the available models in rtcds list. The same is the case for c1su2 as well. So I could not proceed with testing.

The good news is that nothing I did affected the current CDS functioning. So we can probably do this testing safely from the main CDS setup.

  16398   Wed Oct 13 11:25:14 2021   Anchal   Summary   CDS   Ran c1sus2 models in martian CDS. All good!

Three extra steps (when adding new models, new FE):

  • Chris pointed out that the sudo command in c1sus2 is giving error
    sudo: unable to resolve host c1sus2
    
    This error comes up when the computer cannot figure out its own hostname. Since the FEs are network booted off fb1, we need to update /etc/hosts in /diskless/root every time we add a new FE.
    controls@fb1:~ 0$ sudo chroot /diskless/root
    fb1:/ 0# sudo nano /etc/hosts
    fb1:/ 0# exit
    
    I added the following line in /etc/hosts file above:
    192.168.113.92  c1sus2 c1sus2.martian
    
    This resolved the issue of sudo giving error. Now, the rtcds make and install steps had no errors mentioned in their outputs.
  • Another thing that needs to be done, as Koji pointed out, is to add the host and models in /etc/rtsystab in /diskless/root of fb:
    controls@fb1:~ 0$ sudo chroot /diskless/root
    fb1:/ 0# sudo nano /etc/rtsystab
    fb1:/ 0# exit
    
    I added the following lines in /etc/rtsystab file above:
    c1sus2   c1x07  c1su2
    
    This told rtcds what models would be available on c1sus2. Now rtcds list is displaying the right models:
    controls@c1sus2:~ 0$ rtcds list
    c1x07
    c1su2
  • The above steps are still not sufficient for the daqd_ processes to know about the new models. This part is supposed to happen automatically, but apparently it does not happen in our CDS. So every time there is a new model, we need to edit the file /opt/rtcds/caltech/c1/target/daqd/master and add the following lines to it:
    # Fast Data Channel lists
    # c1sus2
    /opt/rtcds/caltech/c1/chans/daq/C1X07.ini
    /opt/rtcds/caltech/c1/chans/daq/C1SU2.ini
    
    # test point lists
    # c1sus2
    /opt/rtcds/caltech/c1/target/gds/param/tpchn_c1x07.par
    /opt/rtcds/caltech/c1/target/gds/param/tpchn_c1su2.par
    
    I needed to restart the daqd_ processes in  fb1 for them to notice these changes:
    controls@fb1:~ 0$ sudo systemctl restart daqd_*
    
    This finally lit up the DC status channels in C1X07_GDS_TP.adl and C1SU2_GDS_TP.adl. However, the channels C1:DAQ-DC0_C1X07_STATUS and C1:DAQ-DC0_C1SU2_STATUS both had the value 0x2bad. This persisted on restarting the models. I then simply restarted mx_stream on c1sus2 and boom, it worked! (See the attached all-green screen, never seen before!) A quick way to read these status channels is sketched right after this list.
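As a check of the DC status channels mentioned above, one can read them from any workstation with the standard EPICS command line tools (a sketch; I believe they read 0x0 once everything is green):

caget C1:DAQ-DC0_C1X07_STATUS C1:DAQ-DC0_C1SU2_STATUS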

So now Ian can work on testing the I/O chassis, and we would be good to move the c1sus2 FE and I/O chassis to 1Y3 after that. I've also made the following extra changes:

  • Updated CDS_FE_STATUS medm screen to show the new c1sus2 host.
  • Updated the global diag reset script to act on c1x07 and c1su2 as well.
  • Updated mxstream restart script to act on c1sus2 as well.
Attachment 1: CDS_screens_running.png
  16407   Fri Oct 15 16:46:27 2021   Anchal   Summary   Optical Levers   Vent Prep

I centered all the optical levers on ITMX, ITMY, ETMX, ETMY, and BS at the positions where the single-arm locks were best aligned. Unfortunately, we are seeing TRX at 0.78 and TRY at 0.76 at the most aligned positions. It seems less power is getting out of the PMC since last month (Attachment 1).

Then, I tried to lock PRMI with carrier with no luck. But I was able to see flashing of up to 4000 counts in POP_DC. At this position, I centered the PRM optical lever too (Attachment 2).

Attachment 1: Screen_Shot_2021-10-15_at_4.34.45_PM.png
Attachment 2: Screen_Shot_2021-10-15_at_4.45.31_PM.png
Attachment 3: Screen_Shot_2021-10-15_at_4.34.45_PM.png
Attachment 4: Screen_Shot_2021-10-15_at_4.34.45_PM.png
  16416   Wed Oct 20 11:16:21 2021   Anchal   Summary   PEM   Particle counter setup near BS Chamber

I have placed a GT321 particle counter on top of the MC1/MC3 chamber next to the BS chamber. The serial cable is connected to c1psl computer on 1X2 using 2 usb extenders (blue in color) over the PSL enclosure and over the 1X1 rack.

The main serial communication script for this counter by Radhika is present in 40m/labutils/serial_com/gt321.py.

A 40m-specific application script is present in the new git repo for 40m scripts, at 40m/scripts/PEM/particleCounter.py. Our plan is to slowly migrate the legacy scripts directory to this repo over time. I've cloned this repo in the nfs-shared directory at /opt/rtcds/caltech/c1/Git/40m/scripts, which makes the scripts available on all computers and keeps them up to date everywhere.

The particle counter script is running on c1psl through a systemd service, using the service file 40m/scripts/PEM/particleCounter.service. Locally on c1psl, /etc/systemd/system/particleCounter.service is symbolically linked to the file in the repo.

Following channels for particle counter needed to be created as I could not find any existing particle counter channels.

[C1:PEM-BS_PAR_CTS_0p3_UM]
[C1:PEM-BS_PAR_CTS_0p5_UM]
[C1:PEM-BS_PAR_CTS_1_UM]
[C1:PEM-BS_PAR_CTS_2_UM]
[C1:PEM-BS_PAR_CTS_5_UM]

These are created from the 40m/softChansModbus/particleCountChans.db database file. The computer optimus is running a docker container that serves as the EPICS server for such soft channels. To add or edit channels, one just needs to add a new database file or edit the database files in this repo, and then on optimus do:

controls@optimus|~> sudo docker container restart softchansmodbus_SoftChans_1
softchansmodbus_SoftChans_1

that's it.
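Once the container is up, the soft channels can be read from any workstation that has the EPICS command line tools (a sketch):

caget C1:PEM-BS_PAR_CTS_0p3_UM C1:PEM-BS_PAR_CTS_0p5_UM C1:PEM-BS_PAR_CTS_1_UM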

I've added the above channels to /opt/rtcds/caltech/c1/chans/daq/C0EDCU.ini to record them in the framebuilder. Starting from 11:20 am Oct 20, 2021 PDT, the data on these channels is from the BS chamber area. Currently the script is running continuously, which means 0.3 um particles are sampled every minute, 0.5 um twice in 5 minutes, and 1 um, 2 um, and 5 um particles are sampled once in 5 minutes. We can reduce the sampling rate if this seems unnecessary to us.

Attachment 1: PXL_20211020_183728734.jpg
  16417   Wed Oct 20 11:48:27 2021   Anchal   Summary   CDS   Power supply configured correctly.

This was horrible! That's my bad; I should have checked the configuration before assuming it was right.

I fixed the power supply configuration. Now the strip has two rails of +/- 18V and the GND is referenced to power supply earth GND.

Ian should redo the tests.

  16420   Thu Oct 21 11:41:31 2021   Anchal   Summary   PEM   Particle counter setup near BS Chamber

The particle count channel names were changed yesterday to follow the naming conventions used at the sites. Following are the new names:

C1:PEM-BS_DUST_300NM
C1:PEM-BS_DUST_500NM
C1:PEM-BS_DUST_1000NM
C1:PEM-BS_DUST_2000NM
C1:PEM-BS_DUST_5000NM
 

The legacy count channels are kept alive with C1:PEM-count_full copying C1:PEM-BS_DUST_1000NM channel and C1:PEM-count_half copying C1:PEM-BS_DUST_500NM channel.

Attachment 1 is the particle counter trend since 8:30 am this morning when the HVAC work started. It seems like there was some peak particle presence around 11 am. The particle counter even counted 8 counts of particles of size above 5 um!

 

Attachment 1: ParticleCountData20211021.pdf
  16424   Mon Oct 25 13:23:45 2021   Anchal   Summary   BHD   Before photos of BSC

[Yehonathan, Anchal]

On Thursday, Oct 21 2021, Yehonathan and I opened the door to the BSC and took some photos. We set up the HEPA stand next to the door with anti-static curtains covering all sides. We spent about 15 minutes trying to understand the current layout and taking photos and a video. Any suggestions on improving our technique and approach would be helpful.

Links to photos:

https://photos.app.goo.gl/fkkdu9qAvH1g5boq6

  16425   Mon Oct 25 17:37:42 2021   Anchal   Summary   BHD   Part I of BHR upgrade - Removed optics from BSC

[Anchal, Paco, Ian]


Clean room etiquettes

  • Two people in coverall suits, head covers, masks and AccuTech ultra clean gloves.
  • One person in just booties to interact with outside "dirty" world.
  • Anything that goes into the chamber is first cleaned outside with a clean cloth and IPA, then cleaned by the "clean" folks. We followed this for allen keys, the camera, and the beam finder card.
  • Once the chamber cover has been removed, cover the annulus with the donut. We forgot to do this :(

Optics removal and changes

We removed the following optics from the BSC table and stored them in X-end flowbench with fan on. See attachment 1 and 2.

  1. IPPOS SM2
  2. GRX SM2
  3. PRM OL1
  4. PRMOL4
  5. IPPOS SM3
  6. IPANG SM1
  7. PRM OL2
  8. Unidentified optic inbetween IPPOS45P and IPPOS SM3
  9. Beam block behind PR3
  10. Beam block behind GR PBS
  11. GR PBS
  12. GRPERI1L (Periscope)
  13. PRMOL3
  14. IPPOS45P
  15. Cylindrical counterweight on North-west end of table.
  16. Cheap rectangular mirror on South west end of table (probably used for some camera, but not in use anymore)
  17. IPANGSM2

We also changed the direction of the clamp of MMT1 to move it away from the center of the table (where PRM will be placed).

We screwed in the earthquake stops on PRM and BS from front face and top.

We unscrewed the cable post for the BS and PRM oplevs, moved it in between SR3 and BS, and screwed it down lightly.

We moved the PRM, turned it anti-clockwise 90 degrees and brought it in between TT2 and BS. Now there is a clear line of sight between TT2 and PR2 on ITMY table.


Some next steps:

  • We align the input beam to TT2 by opening the "Injection Chamber" (formerly known as OMC chamber). While doing so, we'll clear unwanted optics from this table as well.
  • We open ITMX chamber, clear some POP optics. If SOS are ready, we would replace PR2 with SOS and put it in a new position.
  • Then we'll replace PR3 with an SOS and align the beam to BS.

This is the plan for the next few days of work. We need at least one SOS ready by Thursday.


Photos after today's work: https://photos.app.goo.gl/EE7Mvhw5CjgZrQpG6

Attachment 1: rn_image_picker_lib_temp_44cb790a-c3b4-42aa-8907-2f9787a02acd.jpg
Attachment 2: rn_image_picker_lib_temp_0fd8f4fd-64ae-4ccd-8422-cfe929d4eeee.jpg
  16431   Wed Oct 27 16:27:16 2021   Anchal   Summary   BHD   Part II of BHR upgrade - Prep

[Anchal, Paco, Ian]

Before we could start working on Part II, which is to relocate TT2 to its new location, we had to clear space in front of the injection chamber door and clean the floor, which was very dusty. This required us to disconnect everything we safely could from the OMC North short electronics rack, remove 10-15 BNC cables and 4-5 power cords, and relocate some fiber optic cables. We didn't have caps for the fiber optic cables handy, so we did not remove them from the rack-mounted unit and just turned it away. At the end, we mopped the floor and dried it with a dry cloth. Before and after photos are in the attachments.

 

Attachment 1: OMCNorthBefore.jpeg
Attachment 2: OMCNorthAfter.jpeg
  16432   Wed Oct 27 16:31:35 2021   Anchal   Summary   BHD   Part III of BHR upgrade - Removal of PR2 Small Suspension

I went inside the ITMX chamber to read off specs from the PR2 edge. This was required to confirm our calculations of LO power for BHR later. The numbers that I could read from the edge were kind of meaningless: "0.5 088 or 2.0 088". To make this opening of the chamber more worthwhile, we decided to remove the PR2 suspension unit so that the optic can be removed and installed on an SOS in the cleanroom. We covered the optic in clean aluminum foil inside the chamber, then placed it on another aluminum foil to cover it completely. Then I traveled slowly to the C&B room, where I placed it on a flow bench.


Later on, we decided to use a dummy fixed mount mirror for PR2 initially with the same substrate thickness, so that we get enough LO power in transmission for alignment. In the very end, we'll swap that with the PR2 mounted on an SOS unit.

  16433   Wed Oct 27 16:38:02 2021   Anchal   Summary   BHD   Part II of BHR upgrade - Relocation of TT2 and MMT1/2 alignment

[Anchal, Paco]

We opened BSC and Injection Chamber doors. We removed two stacked counterweights from near the center of the BS table, from behind TT2 and placed them in the Xend flow bench. Then we unscrewed TT2 and relocated it to the new BHR layout position. This provided us with the target for the alignment of MMT1 and MMT2 mirrors.

While aligning MMT1 and MMT2, we realized that the BHR layout underestimated the clearance of the beam from MMT2 to TT2, from the TT1 suspension unit. The TT1 suspension stage was clipping our beam going to TT2. To rectify this, we decided to move the MMT2 mirror mount about a cm South and retry. We were able to align the beam to the TT2 optic, but it is a bit off-center. The reflection of TT2 now is going in the general direction of the ITMX chamber. We stopped our work here as fatigue was setting in. Following are some thoughts and future directions:

  • We realized that the output beam from the mode cleaner moves a lot (by more than a cm at MMT2) between different locks. Maybe that's just because of our presence. But we wonder how much clearance all beams must have from MC3 to TT2.
  • Currently, we think the Faraday Isolator might be less than 2 cm away from the beam between MMT1 and MMT2 and the TT1 suspension is less than 2 cm away from MMT2 and TT2.
  • Maybe we can fix these by simply changing the alignment on TT1 which was fixed for our purposes.
  • We definitely need to discuss the robustness of our path a bit more before we proceed to the next part of the upgrade.

Thu Oct 28 17:00:52 2021 After Photos: https://photos.app.goo.gl/wNL4dxPyEgYTKQFG9

  16438   Thu Oct 28 17:01:54 2021   Anchal   Summary   BHD   Part III of BHR upgrade - Adding temp fixed flat mirror for PR2

[Anchal, Paco, Ian]

  • We added a Y1-2037-0 mirror (former IPPOS SM2 mirror) on a fixed mount in the position of where PR2 is supposed to be in new BHR layout.
  • After turning out all lights in the lab, we were able to see a transmitted beam on our beam finder card.
  • We aligned the mirror so that it reflects the beam off to PR3 clearly and the reflection from PR3 hits the BS in the center.
  • We were able to see clear gaussian beams split by the BS going towards ITMX and ITMY.

Photos: https://photos.app.goo.gl/cKdbtLGa9NtkwqQ68

  16440   Fri Oct 29 14:39:37 2021   Anchal   Summary   BHD   1Y1 cleared. 1Y3 ready for C1SUS2 I/O and FE.

[Anchal, Paco]

We cleared 1Y1 rack today removing the following items. This stuff is sitting on the floor about 2 meters east of 1Y3 (see attachment 1):

  • A VME crate: we disconnected its power cords from the side bus.
  • A NI PXIe-1071 crate with some SMA multiplexer units on it.

We also moved the power relay ethernet strip from the middle of the rack to the bottom of the rack clearing the space marked clear in Koji's schematics. See attachment 2.

There was nothing to clear in 1Y3. It is ready for installing c1sus2 I/O chassis and FE once the testing is complete.

We also removed some orphaned hanging SMA RG-405 cables between 1Y3 and 1Y1.

Attachment 1: RemovedStuff.jpeg
Attachment 2: 1Y1.jpeg
Attachment 3: 1Y3.jpeg
  16450   Fri Nov 5 12:21:16 2021   Anchal   Summary   BHD   Part VI of BHR upgrade - Removal of ITMYC optics

Today I opened the ITMY chamber and removed the following optics and placed them in Xend flow bench (See attachment 1-3 for updated photograph):

  • OM1
  • OM2
  • ITMYOL1
  • ITMYOL2
  • SRMOL1
  • SRMOL2
  • POYM1
  • 3 counterweights one of which was double the height of others.

I also unscrewed SRM and parked it near the Western end of the table where no optical paths would intersect it. Later we will move it in place once the alignment of the rest of the optics has been done.

While doing this work, I found two unnoted things on the table:

  • One mirror mounted on a mount but not on a post was just sitting next to ITMY. I have removed this and placed it on Xend flow bench.
  • One horizontal razor or plate on the South end of table, mounted on what I thought looked like a picomotor. The motor was soldered to wires without any connector in-line, so I could not remove this. This is on the spot of AS4 and will need to be removed later.

Photos: https://photos.app.goo.gl/S5siAYguBM4UnKuf8

Attachment 1: XendFlowBenchLeftEnd.jpg
Attachment 2: XendFlowBenchMiddle.jpg
Attachment 3: XendFlowBenchRightEnd.jpg
  16463   Tue Nov 9 19:02:47 2021   Anchal   Summary   BHD   1Y0 Populated and 1Y1,1Y0 powered

[Anchal, Paco]

Today we populated 4 Sat Amp boxes for LO1, LO2, AS1, and AS4, 2 BO boxes for C1SU2, and 1 Sat Amp Adaptor box at 1Y0, according to the latest rack plan. We also added 2 Sorensen power supplies in 1Y0 at the top slots to power the +/-18V DC strips on both 1Y1 and 1Y0. All wiring has been done for these power connections.

  16474   Wed Nov 17 17:37:53 2021   Anchal   Update   General   Placed Nodus and fb1 on UPS power

Today I placed nodus and fb1 on a UPS battery-backed supply. Now power glitches should not hurt our CDS system.

  16476   Thu Nov 18 15:16:10 2021   Anchal   Update   General   Moved Chiara to 1X7 above nodus powered with same UPS

[Anchal, Paco]

We moved chiara to 1X7 above nodus and powered it with the same UPS from a battery-backed port. The UPS is at 40% load capacity. The nameserver and nfs came back online automatically on boot up.

 

  16479   Mon Nov 22 17:42:19 2021   Anchal   Update   General   Connected Megatron to battery backed ports of another UPS

[Anchal, Paco]

I used the UPS that was providing battery backup for chiara earlier (an APC Back-UPS Pro 1000) to provide battery backup to Megatron. This completes UPS backup for all important computers in the lab. Note that this UPS nominally runs at 36% of its capacity in power delivery, but at start-up Megatron's many fans use up to 90% of the capacity. So we should not use this UPS for any other computer or equipment.

While doing so, we found that PS3 on Megatron was malfunctioning. Its green LED was not lighting up on connecting to power, so we replaced it with the PS3 of the old FB computer from the same rack. This solved the issue.

Another thing we found was that Megatron, on restart, does not get configured with the correct nameserver resolution settings and loses the ability to resolve the names chiara and fb1. This results in the nfs mounts failing, which in turn results in the script services failing. We fixed this by identifying that the NetworkManager of ubuntu was not disabled and would mess up the nameserver settings, which we want to be handled by systemd-resolved instead. We corrected the symbolic link: /etc/resolv.conf -> /run/systemd/resolve/resolv.conf. Then we stopped and disabled the NetworkManager service to keep this persistent on reboot. Following are the steps that did this:

> sudo rm /etc/resolv.conf
> sudo ln -s /run/systemd/resolve/resolv.conf /etc/resolv.conf
> sudo systemctl stop NetworkManager.service
> sudo systemctl disable NetworkManager.service
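A couple of quick checks that the fix stuck (a sketch):

> ls -l /etc/resolv.conf                  # should point to /run/systemd/resolve/resolv.conf
> systemctl is-enabled NetworkManager     # should report 'disabled'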

 

  16480   Tue Nov 23 18:02:05 2021   Anchal   Update   IMC   MC autolocker shifted to python3 script running in docker

I finished copying over the current autolocker bash script functionality into a python script which runs using a simple configuration yaml file. To run this script, one needs to ssh into optimus and :

controls@optimus|~> cd /opt/rtcds/caltech/c1/Git/40m/scripts/MC
controls@optimus|MC> sudo docker-compose up -d
Creating mc_AL_MC_1 ... done

That's it. To check out running docker processes, one can:

controls@optimus|MC> sudo docker ps
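To follow what the autolocker is doing, one can tail the container logs (docker's standard log command, with the container name from above):

controls@optimus|MC> sudo docker logs -f mc_AL_MC_1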

And to shut down this particular script, in the same directory, one can

controls@optimus|MC> sudo docker-compose down
Removing mc_AL_MC_1 ... done

If the docker image needs to be rebuilt in the future, go to the directory where the Dockerfile is present and run:

controls@optimus|MC> sudo docker build -t pyep .

I had to add PyYAML package in the pyepics docker image already present on docker hub, thanks to Andrew.

For now, I have disabled the MCautolocker service on Megatron. To start it back again, one would need to ssh into megatron and do the following:

~> sudo systemctl enable MCautolocker
~> sudo systemctl start MCautolocker

Let's see for a day how this new script does. I've left PSL shutter open and autolocker engaged.

To do: Fix the C1:IFO-STATE epics channel definition so that it takes its bits from separate lock status channels instead of scripts writing the whole word arbitrarily.

  16509   Wed Dec 15 16:11:38 2021   Anchal   Summary   BHD   Part VIII of BHR upgrade - Placed LO1

[Anchal, Yehonathan, Paco]


Today we opened ITMX chamber and removed the following optics and placed them in the Xend flow bench (see attachment 1):

  • POPM1
  • POPM2

Yehonathan brought his first SOS baby next to ITMX chamber. The suspension was carried by hands throughout. He gave me the suspension over the IMC beam tube from where I placed it on the table. I checked through the OSEMs and the face magnets were still on. I could not verify the side magnet but nothing seemed out of place.

I then moved LO1 near its planned place. I had to bolt it at 1 inch North and 0.5 inch West of its planned position because the side OSEM on ITMX is long and protrudes out of the base footprint. Even if it was small, the current layout would make the OSEM pins of the side OSEMs of ITMX and LO1 very near each other. So we can not place LO1 closer to ITMX from current position. This means the layout needs to be redesigned a bit for the modified position of LO1. I believe it will significantly shift and turn the beam from LO1 to LO2, so we might need to change the beam upstream from TT2 onwards. More discussion is required.

Unfortunately, what I thought was clicking photos was just changing modes between video and image mode, so I have no photos from today but only a video that I recorded in the end.


Photos: https://photos.app.goo.gl/23kpCknP3vz7YVrS

 

Attachment 1: signal-2021-12-15-161437.jpeg
  16512   Thu Dec 16 12:21:16 2021   Anchal   Update   BHD   Coil driver test failed for S2100619-v1

Today I found that one of the coil driver boards, S2100619, failed the test on CH2. There appears to be an extra phase lag above 10 kHz and some resonance-like feature at 7 kHz. This is of course very high-frequency stuff and maybe we don't care about these deviations. But it could mean something is off with the channel and could potentially lead to failure in the relevant frequency band in the future. I'll need help to debug this. Please see the attachment for details of the test failure.

Attachment 1: D1100687_S2100619-v1_TF_CH2_Not_Matching.pdf
  16514   Thu Dec 16 15:32:59 2021   Anchal   Update   BHD   Finished Coil driver (odd serial number) units tests

We have completed modifications and testing of the HAM Coil driver D1100687 units with serial numbers listed below. The DCC tree reflects these changes and tests (Run/Acq modes transfer functions).

SERIAL # TEST result
S2100609 PASS
S2100611 PASS
S2100613 PASS
S2100615 PASS
S2100617 PASS
S2100619 FAIL (CH2 phase)
S2100621 PASS
S2100623 PASS
S2100625 PASS
S2100627 PASS
S2100629 PASS
S2100631 PASS
S2100633 Waiting for more components
S2101649** PASS
S2101651** PASS
S2101653** PASS
S2101655** PASS

** A fix had to be done on the DC power supply for these. The units' regulated power boards were not connected to the raw DC power, so the cabling had to be modified accordingly.

Further, Paco fixed the two even serial number units (S2101648, S2101650) that failed the test. The issues were minor soldering mistakes that were easily resolved.

  16517   Thu Dec 16 17:57:17 2021 AnchalUpdateBHDFinished Coil driver (odd serial number) units tests

S2100619 was fixed by Koji and it passed the test after that.

Quote:
SERIAL #  
S2100619 FAIL (CH2 phase)

 

  16525   Sun Dec 19 07:52:51 2021 AnchalUpdateSUSRemaining task for 2021

The I/O chassis reassigns the ADC and DAC indices on every power cycle. When we moved it, the order must have changed from what we had. We were aware of this and had decided to reconnect the I/O chassis to the AA/AI chassis to reflect the correct order. We forgot to do that, but this is not an error; it is expected behavior and can be solved easily.

Quote:

I had the fear that any mistake in the electronics chain could have been the show stopper.

So I quickly checked the signal assignments for the ADC and DAC chains.

I had initial confusion (see below), but it was confirmed that the electronics chains (at least for LO1) are correct.

Note: One 70ft cable is left around the 1Y0 rack

 


There are a few points to be fixed:

- It looks like the ADC/DAC card # assignment has been messed up.

CDS ADC0 -> Cable label ADC1 -> AA A1 -> ...
CDS ADC1 -> Cable label ADC0 -> AA A0 -> ...
CDS DAC0 -> Cable label DAC2 -> AI D2 -> ...
CDS DAC1 -> Cable label DAC0 -> AI D0 -> ...
CDS DAC2 -> Cable label DAC1 -> AI D1 -> ...
(What is going on here... please confirm and correct as they become straight forward)

Once this puzzle was solved I could confirm reasonable connections from the end of the 70 cables to the ADC/DAC.

- We also want to change the ADC card assignment. The face OSEM readings must be assigned to ADC1 and the side OSEM readings to ADC0.
  My system wiring diagram needs to be fixed accordingly too.
  This is because the last channel of the first ADC (ADC0) is not available for us and is used for DuoTone.

 

  16527   Mon Dec 20 14:10:56 2021 AnchalUpdateBHDAll coil drivers ready to be used, modified and tested

Koji found some 68nF caps from Downs and I finished modifying the last remaining coil driver box and tested it.

SERIAL # TEST result
S2100633 PASS

With this, all coil drivers have been modified and tested and are ready to be used. This DCC tree has links to all the coil driver pages which have documentation of modifications and test data.

  16530   Tue Dec 21 16:52:39 2021 AnchalSummaryElectronicsIn-air Sat Amp to Vacuum Flange cables laid for 7 new SOS

[Anchal, Yehonathan, Chub]

Today we laid down fourteen 70 ft long DB25 cables from 1Y1 (6) and 1Y0 (8) to the ITMY chamber (4), BS chamber (6), and ITMX chamber (4). The cables have been connected to their respective satellite amplifiers on the racks; the other ends are connected to the vacuum flange feedthru on ITMX for LO1 and PR2, while the rest have been left near the planned flange positions. LO1 is now ready to be connected to CDS by connecting the in-vacuum cable inside the ITMX chamber to the OSEMs.

  16533   Wed Dec 22 17:40:22 2021 AnchalSummaryCDSc1su2 model updated with SUS damping blocks for 7 SOSs

[Anchal, Koji]

I've updated the c1su2 model today with suspension model blocks for the 7 new SOSs (LO1, LO2, AS1, AS4, SR2, PR2 and PR3). The model is running properly now, but we had some difficulty getting it to run.

Initially, we were getting a 0x2000 error on the c1su2 model CDS screen. The issue was probably the high data transmission rate required for all 7 SOSs in this model. Koji dug up a script /opt/rtcds/caltech/c1/userapps/trunk/cds/c1/scripts/activateDQ.py that has historically been used for updating the data rate on some of the DQ channels in the suspension block. However, this script was not working properly for Koji, so he created a new script at /opt/rtcds/caltech/c1/chans/daq/activateSUS2DQ.py.

[Ed by KA: I could not make this modified script run such that it replaces the input file (i.e. C1SU2.ini). So the output file is named C1SU2.ini.NEW and the original file needs to be replaced manually.]

With this, Koji was able to reduce the acquisition rate of SUSPOS_IN1_DQ, SUSPIT_IN1_DQ, SUSYAW_IN1_DQ, SUSSIDE_IN1_DQ, SENSOR_UL, SENSOR_UR, SENSOR_LL, SENSOR_LR, SENSOR_SIDE, OPLEV_PERROR, OPLEV_YERROR, and OPLEV_SUM to 2048 Sa/s. The script modifies the /opt/rtcds/caltech/c1/chans/daq/C1SU2.ini file, which would get re-written if the c1su2 model is remade and reinstalled. After this modification, the 0x2000 error stopped appearing and the model is running fine.


Should we change the library model part sus_single_control.mdl?

We notice that all our suspension models need to go through this weird Python script that modifies auto-generated .ini files to reduce the data rate. Ideally, there is a simpler solution: simply add the data rate 2048 in the '#DAQ Channels' block in the model library part /cvs/cds/rtcds/userapps/trunk/sus/c1/models/lib/sus_single_control.mdl, which is the root model of all the suspensions. With this change, the .ini files would automatically be written with the correct data rate and there would be no need for the activateDQ script. But we couldn't find out why this simple solution was not implemented in the past, so we want to know if there is more going on here than we know. Changing the library model would obviously change every suspension model, and we don't want a broken CDS system on our hands at the beginning of the holidays, so we'll leave this delicate task for the near future.
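
As a rough illustration of what the activateDQ-type scripts do, here is a minimal Python sketch that rewrites the data rate of selected DQ channels in the daqd .ini file. It assumes the usual layout of these files ([channel] sections containing a "datarate=" key); it is not the actual activateDQ.py or activateSUS2DQ.py.

# Sketch: reduce the acquisition rate of selected DQ channels in C1SU2.ini.
# Assumes the daqd .ini layout of [channel] sections with a "datarate=" key.
import re

INI_FILE = "/opt/rtcds/caltech/c1/chans/daq/C1SU2.ini"
NEW_RATE = 2048
SUFFIXES = ("SUSPOS_IN1_DQ", "SUSPIT_IN1_DQ", "SUSYAW_IN1_DQ", "SUSSIDE_IN1_DQ",
            "SENSOR_UL", "SENSOR_UR", "SENSOR_LL", "SENSOR_LR", "SENSOR_SIDE",
            "OPLEV_PERROR", "OPLEV_YERROR", "OPLEV_SUM")

out_lines = []
in_target = False
with open(INI_FILE) as f:
    for line in f:
        sec = re.match(r"\[(.+)\]", line.strip())
        if sec:
            # start of a new channel section; remember whether it is one we want to change
            in_target = sec.group(1).endswith(SUFFIXES)
        if in_target and line.strip().startswith("datarate"):
            line = "datarate=%d\n" % NEW_RATE
        out_lines.append(line)

# Write a .NEW file and swap it in by hand, as noted above for activateSUS2DQ.py.
with open(INI_FILE + ".NEW", "w") as f:
    f.writelines(out_lines)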

  16541   Tue Jan 4 18:26:59 2022 AnchalUpdateBHDTested 2" PR2 candidates transmission

I used the rejected light from the PBS after the motorized half-wave plate between PMC and IMC injection path (used for input power control to IMC) to measure the transmission of PR2 candidates. These candidates were picked from QIL (QIL/2696). Unfortunately, I don't think either of these mirrors can be used for PR2.

Optic               Polarization   Incident Power [mW]   Transmitted Power [mW]   Transmission [ppm]
V2-2239 & V2-2242   s-pol          940                   0.015                    16.0
V2-2239 & V2-2242   p-pol          935                   0.015                    16.0
V6-704 & V6-705     p-pol          925                   21                       22703

If I remember correctly, we are looking for a 2" flat mirror with a transmission on the order of 1000 ppm. The current PR2 is supposed to have less than 100 ppm transmission, which would not leave enough light for the LO path.
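
For reference, the transmission values above are just the ratio of transmitted to incident power. A quick check in Python reproduces the table entries:

# T [ppm] = 1e6 * P_transmitted / P_incident
measurements = {
    "V2-2239 & V2-2242 (s-pol)": (940, 0.015),
    "V2-2239 & V2-2242 (p-pol)": (935, 0.015),
    "V6-704 & V6-705 (p-pol)":   (925, 21),
}
for name, (p_inc, p_trans) in measurements.items():
    print("%-28s %10.1f ppm" % (name, 1e6 * p_trans / p_inc))
# -> 16.0 ppm, 16.0 ppm, 22702.7 ppm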

I've kept the transmission testing setup intact on the PSL table. I'll test the existing PR2 and another optic (which, unfortunately, is 0.5" thick) tomorrow.

  16543   Wed Jan 5 17:46:04 2022 AnchalUpdateBHDTested 2" PR2 candidates transmission

I tested two more optics today: the old PR2 that we took out and another optic I found in QIL. Neither of these optics is suitable for our purpose either.

 

Optic               Polarization   Incident Power [mW]   Transmitted Power [mW]   Transmission [ppm]
Existing PR2        p-pol          910                   0.004                    4.4
V2-1698 & V2-1700   p-pol          910                   595                      653846

I'll find the Y1S optic and test that too. We should start looking for alternative solutions as well.

 

  16545   Thu Jan 6 11:54:20 2022 AnchalSummaryBHDPart IX of BHR upgrade - Placed AS1 and AS4

[Paco (Vacuum Work), Anchal]

Today we opened the ITMY chamber and installed the suspended AS1 and AS4 in their planned positions. In doing so, we removed the razor (or plate) mounted on a pico motor at the south end of the table (see 40m/16450); we needed to make way for AS4 to be installed.


Photos: https://photos.app.goo.gl/YP2ZZhQ3jip3Uhp5A


We need more dog clamps for installing the suspensions; we have used temporary clamps for now. If anyone knows where the new C&B clamps are, please let us know.

  16546   Thu Jan 6 12:52:49 2022 AnchalUpdateCDSYearly DAQD fix 2022!

Just as predicted, all realtime models reported a "0x4000" error. Read the parent post for more details. I fixed this by following those instructions. I added the following lines to the file /opt/rtcds/rtscore/release/src/include/drv/spectracomGPS.c on fb1:

/* 2020 had 366 days and no leap second */
       pHardware->gpsOffset += 31622400;
/* 2021 had no leap seconds or leap days, so adjust for that */
       pHardware->gpsOffset += 31536000;
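/* (for reference: 366 days * 86400 s = 31622400 s; 365 days * 86400 s = 31536000 s) */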

Then I remade the package and reloaded it after stopping the daqd services. This brought back all the fast models except the C1SUS2 models, which are in red due to some other reason that I'll investigate further.

 

  16552   Thu Jan 6 21:04:41 2022 AnchalSummaryBHDPart VIII of BHR upgrade - LO1 OSEMs inserted

[Anchal, Koji] Part of elog: 40m/16549.

The magnets on the mirror face are arranged such that the overall magnetic dipole moment is nullified far away. Because of this, the coil output gains for all such optics need to have positive and negative signs in a butterfly-mode pattern (e.g. UL, LR: +ve and UR, LL: -ve).

In the particular case of LO1, we chose the following coil output gains:

COIL   COIL_GAIN
UL     -1
UR      1
LR     -1
LL      1
SD     -1

This ensures that all damping gains have positive signs. The following damping gain values were chosen:

DOF C1:SUS-LO1_SUSXXX_GAIN
POS 5
PIT 2
YAW 0.2
SIDE 10

Having said that, this is a convention, and we need to discuss further what convention we want to set (or follow a previous one if it exists). My discussion with Koji produced the idea of fixing the motion response of an OSEM with respect to a coil offset by balancing the coil gains across all optics, and then using the same servo gains for all optics afterwards. But this is a complicated thought coming out of tired minds and needs more discussion.
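
As a minimal sketch of applying the two tables above with pyepics (assuming the standard C1:SUS-LO1_* channel naming; the *COIL_GAIN names in particular are my assumption and should be checked against the MEDM screens):

# Sketch: apply the LO1 butterfly coil output gains and damping gains listed above.
from epics import caput

optic = "LO1"

# butterfly-pattern coil output gains (UL/LR vs UR/LL signs flipped)
coil_gains = {"UL": -1, "UR": 1, "LR": -1, "LL": 1, "SD": -1}
for coil, gain in coil_gains.items():
    # assumes C1:SUS-<OPTIC>_<COIL>COIL_GAIN naming for the coil output gains
    caput("C1:SUS-%s_%sCOIL_GAIN" % (optic, coil), gain)

# damping loop gains (channel naming as in the table header above)
damp_gains = {"POS": 5, "PIT": 2, "YAW": 0.2, "SIDE": 10}
for dof, gain in damp_gains.items():
    caput("C1:SUS-%s_SUS%s_GAIN" % (optic, dof), gain)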


Important notes for suspending the optics:

  • Do not insert the OSEMs fully. Leave all of the magnets out of the OSEMs before transportation.
  • Tighten the OSEMs completely while adjusting the height of the optic. Adjust the height of the OSEM holder plate if necessary.
  • Ensure that all cage screws are tightened completely.

Photos: https://photos.app.goo.gl/CJsS18vFwjo73Tzs5

  16554   Fri Jan 7 16:17:42 2022 AnchalSummaryBHDPart IX of BHR upgrade - Placed AS1 and AS4 filters

[paco]

Added input filters, input matrix, damping filters, output matrix, and coil filters, and copied the state over from LO1 into both the AS1 and AS4 screens in anticipation of damping.

  16555   Fri Jan 7 17:54:13 2022 AnchalUpdateBHDPR2 Sat Amp has a bad channel

[Anchal, Paco]

Yesterday we noticed that one of the ADC channels was overflowing. I checked the signal chain and found that CH3 on the PR2 Sat Amp was railing. After a lot of debugging, our conclusion is that the PD current input trace is possibly shorted to the positive supply through a finite resistance on the PCB. This would mean the PCB has a manufacturing defect. The reason we came to this conclusion is that even after removing the opamp U3 (AD822ARZ), we still measure 12.5 V at the pins of R25 (the 100 Ohm input resistor).

Please see the schematic for reference. We also checked the resistance between the input of R25 (marked PDA in the schematic) and the positive voltage rail; it measured 3 kOhms, while on all other channels this value was 150 kOhms.

I would like someone else to take a look at this as well. We would probably need to replace the PCB in this chassis or use a spare chassis.

  16560   Mon Jan 10 13:35:52 2022 AnchalUpdateBHDPR2 Sat Amp has a bad channel

The unit was tested before by Tege; the test covered the testpoint voltages only. He summarized his work in this doc. The board number is S2100737. Here are his two comments about it:
"This unit presented with an issue on the PD1 circuit of channel 1-4 PCB where the voltage reading on TP6, TP7 and TP8 are -15.1V,  -14.2V, and +14.7V respectively, instead of ~0V.  The unit also has an issue on the PD2 circuit of channel 1-4 PCB because the voltage reading on TP7 and TP8 are  -14.2V, and +14.25V respectively, instead of ~0V."

"Debugging showed that the opamp, AD822ARZ, for PD2 circuit was not working as expected so we replaced with a spare and this fixed the problem. Somehow, the PD1 circuit no longer presents any issues, so everything is now fine with the unit."

Note: No issues were reported on the PD3 circuit, which is the one malfunctioning now.

Quote:

Also: Was this unit tested before? If so, what was the testing result at the time?

 

  16562   Mon Jan 10 14:52:51 2022 AnchalSummaryBHDLO1 OSEMs roughly calibrated and noise measured

I used the open-light level of 908 counts for the ITMX side OSEM from 40m/16549 to roughly calibrate the cts2um filter modules in the LO1 OSEM input filters. All values were close to 0.033. As the calibration reduces the signal value by about a factor of 30, I increased all damping gains by a factor of 30. None of the loops went into unstable oscillation, and I witnessed damping of kicks to the optic.


In-loop power spectrum

I also compared the in-loop power spectra of ETMX and LO1 while damping. ETMX was chosen because it is one of the optics unaffected by the upgrade work. (ITMX is held by its earthquake stops to avoid unnecessary hits to it during chamber work.)

Attachments 1 and 2 show the power spectra of the in-loop OSEM values (calibrated in um). At high frequencies, the noise floor of the LO1 OSEM channels is about 6 times lower than that of ETMX. Some peaks at 660 Hz and 880 Hz are also absent. At low frequencies, the performance of LO1 is mostly similar to ETMX, except for a peak (possibly a loop instability oscillation) at 1.9 Hz and another at 5.6 Hz. I won't get into noise hunting or loop optimization for this suspension at this stage. For now, I believe the new electronics are damping the suspensions as well as the old electronics.

Attachment 1: LO1_vs_ETMX_OSEM_Spectrum_LF_x30_Gain.pdf
LO1_vs_ETMX_OSEM_Spectrum_LF_x30_Gain.pdf
Attachment 2: LO1_vs_ETMX_OSEM_Spectrum_HF_x30_Gain.pdf
LO1_vs_ETMX_OSEM_Spectrum_HF_x30_Gain.pdf
  16565   Mon Jan 10 17:04:47 2022 AnchalUpdateBHDAS1 Sat Amp CH2 had offset

We found a small offset (~300 mV) at TP6 and TP8 in the PD2 circuit (CH2 of the board). I replaced U3 (AD822ARZ) but did not see any effect. I disconnected the adaptor board in the back and saw that the offset went away. This might mean that the cable had a flaky short to a power supply pin. However, when I simply reinserted the adaptor board, there was no offset. We could not find any remaining issue with the board to fix, so we left it as it is. If this board shows offset issues in the future, the ribbon cable would be the most probable suspect.

Now all ADC channels in the C1SU2 chassis show no offsets or overflows.
