ID | Date | Author | Type | Category | Subject |
16290 | Mon Aug 23 19:00:05 2021 | Koji | Update | General | Campus Wide Power Glitch Reported: Monday, 8/23/21 at 9:30am |
Restarting the RTS was unsuccessful because of the timing discrepancy error between the RT machines and the FB. This time no matter how we try to set the time, the IOPs do not run with "DC status" green. (We kept having 0x4000)
We then decided to work on the recovery without the data recorded. After some burtrestores, the IMC was locked and the spot appeared on the AS port. However, IPC seemed down and no WFS could run. |
16291 | Mon Aug 23 22:51:44 2021 | Anchal | Update | General | Time synchronization efforts |
Related elog thread: 16286
I didn't really achieve anything but I'm listing what I've tried.
- I know now that timesyncd isn't working because systemd-timesyncd is known to have issues when running on a read-only file system. In particular, the service does not have privileges to change the clock or drift settings at /run/systemd/clock or /etc/adjtime.
- The workarounds to these problems are poorly rated/reviewed on Stack Exchange and require me to change the /etc/systemd/timesyncd.conf file, but I'm unable to edit this file.
- I know that Paco was able to change these files earlier, as they are now configured to follow a Debian NTP pool server, which won't work since the FEs have no internet access. So the conf file needs to be restored to use ntpserver as the NTP server.
- From the system messages, the ntpserver is recognized by the service, as shown in the second part of 16285. I really think the issue is in file permissions; the file /etc/adjtime has not been updated since 2017.
- I got help from Paco on how to edit files for the FE machines. The FE machine directories are exported from fb1:/diskless/root.jessie/
- I restored the /etc/systemd/timesyncd.conf file to how it was before, with just the servers=ntpserver line, and restarted the timesyncd service on all FEs, but the synchronization did not happen.
- I tried a few suggestions from Stack Exchange but none of them worked. The only well-rated solution creates a tmpfs directory outside the read-only filesystem and uses that to run timesyncd. So, in my opinion, timesyncd will never work on our diskless read-only FE machines.
- One archlinux discussion ended with the questioner resorting to openntpd from the OpenBSD distribution. The user claimed that openntpd is simple enough to run NTP synchronization on a read-only file system.
- Somewhat painfully, I 'kind of' installed the openntpd tool in the fb1:/diskless/root.jessie directory following directions from here. I had to manually add a user and group for the FEs (which I might not have done correctly). I was not able to get the openntpd daemon to start properly after some tries.
- I restored everything back to how it was and restarted timesyncd on c1sus, even though it won't really do anything.
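For the record, the tmpfs workaround described in that top-rated answer amounts to something like the following (a sketch only, shown as a dry run; the state-directory path is the stock Debian one and the mount options are assumptions — this was never applied to the FEs):

```shell
# Dry-run sketch of the tmpfs workaround for timesyncd on a read-only root.
# `run` only echoes; replace its body with `sudo "$@"` to actually execute.
run() { echo "+ $*"; }

# Give timesyncd a writable state directory backed by RAM
run mount -t tmpfs -o size=1m,mode=0755 tmpfs /var/lib/systemd/timesync
run systemctl restart systemd-timesyncd
run timedatectl status
```

The obvious drawback is that the drift file would be lost on every reboot, which is part of why this never felt like a real solution for the diskless FEs.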
Quote: |
This time no matter how we try to set the time, the IOPs do not run with "DC status" green. (We kept having 0x4000)
|
|
16292 | Tue Aug 24 09:22:48 2021 | Anchal | Update | General | Time synchronization working now |
Jamie told me to use chroot to log into the chroot jail of the Debian OS that is exported to the FEs and install ntp there. I took the following steps, at the end of which all FEs have NTP synchronized.
- I logged into fb1 through nodus.
- chroot /diskless/root.jessie /bin/bash took me to the bash terminal of the Debian OS that is exported to all FEs.
- Here, I ran sudo apt-get install ntp which ran without any errors.
- I then edited the file /etc/ntp.conf: I removed the default servers and added the following server lines (fb1 and nodus IP addresses):
server 192.113.168.201
server 192.113.168.201
- I logged into each FE machine and ran following commands:
sudo systemctl stop systemd-timesyncd.service; sudo systemctl status systemd-timesyncd.service;
timedatectl; sleep 2;sudo systemctl daemon-reload; sudo systemctl start ntp; sleep 2; sudo systemctl status ntp; timedatectl
sudo hwclock -s
- The first line ensures that systemd-timesyncd.service is no longer running. I did not uninstall timesyncd and left its configuration file as is.
- The second line first shows the times of the local and RTC clocks, then reloads the daemon services to get ntp registered, starts ntp.service and shows its status. Finally, the timedatectl command shows the synchronized clocks and that NTP synchronization has occurred.
- The last line sets the local clock to match the RTC clock. Even though this wasn't required, as the clocks already agreed to the second, I wanted a point where all the local clocks are synchronized to the NTP server.
- Hopefully, this will resolve our issue of having to restart the models any time some glitch happens or when we need to update something in one of them.
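Collected together, the per-FE switchover above could be scripted roughly as follows (a sketch; the FE host list is an assumption, and the loop is shown as a dry run that only prints the command sequence instead of running it over ssh):

```shell
# Dry-run sketch of the timesyncd -> ntp switchover on each front end.
fe_hosts="c1sus c1ioo c1lsc c1iscex c1iscey"   # assumed host list

switchover_cmds() {
  # Command sequence from the steps above, one per line.
  cat <<'EOF'
sudo systemctl stop systemd-timesyncd.service
sudo systemctl disable systemd-timesyncd.service
sudo systemctl daemon-reload
sudo systemctl start ntp
timedatectl
sudo hwclock -s
EOF
}

for h in $fe_hosts; do
  echo "== $h =="
  switchover_cmds          # to execute for real: ssh "$h" "$(switchover_cmds)"
done
```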
Edit Tue Aug 24 10:19:11 2021:
I also disabled timesyncd on all FEs using sudo systemctl disable systemd-timesyncd.service
I've added this wiki page for summarizing the NTP synchronization knowledge. |
16293 | Tue Aug 24 18:11:27 2021 | Paco | Update | General | Time synchronization not really working |
tl;dr: NTP servers and clients were never synchronized, are not synchronizing even with ntp... nodus is synchronized but uses chronyd; should we use chronyd everywhere?
Spent some time investigating the ntp synchronization. In the morning, after Anchal set up all the ntp servers / FE clients, I tried restarting the rts IOPs with no success. Later, with Tega, we tried the usual manual matching of the date between the c1iscex and fb1 machines, iterating over n-second offsets from -10 to +10, also without success.
This afternoon, I tried debugging the FE and fb1 timing differences. I inspected the ntp configuration files under /etc/ntp.conf on fb1 and /diskless/root.jessie/etc/ntp.conf (for the FE machines) and tried different combinations with and without nodus, with and without restrict lines, all while watching the output of sudo journalctl -f on c1iscey. Every time I changed the ntp config file, I restarted the service using sudo systemctl restart ntp.service .
Looking through some online forums, people suggested basic pinging to see if the ntp servers were up (and broadcasting their times over the local network), but this failed to run (read-only filesystem), so I went into fb1 and ran sudo chroot /diskless/root.jessie/ /bin/bash to allow me to change file permissions. I first tested /bin/ping, which couldn't even open a socket (root access needed); after running chmod 4755 /bin/ping I ssh'd into c1iscey and pinged the fb1 machine successfully. I then ran chmod 4755 /usr/sbin/ntpd so that the ntp daemon would have no problem reaching the server, in case this was blocking the synchronization. I exited the chroot shell and restarted the ntp daemon on c1iscey, but ntpstat still showed unsynchronised status.
I also learned that when running an ntp query with ntpq -p, if a client has succeeded in synchronizing its time to the server, an asterisk should be prepended to that peer's line. This was not the case on any FE machine... and looking at fb1, this was also not true. Although the fb1 peers are correctly listed as nodus, the caltech ntp server, and a broadcast (.BCST.) server from local time (meant to serve the FE machines), none appears to have synchronized... Going one level further, on nodus I checked the time synchronization servers by running chronyc sources; the output shows
controls@nodus|~> chronyc sources
210 Number of sources = 4
MS Name/IP address Stratum Poll Reach LastRx Last sample
===============================================================================
^* testntp1.superonline.net 1 10 377 280 +1511us[+1403us] +/- 92ms
^+ 38.229.59.9 2 10 377 206 +8219us[+8219us] +/- 117ms
^+ tms04.deltatelesystems.ru 2 10 377 23m -17ms[ -17ms] +/- 183ms
^+ ntp.gnc.am 3 10 377 914 -8294us[-8401us] +/- 168ms
I then ran chronyc clients to find if fb1 was listed (as I would have expected) but the output shows this --
Hostname Client Peer CmdAuth CmdNorm CmdBad LstN LstC
========================= ====== ====== ====== ====== ====== ==== ====
501 Not authorised
So clearly chronyd succeeded in synchronizing nodus' time to whatever server it was pointed at, but downstream from there neither fb1 nor any FE machine seems to be synchronizing properly. It may be as simple as figuring out the correct ntp configuration file, or switching to chronyd on all machines (for the sake of homogeneity?) |
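The ntpq -p asterisk check from 16293 is easy to script across the FEs (a sketch; the sample peer lines below are fabricated for illustration — a leading `*` in the tally column marks the selected, synchronized source):

```shell
# Exit 0 iff `ntpq -p` output on stdin shows a selected (synchronized) peer.
ntpq_synced() { grep -q '^\*'; }

# Illustrative captured outputs (fabricated); live use: ntpq -p | ntpq_synced
unsynced=' nodus           .INIT.          16 u    -   64    0    0.000'
synced='*nodus           192.113.168.201  2 u   35   64  377    0.321'

printf '%s\n' "$unsynced" | ntpq_synced && echo synced || echo "not synced"
printf '%s\n' "$synced"   | ntpq_synced && echo synced || echo "not synced"
```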
16294 | Tue Aug 24 18:44:03 2021 | Koji | Update | CDS | FB is writing the frames with a year old date |
Dan Kozak pointed out that the new frame files of the 40m have been written with 2020 GPS times instead of 2021 GPS times.
Current GPS time is 1313890914 (or something like that), but the new files are written as C-R-1282268576-16.gwf
I don't know how this can happen, but it may explain why we can't get agreement between the FB GPS time and the RTS GPS time.
(dataviewer seems dependent on the FB GPS time and it indicates 2020 date. DTT/diaggui does not.)
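The size of the offset is consistent with simple GPS-epoch arithmetic (a sketch; 315964800 is the Unix time of the GPS epoch 1980-01-06, and 18 is the GPS-UTC leap-second count as of 2021):

```shell
GPS_EPOCH=315964800   # Unix time of 1980-01-06 00:00:00 UTC (GPS epoch)
LEAP=18               # GPS-UTC leap seconds as of 2021

unix_to_gps() { echo $(( $1 - GPS_EPOCH + LEAP )); }

gps_skew_days() {
  # absolute difference of two GPS times, in whole days
  d=$(( $1 - $2 )); [ "$d" -lt 0 ] && d=$(( -d ))
  echo $(( d / 86400 ))
}

# Frames were stamped 1282268576 while true GPS time was ~1313890914:
gps_skew_days 1313890914 1282268576   # prints 365, i.e. about one year
```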
This is the way to check the gpstime on fb1. It's apparently a year off.
controls@fb1:~ 0$ cat /proc/gps
1282269402.89 |
16295 | Tue Aug 24 22:37:40 2021 | Anchal | Update | General | Time synchronization not really working |
I attempted to install chrony and run it on one of the FE machines. It didn't work, and in doing so I lost the working NTP client service on the FE computers as well. Following are some details:
- I added the following two mirrors in the apt source list of root.jessie at /etc/apt/sources.list
deb http://ftp.us.debian.org/debian/ jessie main contrib non-free
deb-src http://ftp.us.debian.org/debian/ jessie main contrib non-free
- Then I installed chrony in the root.jessie using
sudo apt-get install chrony
- I was getting an error: E: Can not write log (Is /dev/pts mounted?) - posix_openpt (2: No such file or directory). To fix this, I had to run:
sudo mount -t devpts none "$rootpath/dev/pts" -o ptmxmode=0666,newinstance
sudo ln -fs "pts/ptmx" "$rootpath/dev/ptmx"
- Then, I had another error to resolve.
Failed to read /proc/cmdline. Ignoring: No such file or directory
start-stop-daemon: nothing in /proc - not mounted?
To fix this, I had to exit to fb1 and run:
sudo mount --bind /proc /diskless/root.jessie/proc
- With these steps, chrony was finally installed, but I immediately saw an error message saying:
Starting /usr/sbin/chronyd...
Could not open NTP sockets
- I figured this must be due to ntp running on the FE machines. I logged into c1iscex and stopped and disabled the ntp service:
sudo systemctl stop ntp
sudo systemctl disable ntp
- I saw some error messages from the above commands, as the FEs are read-only file systems:
Synchronizing state for ntp.service with sysvinit using update-rc.d...
Executing /usr/sbin/update-rc.d ntp defaults
insserv: fopen(.depend.stop): Read-only file system
Executing /usr/sbin/update-rc.d ntp disable
update-rc.d: error: Read-only file system
- So I went back to the chroot on fb1 and ran the two commands above that failed:
/usr/sbin/update-rc.d ntp defaults
/usr/sbin/update-rc.d ntp disable
- The last line gave the output:
insserv: warning: current start runlevel(s) (empty) of script `ntp' overrides LSB defaults (2 3 4 5).
insserv: warning: current stop runlevel(s) (2 3 4 5) of script `ntp' overrides LSB defaults (empty).
- I ignored this and moved forward.
- I copied the chronyd.service from nodus to the chroot on fb1 and configured it to use nodus as the server. Then I started the chronyd service and checked it:
sudo systemctl status chronyd.service
but got the same NTP sockets issue.
● chronyd.service - NTP client/server
Loaded: loaded (/usr/lib/systemd/system/chronyd.service; disabled)
Active: failed (Result: exit-code) since Tue 2021-08-24 21:52:30 PDT; 5s ago
Process: 790 ExecStart=/usr/sbin/chronyd $OPTIONS (code=exited, status=1/FAILURE)
Aug 24 21:52:29 c1iscex systemd[1]: Starting NTP client/server...
Aug 24 21:52:30 c1iscex chronyd[790]: Could not open NTP sockets
Aug 24 21:52:30 c1iscex systemd[1]: chronyd.service: control process exited, code=exited status=1
Aug 24 21:52:30 c1iscex systemd[1]: Failed to start NTP client/server.
Aug 24 21:52:30 c1iscex systemd[1]: Unit chronyd.service entered failed state.
- I tried a few things to resolve this, but couldn't get it to work. So I gave up on using chrony and decided to go back to the ntp service at least.
- I stopped, disabled and checked the status of chrony:
sudo systemctl stop chronyd
sudo systemctl disable chronyd
sudo systemctl status chronyd
This gave the output:
● chronyd.service - NTP client/server
Loaded: loaded (/usr/lib/systemd/system/chronyd.service; disabled)
Active: failed (Result: exit-code) since Tue 2021-08-24 22:09:07 PDT; 25s ago
Aug 24 22:09:07 c1iscex systemd[1]: Starting NTP client/server...
Aug 24 22:09:07 c1iscex chronyd[2490]: Could not open NTP sockets
Aug 24 22:09:07 c1iscex systemd[1]: chronyd.service: control process exited, code=exited status=1
Aug 24 22:09:07 c1iscex systemd[1]: Failed to start NTP client/server.
Aug 24 22:09:07 c1iscex systemd[1]: Unit chronyd.service entered failed state.
Aug 24 22:09:15 c1iscex systemd[1]: Stopped NTP client/server.
- I went back to the fb1 chroot and removed the chrony package, deleting the configuration files and systemd service files:
sudo apt-get remove chrony
- But when I started the ntp daemon service back on c1iscex, it gave an error:
sudo systemctl restart ntp
Job for ntp.service failed. See 'systemctl status ntp.service' and 'journalctl -xn' for details.
- The status shows:
sudo systemctl status ntp
● ntp.service - LSB: Start NTP daemon
Loaded: loaded (/etc/init.d/ntp)
Active: failed (Result: exit-code) since Tue 2021-08-24 22:09:56 PDT; 9s ago
Process: 2597 ExecStart=/etc/init.d/ntp start (code=exited, status=5)
Aug 24 22:09:55 c1iscex systemd[1]: Starting LSB: Start NTP daemon...
Aug 24 22:09:56 c1iscex systemd[1]: ntp.service: control process exited, code=exited status=5
Aug 24 22:09:56 c1iscex systemd[1]: Failed to start LSB: Start NTP daemon.
Aug 24 22:09:56 c1iscex systemd[1]: Unit ntp.service entered failed state.
- I tried to re-enable the ntp service with sudo systemctl enable ntp. I got similar read-only filesystem error messages as earlier.
Synchronizing state for ntp.service with sysvinit using update-rc.d...
Executing /usr/sbin/update-rc.d ntp defaults
insserv: warning: current start runlevel(s) (empty) of script `ntp' overrides LSB defaults (2 3 4 5).
insserv: warning: current stop runlevel(s) (2 3 4 5) of script `ntp' overrides LSB defaults (empty).
insserv: fopen(.depend.stop): Read-only file system
Executing /usr/sbin/update-rc.d ntp enable
update-rc.d: error: Read-only file system
- I came back to c1iscex and tried restarting the ntp service, but got the same error messages as above, with exit code 5.
- I checked c1sus; ntp was running there. I tested the configuration by restarting the ntp service, and it then failed with the same error message. So the remaining three FEs, c1lsc, c1ioo and c1iscey, have a running ntp service, but it won't survive a restart.
- As a last try, I rebooted c1iscex to see if ntp would come back online nicely, but it didn't.
Bottom line: I went to try chrony on the FEs, and I ended up breaking the ntp client services on those computers as well. We have no NTP synchronization on any of the FEs.
Even though Paco and I are learning about the ntp and cds stuff, I think it's time we get help from someone with real experience. The lab has not been in a good state for far too long.
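For future reference, the chroot preparation steps scattered through the list above can be collected in one place (a dry-run sketch; `run` only echoes, so pasting this mounts nothing):

```shell
rootpath=/diskless/root.jessie
run() { echo "+ $*"; }   # dry run; change body to `sudo "$@"` to execute

# Mounts that apt and start-stop-daemon needed inside the diskless chroot:
run mount --bind /proc "$rootpath/proc"
run mount -t devpts none "$rootpath/dev/pts" -o ptmxmode=0666,newinstance
run ln -fs pts/ptmx "$rootpath/dev/ptmx"
run chroot "$rootpath" /bin/bash
```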
Quote: |
tl;dr: NTP servers and clients were never synchronized, are not synchronizing even with ntp... nodus is synchronized but uses chronyd; should we use chronyd everywhere?
|
|
16296 | Wed Aug 25 08:53:33 2021 | Jordan | Update | SUS | 2" Adapter Ring for SOS Arrived 8/24/21 |
8 of the 2"->3" adapter rings (D2100377) arrived from RDL yesterday. I have not tested the threads but dimensional inspection on SN008 cleared. Parts look very good. The rest of the parts should be shipping out in the next week. |
16297 | Wed Aug 25 11:48:48 2021 | Yehonathan | Update | CDS | c1auxey assembly |
After confirming that, indeed, leaving the RTN connection floating can cause reliability issues we decided to make these connections in the c1auxex analog input units.
According to Johannes' wiring scheme (excluding the anti-image and OPLEV since they are decommissioned), Acromag unit 1221b accepts analog inputs from two modules. All of these channels are single-ended according to their schematics.
One option is to use the Acromag ground and connect it to the RTNs of both 1221b and 1221c. Another is to connect the minus wire of one module, which is tied to the module's ground, to the RTN. We shouldn't tie the grounds of the different modules together by connecting them to the same RTN point.
We should take some OSEM spectra of the X end arm before and after this work to confirm we didn't produce more noise by doing so. Right now, it is impossible due to issues caused by the recent power surge.
Quote: |
{Yehonathan, Jon}
We poked around the c1auxex chassis (looked in situ with a flashlight, not disturbing any connections) to better understand what the wiring scheme is.
To our surprise, we found that nothing was connected to the RTNs of the analog input Acromag modules. From previous experience and the Acromag manual, there can't be any meaningful voltage measurement without it.
|
|
16298 | Wed Aug 25 17:31:30 2021 | Paco | Update | CDS | FB is writing the frames with a year old date |
[paco, tega, koji]
After invaluable assistance from Jamie in fixing this yearly offset in the GPS time reported by cat /proc/gps, we managed to restart the real-time system correctly (while still manually synchronizing the front-end machine times). After this, we recovered the mode cleaner and were able to lock the arms without much fuss.
Nevertheless, Tega and I noticed some weird noise in C1:LSC-TRX_OUT which was not present in the YARM transmission, and which is present even in the absence of light (we unlocked the arms and still saw it on the ndscope, as shown in Attachment #1). It seems to affect the XARM and, in general, the lock acquisition...
We took a quick spectrum with diaggui (Attachment #2) but it doesn't look normal; there seems to be broadband excess noise with a remarkable 1 kHz component. We will probably look into it in more detail. |
16299 | Wed Aug 25 18:20:21 2021 | Jamie | Update | CDS | GPS time on fb1 fixed, daqd writing correct frames again |
I have no idea what happened to the GPS timing on fb1, but it seems like the issue was coincident with the power glitch on Monday.
As was noted by Koji above, the GPS time kernel interface was off by a year, which was causing the frame builder to write out files with the wrong names. fb1 was using DAQD components from the advligorts 3.3 release, which used the old "symmetricom" kernel module for the GPS time. This old module was also known to have issues with time offsets. This issue is reminiscent of previous timing issues with the DAQ on fb1.
I noted that a newer version of the advligorts, version 3.4, was available on debian jessie, the system running on fb1. advligorts 3.4 includes a newer version of the GPS time module, renamed gpstime. I checked with Jonathan Hanks that the interfaces did not change between 3.3 and 3.4, and 3.4 was mostly a bug fix and packaging release, so I decided to upgrade the DAQ to get the new components. I therefore did the following
- updated the archive info in /etc/apt/sources.list.d/cdssoft.list, and added the "jessie-restricted" archive which includes the mx packages: https://git.ligo.org/cds-packaging/docs/-/wikis/home
- removed the symmetricom module from the kernel:
sudo rmmod symmetricom
- upgraded the advligorts-daqd components (NOTE: I did not upgrade the rest of the system, although there are outstanding security upgrades needed):
sudo apt install advligorts-daqd advligorts-daqd-dc-mx
- loaded the new gpstime module and checked that the GPS time was correct:
sudo modprobe gpstime
- restarted all the daqd processes:
sudo systemctl restart daqd_*
Everything came up fine at that point, and I checked that the correct frames were being written out. |
16300 | Thu Aug 26 10:10:44 2021 | Paco | Update | CDS | FB is writing the frames with a year old date |
[paco, ]
We went over to the X end to check what was going on with the TRX signal. We spotted the ground terminal coming from the QPD loosely touching the handle of one of the computers on the rack. When we detached it completely from the rack, the noise was gone (attachment 1).
We taped this terminal so it doesn't touch anything accidentally. We don't know if this is the best solution, since it probably needs a stable voltage reference. In the Y end those ground terminals are connected to the same point on the rack. The other ground terminals in the X end are just cut.
We also took the PSD of these channels (attachment 2). The noise seems to be gone, but TRX is still a bit noisier than TRY. Maybe we should set up a proper ground for the X arm QPD?
We saw that the X end station ALS laser was off. We turned it on, along with the crystal oven, and re-enabled the temperature controller. Green light immediately appeared. We are now working to restore the ALS lock. After running XARM ASS we were unable to lock the green laser, so we went to the XEND and moved the piezo X ALS alignment mirrors until we maximized the transmission in the right mode. We then locked the ALS beams on both arms successfully. It could well be that the PZT offsets were reset by the power glitch. The XARM ALS still needs some tweaking; its level is ~25% of what it was before the power glitch. |
16305 | Wed Sep 1 14:16:21 2021 | Jordan | Update | VAC | Empty N2 Tanks |
The right N2 tank had a bad/loose valve and did not fully open. This morning the left tank was just about empty and the right tank showed 2000+ psi on the gauge. Once the changeover happened, the copper line emptied, but the valve on the N2 tank was not fully opened. I noticed the gauges were both reading zero at ~1 pm, just before the meeting. I swapped the left tank, but not in time: the vacuum interlocks tripped at 1:04 pm today when the N2 pressure to the vacuum valves fell below 65 psi. After the meeting, Chub tightened the valve, fully opened it, and refilled the lines. I will monitor the tank pressures today and make sure all is OK.
There used to be a mailer that was sent out when the sum pressure of the two tanks fell <600 psi, telling you to swap tanks. Does this no longer exist? |
16308 | Thu Sep 2 19:28:02 2021 | Koji | Update | | This week's FB1 GPS Timing Issue Solved |
After the disk system trouble, we could not get the RTS running in its nominal state. As part of the troubleshooting, FB1 was rebooted, but then we found that the GPS time was a year off from the current time:
controls@fb1:/diskless/root/etc 0$ cat /proc/gps
1283046156.91
controls@fb1:/diskless/root/etc 0$ date
Thu Sep 2 18:43:02 PDT 2021
controls@fb1:/diskless/root/etc 0$ timedatectl
Local time: Thu 2021-09-02 18:43:08 PDT
Universal time: Fri 2021-09-03 01:43:08 UTC
RTC time: Fri 2021-09-03 01:43:08
Time zone: America/Los_Angeles (PDT, -0700)
NTP enabled: no
NTP synchronized: yes
RTC in local TZ: no
DST active: yes
Last DST change: DST began at
Sun 2021-03-14 01:59:59 PST
Sun 2021-03-14 03:00:00 PDT
Next DST change: DST ends (the clock jumps one hour backwards) at
Sun 2021-11-07 01:59:59 PDT
Sun 2021-11-07 01:00:00 PST
Paco went through the process described in Jamie's elog [40m ELOG 16299] (except for the installation part), and it actually made the GPS time even stranger:
controls@fb1:~ 0$ cat /proc/gps
967861610.89
I decided to remove the gpstime module and then load it again. This brought the GPS time back to normal.
controls@fb1:~ 0$ sudo modprobe -r gpstime
controls@fb1:~ 0$ cat /proc/gps
cat: /proc/gps: No such file or directory
controls@fb1:~ 1$ sudo modprobe gpstime
controls@fb1:~ 0$ cat /proc/gps
1314671254.11
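Since the fix was just a module reload, a small sanity check could be cron'd on fb1 (a sketch; the reload command is left commented out, and the 18-leap-second offset is hard-coded as of 2021):

```shell
# Decide whether a /proc/gps reading agrees with the system clock to within a day.
check_gps() {
  # $1: value from /proc/gps, $2: current Unix time (date +%s)
  gps=${1%%.*}                            # strip fractional part
  expect=$(( $2 - 315964800 + 18 ))       # GPS epoch offset + leap seconds
  skew=$(( gps - expect )); [ "$skew" -lt 0 ] && skew=$(( -skew ))
  if [ "$skew" -gt 86400 ]; then
    echo RELOAD   # would run: sudo modprobe -r gpstime && sudo modprobe gpstime
  else
    echo OK
  fi
}

check_gps 967861610.89  1630636036   # the bad reading above  -> RELOAD
check_gps 1314671254.11 1630636036   # the healthy reading    -> OK
```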
|
16309 | Thu Sep 2 19:47:38 2021 | Koji | Update | CDS | This week's FB1 GPS Timing Issue Solved |
After the reboot, daqd_dc was not working, but manually starting the open-mx / mx services solved the issue.
sudo systemctl start open-mx.service
sudo systemctl start mx.service
sudo systemctl start daqd_*
|
16310 | Thu Sep 2 20:44:18 2021 | Koji | Update | CDS | Chiara DHCP restarted |
We had the issue of the RT machines rebooting. Once we hooked up a display on c1iscex, it turned out that the IP was not being given at its boot-up.
I went to chiara and confirmed that the DHCP service was not running
~>sudo service isc-dhcp-server status
[sudo] password for controls:
isc-dhcp-server stop/waiting
So the DHCP service was manually restarted
~>sudo service isc-dhcp-server start
isc-dhcp-server start/running, process 24502
~>sudo service isc-dhcp-server status
isc-dhcp-server start/running, process 24502
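To keep this from recurring silently, the status check could be wrapped in a small watchdog (a sketch matching the upstart-style `service` output above; the actual restart is left commented out):

```shell
# Classify `service isc-dhcp-server status` output.
dhcp_state() {
  case "$1" in
    *start/running*) echo running ;;
    *)               echo stopped ;;  # would run: sudo service isc-dhcp-server start
  esac
}

dhcp_state "isc-dhcp-server stop/waiting"                  # stopped
dhcp_state "isc-dhcp-server start/running, process 24502"  # running
```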
|
16311 | Thu Sep 2 20:47:19 2021 | Koji | Update | CDS | Chiara DHCP restarted |
[Paco, Tega, Koji]
Once chiara's DHCP was back, things got much more straightforward.
c1iscex and c1iscey were rebooted and the IOPs were launched without any hesitation.
Paco ran rebootC1LSC.sh, and for the first time this year all the processes launched without any issue. |
16316 | Wed Sep 8 18:00:01 2021 | Koji | Update | VAC | cronjobs & N2 pressure alert |
In the weekly meeting, Jordan pointed out that we didn't receive the alert for the low N2 pressure.
To check the situation, I went around the machines and summarized the cronjob situation.
[40m wiki: cronjob summary]
Note that this list does not include the vacuum watchdog and mailer, as they are not run from cron.
Now, I found that there are two N2 scripts running:
1. /opt/rtcds/caltech/c1/scripts/Admin/n2Check.sh on megatron, running every minute (!)
2. /opt/rtcds/caltech/c1/scripts/Admin/N2check/pyN2check.sh on c1vac, running every 3 hours.
Then, the N2 log file was checked: /opt/rtcds/caltech/c1/scripts/Admin/n2Check.log
Wed Sep 1 12:38:01 PDT 2021 : N2 Pressure: 76.3621
Wed Sep 1 12:38:01 PDT 2021 : T1 Pressure: 112.4
Wed Sep 1 12:38:01 PDT 2021 : T2 Pressure: 349.2
Wed Sep 1 12:39:02 PDT 2021 : N2 Pressure: 76.0241
Wed Sep 1 12:39:02 PDT 2021 : N2 pressure has fallen to 76.0241 PSI !
Tank pressures are 94.6 and 98.6 PSI!
This email was sent from Nodus. The script is at /opt/rtcds/caltech/c1/scripts/Admin/n2Check.sh
Wed Sep 1 12:40:02 PDT 2021 : N2 Pressure: 75.5322
Wed Sep 1 12:40:02 PDT 2021 : N2 pressure has fallen to 75.5322 PSI !
Tank pressures are 93.6 and 97.6 PSI!
This email was sent from Nodus. The script is at /opt/rtcds/caltech/c1/scripts/Admin/n2Check.sh
...
The errors started at 12:39 and lasted until 13:01, every minute. So this was coming from the script on megatron. We were supposed to have received ~20 alert emails (but got none).
So what happened to the mails? I tested the script with my own mail address, and the test mail reached me. Then I sent a test mail to the 40m mailing list. It did not arrive.
-> Decided to add the sender address (specified in /etc/mailname, I believe) to the whitelist so that the mailing list will accept it.
I ran the test again and it was successful. So I suppose the system can now send us alerts again.
Also, alerting every minute is excessive, so I changed the check frequency to every ten minutes.
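For reference, the threshold comparison in such a checker reduces to something like this (a sketch; the 65 psi figure is the interlock trip level from elog 16305, and the actual n2Check.sh threshold may differ):

```shell
N2_MIN=65   # psi; vacuum interlocks trip below this (elog 16305)

n2_low() {
  # exit 0 if the (possibly fractional) psi reading is below threshold
  awk -v p="$1" -v min="$N2_MIN" 'BEGIN { exit !(p < min) }'
}

n2_low 76.0241 && echo ALERT || echo ok   # reading from the log above
n2_low 64.2    && echo ALERT || echo ok   # below trip level
```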
What happened to the python version running on c1vac?
1) The script is running, spitting out some errors in the cron report (email on c1vac), but it seems to be working.
2) This script checks the pressures of the bottles rather than the N2 pressure downstream. So it's complementary.
3) During the incident on Sept 1, the checker did not trip because the pressure drop happened between cronjob runs and the script didn't notice it.
4) On top of that, the alerts were set to be sent only to a former grad student. I changed it to deliver to the 40m mailing list. As the "From" address is set to some ligox...@gmail.com, which is a member of the mailing list (why?), we should receive the alerts. (And we do receive other vacuum alerts from this address.)
|
16317 | Wed Sep 8 19:06:14 2021 | Koji | Update | General | Backup situation |
Tega mentioned in the meeting that it could be safer to separate some of nodus's functions from the martian file system.
That's an interesting thought. The summary pages and other web services are linked to the user dir. This has high traffic and could cause an issue for the internal network if the disk crashes.
Also, if the internal system crashes, we still want to use the elogs as the source of recovery info. And currently we have no backup of the elog. This is dangerous.
We could reduce some of these risks by adding two identical 2TB disks to nodus to accommodate svn/elog/web and their daily backup.
host   | file system or contents  | condition             | note
nodus  | root                     | none or unknown       |
nodus  | home (svn, elog)         | none                  |
nodus  | web (incl summary pages) | backed up             | linked to /cvs/cds
chiara | root                     | maybe                 | need to check with Jon/Anchal
chiara | /home/cds                | local copy            | The backup disk is smaller than the main disk.
chiara | /home/cds                | remote copy - stalled | we used to have one, but it stalled on 2017/11/17
fb1    | root                     | maybe                 | need to check with Jon/Anchal
fb1    | frame                    | rsync                 | pulled from LDAS according to Tega
|
16319 | Mon Sep 13 04:12:01 2021 | Tega | Update | General | Added temperature sensors at Yend and Vertex too |
I finally got the modbus part working on chiara, so we can now view the temperature data on any machine on the martian network; see Attachment 1.
I also updated the entries in /opt/rtcds/caltech/c1/chans/daq/C0EDCU.ini, as suggested by Koji, to include the SensorGateway temperature channels, but I still don't see their EPICS channels on https://ldvw.ligo.caltech.edu/ldvw/view. This means the channels are not available via nds, so I think the temperature data is not being written to frame files on the framebuilder, but I am not sure why, since I assumed C0EDCU.ini is the framebuilder DAQ channel list.
Once the EPICS channels are available via nds, we should be able to display the temperature data on the summary pages.
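One quick sanity check is whether the channel names actually made it into the EDCU list (a sketch; it assumes the C0EDCU.ini format declares one `[CHANNEL]` section per channel, and the channel name used below is hypothetical):

```shell
# True iff the named channel has a [SECTION] entry in the given EDCU .ini file.
has_channel() { grep -q "^\[$1\]" "$2"; }

# Illustration on a scratch file; the real list is
# /opt/rtcds/caltech/c1/chans/daq/C0EDCU.ini
tmp=$(mktemp)
printf '[C1:PEM-TEMP_VERTEX]\nchnnum=0\n' > "$tmp"   # hypothetical channel name
has_channel "C1:PEM-TEMP_VERTEX" "$tmp" && echo present || echo missing
has_channel "C1:PEM-TEMP_XEND"   "$tmp" && echo present || echo missing
rm -f "$tmp"
```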
Quote: |
I've added the other two temperature sensor modules at the Y end (on 1Y4, IP: 192.168.113.241) and in the vertex (on 1X2, IP: 192.168.113.242). I've updated the martian host table accordingly. From inside the martian network, one can point a browser at the IP address to see the temperature sensor status. These sensors can be set to trigger an alarm and send emails/SMS etc. if the temperature goes out of a defined range.
I feel something is off though. The vertex sensor shows a temperature of ~28 degrees C, the Xend one says 20 degrees C and the Yend one 26 degrees C. I believe these sensors might need calibration.
The remaining tasks are the following:
- Modbus TCP solution:
- If we get it right, this will be easiest solution.
- We just need to add these sensors as streaming devices in some slow EPICS machine's .cmd file and add the temperature sensing channels to a corresponding database file.
- Python workaround:
- Might be faster but dirty.
- We run a python script on megatron which requests temperature values every second or so from the IP addresses and writes them to a soft EPICS channel.
- We would still need to create a soft EPICS channel for this and add it to the framebuilder data acquisition list.
- An even shorter workaround for the near future could be to just write the temperature every 30 min to a log file in some location.
[anchal, paco]
We made a script under scripts/PEM/temp_logger.py and ran it on megatron. The script uses the requests package to query the latest sensor data from the three sensors every 10 minutes as JSON and logs it accordingly. This is not a permanent solution.
|
|
16320 | Mon Sep 13 09:15:15 2021 | Paco | Update | LSC | MC unlocked? |
Came in at ~ 9 PT this morning to find the IFO "down". The IMC had lost its lock ~ 6 hours before, so at about 03:00 AM. Nothing seemed like the obvious cause; there was no record of increased seismic activity, all suspensions were damped and no watchdog had tripped, and the pressure trends similar to those in recent pressure incidents show nominal behavior (Attachment #1). What happened?
Anyways, I simply tried reopening the PSL shutter, and the IMC caught its lock almost immediately. I then locked the arms and everything seems fine for now. |
16321
|
Mon Sep 13 14:32:25 2021 |
Yehonathan | Update | CDS | c1auxey assembly |
So we agreed that the RTN points on the c1auxex Acromag chassis should just be grounded to the local Acromag ground, as it just needs a stable reference. Normally, the RTNs are not connected to any ground, so there should be no danger of forming ground loops by doing that. It is probably best to use the common wire from the 15V power supplies since it also powers the VME crate. I took the spectra of the ETMX OSEMs (attachment) for reference and am proceeding with the grounding work.
|
16322
|
Mon Sep 13 15:14:36 2021 |
Anchal | Update | LSC | Xend Green laser injection mirrors M1 and M2 not responsive |
While showing some green laser locking to Tega, I noticed that changing the PZT sliders for the M1/M2 angular position on the X end had no effect on the locked TEM01 or TEM00 mode. This is odd, as changing these sliders should increase or decrease the mode matching of these modes. I suspect that the controls are not working correctly and the PZTs are either not powered up or not connected. We'll investigate this in the near future as priority permits. |
16324
|
Mon Sep 13 18:19:25 2021 |
Tega | Update | Computer Scripts / Programs | Moved modbus service from chiara to c1susaux |
[Tega, Anchal, Paco]
After talking to Anchal, it was made clear that chiara is not the place to host the modbus service for the temperature sensors. The obvious machine is c1pem, but its startup cmd script loads C object files, and it is not clear how easily the modbus functionality would integrate since we can only log in via telnet, so we decided to host the service on c1susaux instead. We also modified the /etc/motd file on c1susaux, which displays the welcome message during login, to inform the user that this machine hosts the modbus service for the temperature sensor. Anchal plans to also document this information on the temperature sensor wiki at some point in the future when the page is updated to include what has been learned so far.
We might also consider updating the database file to a more modern way of reading the temperature sensor data using FLOAT32_LE, which is available in EPICS version 3.14 and above, instead of the current method, which works but leaves the reader bemused by the bitwise operations that convert the two 16-bit words (A and B) to an IEEE-754 32-bit float, via
field(CALC, "(A&D?(A&C?-1:1):0)*((G|A&E)*J+B)*2^((A&D)/G-F)")
where
field(INPA, "$HiWord")
field(INPB, "$LoWord")
field(INPC, "0x8000") # Hi word, sign bit
field(INPD, "0x7F80") # Hi word, exponent mask
field(INPE, "0x00FF") # Hi word, mantissa mask (incl hidden bit)
field(INPF, "150") # Exponent offset plus 23-bit mantissa shift
field(INPG, "0x0080") # Mantissa hidden bit
field(INPJ, "65536") # Hi/Lo mantissa ratio
field(CALC, "(A&D?(A&C?-1:1):0)*((G|A&E)*J+B)*2^((A&D)/G-F)")
field(PREC, "4")
as opposed to the more modern form
field(INP,"@asyn($(PORT) $(OFFSET))FLOAT32_LE") |
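For reference, the bitwise CALC expression above reproduces the standard IEEE-754 reinterpretation of the two registers. A quick Python check (just an illustration, not part of the EPICS database; word order into the 32-bit value is taken as hi-word-first, so Modbus register endianness is handled by the caller):

```python
import struct

def words_to_float(hi, lo):
    """Reinterpret two 16-bit words (hi = sign/exponent word) as an IEEE-754 float32."""
    return struct.unpack(">f", ((hi << 16) | lo).to_bytes(4, "big"))[0]

def calc_record_float(a, b):
    """Literal translation of the CALC expression from the database record above."""
    sign = (-1 if a & 0x8000 else 1) if a & 0x7F80 else 0   # (A&D?(A&C?-1:1):0)
    mantissa = (0x0080 | (a & 0x00FF)) * 65536 + b          # (G|A&E)*J+B
    exponent = (a & 0x7F80) // 0x0080 - 150                 # (A&D)/G-F
    return sign * mantissa * 2.0 ** exponent

# Both agree for normal (non-zero, non-subnormal) values:
print(words_to_float(0x41C8, 0x0000))     # 25.0
print(calc_record_float(0x41C8, 0x0000))  # 25.0
```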
16326
|
Tue Sep 14 16:12:03 2021 |
Jordan | Update | SUS | SOS Tower Hardware |
Yehonathan noticed today that the silver plated hardware on the assembled SOS towers had some pretty severe discoloration on it. See attached picture.
These were all brand new screws from UC Components, and they have been sitting on the flow bench for a couple of months now. I believe this is just oxidation and is not an issue; I spoke to Calum as well and showed him the attached picture, and he agreed it was likely oxidation and should not be a problem once installed.
He did mention if there is any concern from anyone, we could take an FTIR sample and send it to JPL for analysis, but this would cost a few hundred dollars.
I don't believe this to be an issue, but it is odd that they oxidized so quickly. Just wanted to relay this to everyone else to see if there was any concern. |
16328
|
Tue Sep 14 17:14:46 2021 |
Koji | Update | SUS | SOS Tower Hardware |
Yup this is OK. No problem.
|
16330
|
Tue Sep 14 17:22:21 2021 |
Anchal | Update | CDS | Added temp sensor channels to DAQ list |
[Tega, Paco, Anchal]
We attempted to reboot the fb1 daqd today to get the new temperature sensor channels recording. However, the FE models got stuck, apparently due to the reasons explained in 40m/16325. Jamie cleared /var/logs on fb1 so that the FEs could reboot. After this work we were able to reboot the FE machines successfully and get the models running too. During the day, the FE machines were shut down and brought back up manually, a couple of times on the c1iscex machine. The only change on fb1 is in /opt/rtcds/caltech/c1/chans/daq/C0EDCU.ini, where the new channels were added, and some hacking was done by Jamie in the gpstime module (see 40m/16327). |
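For reference, adding a slow channel to the EDCU list amounts to appending a stanza of roughly this form to C0EDCU.ini. The channel name and field values below are hypothetical — mirror the existing entries in the file rather than copying these:

```ini
[C1:PEM-TEMP_SENSOR_VERTEX]
datatype=4
datarate=16
acquire=1
```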
16332
|
Wed Sep 15 11:27:50 2021 |
Yehonathan | Update | CDS | c1auxey assembly |
{Yehonathan, Paco}
We turned off the ETMX watchdogs and OpLevs. We went to the X end and shut down the Acromag chassis. We labeled the chassis feedthroughs and disconnected all the cables from it.
We took it out and tied the common wire of the power supplies (the commons of the 20V and 15V power supplies were shorted, so it makes no difference which we connect) to the RTNs of the analog inputs.
The chassis was put back in place. All the cables were reconnected. Power was turned on.
We rebooted c1auxex and the channels went back online. We turned on the watchdogs and watched the ETMX motion get damped. We turned on the OpLev. We waited until the beam position got centered on the ETMX.
Attachment shows a comparison between the OSEM spectra before and after the grounding work. Seems like there is no change.
We were able to lock the arms with no issues.
|
16333
|
Wed Sep 15 23:38:32 2021 |
Koji | Update | ALS | ALS ASX PZT HV was off -> restored |
It was known that the Y end ALS PZTs are not working. But Anchal reported in the meeting that the X end PZTs are not working too.
We went down to the X arm in the afternoon and checked the status. The HV supply (KEPCO) was off at the mechanical switch. I don't know whether this KEPCO has a function to shut off the switch at a power glitch or not.
But anyway, the power switch was re-engaged. We also saw a large amount of misalignment of the X end green. The alignment was manually adjusted. Anchal was able to reach ~0.4 Green TRX, but no more. He recalled that it used to be ~0.8.
We tried to tweak the SHG temp from 36.4. We found that the TRX had the (local) maximum of ~0.48 at 37.1 degC. This is the new setpoint right now. |
16335
|
Thu Sep 16 00:00:20 2021 |
Koji | Update | General | RIO Planex 1064 Lasers in the south cabinet |
RIO Planex 1064 Lasers in the south cabinet
Property Number C30684/C30685/C30686/C30687 |
16336
|
Thu Sep 16 01:16:48 2021 |
Koji | Update | General | Frozen 2 |
It happened again. Defrosting required. |
16337
|
Thu Sep 16 10:07:25 2021 |
Anchal | Update | General | Melting 2 |
Put outside.
Quote: |
It happened again. Defrosting required.
|
|
16338
|
Thu Sep 16 12:06:17 2021 |
Tega | Update | Computer Scripts / Programs | Temperature sensors added to the summary pages |
We can now view the minute trend of the temperature sensors under the PEM tab of the summary pages. See attachment 1 for an example of today's temperature readings. |
16340
|
Thu Sep 16 20:18:13 2021 |
Anchal | Update | General | Reset |
Fridge brought back inside.
Quote: |
Put outside.
Quote: |
It happened again. Defrosting required.
|
|
|
16341
|
Fri Sep 17 00:56:49 2021 |
Koji | Update | General | Awesome |
The Incredible Melting Man!
|
16342
|
Fri Sep 17 20:22:55 2021 |
Koji | Update | SUS | EQ M4.3 Long beach |
EQ M4.3 @longbeach
2021-09-18 02:58:34 (UTC) / 07:58:34 (PDT)
https://earthquake.usgs.gov/earthquakes/eventpage/ci39812319/executive
- All SUS Watchdogs tripped, but the SUSs looked OK except for the stuck ITMX.
- Damped the SUSs (except ITMX)
- IMC automatically locked
- Turned off the damping of ITMX and shook it only with the pitch bias -> Easily unstuck -> damping recovered -> realignment of the ITMX probably necessary.
- Done.
|
16344
|
Mon Sep 20 14:11:40 2021 |
Koji | Update | BHD | End DAC Adapter Unit D2100647 |
I've uploaded the schematic and PCB PDF for End DAC Adapter Unit D2100647.
Please review the design.
- CH1-8 SUS actuation channels.
- 5 CHs out of the 8 CHs are going to be used, but for future extensions, all 8 CHs are going to be populated.
- It involves diff-SE conversion / dewhitening / SE-diff conversion. Does this make sense?
- CH9-12 PZT actuation channels. It is designed to send out 4x SE channels for compatibility. The channels have jumpers to convert them to pass the diff signals through.
- CH13-16 are general purpose DIFF/SE channels. CH13 is going to be used for ALS Laser Slow control. The other 3CHs are spares.
The internal assembly drawing & BOM are still coming. |
16346
|
Mon Sep 20 15:23:08 2021 |
Yehonathan | Update | Computers | Wifi internet fixed |
Over the weekend and today, the wifi was acting up, with frequent disconnections and no internet access. I tried to log into the web interface of the ASUS wifi router, but with no success.
I pushed the reset button for several seconds to restore factory settings. After that, I was able to log in. I did the automatic setup and set the wifi passwords to what they used to be.
Internet access was restored. I also unplugged and plugged back in all the wifi extenders in the lab and moved the extender from the vertex inner wall to the outer wall of the lab close to 1X3.
Now, there seems to be wifi reception both in X and Y arms (according to my android phone).
|
16349
|
Mon Sep 20 20:43:38 2021 |
Tega | Update | Electronics | Sat Amp modifications |
Running update of Sat Amp modification work, which involves the following procedure (x8) per unit:
- Replace R20 & R24 with 4.99K ohms, R23 with 499 ohms, and remove C16.
- (Testing) Connect LEDDrive output to GND and check that
- Install 40m Satellite to Flange Adapter (D2100148-v1)
Unit Serial Number | Issues | Status
S1200740 | NONE | DONE
S1200742 | NONE | DONE
S1200743 | NONE | DONE
S1200744 | TP4 @ LED1,2 on PCB S2100568 is 13V instead of 5V; TP4 @ LED4 on PCB S2100559 is 13V instead of 5V | DONE
S1200752 | NONE | DONE
|
16350
|
Mon Sep 20 21:56:07 2021 |
Koji | Update | Computers | Wifi internet fixed |
Ugh, factory resets... Caltech IMSS announced that there was intermittent network service due to maintenance between Sept 19 and 20, and there seems to have been some aftermath of it. Check out "Caltech IMSS"
|
16356
|
Wed Sep 22 17:22:59 2021 |
Tega | Update | Electronics | Sat Amp modifications |
[Koji, Tega]
Decided to do a quick check of the remaining Sat Amp units before component replacement, to identify any units with defective LED circuits. Managed to examine 5 out of 10 units, so 5 units still remain. Also installed the photodiode bias voltage jumper (JP1) on all the units processed so far.
Unit Serial Number | Issues | Debugging | Status
S1200738 | TP4 @ LED3 on chan 1-4 PCB was ~0.7 V instead of 5V | Koji checked the solder connections of the various components, then swapped out the IC op-amp. Removed the DB9 connections to the front panel to get access to the bottom of the board. Upon close inspection, it looked like a short between the Emitter & Base legs of the Q1 transistor. Solution - remove the short between the Emitter & Base legs of Q1. | DONE
S1200748 | TP4 @ LED2 on chan 1-4 PCB was ~0.7 V instead of 5V | This issue was caused by a short between the Emitter & Base legs of the Q1 transistor. Solution - remove the short between the Emitter & Base legs of Q1. | DONE
S1200749 | NONE | N/A | DONE
S1200750 | NONE | N/A | DONE
S1200751 | NONE | N/A | DONE
Defective unit with updated resistors and capacitors in the previous elog
Unit Serial Number | Issues | Debugging | Status
S1200744 | TP4 @ LED1,2 on PCB S2100568 is 13V instead of 5V; TP4 @ LED4 on PCB S2100559 is 13V instead of 5V | This issue was caused by a short between the Collector & Base legs of the Q1 transistor. Solution - remove the short between the Collector & Base legs of Q1. Complications - while flipping the board to get access to its bottom, a connector holding the two middle black wires on P1 came loose. I re-secured the wires to the connector and checked all the TP4s on the board afterwards to make sure things are as expected. | DONE
Quote: |
Running update of Sat Amp modification work, which involves the following procedure (x8) per unit:
- Replace R20 & R24 with 4.99K ohms, R23 with 499 ohms, and remove C16.
- (Testing) Connect LEDDrive output to GND and check that
- Install 40m Satellite to Flange Adapter (D2100148-v1)
Unit Serial Number | Issues | Status
S1200740 | NONE | DONE
S1200742 | NONE | DONE
S1200743 | NONE | DONE
S1200744 | TP4 @ LED1,2 on PCB S2100568 is 13V instead of 5V; TP4 @ LED4 on PCB S2100559 is 13V instead of 5V | DONE
S1200752 | NONE | DONE
|
|
16357
|
Thu Sep 23 14:17:44 2021 |
Tega | Update | Electronics | Sat Amp modifications debugging update |
Debugging complete.
All units now have the correct TP4 voltage reading needed to drive a nominal current of 35 mA through the OSEM LED. The next step is to go ahead and replace the components and test afterward that everything is OK.
Unit Serial Number | Issues | Debugging | Status
S1200736 | TP4 @ LED4 on chan 1-4 PCB reads 13V instead of 5V | This issue was caused by a short between the Collector & Base legs of the Q1 transistor. Solution - remove the short between the Collector & Base legs of Q1. | DONE
S1200737 | NONE | N/A | DONE
S1200739 | NONE | N/A | DONE
S1200746 | TP4 @ LED3 on chan 5-8 PCB reads 0.765 V instead of 5V | This issue was caused by a short between the Emitter & Base legs of the Q1 transistor. Solution - remove the short between the Emitter & Base legs of Q1. Complications - I was extra careful this time because of the loose cable from the last flip-over of the right PCB containing chan 5-8. Anyway, after I was done I noticed one of the pink wires (it carries the +14V to the left PCB) had come off on P1. At least this time I could also see the corresponding front-panel green LED turn off as a result. So I re-secured the wire to the connector (using solder, as my last attempt yesterday to reattach it via crimping didn't work after a long time trying; I hope this is not a problem) and checked that the front-panel LED turns on when the unit is powered before closing the unit. These connectors are quite flimsy. | DONE
S1200747 | TP4 @ LED2 on chan 1-4 PCB reads 13V instead of 5V | This issue was caused by a short between the Collector & Base legs of the Q1 transistor. Solution - remove the short between the Collector & Base legs of Q1. | DONE
|
16359
|
Thu Sep 23 18:18:07 2021 |
Yehonathan | Update | BHD | SOS assembly |
I have noticed that the dumbbells coming back from C&B had glue residue on them. An example is shown in attachment 1: it can be seen that half of the dumbbell's surface is covered with glue.
Jordan gave me P800 sandpaper to remove the glue. I picked up the dumbbells with the dirty face down and slid them over the sandpaper in figure-eights several times to try to keep the surface untilted. Attachment 2 shows the surface from attachment 1 after this process.
Next, the dumbbells will be sent for another C&B. |
16364
|
Wed Sep 29 09:36:26 2021 |
Jordan | Update | SUS | 2" Adapter Ring Parts for SOS Arrived 9/28/21 |
The remaining machined parts for the SOS adapter ring have arrived. I will inspect these today and get them ready for C&B. |
16368
|
Thu Sep 30 14:13:18 2021 |
Anchal | Update | LSC | HV supply to Xend Green laser injection mirrors M1 and M2 PZT restored |
Late elog, original date Sep 15th
We found that the power switch of the HV supply that powers the PZT drivers for M1 and M2 in the X end green laser injection alignment was tripped off. We could not find any log of someone doing it; it is a physical switch. Our only explanation is that this supply might have a solenoid mechanism to shut off during power glitches, and it probably did so on Aug 23 (see 40m/16287). We were able to align the green laser using the PZTs again; however, the maximum power in green transmission from the X arm cavity is now about half of what it was before the glitch. Maybe the seed laser on the X end died a little. |
16370
|
Fri Oct 1 12:12:54 2021 |
Stephen | Update | BHD | ITMY (3002) CAD layout pushed to Box |
Koji requested current state of BHD 3D model. I pushed this to Box after adding the additional SOSs and creating an EASM representation (also posted, Attachment 1). I also post the PDF used to dimension this model (Attachment 2). This process raised some points that I'll jot down here:
1) Because the 40m CAD files are not 100% confirmed to be clean of any student license efforts, we cannot post these files to the PDM Vault or transmit them this way. When working on BHD layout efforts, these assemblies which integrate new design work therefore must be checked for most current revisions of vault-managed files - this Frankenstein approach is not ideal but can be managed for this effort.
2) Because the current files reflect the 40m as-built state (as far as I can tell), I shared the files in a zip directory without increasing the revisions. It is unclear whether revision control is adequate to separate [current 40m state as reflected in CAD] from [planned 40m state after BHD upgrade]. Typically a CAD user would trust that we could find the version N assembly referenced in the drawing from year Y, so we wouldn't hesitate to create future design work in a version N+1 assembly file pending a current drawing. However, this form of revision control is not implemented. Perhaps we want to use configurations to separate design states (in other words, create a parallel model of every changed component, without creating parallel files - these configurations can be selected internal to the assembly without a need to replace files)? Or more simply (and perhaps more tenuously), we could snapshot the Box revisions and create a DCC page which notes the point of departure for BHD efforts?
Anyway, the cold hard facts:
- Box location: 40m/40m_cad_models/Solidworks_40m (LINK)
- Filenames: 3002.zip and 3002 20211001 ITMY BHD for Koji presentation images.easm (healthy disregard for concerns about spaces in filenames) |
16373
|
Mon Oct 4 15:50:31 2021 |
Hang | Update | Calibration | Fisher matrix estimation on XARM parameters |
[Anchal, Hang]
What: Anchal and I measured the XARM OLTF last Thursday.
Goal: 1. measure the 2 zeros and 2 poles in the analog whitening filter, and potentially constrain the cavity pole and an overall gain.
2. Compare the parameter distribution obtained from measurements and that estimated analytically from the Fisher matrix calculation.
3. Obtain the optimized excitation spectrum for future measurements.
How: we inject at C1:SUS-ETMX_LSC_EXC so that each digital count should be directly proportional to the force applied to the suspension. We read out the signal at C1:SUS-ETMX_LSC_OUT_DQ. We use an approximately white excitation in the 50-300 Hz band, and intentionally choose the coherence to be only slightly above 0.9 so that we can get some statistical error to be compared with the Fisher matrix's prediction. For each measurement, we use a bandwidth of 0.25 Hz and 10 averages (no overlapping between adjacent segments).
The 2 zeros and 2 poles in the analog whitening filter and an overall gain are treated as free parameters to be fitted, while the rest are taken from the model by Anchal and Paco (elog:16363). The optical response of the arm cavity seems to be missing in that model, and thus we additionally include a real pole (for the cavity pole) in the model we fit. Thus in total, our model has 6 free parameters: 2 zeros, 3 poles, and 1 overall gain.
The analysis codes are pushed to the 40m/sysID repo.
===========================================================
Results:
Fig. 1 shows one measurement. The gray trace is the data and the olive one is the maximum likelihood estimation. The uncertainty for each frequency bin is shown in the shaded region. Note that the SNR is related to the coherence as
SNR^2 = [coherence / (1-coherence)] * (# of averages),
and for a complex TF written as G = A * exp[1j*Phi], one can show the uncertainty is given by
\Delta A / A = 1/SNR, \Delta \Phi = 1/SNR [rad].
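As a quick numerical illustration of these relations (a plain Python sketch, not code from the 40m/sysID repo):

```python
import numpy as np

def tf_uncertainty(coherence, n_avg):
    """Per-bin statistical uncertainty of a measured TF from its coherence.

    Returns (dA/A, dPhi [rad]), using SNR^2 = [coh / (1 - coh)] * n_avg.
    """
    snr = np.sqrt(coherence / (1.0 - coherence) * n_avg)
    return 1.0 / snr, 1.0 / snr

# e.g. coherence ~0.9 with 10 averages, as in the measurement described above:
dA_over_A, dphi = tf_uncertainty(0.9, 10)
print(dA_over_A)  # ~0.105, i.e. ~10% amplitude error per frequency bin
```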
Fig. 2. The gray contours show the 1- and 2-sigma levels of the model parameters using the Fisher matrix calculation. We repeated the measurement shown in Fig. 1 three times, and the best-fit parameters for each measurement are indicated in the red-crosses. Although we only did a small number of experiments, the amount of scattering is consistent with the Fisher matrix's prediction, giving us some confidence in our analytical calculation.
One thing to note though is that in order to fit the measured data, we would need an additional pole at around 1,500 Hz. This seems a bit low for the cavity pole frequency. For aLIGO w/ 4km arms, the single-arm pole is about 40-50 Hz. The arm is 100 times shorter here and I would naively expect the cavity pole to be at 3k-4k Hz if the test masses are similar.
Fig. 3. We then follow the algorithm outlined in Pintelon & Schoukens, sec. 5.4.2.2, to calculate how we should change the excitation spectrum. Note that here we keep the rms of the force applied to the suspension constant.
Fig. 4 then shows how the expected error changes as we optimize the excitation. It seems in this case a white-ish excitation is already decent (as the TF itself is quite flat in the range of interest), and we only get some mild improvement as we iterate the excitation spectra (note we use the color gray, olive, and purple for the results after the 0th, 1st, and 2nd iteration; same color-coding as in Fig. 3).
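A minimal sketch of the kind of Fisher-matrix estimate used above, for a complex TF measured with per-bin fractional error 1/SNR (numerical derivatives; assumes circularly-symmetric complex noise, and factor-of-2 conventions vary — this is an illustration, not the code in the 40m/sysID repo):

```python
import numpy as np

def fisher_matrix(model, p0, freqs, snr):
    """Fisher matrix F_ij = sum_f Re[dG/dp_i * conj(dG/dp_j)] / sigma_f^2,
    with sigma_f = |G(f)| / SNR the per-bin amplitude uncertainty."""
    p0 = np.asarray(p0, dtype=float)
    G0 = model(freqs, p0)
    sigma2 = (np.abs(G0) / snr) ** 2
    derivs = []
    for i in range(p0.size):
        dp = np.zeros_like(p0)
        dp[i] = 1e-6 * max(abs(p0[i]), 1.0)
        derivs.append((model(freqs, p0 + dp) - model(freqs, p0 - dp)) / (2 * dp[i]))
    F = np.empty((p0.size, p0.size))
    for i in range(p0.size):
        for j in range(p0.size):
            F[i, j] = np.sum(np.real(derivs[i] * np.conj(derivs[j])) / sigma2)
    return F

# Toy example: single-pole model G(f) = g / (1 + 1j*f/fp), params (g, fp),
# over the 50-300 Hz band used in the measurement above.
model = lambda f, p: p[0] / (1 + 1j * f / p[1])
F = fisher_matrix(model, [1.0, 1500.0], np.linspace(50, 300, 1000), snr=10.0)
cov = np.linalg.inv(F)  # 1-sigma parameter errors ~ sqrt(diag(cov))
```

The 1- and 2-sigma contours in Fig. 2 would then be the ellipses of a covariance like `cov`.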
|
16377
|
Mon Oct 4 18:35:12 2021 |
Paco | Update | Electronics | Satellite amp box adapters |
[Paco]
I have finished assembling the 1U adapters from 8 to 5 DB9 conn. for the satellite amp boxes. One thing I had to "hack" was the corners of the front panel end of the PCB. Because the PCB was a bit too wide, it wasn't really flush against the front panel (see Attachment #1), so I just filed the corners by ~ 3 mm and covered with kapton tape to prevent contact between ground planes and the chassis. After this, I made DB9 cables, connected everything in place and attached to the rear panel (Attachment #2). Four units are resting near the CAD machine (next to the bench area), see Attachment #3. |
16378
|
Mon Oct 4 20:46:08 2021 |
Koji | Update | Electronics | Satellite amp box adapters |
Thanks. You should be able to find the chassis-related hardware on the left side of the benchtop drawers at the middle workbench.
Hardware: The special low profile 4-40 standoff screw / 1U handles / screws and washers for the chassis / flat-top screws for chassis panels and lids |
16379
|
Mon Oct 4 21:58:17 2021 |
Tega | Update | Electronics | Sat Amp modifications |
Trying to finish 2 more Sat Amp units so that we have the 7 units needed for the X-arm install.
S2100736 - All good
S2100737 - This unit presented with an issue on the PD1 circuit of the channel 1-4 PCB, where the voltage readings on TP6, TP7, and TP8 are -15.1V, -14.2V, and +14.7V respectively, instead of ~0V. The unit also has an issue on the PD2 circuit of the channel 1-4 PCB, where the voltage readings on TP7 and TP8 are -14.2V and +14.25V respectively, instead of ~0V.
|
16380
|
Tue Oct 5 17:01:20 2021 |
Koji | Update | Electronics | Sat Amp modifications |
Make sure the inputs for the PD amps are open. This is a current amplifier, and we want to leave the input pins open for the test of this circuit.
TP6 is the first stage of the amps (TIA), so this stage has the issue. The usual checks: is the power properly supplied, are the pins properly connected/isolated, and is the op-amp alive or not.
For TP8: if TP8 gets railed, TP5 and TP7 are going to be railed too. Is that the case? If so, check the whitening stage in the same way as above.
If the problem is only in TP5 and/or TP7, it is a differential driver issue. Check the final stage as above. Replacing the op-amp could help.
|