ID | Date | Author | Type | Category | Subject
16365
|
Wed Sep 29 17:10:09 2021 |
Anchal | Summary | CDS | c1teststand problems summary |
[anchal, ian]
We went and collected some information for the overlords to fix the c1teststand DAQ network issue.
- From c1teststand, the c1bhd and c1sus2 computers were not accessible through ssh ("No route to host"), so we restarted both computers (the I/O chassis were ON).
- After the computers restarted, we were able to ssh into c1bhd and c1sus2, and we ran rtcds start c1x06 and rtcds start c1x07.
- The first page in attachment shows the screenshot of GDS_TP screens of the IOP models after this step.
- Then we started the user models by running rtcds start c1bhd and rtcds start c1su2.
- The second page shows the screenshot of the GDS_TP screens. You can see that the DAQ status is red on all the screens and the DC statuses are blank.
- So we checked whether the daqd_ services were running on the fb computer. They were not, so we started them all with sudo systemctl start daqd_*.
- The third page shows the status of all services after this step. The daqd_dc.service remained in a failed state.
- open-mx_stream.service was not even loaded in fb. We started it by running sudo systemctl start open-mx_stream.service.
- The fourth page shows the status of this service. It started without any errors.
- However, when we went to check the status of mx_stream.service on c1bhd and c1sus2, it was not loaded; when we tried to start it, it went to a failed state and kept trying to restart every 3 seconds without success (see pages 5 and 6).
- Finally, we also took a screenshot of timedatectl command output on the three computers fb, c1bhd, and c1sus2 to show that their times were not synced at all.
- The ntp service is running on fb but it probably does not have access to any of the servers it is following.
- The timesyncd on c1bhd and c1sus2 (FE machines) is also running but shows the status 'Idle', which suggests they are unable to find the ntp signal from fb.
- I believe this issue is similar to what Jamie fixed on the fb1 on the martian network in 40m/16302. Since the fb on the c1teststand network was cloned before this fix, it might have the same dysfunctional ntp as well.
We will try to get internet access to c1teststand soon. Meanwhile, someone with more experience and knowledge should look into this situation and try to fix it. We need to test the c1teststand within a few weeks now. |
Attachment 1: c1teststand_issues_summary.pdf
|
|
16366
|
Thu Sep 30 11:46:33 2021 |
Ian MacMillan | Summary | Computers | Quantization Noise Calculation Summary |
First and foremost, I have the updated Bode plot with the mode moved to 10 Hz. See Attachment 1. Note that the comparison measurement is a % difference, whereas in the previous Bode plot it was just the difference. I also wrapped the phase so that jumps from -180 to 180 are moved down. This eliminates massive jumps in the % difference.
Next, I have two comparison plots: 32-bit and 64-bit. As mentioned above, I moved the mode to 10 Hz and just excited both systems at 3.4283 Hz with an amplitude of 1. As we can see on the plot, the two models are practically the same when using 64 bits. With the 32-bit system, we can see that the noise in the IIR filter is much greater than in the state-space model at frequencies greater than our mode.
Note about windowing and averaging: I used a Hanning window with averaging over 4 neighboring points. I came to this number after looking at the results with less and with more averaging. In the code, this can be seen as nperseg=num_samples/4, which is then fed into signal.welch. |
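For illustration, a minimal sketch of this kind of comparison (not the actual analysis code: the sample rate and the stand-in filter below are assumptions) would drive a filter with a 3.4283 Hz sine in single and double precision and compare the PSDs using a Hann window with nperseg = num_samples/4:
import numpy as np
from scipy import signal

fs = 16384                        # assumed sample rate (Hz)
num_samples = fs * 64
t = np.arange(num_samples) / fs
drive = np.sin(2 * np.pi * 3.4283 * t)

# placeholder low-pass near 10 Hz standing in for the actual mode/filter
sos = signal.butter(2, 10, fs=fs, output='sos')

out64 = signal.sosfilt(sos, drive)                   # double precision
out32 = signal.sosfilt(sos.astype(np.float32),       # single precision
                       drive.astype(np.float32))
# (some scipy versions may upcast internally; a hand-written biquad loop
# would guarantee true 32-bit arithmetic)

f, p64 = signal.welch(out64, fs, window='hann', nperseg=num_samples // 4)
f, p32 = signal.welch(np.asarray(out32, dtype=np.float64), fs,
                      window='hann', nperseg=num_samples // 4)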
Attachment 1: SS-IIR-Bode.pdf
|
|
Attachment 2: PSD_32bit.pdf
|
|
Attachment 3: PSD_64bit.pdf
|
|
16367
|
Thu Sep 30 14:09:37 2021 |
Anchal | Summary | CDS | New way to ssh into c1teststand |
Late elog, original time Wed Sep 29 14:09:59 2021
We opened a new port (22220) on the router to the martian subnetwork, which is forwarded to port 22 on c1teststand (192.168.113.245), allowing direct ssh access to the c1teststand computer from the outside world using:
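(The command has the generic form below; the actual public-facing address is redacted, so the host here is just a placeholder and the controls username is an assumption.)
ssh -p 22220 controls@<public address of the 40m router>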
Check out this wiki page for the unredacted info. |
16369
|
Thu Sep 30 18:04:31 2021 |
Paco | Summary | Calibration | XARM OLTF (calibration) with three lines |
[anchal, paco]
We repeated the same procedure as before, but with 3 different lines at 55.511, 154.11, and 1071.11 Hz. We overlay the OLTF magnitudes and phases with our latest model (which we have updated with Koji's help) and include the rms uncertainties as errorbars in Attachment #1.
We also plot the noise ASDs of the calibrated OLTF magnitudes at the line frequencies in Attachment #2. These curves are created by calculating the power spectral density of time series of OLTF values at the line frequencies, generated from the demodulated XARM_IN and ETMX_LSC_OUT signals. We have overlaid the TRX noise spectrum here as an attempt to see if we can budget the noise measured in the values of G to the fluctuation in optical gain due to changing power in the arms. We multiplied the transmission ASD by the value of the OLTF at those frequencies as the transfer function from normalized optical gain to the total transfer function value.
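For reference, a minimal sketch of this demodulation step (the sample rate, the 1-second averaging, and the synthetic placeholder time series below are assumptions, not the actual analysis code or channel data):
import numpy as np
from scipy import signal

fs = 2048.0                       # assumed (decimated) sample rate
f_line = 55.511                   # one of the calibration lines
N = int(fs * 600)                 # 10 minutes of data
t = np.arange(N) / fs
# placeholders standing in for the XARM_IN and ETMX_LSC_OUT time series
x_in = np.sin(2 * np.pi * f_line * t) + 1e-3 * np.random.randn(N)
x_out = 2.0 * np.sin(2 * np.pi * f_line * t + 0.3) + 1e-3 * np.random.randn(N)

lo = np.exp(-2j * np.pi * f_line * t)            # local oscillator at the line
# demodulate, then low-pass/decimate by averaging over 1-second blocks
dem_in = (x_in * lo).reshape(-1, int(fs)).mean(axis=1)
dem_out = (x_out * lo).reshape(-1, int(fs)).mean(axis=1)
G = dem_out / dem_in                             # OLTF value at f_line vs time

f, psd = signal.welch(np.abs(G), fs=1.0, nperseg=len(G) // 4)
asd = np.sqrt(psd)                               # noise ASD of |G| at f_line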
It is weird that the fluctuation in transmission power at 1 mHz always crosses the total noise in the OLTF value for all calibration lines. This could be an artifact of our data analysis, though.
Even if the contribution of the fluctuating power is correct, there is remaining excess noise in the OLTF to be budgeted. |
Attachment 1: XARM_OLTF_Model_and_Meas.pdf
|
|
Attachment 2: Gmag_ASD_nb_withTRX.pdf
|
|
16371
|
Fri Oct 1 14:25:27 2021 |
yehonathan | Summary | SUS | PRM and BS Angular Actuation transfer function magnitude measurements |
{Paco, Yehonathan, Hang}
We measured the PRMI sensing matrix. Attachment 1 shows the results; the magnitude of the response is not calibrated. The orthogonality between PRCL and MICH is still bad (see previous measurement for reference).
Hang suggested that since MICH actuation with BS and PRM is not trivial (0.5*BS - 0.34*PRM) and since PRCL is so sensitive to PRM movement, there might be a leakage to PRCL when we are actuating on MICH. So there may be room to tune the PRM coefficient in the MICH output matrix.
Attachment 2 shows the sensing matrix after we changed the MICH->PRM coefficient in the OSC output matrix to -0.1.
It seems like it made things a little bit better but not much and also there is a huge uncertainty in the MICH sensing. |
Attachment 1: MICH_PRM_-0.34.png
|
|
Attachment 2: MICH_PRM_-0.1.png
|
|
16372
|
Mon Oct 4 11:05:44 2021 |
Anchal | Summary | CDS | c1teststand problems summary |
[Anchal, Paco]
We tried to fix the ntp synchronization in c1teststand today by repeating the steps listed in 40m/16302. Even though the cloned fb1 now has the exact same package version, conf & service files, and status, the FE machines (c1bhd and c1sus2) fail to sync their time; timedatectl shows the same status 'Idle'. We also dug a bit deeper into the error messages of daqd_dc on the cloned fb1 and mx_stream on the FE machines, and have some error messages to report here.
Attempt on fixing the ntp
- We copied the ntp package version 1:4.2.6 deb file from /var/cache/apt/archives/ntp_1%3a4.2.6.p5+dfsg-7+deb8u3_amd64.deb on the martian fb1 to the cloned fb1 and ran:
controls@fb1:~ 0$ sudo dpkg -i ntp_1%3a4.2.6.p5+dfsg-7+deb8u3_amd64.deb
- We got error messages about missing dependencies of libopts25 and libssl1.1. We downloaded oldoldstable jessie versions of these packages from here and here. We ensured that these versions are higher than the required versions for ntp. We installed them with:
controls@fb1:~ 0$ sudo dpkg -i libopts25_5.18.12-3_amd64.deb
controls@fb1:~ 0$ sudo dpkg -i libssl1.1_1.1.0l-1~deb9u4_amd64.deb
- Then we installed the ntp package as described above. It asked us if we want to keep the configuration file, we pressed Y.
- However, we decided to make the configuration and service files on the cloned fb1 exactly the same as on the martian fb1. We copied the /etc/ntp.conf and /etc/systemd/system/ntp.service files from the martian fb1 to the cloned fb1 in the same locations. Then we enabled ntp, reloaded the daemon, and restarted the ntp service:
controls@fb1:~ 0$ sudo systemctl enable ntp
controls@fb1:~ 0$ sudo systemctl daemon-reload
controls@fb1:~ 0$ sudo systemctl restart ntp
- But of course, since fb1 doesn't have internet access, we got some errors in the status of ntp.service:
controls@fb1:~ 0$ sudo systemctl status ntp
● ntp.service - NTP daemon (custom service)
Loaded: loaded (/etc/systemd/system/ntp.service; enabled)
Active: active (running) since Mon 2021-10-04 17:12:58 UTC; 1h 15min ago
Main PID: 26807 (code=exited, status=0/SUCCESS)
CGroup: /system.slice/ntp.service
├─30408 /usr/sbin/ntpd -p /var/run/ntpd.pid -g -u 105:107
└─30525 /usr/sbin/ntpd -p /var/run/ntpd.pid -g -u 105:107
Oct 04 17:48:42 fb1 ntpd_intres[30525]: host name not found: 2.debian.pool.ntp.org
Oct 04 17:48:52 fb1 ntpd_intres[30525]: host name not found: 3.debian.pool.ntp.org
Oct 04 18:05:05 fb1 ntpd_intres[30525]: host name not found: 0.debian.pool.ntp.org
Oct 04 18:05:15 fb1 ntpd_intres[30525]: host name not found: 1.debian.pool.ntp.org
Oct 04 18:05:25 fb1 ntpd_intres[30525]: host name not found: 2.debian.pool.ntp.org
Oct 04 18:05:35 fb1 ntpd_intres[30525]: host name not found: 3.debian.pool.ntp.org
Oct 04 18:21:48 fb1 ntpd_intres[30525]: host name not found: 0.debian.pool.ntp.org
Oct 04 18:21:58 fb1 ntpd_intres[30525]: host name not found: 1.debian.pool.ntp.org
Oct 04 18:22:08 fb1 ntpd_intres[30525]: host name not found: 2.debian.pool.ntp.org
Oct 04 18:22:18 fb1 ntpd_intres[30525]: host name not found: 3.debian.pool.ntp.org
- But the ntpq command gives the same output as the ntpq command on the martian fb1 (except for the source servers), i.e. the broadcasting is happening in the same manner (see the configuration sketch after this list):
controls@fb1:~ 0$ ntpq -p
remote refid st t when poll reach delay offset jitter
==============================================================================
192.168.123.255 .BCST. 16 u - 64 0 0.000 0.000 0.000
- On the FE machine side though, systemd-timesyncd is still unable to read the time signal from fb1 and shows the status as idle:
controls@c1bhd:~ 3$ timedatectl
Local time: Mon 2021-10-04 18:34:38 UTC
Universal time: Mon 2021-10-04 18:34:38 UTC
RTC time: Mon 2021-10-04 18:34:38
Time zone: Etc/UTC (UTC, +0000)
NTP enabled: yes
NTP synchronized: no
RTC in local TZ: no
DST active: n/a
controls@c1bhd:~ 0$ systemctl status systemd-timesyncd -l
● systemd-timesyncd.service - Network Time Synchronization
Loaded: loaded (/lib/systemd/system/systemd-timesyncd.service; enabled)
Active: active (running) since Mon 2021-10-04 17:21:29 UTC; 1h 13min ago
Docs: man:systemd-timesyncd.service(8)
Main PID: 244 (systemd-timesyn)
Status: "Idle."
CGroup: /system.slice/systemd-timesyncd.service
└─244 /lib/systemd/systemd-timesyncd
- So the time synchronization is still not working. We expected the FE machines to just synchronize to fb1 even though it doesn't have any upstream ntp server to synchronize to. But that didn't happen.
- I'm (Anchal) working on getting internet access to c1teststand computers.
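For reference, the broadcast behaviour seen in the ntpq output above would come from a line of this form in /etc/ntp.conf (the configuration copied from the martian fb1 likely contains more than just this):
broadcast 192.168.123.255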
Digging into mx_stream/daqd_dc errors:
- We went and changed the Restart field in /etc/systemd/system/daqd_dc.service on the cloned fb1 to 2. This lets the service fail and stop restarting after two attempts, so we can see the real error message instead of the systemd message that the service is restarting too often. We got the following:
controls@fb1:~ 3$ sudo systemctl status daqd_dc -l
● daqd_dc.service - Advanced LIGO RTS daqd data concentrator
Loaded: loaded (/etc/systemd/system/daqd_dc.service; enabled)
Active: failed (Result: exit-code) since Mon 2021-10-04 17:50:25 UTC; 22s ago
Process: 715 ExecStart=/usr/bin/daqd_dc_mx -c /opt/rtcds/caltech/c1/target/daqd/daqdrc.dc (code=exited, status=1/FAILURE)
Main PID: 715 (code=exited, status=1/FAILURE)
Oct 04 17:50:24 fb1 systemd[1]: Started Advanced LIGO RTS daqd data concentrator.
Oct 04 17:50:25 fb1 daqd_dc_mx[715]: [Mon Oct 4 17:50:25 2021] Unable to set to nice = -20 -error Unknown error -1
Oct 04 17:50:25 fb1 daqd_dc_mx[715]: Failed to do mx_get_info: MX not initialized.
Oct 04 17:50:25 fb1 daqd_dc_mx[715]: 263596
Oct 04 17:50:25 fb1 systemd[1]: daqd_dc.service: main process exited, code=exited, status=1/FAILURE
Oct 04 17:50:25 fb1 systemd[1]: Unit daqd_dc.service entered failed state.
- It seemed like the only thing the daqd_dc process doesn't like is that the mx_stream services are in a failed state on the FE computers. So we did the same procedure on the FE machines to get the real error messages:
controls@fb1:~ 0$ sudo chroot /diskless/root
fb1:/ 0#
fb1:/ 0# sudo nano /etc/systemd/system/mx_stream.service
fb1:/ 0#
fb1:/ 0# exit
- Then I ssh'ed into c1bhd to see the error message on mx_stream service properly.
controls@c1bhd:~ 0$ sudo systemctl daemon-reload
controls@c1bhd:~ 0$ sudo systemctl restart mx_stream
controls@c1bhd:~ 0$ sudo systemctl status mx_stream -l
● mx_stream.service - Advanced LIGO RTS front end mx stream
Loaded: loaded (/etc/systemd/system/mx_stream.service; enabled)
Active: failed (Result: exit-code) since Mon 2021-10-04 17:57:20 UTC; 24s ago
Process: 11832 ExecStart=/etc/mx_stream_exec (code=exited, status=1/FAILURE)
Main PID: 11832 (code=exited, status=1/FAILURE)
Oct 04 17:57:20 c1bhd systemd[1]: Starting Advanced LIGO RTS front end mx stream...
Oct 04 17:57:20 c1bhd systemd[1]: Started Advanced LIGO RTS front end mx stream.
Oct 04 17:57:20 c1bhd mx_stream_exec[11832]: send len = 263596
Oct 04 17:57:20 c1bhd mx_stream_exec[11832]: OMX: Failed to find peer index of board 00:00:00:00:00:00 (Peer Not Found in the Table)
Oct 04 17:57:20 c1bhd mx_stream_exec[11832]: mx_connect failed Nic ID not Found in Peer Table
Oct 04 17:57:20 c1bhd mx_stream_exec[11832]: c1x06_daq mmapped address is 0x7f516a97a000
Oct 04 17:57:20 c1bhd mx_stream_exec[11832]: c1bhd_daq mmapped address is 0x7f516697a000
Oct 04 17:57:20 c1bhd systemd[1]: mx_stream.service: main process exited, code=exited, status=1/FAILURE
Oct 04 17:57:20 c1bhd systemd[1]: Unit mx_stream.service entered failed state.
- c1sus2 shows the same error. I'm not sure I understand these errors at all, but they seem to have nothing to do with timing issues!
As usual, some help would be helpful. |
16374
|
Mon Oct 4 16:00:57 2021 |
Yehonathan | Summary | SUS | PRM and BS Angular Actuation transfer function magnitude measurements |
{Yehonathan, Anchal}
In an attempt to fix the actuation of the PRMI DOFs, we set out to modify the output matrices of the BS and PRM so that the responses of the coils are as similar to each other as possible.
To do so, we used the responses at a single frequency from the previous measurement to infer the output matrix coefficients that will equalize the OpLev responses (arbitrarily taking the LL coil as the reference). This corrected the imbalance in BS almost completely, while it didn't really work for PRM (see attachment 1).
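A minimal sketch of this coefficient inference (the response numbers below are made up for illustration; the real ones come from the single-frequency OpLev transfer function measurement):
# measured OpLev response magnitude of each coil at the excitation frequency
resp = {'UL': 0.95, 'UR': 1.07, 'LR': 1.02, 'LL': 1.00}    # LL is the reference
old_col = {'UL': 1.0, 'UR': 1.0, 'LR': -1.0, 'LL': -1.0}   # e.g. one output-matrix column

# scale each coil's output-matrix element so its response matches LL's
new_col = {c: old_col[c] * resp['LL'] / resp[c] for c in old_col}
print(new_col)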
The new output matrices are shown in attachment 2-3. |
Attachment 1: BS_PRM_ANG_ACT_TF_20211004.pdf
|
|
Attachment 2: BS_out_mat_20211004.txt
|
9.839999999999999858e-01 8.965770586285104482e-01 9.486710352885977526e-01 3.099999999999999978e-01
1.016000000000000014e+00 9.750242104232501594e-01 -9.291967546765563801e-01 3.099999999999999978e-01
9.839999999999999858e-01 -1.086765190351774768e+00 1.009798093279114628e+00 3.099999999999999978e-01
1.016000000000000014e+00 -1.031706735496689786e+00 -1.103142995587099939e+00 3.099999999999999978e-01
0.000000000000000000e+00 0.000000000000000000e+00 0.000000000000000000e+00 1.000000000000000000e+00
|
Attachment 3: PRM_out_mat_20211004.txt
|
1.000000000000000000e+00 1.033455230230304611e+00 9.844796282226820905e-01 0.000000000000000000e+00
1.000000000000000000e+00 9.342329554807877745e-01 -1.021296201828568506e+00 0.000000000000000000e+00
1.000000000000000000e+00 -1.009214777246558503e+00 9.965113815550634691e-01 0.000000000000000000e+00
1.000000000000000000e+00 -1.020129700278567197e+00 -9.973560027273553619e-01 0.000000000000000000e+00
0.000000000000000000e+00 0.000000000000000000e+00 0.000000000000000000e+00 1.000000000000000000e+00
|
16375
|
Mon Oct 4 16:10:09 2021 |
rana | Summary | SUS | PRM and BS Angular Actuation transfer function magnitude measurements |
not sure that this is necessary. If you look at the previous entries Gautam made on this topic, it is clear that the BS/PRM PRMI matrix is snafu, whereas the ITM PRMI matrix is not.
Is it possible that the ~5% coil imbalance of the BS/PRM can explain the observed sensing matrix? If not, then there is no need to balance these coils. |
16376
|
Mon Oct 4 18:00:16 2021 |
Koji | Summary | CDS | c1teststand problems summary |
I don't know anything about mx/open-mx, but you also need open-mx, don't you?
controls@c1ioo:~ 0$ systemctl status *mx*
● open-mx.service - LSB: starts Open-MX driver
Loaded: loaded (/etc/init.d/open-mx)
Active: active (running) since Wed 2021-09-22 11:54:39 PDT; 1 weeks 5 days ago
Process: 470 ExecStart=/etc/init.d/open-mx start (code=exited, status=0/SUCCESS)
CGroup: /system.slice/open-mx.service
└─620 /opt/3.2.88-csp/open-mx-1.5.4/bin/fma -d
● mx_stream.service - Advanced LIGO RTS front end mx stream
Loaded: loaded (/etc/systemd/system/mx_stream.service; enabled)
Active: active (running) since Wed 2021-09-22 12:08:00 PDT; 1 weeks 5 days ago
Main PID: 5785 (mx_stream)
CGroup: /system.slice/mx_stream.service
└─5785 /usr/bin/mx_stream -e 0 -r 0 -w 0 -W 0 -s c1x03 c1ioo c1als c1omc -d fb1:0
|
16381
|
Tue Oct 5 17:58:52 2021 |
Anchal | Summary | CDS | c1teststand problems summary |
The open-mx service is running successfully on the fb1(clone), c1bhd, and c1sus2.
Quote: |
I don't know anything about mx/open-mx, but you also need open-mx,don't you?
|
|
16382
|
Tue Oct 5 18:00:53 2021 |
Anchal | Summary | CDS | c1teststand time synchronization working now |
Today I got a new router that I used to connect c1teststand, fb1, and chiara. I was able to see internet access on c1teststand and fb1, but not on chiara. I'm not sure why that is the case.
The good news is that the NTP server on fb1(clone) is working fine now, and both FE computers, c1bhd and c1sus2, are successfully synchronized to the fb1(clone) NTP server. This resolves any possible timing issues in this DAQ network.
On running the IOP and user models, however, I see the same errors as mentioned in 40m/16372. Something to do with:
Oct 06 00:47:56 c1sus2 mx_stream_exec[21796]: OMX: Failed to find peer index of board 00:00:00:00:00:00 (Peer Not Found in the Table)
Oct 06 00:47:56 c1sus2 mx_stream_exec[21796]: mx_connect failed Nic ID not Found in Peer Table
Oct 06 00:47:56 c1sus2 mx_stream_exec[21796]: c1x07_daq mmapped address is 0x7fa4819cc000
Oct 06 00:47:56 c1sus2 mx_stream_exec[21796]: c1su2_daq mmapped address is 0x7fa47d9cc000
Thu Oct 7 17:04:31 2021
I fixed the issue of chiara not getting internet. Now c1teststand, fb1, and chiara all have internet connections. It was an issue with the default gateway, the interface, and finding the DNS. I have found the correct settings now. |
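(For the record, on a Debian-style machine these settings live in something like /etc/network/interfaces; the sketch below is only illustrative, with placeholders instead of the actual values used.)
auto eth0
iface eth0 inet static
    address <static IP on the teststand subnet>
    netmask 255.255.255.0
    gateway <IP of the new router>
    dns-nameservers <DNS server IP>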
16383
|
Tue Oct 5 20:04:22 2021 |
Paco | Summary | SUS | PRM and BS Angular Actuation transfer function magnitude measurements |
[Paco, Rana]
We had a look at the BS actuation. Along the way we created a couple of issues that we fixed. A summary is below.
- First, we locked MICH. While doing this, we used the /users/Templates/ndscope/LSC/MICH.yml ndscope template to monitor some channels. I edited the yaml file to look at C1:LSC-ASDC_OUT_DQ instead of the REFL_DC. Rana pointed out that the C1:LSC-MICH_OUT_DQ (MICH control point) had a big range (~ 5000 counts rms) and this should not be like that.
- We tried to investigate the aforementioned issue by looking at the whitening / unwhitening filters, but all the slow epics channels were "white" on the medm screen. Looking under CDS/slow channel monitors, we realized that both c1iscaux and c1auxey were weird, so we tried telnet to c1iscaux without success. Therefore, we followed the recommended wiki procedure of hard rebooting this machine. While inside the lab and looking for this machine, we touched things around the 'rfpd' rack, and once we were back in the control room, we couldn't see any light on the AS port camera. But the whitening filter medm screens were back up.
- While rana ssh'd into c1auxey to investigate its status and burtrestored the c1iscaux channels, we looked at trends to figure out if anything had changed (for example TT1 or TT2), but this wasn't the case. We decided to go back inside to check the actual REFL beams and noticed the beam was grossly misaligned (clipping)... so we blamed it on the TTs and again went around and moved some stuff around the 'rfpd' rack. We didn't really connect or disconnect anything, but once we were back in the control room, light was coming from the AS port again. This is a weird mystery and we should systematically try to repeat this and fix the actual issue.
- We restored the MICH, and returned to BS actuation problems. Here, we essentially devised a scheme to inject noise at 310.97 Hz and 313.74 Hz. The choice is twofold: first, these lie outside the MICH loop UGF (~150 Hz), and second, they match the sensing matrix OSC frequencies, so it's more appropriate for a comparison.
- We injected two lines using the BS SUS LOCKIN1 and LOCKIN2 oscillators so we could probe two coils at once, with the LSC loop closed, and read back using the C1:LSC-MICH_IN1_DQ channel. We excited with amplitudes of 1234.0 counts and 1254 counts respectively (to match the ~2% difference in frequency) and noted that the magnitude response in UR was 10% larger than UL, LL, and LR, which were close to each other at the 2% level.
[Paco]
After rana left, I did a second pass at the BS actuation. I took TF measurements at the oscillator frequencies noted above using diaggui, and summarize the results below:
TF | UL (310.97 Hz) | UR (313.74 Hz) | LL (310.97 Hz) | LR (313.74 Hz)
Magnitude (dB) | 93.20 | 92.20 | 94.27 | 93.85
Phase (deg) | -128.3 | -127.9 | -128.4 | -127.5
This procedure should be done with PRM as well and using the PRCL instead of MICH. |
16385
|
Wed Oct 6 15:39:29 2021 |
Anchal | Summary | SUS | PRM and BS Angular Actuation transfer function magnitude measurements |
Note that your tests were done with the output matrix for BS and PRM in the compensated state as done in 40m/16374. The changes made there were supposed to clear out any coil actuation imbalance in the angular degrees of freedom. |
16391
|
Mon Oct 11 17:31:25 2021 |
Anchal | Summary | CDS | Fixed mounting of mx devices in fb. daqd_dc is running now. |
I compared the fb1 on the main network with the cloned fb1 and found a crucial difference. The main fb1, where CDS is running fine, has mx devices mounted in /dev/, like mx0, mx1 up to mx7, mxctl, mxctlp, mxp0, mxp1 up to mxp7. The cloned fb does not have any of these mx devices mounted. I think this is where the issue was coming from.
However, lspci | grep 'Myri' shows the following output on both computers:
controls@fb1:/dev 0$ lspci | grep 'Myri'
02:00.0 Ethernet controller: MYRICOM Inc. Myri-10G Dual-Protocol NIC (rev 01)
This means that the computer detects the card in the PCIe slot.
I tried to add this step to /etc/rc.local so it runs at every boot, but it did not work. So for now, I'll just do this step manually every time. Once the devices are loaded, we get:
controls@fb1:/etc 0$ ls /dev/*mx*
/dev/mx0 /dev/mx4 /dev/mxctl /dev/mxp2 /dev/mxp6 /dev/ptmx
/dev/mx1 /dev/mx5 /dev/mxctlp /dev/mxp3 /dev/mxp7
/dev/mx2 /dev/mx6 /dev/mxp0 /dev/mxp4 /dev/open-mx
/dev/mx3 /dev/mx7 /dev/mxp1 /dev/mxp5 /dev/open-mx-raw
Then, restarting all the daqd_ processes, I found that daqd_dc was now running successfully. Here is the status:
controls@fb1:/etc 0$ sudo systemctl status daqd_* -l
● daqd_dc.service - Advanced LIGO RTS daqd data concentrator
Loaded: loaded (/etc/systemd/system/daqd_dc.service; enabled)
Active: active (running) since Mon 2021-10-11 17:48:00 PDT; 23min ago
Main PID: 2308 (daqd_dc_mx)
CGroup: /daqd.slice/daqd_dc.service
├─2308 /usr/bin/daqd_dc_mx -c /opt/rtcds/caltech/c1/target/daqd/daqdrc.dc
└─2370 caRepeater
Oct 11 17:48:07 fb1 daqd_dc_mx[2308]: mx receiver 006 thread priority error Operation not permitted[Mon Oct 11 17:48:06 2021]
Oct 11 17:48:07 fb1 daqd_dc_mx[2308]: mx receiver 005 thread put on CPU 0
Oct 11 17:48:07 fb1 daqd_dc_mx[2308]: [Mon Oct 11 17:48:06 2021] [Mon Oct 11 17:48:06 2021] mx receiver 006 thread put on CPU 0
Oct 11 17:48:07 fb1 daqd_dc_mx[2308]: mx receiver 007 thread put on CPU 0
Oct 11 17:48:07 fb1 daqd_dc_mx[2308]: [Mon Oct 11 17:48:06 2021] mx receiver 003 thread - label dqmx003 pid=2362
Oct 11 17:48:07 fb1 daqd_dc_mx[2308]: [Mon Oct 11 17:48:06 2021] mx receiver 003 thread priority error Operation not permitted
Oct 11 17:48:07 fb1 daqd_dc_mx[2308]: [Mon Oct 11 17:48:06 2021] mx receiver 003 thread put on CPU 0
Oct 11 17:48:07 fb1 daqd_dc_mx[2308]: warning:regcache incompatible with malloc
Oct 11 17:48:07 fb1 daqd_dc_mx[2308]: [Mon Oct 11 17:48:06 2021] EDCU has 410 channels configured; first=0
Oct 11 17:49:06 fb1 daqd_dc_mx[2308]: [Mon Oct 11 17:49:06 2021] ->4: clear crc
● daqd_fw.service - Advanced LIGO RTS daqd frame writer
Loaded: loaded (/etc/systemd/system/daqd_fw.service; enabled)
Active: active (running) since Mon 2021-10-11 17:48:01 PDT; 23min ago
Main PID: 2318 (daqd_fw)
CGroup: /daqd.slice/daqd_fw.service
└─2318 /usr/bin/daqd_fw -c /opt/rtcds/caltech/c1/target/daqd/daqdrc.fw
Oct 11 17:48:09 fb1 daqd_fw[2318]: [Mon Oct 11 17:48:09 2021] [Mon Oct 11 17:48:09 2021] Producer thread - label dqproddbg pid=2440
Oct 11 17:48:09 fb1 daqd_fw[2318]: Producer crc thread priority error Operation not permitted
Oct 11 17:48:09 fb1 daqd_fw[2318]: [Mon Oct 11 17:48:09 2021] [Mon Oct 11 17:48:09 2021] Producer crc thread put on CPU 0
Oct 11 17:48:09 fb1 daqd_fw[2318]: Producer thread priority error Operation not permitted
Oct 11 17:48:09 fb1 daqd_fw[2318]: [Mon Oct 11 17:48:09 2021] Producer thread put on CPU 0
Oct 11 17:48:09 fb1 daqd_fw[2318]: [Mon Oct 11 17:48:09 2021] Producer thread - label dqprod pid=2434
Oct 11 17:48:09 fb1 daqd_fw[2318]: [Mon Oct 11 17:48:09 2021] Producer thread priority error Operation not permitted
Oct 11 17:48:09 fb1 daqd_fw[2318]: [Mon Oct 11 17:48:09 2021] Producer thread put on CPU 0
Oct 11 17:48:10 fb1 daqd_fw[2318]: [Mon Oct 11 17:48:10 2021] Minute trender made GPS time correction; gps=1318034906; gps%60=26
Oct 11 17:49:09 fb1 daqd_fw[2318]: [Mon Oct 11 17:49:09 2021] ->3: clear crc
● daqd_rcv.service - Advanced LIGO RTS daqd testpoint receiver
Loaded: loaded (/etc/systemd/system/daqd_rcv.service; enabled)
Active: active (running) since Mon 2021-10-11 17:48:00 PDT; 23min ago
Main PID: 2311 (daqd_rcv)
CGroup: /daqd.slice/daqd_rcv.service
└─2311 /usr/bin/daqd_rcv -c /opt/rtcds/caltech/c1/target/daqd/daqdrc.rcv
Oct 11 17:50:21 fb1 daqd_rcv[2311]: Creating C1:DAQ-NDS0_C1X07_CRC_SUM
Oct 11 17:50:21 fb1 daqd_rcv[2311]: Creating C1:DAQ-NDS0_C1BHD_STATUS
Oct 11 17:50:21 fb1 daqd_rcv[2311]: Creating C1:DAQ-NDS0_C1BHD_CRC_CPS
Oct 11 17:50:21 fb1 daqd_rcv[2311]: Creating C1:DAQ-NDS0_C1BHD_CRC_SUM
Oct 11 17:50:21 fb1 daqd_rcv[2311]: Creating C1:DAQ-NDS0_C1SU2_STATUS
Oct 11 17:50:21 fb1 daqd_rcv[2311]: Creating C1:DAQ-NDS0_C1SU2_CRC_CPS
Oct 11 17:50:21 fb1 daqd_rcv[2311]: Creating C1:DAQ-NDS0_C1SU2_CRC_SUM
Oct 11 17:50:21 fb1 daqd_rcv[2311]: Creating C1:DAQ-NDS0_C1OM[Mon Oct 11 17:50:21 2021] Epics server started
Oct 11 17:50:24 fb1 daqd_rcv[2311]: [Mon Oct 11 17:50:24 2021] Minute trender made GPS time correction; gps=1318035040; gps%120=40
Oct 11 17:51:21 fb1 daqd_rcv[2311]: [Mon Oct 11 17:51:21 2021] ->3: clear crc
Now, even before starting the FE models, I see the DC status as 0x2bad in the CDS screens of the IOP and user models. The mx_stream service remains in a failed state on the FE machines, and remains the same even after restarting the service.
controls@c1sus2:~ 0$ sudo systemctl status mx_stream -l
● mx_stream.service - Advanced LIGO RTS front end mx stream
Loaded: loaded (/etc/systemd/system/mx_stream.service; enabled)
Active: failed (Result: exit-code) since Mon 2021-10-11 17:50:26 PDT; 15min ago
Process: 382 ExecStart=/etc/mx_stream_exec (code=exited, status=1/FAILURE)
Main PID: 382 (code=exited, status=1/FAILURE)
Oct 11 17:50:25 c1sus2 systemd[1]: Starting Advanced LIGO RTS front end mx stream...
Oct 11 17:50:25 c1sus2 systemd[1]: Started Advanced LIGO RTS front end mx stream.
Oct 11 17:50:25 c1sus2 mx_stream_exec[382]: Failed to open endpoint Not initialized
Oct 11 17:50:26 c1sus2 systemd[1]: mx_stream.service: main process exited, code=exited, status=1/FAILURE
Oct 11 17:50:26 c1sus2 systemd[1]: Unit mx_stream.service entered failed state.
But if I restart the mx_stream service before starting the rtcds models, the mx_stream service starts successfully:
controls@c1sus2:~ 0$ sudo systemctl restart mx_stream
controls@c1sus2:~ 0$ sudo systemctl status mx_stream -l
● mx_stream.service - Advanced LIGO RTS front end mx stream
Loaded: loaded (/etc/systemd/system/mx_stream.service; enabled)
Active: active (running) since Mon 2021-10-11 18:14:13 PDT; 25s ago
Main PID: 1337 (mx_stream)
CGroup: /system.slice/mx_stream.service
└─1337 /usr/bin/mx_stream -e 0 -r 0 -w 0 -W 0 -s c1x07 c1su2 -d fb1:0
Oct 11 18:14:13 c1sus2 systemd[1]: Starting Advanced LIGO RTS front end mx stream...
Oct 11 18:14:13 c1sus2 systemd[1]: Started Advanced LIGO RTS front end mx stream.
Oct 11 18:14:13 c1sus2 mx_stream_exec[1337]: send len = 263596
Oct 11 18:14:13 c1sus2 mx_stream_exec[1337]: Connection Made
However, the DC status on the CDS screens still shows 0x2bad. As soon as I start the rtcds model c1x07 (the IOP model for c1sus2), the mx_stream service fails:
controls@c1sus2:~ 0$ sudo systemctl status mx_stream -l
● mx_stream.service - Advanced LIGO RTS front end mx stream
Loaded: loaded (/etc/systemd/system/mx_stream.service; enabled)
Active: failed (Result: exit-code) since Mon 2021-10-11 18:18:03 PDT; 27s ago
Process: 1337 ExecStart=/etc/mx_stream_exec (code=exited, status=1/FAILURE)
Main PID: 1337 (code=exited, status=1/FAILURE)
Oct 11 18:14:13 c1sus2 systemd[1]: Starting Advanced LIGO RTS front end mx stream...
Oct 11 18:14:13 c1sus2 systemd[1]: Started Advanced LIGO RTS front end mx stream.
Oct 11 18:14:13 c1sus2 mx_stream_exec[1337]: send len = 263596
Oct 11 18:14:13 c1sus2 mx_stream_exec[1337]: Connection Made
Oct 11 18:18:03 c1sus2 mx_stream_exec[1337]: isendxxx failed with status Remote Endpoint Unreachable
Oct 11 18:18:03 c1sus2 mx_stream_exec[1337]: disconnected from the sender
Oct 11 18:18:03 c1sus2 mx_stream_exec[1337]: c1x07_daq mmapped address is 0x7fe3620c3000
Oct 11 18:18:03 c1sus2 mx_stream_exec[1337]: c1su2_daq mmapped address is 0x7fe35e0c3000
Oct 11 18:18:03 c1sus2 systemd[1]: mx_stream.service: main process exited, code=exited, status=1/FAILURE
Oct 11 18:18:03 c1sus2 systemd[1]: Unit mx_stream.service entered failed state.
This shows that starting the rtcds model causes the failure in mx_stream, possibly due to an inability to find the endpoint on fb1. I've again reached the edge of my knowledge here. Maybe the fiber optic connection between fb and the network switch that connects to the FEs is bad, or the connection between the switch and the FEs is bad.
But we are just one step away from making this work.
|
16392
|
Mon Oct 11 18:29:35 2021 |
Anchal | Summary | CDS | Moving forward? |
The teststand has some non-trivial issue with the Myrinet card (either software or hardware) which even the experts say they don't remember how to fix. CDS with mx was in use more than a decade ago, so it is hard to find support for issues with it now, and it will be the same in the future. We need to wrap up this test procedure one way or another now, so I have the following two options moving forward:
Direct integration with main CDS and testing
- We can just connect the c1sus2 and c1bhd FE computers to martian network directly.
- We'll have to connect c1sus2 and c1bhd to the optical fiber subnetwork as well.
- On booting, they would get booted through the existing fb1 boot server, which seems to work fine for the other 5 FE machines.
- We can update the DHCP in chiara and reload it so that we can ssh into these FEs with host names.
- Hopefully, presence of these computers won't tank the existing CDS even if they themselves have any issues, as they have no shared memory with other models.
- If this works, we can do the loop back testing of I/O chassis using the main DAQ network and move on with our upgrade.
- If this does not work and causes any harm to the existing CDS network, we can disconnect these computers and go back to the existing CDS. Recently, our confidence in rebooting the CDS has increased with its robust performance, as some legacy issues were fixed.
- We'll, however, continue to use a CDS which is no longer supported by the current LIGO CDS group.
Testing CDS upgrade on teststand
- From what I could gather, most of the hardware in the I/O chassis that I could find is still used in the CDS of LLO and LHO, with their recent tests and documents using the same cards and PCBs.
- There might be some difference in the DAQ network setup that I need to confirm.
- I've summarised the current c1teststand hardware on this wiki page.
- If the latest CDS is backwards compatible with our hardware, we can test the new CDS in the c1teststand setup without disrupting our main CDS. We'll have ample help and support for this upgrade from the current LIGO CDS group.
- We can do the loop back testing of the I/O chassis as well.
- If the upgrade is successful in the teststand without many hardware changes, we can upgrade the main CDS of the 40m as well, as it has the same hardware as our teststand.
- The biggest plus point would be that our CDS will be up to date and we will be able to get help from the CDS group if any trouble occurs.
So these are the two options we have. We should discuss which one to take in the mattermost chat or in upcoming meeting. |
16393
|
Tue Oct 12 11:32:54 2021 |
Yehonathan | Summary | SUS | PRM and BS Angular Actuation transfer function magnitude measurements |
Late submission (From Thursday 10/07):
I measured the PRMI sensing matrix to see if the tweaking of the BS and PRM output matrices had any effect.
While doing so, I noticed I made a mistake in the analysis of the previous sensing matrix measurement. It seems that I have used the radar plot function with radians where degrees should have been used (the reason is that the azimuthal uncertainty looked crazy when I used degrees. I still don't know why this is the case with this measurement).
In any case, attachments 1 and 2 show the PRMI radar plots with the modified output matrices and in the normal state, respectively.
It seems like the output matrix modification didn't do anything but REFL55 has good orthogonality. Problem gone?? |
Attachment 1: modified_output_matrices_radar_plots.png
|
|
Attachment 2: normal_output_matrices_radar_plots.png
|
|
16394
|
Tue Oct 12 16:39:52 2021 |
rana | Summary | SUS | PRM and BS Angular Actuation transfer function magnitude measurements |
should compare side by side with the ITM PRMI radar plots to see if there is a difference. How do your new plots compare with Gautam's plots of PRMI? |
16395
|
Tue Oct 12 17:10:56 2021 |
Anchal | Summary | CDS | Some more information |
Chris pointed out some information-displaying scripts that show whether the DAQ network is working or not. I thought it would be nice to log this information here as well.
controls@fb1:/opt/mx/bin 0$ ./mx_info
MX Version: 1.2.16
MX Build: controls@fb1:/opt/src/mx-1.2.16 Mon Aug 14 11:06:09 PDT 2017
1 Myrinet board installed.
The MX driver is configured to support a maximum of:
8 endpoints per NIC, 1024 NICs on the network, 32 NICs per host
===================================================================
Instance #0: 364.4 MHz LANai, PCI-E x8, 2 MB SRAM, on NUMA node 0
Status: Running, P0: Link Up
Network: Ethernet 10G
MAC Address: 00:60:dd:45:37:86
Product code: 10G-PCIE-8B-S
Part number: 09-04228
Serial number: 423340
Mapper: 00:60:dd:45:37:86, version = 0x00000000, configured
Mapped hosts: 3
ROUTE COUNT
INDEX MAC ADDRESS HOST NAME P0
----- ----------- --------- ---
0) 00:60:dd:45:37:86 fb1:0 1,0
1) 00:25:90:05:ab:47 c1bhd:0 1,0
2) 00:25:90:06:69:c3 c1sus2:0 1,0
controls@c1bhd:~ 1$ /opt/open-mx/bin/omx_info
Open-MX version 1.5.4
build: root@fb1:/opt/src/open-mx-1.5.4 Tue Aug 15 23:48:03 UTC 2017
Found 1 boards (32 max) supporting 32 endpoints each:
c1bhd:0 (board #0 name eth1 addr 00:25:90:05:ab:47)
managed by driver 'igb'
Peer table is ready, mapper is 00:60:dd:45:37:86
================================================
0) 00:25:90:05:ab:47 c1bhd:0
1) 00:60:dd:45:37:86 fb1:0
2) 00:25:90:06:69:c3 c1sus2:0
controls@c1sus2:~ 0$ /opt/open-mx/bin/omx_info
Open-MX version 1.5.4
build: root@fb1:/opt/src/open-mx-1.5.4 Tue Aug 15 23:48:03 UTC 2017
Found 1 boards (32 max) supporting 32 endpoints each:
c1sus2:0 (board #0 name eth1 addr 00:25:90:06:69:c3)
managed by driver 'igb'
Peer table is ready, mapper is 00:60:dd:45:37:86
================================================
0) 00:25:90:06:69:c3 c1sus2:0
1) 00:60:dd:45:37:86 fb1:0
2) 00:25:90:05:ab:47 c1bhd:0
These outputs prove that the framebuilder and the FEs are able to see each other on the DAQ network.
Further, the error that we see when the IOP model is started, which crashes the mx_stream service on the FE machines (see 40m/16391):
isendxxx failed with status Remote Endpoint Unreachable
This was seen earlier when Jamie was troubleshooting the current fb1 on the martian network in 40m/11655 in Oct 2015. Unfortunately, I could not find what Jamie eventually did to fix this issue. |
16396
|
Tue Oct 12 17:20:12 2021 |
Anchal | Summary | CDS | Connected c1sus2 to martian network |
I connected c1sus2 to the martian network by splitting the c1sim connection with a 5-way switch. I also ran another ethernet cable from the second port of c1sus2 to the DAQ network switch on 1X7.
Then I logged into chiara and added the following in chiara:/etc/dhcp/dhcpd.conf :
host c1sus2 {
hardware ethernet 00:25:90:06:69:C2;
fixed-address 192.168.113.92;
}
And following line in chiara:/var/lib/bind/martian.hosts :
c1sus2 A 192.168.113.92
Note that entries for c1bhd were already added in these files, probably during some earlier testing by Gautam or Jon. Then I ran the following to restart the dhcp server and nameserver:
~> sudo service bind9 reload
[sudo] password for controls:
* Reloading domain name service... bind9 [ OK ]
~> sudo service isc-dhcp-server restart
isc-dhcp-server stop/waiting
isc-dhcp-server start/running, process 25764
Now, as I switched on c1sus2 from the front panel, it booted over the network from fb1 like the other FE machines, and I was able to log into it by first logging into fb1 and then sshing to c1sus2.
Next, I copied the simulink models and the medm screens of c1x06, c1x07, c1bhd, and c1sus2 from the paths mentioned on this wiki page. I also copied the medm screens from chiara(clone):/opt/rtcds/caltech/c1/medm to the martian network chiara in the appropriate places. I have placed the file /opt/rtcds/caltech/c1/medm/teststand_sitemap.adl, which can be used to open the sitemap for the c1bhd and c1sus2 IOP and user models.
Then I logged into c1sus2 (via fb1) and did make, install, start procedure:
controls@c1sus2:~ 0$ rtcds make c1x07
buildd: /opt/rtcds/caltech/c1/rtbuild/release
### building c1x07...
Cleaning c1x07...
Done
Parsing the model c1x07...
Done
Building EPICS sequencers...
Done
Building front-end Linux kernel module c1x07...
Done
RCG source code directory:
/opt/rtcds/rtscore/branches/branch-3.4
The following files were used for this build:
/opt/rtcds/userapps/release/cds/c1/models/c1x07.mdl
Successfully compiled c1x07
***********************************************
Compile Warnings, found in c1x07_warnings.log:
***********************************************
***********************************************
controls@c1sus2:~ 0$ rtcds install c1x07
buildd: /opt/rtcds/caltech/c1/rtbuild/release
### installing c1x07...
Installing system=c1x07 site=caltech ifo=C1,c1
Installing /opt/rtcds/caltech/c1/chans/C1X07.txt
Installing /opt/rtcds/caltech/c1/target/c1x07/c1x07epics
Installing /opt/rtcds/caltech/c1/target/c1x07
Installing start and stop scripts
/opt/rtcds/caltech/c1/scripts/killc1x07
/opt/rtcds/caltech/c1/scripts/startc1x07
sudo: unable to resolve host c1sus2
Performing install-daq
Updating testpoint.par config file
/opt/rtcds/caltech/c1/target/gds/param/testpoint.par
/opt/rtcds/rtscore/branches/branch-3.4/src/epics/util/updateTestpointPar.pl -par_file=/opt/rtcds/caltech/c1/target/gds/param/archive/testpoint_211012_174226.par -gds_node=24 -site_letter=C -system=c1x07 -host=c1sus2
Installing GDS node 24 configuration file
/opt/rtcds/caltech/c1/target/gds/param/tpchn_c1x07.par
Installing auto-generated DAQ configuration file
/opt/rtcds/caltech/c1/chans/daq/C1X07.ini
Installing Epics MEDM screens
Running post-build script
safe.snap exists
controls@c1sus2:~ 0$ rtcds start c1x07
Cannot start/stop model 'c1x07' on host c1sus2.
controls@c1sus2:~ 4$ rtcds list
controls@c1sus2:~ 0$
One can see that even after making and installing, the model c1x07 is not listed among the available models in rtcds list. The same is the case for c1su2 as well. So I could not proceed with testing.
The good news is that nothing I did affected the current CDS functioning. So we can probably do this testing safely from the main CDS setup. |
16397
|
Tue Oct 12 23:42:56 2021 |
Koji | Summary | CDS | Connected c1sus2 to martian network |
Don't you need to add the new hosts to /diskless/root/etc/rtsystab at fb1? --> There seem to be many elogs talking about editing "rtsystab".
controls@fb1:/diskless/root/etc 0$ cat rtsystab
#
# host list of control systems to run, starting with IOP
#
c1iscex c1x01 c1scx c1asx
c1sus c1x02 c1sus c1mcs c1rfm c1pem
c1ioo c1x03 c1ioo c1als c1omc
c1lsc c1x04 c1lsc c1ass c1oaf c1cal c1dnn c1daf
c1iscey c1x05 c1scy c1asy
#c1test c1x10 c1tst2
|
16398
|
Wed Oct 13 11:25:14 2021 |
Anchal | Summary | CDS | Ran c1sus2 models in martian CDS. All good! |
Three extra steps (when adding new models, new FE):
- Chris pointed out that the sudo command on c1sus2 was giving the error
sudo: unable to resolve host c1sus2
This error comes up when the computer cannot resolve its own hostname. Since the FEs are network booted off fb1, we need to update /etc/hosts in /diskless/root every time we add a new FE.
controls@fb1:~ 0$ sudo chroot /diskless/root
fb1:/ 0# sudo nano /etc/hosts
fb1:/ 0# exit
I added the following line in /etc/hosts file above:
192.168.113.92 c1sus2 c1sus2.martian
This resolved the issue of sudo giving an error. Now the rtcds make and install steps had no errors in their outputs.
- Another thing that needs to be done, as Koji pointed out, is to add the host and models in /etc/rtsystab in /diskless/root of fb:
controls@fb1:~ 0$ sudo chroot /diskless/root
fb1:/ 0# sudo nano /etc/rtsystab
fb1:/ 0# exit
I added the following lines in /etc/rtsystab file above:
c1sus2 c1x07 c1su2
This told rtcds what models would be available on c1sus2. Now rtcds list is displaying the right models:
controls@c1sus2:~ 0$ rtcds list
c1x07
c1su2
- The above steps are still not sufficient for the daqd_ processes to know about the new models. This part is supposed to happen automatically, but apparently does not happen in our CDS. So every time there is a new model, we need to edit the file /opt/rtcds/caltech/c1/target/daqd/master and add the following lines to it:
# Fast Data Channel lists
# c1sus2
/opt/rtcds/caltech/c1/chans/daq/C1X07.ini
/opt/rtcds/caltech/c1/chans/daq/C1SU2.ini
# test point lists
# c1sus2
/opt/rtcds/caltech/c1/target/gds/param/tpchn_c1x07.par
/opt/rtcds/caltech/c1/target/gds/param/tpchn_c1su2.par
I needed to restart the daqd_ processes in fb1 for them to notice these changes:
controls@fb1:~ 0$ sudo systemctl restart daqd_*
This finally lit up the DC status channels in C1X07_GDS_TP.adl and C1SU2_GDS_TP.adl. However, the channels C1:DAQ-DC0_C1X07_STATUS and C1:DAQ-DC0_C1SU2_STATUS both had the value 0x2bad. This persisted on restarting the models. I then simply restarted mx_stream on c1sus2 and boom, it worked! (See the attached all-green screen, never seen before!)
So now Ian can work on testing the I/O chassis, and we will be good to move the c1sus2 FE and I/O chassis to 1Y3 after that. I've also made the following extra changes:
- Updated CDS_FE_STATUS medm screen to show the new c1sus2 host.
- Updated the global diag reset script to act on c1x07 and c1su2 as well.
- Updated mxstream restart script to act on c1sus2 as well.
|
Attachment 1: CDS_screens_running.png
|
|
16402
|
Thu Oct 14 13:40:49 2021 |
Yehonathan | Summary | SUS | PRM and BS Angular Actuation transfer function magnitude measurements |
Here is a side-by-side comparison of the PRMI sensing matrix using PRM/BS actuation (attachment 1) and ITM actuation (attachment 2). The situation looks similar in both cases. That is, good orthogonality on REFL55 and bad separation in the rest of the RFPDs.
Quote: |
should compare side by side with the ITM PRMI radar plots to see if there is a difference. How do your new plots compare with Gautam's plots of PRMI?
|
|
Attachment 1: BSPRM_Actuation_Radar_plots.png
|
|
Attachment 2: ITM_Actuation_Radar_plots.png
|
|
16404
|
Thu Oct 14 18:30:23 2021 |
Koji | Summary | VAC | Flange/Cable Stand Configuration |
Flange Configuration for BHD
We will need a total of 5 new cable stands, so Qty. 6 is the number to be ordered.
Looking at the Accuglass drawing, the in-vacuum cables are standard D-sub 25-pin cables, only with two standard fixing threads.
https://www.accuglassproducts.com/sites/default/files/PDF/Partpdf/110070_3.pdf
For SOSs, the standard 40m style cable bracket works fine. https://dcc.ligo.org/D010194-x0
However, for the OMCs, we need to make the thread holes available so that we can mate DB25 male cables to these cables.
One possibility is to improvise this cable bracket to suspend the cables using clean Cu wires or something. I think we can deal with this issue in situ.
Ha! The male side has the 4-40 standoff (jack) screws. So we can hold the male side on the bracket using the standoff (jack) screws and plug the female cables. OK! The issue solved!
https://www.accuglassproducts.com/sites/default/files/PDF/Partpdf/110029_3.pdf |
Attachment 1: 40m_flange_layout_20211014.pdf
|
|
16407
|
Fri Oct 15 16:46:27 2021 |
Anchal | Summary | Optical Levers | Vent Prep |
I centered all the optical levers on ITMX, ITMY, ETMX, ETMY, and BS at positions where the single-arm locks on both arms were best aligned. Unfortunately, we are seeing TRX at 0.78 and TRY at 0.76 at the most aligned positions. It seems less power has been coming out of the PMC since last month (Attachment 1).
Then I tried to lock PRMI on carrier with no luck, but I was able to see flashes of up to 4000 counts in POP_DC. At this position, I centered the PRM optical lever too (Attachment 2). |
Attachment 1: Screen_Shot_2021-10-15_at_4.34.45_PM.png
|
|
Attachment 2: Screen_Shot_2021-10-15_at_4.45.31_PM.png
|
|
16408
|
Fri Oct 15 17:17:51 2021 |
Koji | Summary | General | Vent Prep |
I took over the vent prep: I'm going through the list in [ELOG 15649] and [ELOG 15651]. I will also look at [ELOG 15652] at the day of venting.
- IFO alignment: Two arms are already locking. The dark port beam is well overlapped. We will move PRM/SRM etc. So we don't need to worry about them. [Attachment 1]
scripts>z read C1:SUS-BS_PIT_BIAS C1:SUS-BS_YAW_BIAS
-304.7661529521767
-109.23924626857811
scripts>z read C1:SUS-ITMX_PIT_BIAS C1:SUS-ITMX_YAW_BIAS
15.534616817500943
-503.4536332290159
scripts>z read C1:SUS-ITMY_PIT_BIAS C1:SUS-ITMY_YAW_BIAS
653.0100945988496
-478.16260735781225
scripts>z read C1:SUS-ETMX_PIT_BIAS C1:SUS-ETMX_YAW_BIAS
-136.17863332517527
181.09285307121306
scripts>z read C1:SUS-ETMY_PIT_BIAS C1:SUS-ETMY_YAW_BIAS
-196.6200333695437
-85.40819256078339
- IMC alignment: Locking nicely. I ran WFS relief to move the WFS output onto the alignment sliders. All the WFS feedback values are now <10. Here are the slider snapshots. [Attachment 2]
- PMC alignment: The PMC looked like it was quite misaligned -> aligned. IMC/PMC locking snapshot [Attachment 3]
Arm transmissions:
scripts>z avg 10 C1:LSC-TRX_OUT C1:LSC-TRY_OUT
C1:LSC-TRX_OUT 0.9825591325759888
C1:LSC-TRY_OUT 0.9488834202289581
- Suspension Status Snapshot [Attachment 4]
- Anchal aligned the OPLEV beams [ELOG 16407]
I also checked the 100-day trend of the OPLEV sum power. The trend of the max values looks flat and fine. [Attachment 5] For this purpose, the PRM and SRM were aligned and the SRM oplev was also aligned. The SRM sum was 23580 when aligned and it was just fine (this is not so visible in the trend plot).
- The X and Y green beams were aligned for the cavity TEM00s. The Y end green PZT values were nulled. The transmissions I could reach were as follows.
>z read C1:ALS-TRX_OUTPUT C1:ALS-TRY_OUTPUT
0.42343354488901286
0.24739624058377277
It seems that GTRX and GTRY have some crosstalk. When each green shutter was closed, the transmission and the dark offset were measured to be:
>z read C1:ALS-TRX_OUTPUT C1:ALS-TRY_OUTPUT
0.41822833190834546
0.025039383697636856
>z read C1:ALS-TRX_OUTPUT C1:ALS-TRY_OUTPUT
0.00021112720155274818
0.2249448773499293
Note that the Y green seemed to have a significant amount (~0.1) of 1st-order HOM. I don't know why I could not transfer this power into the TEM00. I could not find any significant clipping of the TR beams on the PSL table PDs.
- IMC Power reduction
Now we have nice motorized HWP. sitemap -> PSL -> Power control
== Initial condition == [Attachment 6]
C1:IOO-HWP_POS 38.83
Measured input power = 0.99W
C1:IOO-MC_RFPD_DCMON = 5.38
== Power reduction == [Attachment 7]
- The motor was enabled upon rotation on the screen
C1:IOO-HWP_POS 74.23
Measured input power = 98mW
C1:IOO-MC_RFPD_DCMON = 0.537
- Then, the motor was disabled
- Went to the detection table and swapped the 10% reflector with the 98% reflector stored on the same table. [Attachments 8/9]
After the beam alignment, the MC REFL PD received about the same amount of light as before.
C1:IOO-MC_RFPD_DCMON = 5.6
There is no beam delivered to the WFS paths.
CAUTION: IF THE POWER IS INCREASED TO THE NOMINAL WITH THIS CONFIGURATION, MC REFL PD WILL BE DESTROYED.
- The IMC can already be locked with this configuration. But for the MC Autolocker, the MCTRANS threshold for the autolocker needs to be reduced as well.
This was done by swapping a line in /opt/rtcds/caltech/c1/scripts/MC/AutoLockMC.init
# BEFORE
/bin/csh ./AutoLockMC.csh >> $LOGFILE
#/bin/csh ./AutoLockMC_LowPower.csh >> $LOGFILE
--->
# AFTER
#/bin/csh ./AutoLockMC.csh >> $LOGFILE
/bin/csh ./AutoLockMC_LowPower.csh >> $LOGFILE
I confirmed that the autolocker works by toggling the PSL shutter a few times. The PSL shutter was closed upon completion of the test.
- Walked around the lab and checked all the bellows - the jam nuts are all tight, and I couldn't move them with my hands. So this is okay according to the ancient tale by Steve.
|
Attachment 1: Screenshot_2021-10-15_17-36-00.png
|
|
Attachment 2: Screenshot_2021-10-15_17-39-58.png
|
|
Attachment 3: Screenshot_2021-10-15_17-42-20.png
|
|
Attachment 4: Screenshot_2021-10-15_17-46-13.png
|
|
Attachment 5: Screenshot_2021-10-15_18-05-54.png
|
|
Attachment 6: Screen_Shot_2021-10-15_at_19.45.05.png
|
|
Attachment 7: Screen_Shot_2021-10-15_at_19.47.10.png
|
|
16409
|
Fri Oct 15 20:53:49 2021 |
Koji | Summary | General | Vent Prep |
From the IFO point of view, all looks good and we are ready for venting from Mon Oct 18, 9 AM |
16414
|
Tue Oct 19 18:20:33 2021 |
Ian MacMillan | Summary | CDS | c1sus2 DAC to ADC test |
I ran a DAC to ADC test on c1sus2 channels where I hooked up the outputs on the DAC to the input channels on the ADC. We used different combinations of ADCs and DACs to make sure that there were no errors that cancel each other out in the end. I took a transfer function across these channel combinations to reproduce figure 1 in T2000188.
As seen in the two attached PDFs, the channels seem to be working properly: they have a flat response with a gain of 0.5 (-6 dB). This is the expected response and is the result of the DAC signal being sent single-ended while the ADC receives it as a differential input signal. This should result in a recorded signal at 0.5 times the amplitude of the actual output signal.
The drop-off at the high-frequency end is the result of the anti-aliasing filter and the anti-imaging filter. Both of these are 8-pole elliptical filters, so when combined we should get a drop-off of 320 dB per decade (16 poles x 20 dB/decade per pole). I measured the slope on the last few points of each filter and the averaged value was around 347 dB per decade. This is slightly steeper than expected, but since it only cuts off higher frequencies it shouldn't have an effect on the operation of the system. Also, it is very close to the expected value.
The ripples seen before the drop-off are also an effect of the elliptical filters and are seen in T2000188.
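As a cross-check of the expected shape, here is a minimal sketch of a cascaded AA+AI response (the corner frequency and ripple numbers are placeholders, not the measured parameters of the actual filters); two 8-pole elliptic low-passes give 16 poles, hence the 320 dB/decade figure quoted above:
import numpy as np
from scipy import signal

fs = 65536.0                                          # assumed sample rate
# assumed specs: 0.1 dB passband ripple, 80 dB stopband, ~7.5 kHz corner
b_aa, a_aa = signal.ellip(8, 0.1, 80, 7500.0, fs=fs)  # anti-aliasing stand-in
b_ai, a_ai = signal.ellip(8, 0.1, 80, 7500.0, fs=fs)  # anti-imaging stand-in

w, h_aa = signal.freqz(b_aa, a_aa, worN=8192, fs=fs)
_, h_ai = signal.freqz(b_ai, a_ai, worN=8192, fs=fs)
mag_db = 20 * np.log10(np.abs(h_aa * h_ai) + 1e-300)  # cascaded magnitude (dB)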
Note: the transfer function that doesn't seem to match the others is the heartbeat timing signal. |
Attachment 1: data3_Plots.pdf
|
|
Attachment 2: data2_Plots.pdf
|
|
16415
|
Tue Oct 19 23:43:09 2021 |
Koji | Summary | CDS | c1sus2 DAC to ADC test |
(Because of a totally unrelated reason) I was checking the electronics units for the upgrade. And I realized that the electronics units at the test stand have not been properly powered.
I found that the AA/AI stack at the test stand (Attachment 1) has an unusual powering configuration (Attachment 2).
- Only the positive power supply was used. / - The supply voltage is only +15V. / - The GND reference is not connected anywhere.
For confirmation, I checked the voltage across the DC power strip (Attachments 3/4). The positive was +5.3V and the negative was -9.4V. This is subject to change depending on the earth potential.
This is not a good condition at all. The asymmetric powering of the circuit may cause damage to the op-amps. So I turned off the switches of the units.
The power configuration should be immediately corrected.
- Use both positive and negative supply (2 power supply channels) to produce the positive and the negative voltage potentials. Connect the reference potential to the earth post of the power supply.
https://www.youtube.com/watch?v=9_6ecyf6K40 [Dual Power Supply Connection / Serial plus minus electronics laboratory PS with center tap]
- These units have a DC power regulator which produces +/-15V out of +/-18V. So the DC power supplies are supposed to be set at +18V.
|
Attachment 1: P_20211019_224433.jpg
|
|
Attachment 2: P_20211019_224122.jpg
|
|
Attachment 3: P_20211019_224400.jpg
|
|
Attachment 4: P_20211019_224411.jpg
|
|
16416
|
Wed Oct 20 11:16:21 2021 |
Anchal | Summary | PEM | Particle counter setup near BS Chamber |
I have placed a GT321 particle counter on top of the MC1/MC3 chamber next to the BS chamber. The serial cable is connected to c1psl computer on 1X2 using 2 usb extenders (blue in color) over the PSL enclosure and over the 1X1 rack.
The main serial communication script for this counter by Radhika is present in 40m/labutils/serial_com/gt321.py.
A 40m-specific application script is present in the new git repo for 40m scripts, at 40m/scripts/PEM/particleCounter.py. Our plan is to slowly migrate the legacy scripts directory to this repo over time. I've cloned this repo into the nfs-shared directory at /opt/rtcds/caltech/c1/Git/40m/scripts, which makes the scripts available on all computers and keeps them up to date everywhere.
The particle counter script is running on c1psl through a systemd service, using the service file 40m/scripts/PEM/particleCounter.service. Locally on c1psl, /etc/systemd/system/particleCounter.service is symbolically linked to the file in the repo.
The following channels for the particle counter needed to be created, as I could not find any existing particle counter channels:
[C1:PEM-BS_PAR_CTS_0p3_UM]
[C1:PEM-BS_PAR_CTS_0p5_UM]
[C1:PEM-BS_PAR_CTS_1_UM]
[C1:PEM-BS_PAR_CTS_2_UM]
[C1:PEM-BS_PAR_CTS_5_UM]
These are created from the 40m/softChansModbus/particleCountChans.db database file. The computer optimus is running a docker container that serves as the EPICS server for such soft channels. To add or edit channels, one just needs to add a new database file or edit the database files in this repo, and then on optimus do:
controls@optimus|~> sudo docker container restart softchansmodbus_SoftChans_1
softchansmodbus_SoftChans_1
that's it.
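For reference, a record in such a soft-channel database file typically looks like the following (the field choices here are illustrative; the actual particleCountChans.db may differ):
record(ai, "C1:PEM-BS_PAR_CTS_0p3_UM")
{
    field(DESC, "0.3 um particle counts near BS chamber")
    field(PREC, "0")
}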
I've added the above channels to /opt/rtcds/caltech/c1/chans/daq/C0EDCU.ini to record them in the framebuilder. Starting from 11:20 am Oct 20, 2021 PDT, the data on these channels is from the BS chamber area. Currently the script is running continuously, which means 0.3 um particles are sampled every minute, 0.5 um twice in 5 minutes, and 1 um, 2 um, and 5 um particles are sampled once in 5 minutes. We can reduce the sampling rate if this seems unnecessary to us. |
Attachment 1: PXL_20211020_183728734.jpg
|
|
16417
|
Wed Oct 20 11:48:27 2021 |
Anchal | Summary | CDS | Power supply configured correctly. |
This was horrible! That's my bad, I should have checked the configuration before assuming that it is right.
I fixed the power supply configuration. Now the strip has two rails of +/- 18V and the GND is referenced to power supply earth GND.
Ian should redo the tests. |
16420
|
Thu Oct 21 11:41:31 2021 |
Anchal | Summary | PEM | Particle counter setup near BS Chamber |
The particle count channel names were changed yesterday to follow the naming conventions used at the sites. The new names are:
C1:PEM-BS_DUST_300NM
C1:PEM-BS_DUST_500NM
C1:PEM-BS_DUST_1000NM
C1:PEM-BS_DUST_2000NM
C1:PEM-BS_DUST_5000NM
The legacy count channels are kept alive with C1:PEM-count_full copying C1:PEM-BS_DUST_1000NM channel and C1:PEM-count_half copying C1:PEM-BS_DUST_500NM channel.
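For any legacy script that still refers to the old names, a small lookup like the following hypothetical helper (using pyepics) can bridge the naming change; the aliasing itself is done on the EPICS side as described above.

import epics   # pyepics

LEGACY_TO_NEW = {
    'C1:PEM-count_full': 'C1:PEM-BS_DUST_1000NM',
    'C1:PEM-count_half': 'C1:PEM-BS_DUST_500NM',
}

def get_counts(name):
    """Read a particle count, accepting either a legacy or a new channel name."""
    return epics.caget(LEGACY_TO_NEW.get(name, name))

print(get_counts('C1:PEM-count_full'))   # same value as C1:PEM-BS_DUST_1000NM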
Attachment 1 is the particle counter trend since 8:30 am this morning, when the HVAC work started. It seems there was a peak in particle presence around 11 am. The particle counter even recorded 8 counts of particles larger than 5 um!
|
Attachment 1: ParticleCountData20211021.pdf
|
|
16421
|
Thu Oct 21 15:22:35 2021 |
rana | Summary | PEM | Particle counter setup near BS Chamber |
SVG doesn't work in my browser(s). Can we use PDF as our standard for all graphics other than photos (PNG/JPG) ? |
16422
|
Thu Oct 21 15:24:35 2021 |
rana | Summary | PEM | Particle counter setup near BS Chamber |
Rethinking what I said on Wednesday - it's not a good idea to put the particle counter on a vacuum chamber with optics inside. The rumble from the air pump shows up in the acoustic noise of the interferometer. Let's look for a way to mount it near the BS chamber, but attached to something other than the vacuum chambers and optical tables.
Quote: |
I have placed a GT321 particle counter on top of the MC1/MC3 chamber next to the BS chamber.
|
|
16423
|
Fri Oct 22 17:35:08 2021 |
Ian MacMillan | Summary | PEM | Particle counter setup near BS Chamber |
I have done some reading about where the best place to put the particle counter would be. The ISO standard (14644-1:2015) for cleanrooms calls for one counter per 1000 m^2, i.e. roughly one for every 30m x 30m space. We should have the particle counter reasonably close to the open chamber, and all the manufacturers that I read about suggest a little more than one every 30m x 30m. We will have it much closer than this, so it is nice to know that it should still get a good reading. They also suggest keeping it in the open and not tucked away, which is a little obvious. I think the best spot is attached to the cable tray that is right above the door to the control room. This should put it out of the way and within about 5 m of where we are working. I ordered some cables last night to route it over there, so when they come in I can put it up there. |
16424
|
Mon Oct 25 13:23:45 2021 |
Anchal | Summary | BHD | Before photos of BSC |
[Yehonathan, Anchal]
On Thursday, Oct 21 2021, Yehonathan and I opened the door to the BSC and took some photos. We set up the HEPA stand next to the door with anti-static curtains covering all sides. We spent about 15 minutes trying to understand the current layout and taking photos and a video. Any suggestions on improving our technique and approach would be helpful.
Links to photos:
https://photos.app.goo.gl/fkkdu9qAvH1g5boq6 |
16425
|
Mon Oct 25 17:37:42 2021 |
Anchal | Summary | BHD | Part I of BHR upgrade - Removed optics from BSC |
[Anchal, Paco, Ian]
Clean room etiquettes
- Two people in coverall suits, head covers, masks and AccuTech ultra clean gloves.
- One person in just booties to interact with outside "dirty" world.
- Anything that comes into the chamber is first cleaned outside with a clean cloth and IPA, then cleaned again by the "clean" folks. We followed this for the Allen keys, camera, and beam finder card.
- Once the chamber cover has been removed, cover the annulus with donut. We forgot to do this :(
Optics removal and changes
We removed the following optics from the BSC table and stored them in X-end flowbench with fan on. See attachment 1 and 2.
- IPPOS SM2
- GRX SM2
- PRM OL1
- PRMOL4
- IPPOS SM3
- IPANG SM1
- PRM OL2
- Unidentified optic in between IPPOS45P and IPPOS SM3
- Beam block behind PR3
- Beam block behind GR PBS
- GR PBS
- GRPERI1L (Periscope)
- PRMOL3
- IPPOS45P
- Cylindrical counterweight on North-west end of table.
- Cheap rectangular mirror on South west end of table (probably used for some camera, but not in use anymore)
- IPANGSM2
We also changed the direction of the clamp of MMT1 to move it away from the center of the table (where PRM will be placed).
We screwed in the earthquake stops on PRM and BS from front face and top.
We unscrewed the cable post for the BS and PRM oplevs, moved it in between SR3 and BS, and screwed it down lightly.
We moved the PRM, turned it anti-clockwise 90 degrees and brought it in between TT2 and BS. Now there is a clear line of sight between TT2 and PR2 on ITMY table.
Some next steps:
- We align the input beam to TT2 by opening the "Injection Chamber" (formerly known as OMC chamber). While doing so, we'll clear unwanted optics from this table as well.
- We open ITMX chamber, clear some POP optics. If SOS are ready, we would replace PR2 with SOS and put it in a new position.
- Then we'll replace PR3 with an SOS and align the beam to BS.
This is the next few days of work. We need at least one SOS ready by Thursday.
Photos after today's work: https://photos.app.goo.gl/EE7Mvhw5CjgZrQpG6 |
Attachment 1: rn_image_picker_lib_temp_44cb790a-c3b4-42aa-8907-2f9787a02acd.jpg
|
|
Attachment 2: rn_image_picker_lib_temp_0fd8f4fd-64ae-4ccd-8422-cfe929d4eeee.jpg
|
|
16427
|
Tue Oct 26 13:27:07 2021 |
Tega | Summary | Electronics | Sat Amp modification Summary |
Modifications and testing of the SatAmp units are COMPLETE. Attachments 1 & 2 show all 19 units: one unit is installed and the remaining 18 are stacked and ready for install. Detailed notes of the modifications for each unit are presented in the summary document on the DCC.
|
Attachment 1: SapAmpModStack.jpg
|
|
Attachment 2: SatAmpInstalled.jpg
|
|
16429
|
Tue Oct 26 16:56:22 2021 |
Paco | Summary | BHD | Part I of BHR upgrade - Locked PMC and IMC |
[Paco, Ian]
We opened the laser head shutter. Then, we scanned around the PMC resonance and locked it. We then opened the PSL shutter, touched the MC1, MC2 and MC3 alignment (mostly yaw) and managed to lock the IMC. The transmission peaked at ~ 1070 counts (typical is 14000 counts, so at 10% of PSL power we would expect a peak transmission of 1400 counts, so there might still be some room for improvement). The lock was engaged at ~ 16:53, we'll see for how long it lasts.
There should be IR light entering the BSC!!! Be alert and wear laser safety goggles when working there.
We should be ready to move forward into the TT2 + PR3 alignment. |
16430
|
Tue Oct 26 18:24:00 2021 |
Ian MacMillan | Summary | CDS | c1sus2 DAC to ADC test |
[Ian, Anchal, Paco]
After Koji found that there was a problem with the power supply, Anchal and I fixed the power and then reran the measurement. The only change this time around is that I increased the excitation amplitude to 100. In the first run the excitation amplitude was 1, which seemed to come out noise-free but was too low to give a reliable value.
link to previous results
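For completeness, here is a minimal sketch of how such a DAC-to-ADC transfer function estimate could be reproduced offline with scipy. The sampling rate, line frequency, and the synthetic exc/resp arrays below are assumptions standing in for the exported test data; the actual measurement was done with the CDS tools.

import numpy as np
from scipy import signal

fs = 16384.0                                   # assumed sampling rate
f0, amp = 3.0, 100.0                           # assumed line frequency; amplitude 100 as in this run
t = np.arange(0, 64, 1 / fs)
exc = amp * np.sin(2 * np.pi * f0 * t)         # stand-in for the recorded DAC drive
resp = exc + 0.01 * np.random.randn(t.size)    # stand-in for the recorded ADC readback

nperseg = 4096
f, Pxy = signal.csd(exc, resp, fs=fs, nperseg=nperseg)
_, Pxx = signal.welch(exc, fs=fs, nperseg=nperseg)
H = Pxy / Pxx                                  # H1 transfer-function estimate
_, coh = signal.coherence(exc, resp, fs=fs, nperseg=nperseg)

i = np.argmin(np.abs(f - f0))
print('|H| at %.2f Hz: %.3f, coherence: %.3f' % (f0, np.abs(H[i]), coh[i]))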
The new plots are attached. |
Attachment 1: data2_Plots.pdf
|
|
Attachment 2: data3_Plots.pdf
|
|
16431
|
Wed Oct 27 16:27:16 2021 |
Anchal | Summary | BHD | Part II of BHR upgrade - Prep |
[Anchal, Paco, Ian]
Before we could start working on Part II, which is to relocate TT2 to its new location, we had to clear space in front of the injection chamber door and clean the floor, which was very dusty. This required us to disconnect everything we safely could from the OMC North short electronics rack, remove 10-15 BNC cables and 4-5 power cords, and relocate some fiber optic cables. We didn't have caps for the fiber optic cables handy, so we did not remove them from the rack-mounted unit and just turned it away. At the end, we mopped the floor and dried it with a dry cloth. Before and after photos are in the attachments.
|
Attachment 1: OMCNorthBefore.jpeg
|
|
Attachment 2: OMCNorthAfter.jpeg
|
|
16432
|
Wed Oct 27 16:31:35 2021 |
Anchal | Summary | BHD | Part III of BHR upgrade - Removal of PR2 Small Suspension |
I went inside the ITMX Chamber to read off specs from the PR2 edge. This was required to confirm our calculations of LO power for BHR later. The numbers that I could read from the edge were kind of meaningless: "0.5 088 or 2.0 088". To make this opening of the chamber more worthwhile, we decided to remove the PR2 suspension unit so that the optic can be removed and installed on an SOS in the cleanroom. We covered the optic in clean aluminum foil inside the chamber, then placed it on another aluminum foil to cover it completely. Then I traveled slowly to the C&B room, where I placed it on a flow bench.
Later on, we decided to use a dummy fixed mount mirror for PR2 initially with the same substrate thickness, so that we get enough LO power in transmission for alignment. In the very end, we'll swap that with the PR2 mounted on an SOS unit. |
16433
|
Wed Oct 27 16:38:02 2021 |
Anchal | Summary | BHD | Part II of BHR upgrade - Relocation of TT2 and MMT1/2 alignment |
[Anchal, Paco]
We opened BSC and Injection Chamber doors. We removed two stacked counterweights from near the center of the BS table, from behind TT2 and placed them in the Xend flow bench. Then we unscrewed TT2 and relocated it to the new BHR layout position. This provided us with the target for the alignment of MMT1 and MMT2 mirrors.
While aligning MMT1 and MMT2, we realized that the BHR layout leaves less clearance than expected between the beam going from MMT2 to TT2 and the TT1 suspension unit. The TT1 suspension stage was clipping our beam going to TT2. To rectify this, we decided to move the MMT2 mirror mount about a cm south and retry. We were able to align the beam to the TT2 optic, but it is a bit off-center. The reflection off TT2 is now going in the general direction of the ITMX chamber. We stopped our work here as fatigue was setting in. Following are some thoughts and future directions:
- We realized that the output beam from the mode cleaner moves a lot (by more than a cm at MMT2) between different locks. Maybe that's just because of our presence. But we wonder how much clearance all beams must have from MC3 to TT2.
- Currently, we think the Faraday Isolator might be less than 2 cm away from the beam between MMT1 and MMT2, and the TT1 suspension is less than 2 cm away from the beam between MMT2 and TT2.
- Maybe we can fix these by simply changing the alignment of TT1, which was kept fixed for our purposes.
- We definitely need to discuss the robustness of our path a bit more before we proceed to the next part of the upgrade.
Thu Oct 28 17:00:52 2021 After Photos: https://photos.app.goo.gl/wNL4dxPyEgYTKQFG9 |
16434
|
Wed Oct 27 18:11:37 2021 |
Koji | Summary | BHD | Part II of BHR upgrade - Relocation of TT2 and MMT1/2 alignment |
Closed the PSL shutter @18:11
During the vent, we want to keep the cavity unlocked whenever locking is not necessary.
|
16435
|
Wed Oct 27 18:16:45 2021 |
Koji | Summary | BHD | Part II of BHR upgrade - Relocation of TT2 and MMT1/2 alignment |
- Moving the MMT2 south by a cm is fine. This will give you ~0.5cm at TT1 without changing the other alignment much.
- IMC mode is moving because of your presence + HEPA blow.
- 2cm at Faraday is plenty for the beam diameter of a few mm.
|
16436
|
Wed Oct 27 19:34:52 2021 |
Koji | Summary | Electronics | New electronics racks |
1. The rack we cleaned today (came from West Bridge) will be placed between 1X3 and 1X4, right next to 1X4 (after removing the plastic boxes). (Attachment 1)
For easier work at the side of the 1X4, the side panel of the 1X4 should be removed before placing the new rack. Note that this rack is imperial and has 10-32 threads
2. In terms of the other rack for the Y arm, we found that the rack in storage is quite dirty. Anchal pointed out that we have a few racks standing along the Y arm (used as storage for the old VME/Euro card electronics) (Attachments 2/3)
They are not too dirty and are doing nothing there. Let's vacate one of them (the one right next to the optics preparation table) and use that space as a new storage area by placing a wire shelving rack there.
BTW, I thought it would be good to have the rack at the vertex side of 1Y1 (as 1Y0?), but the floor has a "KEEP OUT" marking there. I have no idea why we have this marking. Is this for crane operation??? Does anyone know? |
Attachment 1: P_20211027_180737.jpg
|
|
Attachment 2: P_20211027_180443.jpg
|
|
Attachment 3: P_20211027_180408.jpg
|
|
Attachment 4: P_20211027_180139.jpg
|
|
16437
|
Thu Oct 28 16:32:32 2021 |
Paco | Summary | BHD | Part IV of BHR upgrade - Removal of BSC eastern optics |
[Ian, Paco, Anchal]
We turned off the BSC oplev laser by turning the key counterclockwise. Ian then removed the following optics from the east end in the BSC:
- OM4-PJ (wires were disconnected before removal)
- GRX_SM1
- OM3
- BSOL1
We placed them in the center-front area of the XEND flow bench.
Photos: https://photos.app.goo.gl/rjZJD2zitDgxBfAdA |
16438
|
Thu Oct 28 17:01:54 2021 |
Anchal | Summary | BHD | Part III of BHR upgrade - Adding temp fixed flat mirror for PR2 |
[Anchal, Paco, Ian]
- We added a Y1-2037-0 mirror (former IPPOS SM2 mirror) on a fixed mount in the position of where PR2 is supposed to be in new BHR layout.
- After turning out all lights in the lab, we were able to see a transmitted beam on our beam finder card.
- We aligned the mirror so that it reflects the beam off to PR3 clearly and the reflection from PR3 hits the BS in the center.
- We were able to see clear Gaussian beams split by the BS going towards ITMX and ITMY.
Photos: https://photos.app.goo.gl/cKdbtLGa9NtkwqQ68 |
16440
|
Fri Oct 29 14:39:37 2021 |
Anchal | Summary | BHD | 1Y1 cleared. 1Y3 ready for C1SUS2 I/O and FE. |
[Anchal, Paco]
We cleared 1Y1 rack today removing the following items. This stuff is sitting on the floor about 2 meters east of 1Y3 (see attachment 1):
- A VME crate: we disconnected its power cords from the side bus.
- A NI PXIe-1071 crate with some SMA multiplexer units on it.
We also moved the power relay ethernet strip from the middle of the rack to the bottom of the rack clearing the space marked clear in Koji's schematics. See attachment 2.
There was nothing to clear in 1Y3. It is ready for installing c1sus2 I/O chassis and FE once the testing is complete.
We also removed some orphaned hanging SMA RG-405 cables between 1Y3 and 1Y1. |
Attachment 1: RemovedStuff.jpeg
|
|
Attachment 2: 1Y1.jpeg
|
|
Attachment 3: 1Y3.jpeg
|
|
16444
|
Tue Nov 2 16:42:00 2021 |
Paco | Summary | BHD | 1Y1 rack work |
[paco, ian]
After the new 1Y0 rack was placed near the 1Y1 rack by Chub and Anchal, today we worked on the 1Y1 rack. We removed some rails from spaces ~ 25 - 30. We then drilled a pair of ~ 10-32 thru-holes on some L-shaped bars to help support the c1sus2 machine weight. The hole spacing was set to 60 cm; this number is not a constant across all racks. Then, we mounted c1sus2. While doing this, Paco's knee clicked some of the video MUX box buttons (29 and 8 at least). We then opened the rack's side door to investigate the DC power strips on it before removing stuff. We did power off the DC33 supplies on there. No connections were made to allow us to keep building this rack.
When coming back to the control room, we noticed that 3 of the 4 analog video feeds for the test masses had gone down... why?
Next steps:
- Remove sorensen (x5) power supplies from top of 1Y1 .. what are they actually powering???
- Make more bars to support heavy IO exp and acromag chassis.
- Make all connections (neat).
Update Tue Nov 2 18:52:39 2021
- After turning the Sorensens back on, the ETM/ITM video feeds were restored. I will need to trace the power lines carefully before removing these.
|
16447
|
Wed Nov 3 16:55:23 2021 |
Ian MacMillan | Summary | SUS | SUS Plant Plan for New Optics |
[Ian, Tega, Raj]
This is the rough plan for testing the new suspension models against the created plant model. We will test the suspensions on the plant model before we implement them in the full system.
- Get State-space matrices from the surf model for the SOS. Set up simplant model on teststand
- The state-space model has only 3 degrees of freedom (even the SURF's model).
- There are filter modules that have all 6 degrees of freedom for the suspensions. We will use these instead. I have implemented them in the same suspension model that would hold the state-space model, so if we ever get the state-space matrices we can easily substitute them.
- Load new controller model onto test stand. This new controller will be a copy of an existing suspension controller.
- Hook up controller to simplant. These should form a closed loop where the outputs from the controller go into the plant inputs and the plant outputs go to the controller inputs.
- Do tests on the setup.
- Look at the step response for each degree of freedom. Then compare them to the results from an existing optic.
- Also, working with Raj, have him build the same model in Python and then compare the two (a minimal sketch of such a comparison is given after this list).
- Make sure that the data is being written to the local frame handler.
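As referenced above, here is a minimal sketch of the step-response comparison, assuming a single suspension degree of freedom modeled as a damped harmonic oscillator with made-up parameters (f0 = 1 Hz, Q = 5); the real simplant / filter-module parameters will differ.

import numpy as np
from scipy import signal
import matplotlib.pyplot as plt

f0, Q = 1.0, 5.0                    # assumed pendulum frequency [Hz] and quality factor
w0 = 2 * np.pi * f0

# x'' + (w0/Q) x' + w0^2 x = w0^2 u  ->  state vector [x, x']
A = [[0.0, 1.0], [-w0**2, -w0 / Q]]
B = [[0.0], [w0**2]]
C = [[1.0, 0.0]]
D = [[0.0]]
plant = signal.StateSpace(A, B, C, D)

t = np.linspace(0, 20, 4000)
t, y = signal.step(plant, T=t)

# The second model (the filter-module plant, or Raj's python model) would be
# simulated the same way and overlaid here for the comparison.
plt.plot(t, y, label='state-space plant (single dof)')
plt.xlabel('Time [s]'); plt.ylabel('Response'); plt.legend()
plt.savefig('sos_step_response.pdf')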
MEDM file location
/opt/rtcds/userapps/release/sus/common/medm/hsss_tega_gautam
To run the ITMX display, from the directory above use:
hsss_tega_gautam>medm -x -macro "site=caltech,ifo=c1,IFO=C1,OPTIC=ITMX,SUSTYPE=IM,DCU_ID=21,FEC=45" SUS_CUST_HSSS_OVERVIEW.adl
|