ID | Date | Author | Type | Category | Subject

15167 | Tue Jan 28 17:36:45 2020 | gautam | Configuration | Computers | Local EPICS7.0 installed on megatron
[Jon, gautam]
We found that the caput commands were taking much longer to execute on megatron than on pianosa (for example). Suspecting that this had something to do with the fact that megatron was using EPICS binaries from the shared NFS drive which were compiled for a much older OS, I installed the latest stable release of EPICS on megatron. The new caput commands execute much faster. I also added the local EPICS directory to the head of the $PATH variable used by the MC autolocker and FSS Slow scripts, so that they use the new caput command. But mcup is still slow - maybe my new path definition isn't picked up and it is still using the NFS binaries? To be looked into...
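A quick way to check which caput a script environment actually resolves, and how long one call takes, is a minimal sketch like the following (the local install path and the test channel are placeholders, not the actual megatron paths):
import os
import shutil
import subprocess
import time

LOCAL_EPICS_BIN = '/opt/epics/base/bin/linux-x86_64'   # hypothetical local install path
os.environ['PATH'] = LOCAL_EPICS_BIN + os.pathsep + os.environ['PATH']

print('caput resolves to:', shutil.which('caput'))     # should point at the local install, not the NFS copy

t0 = time.time()
subprocess.run(['caput', 'C1:TST-DUMMY_CHANNEL', '0'], check=False)   # hypothetical test channel
print('caput took %.2f s' % (time.time() - t0))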
Quote:
There were a bunch of medm processes stalled on megatron (connected with screenshot taking). To see if they were interfering with the other scripts, I killed all of the medm processes, and commented out the line in the crontab that runs the screenshots every 10 mins. Let's see if this improves stability.

15168 | Tue Jan 28 19:12:30 2020 | Jon | Configuration | PSL | Spare channels added to c1psl chassis
After some discussion with Gautam, I decided to build more spare channels into the new c1psl machine. This is in anticipation of adding new laser and ISS channels in the near future, to avoid having to disconnect the installed chassis and pull it out of the rack. The spare channels will be wired to DB37M feedthroughs on the front side of the chassis, with enough wire length to be able to pull the breakout boards out of the front to reconfigure their wiring as needed (e.g., split off channels onto a separate connector).
To have enough overhead, this will require installing 1 additional ADC unit (XT1221) and 1 additional DAC (XT1541). We have enough spare BIO channels among the existing units (both sinking and sourcing). This will give us:
- 13 spare ADC channels
- 14 spare DAC channels
- 16 spare sinking BIO channels
- 12 spare sourcing BIO channels
The updated c1psl chassis wiring assignments are attached. It adds 4 new DB37M connectors for the spare channels (highlighted in yellow) and fixes one typo Jordan found while wiring today. The most current spreadsheet is available here. |
Attachment 1: c1psl_feedthrough_wiring_v2.pdf

15421 | Mon Jun 22 10:43:25 2020 | Jon | Configuration | VAC | Vac maintenance at 11 am
The vac system is going down at 11 am today for planned maintenance:
- Re-install the repaired TP2 and TP3 dry pumps [ELOG 15417]
- Incorporate an auto-mailer and flag channel into the controls code for signaling tripped interlocks [ELOG 15413]
We will advise when the work is completed. |

15424 | Mon Jun 22 20:06:06 2020 | Jon | Configuration | VAC | Vac maintenance complete
This work is finally complete. The dry pump replacement was finished quickly but the controls updates required some substantial debugging.
For one, the mailer code I had been given to install would not run against Python 3.4 on c1vac, the version run by the vac controls since about a year ago. There were some missing dependencies that proved difficult to install (related to Debian Jessie becoming unsupported). I ultimately solved the problem by migrating the whole system to Python 3.5. Getting the Python keyring working within systemd (for email account authentication) also took some time.
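For reference, a minimal sketch of such an auto-mailer using the Python keyring for authentication (the account, mailing-list address, service name, and SMTP host below are placeholders, not the actual configuration):
import smtplib
import keyring
from email.message import EmailMessage

def notify_interlock_trip(description):
    msg = EmailMessage()
    msg['Subject'] = '40m vac interlock tripped: %s' % description
    msg['From'] = 'vac-alerts@example.org'      # placeholder sender account
    msg['To'] = '40m-list@example.org'          # placeholder mailing list
    msg.set_content('Vacuum interlock condition triggered:\n%s' % description)

    # Password stored once beforehand with keyring.set_password('vac-mailer', 'vac-alerts', '<pw>')
    password = keyring.get_password('vac-mailer', 'vac-alerts')
    with smtplib.SMTP_SSL('smtp.example.org') as server:   # placeholder SMTP host
        server.login('vac-alerts', password)
        server.send_message(msg)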
Edit: The new interlock flag channel is named C1:Vac-interlock_flag.
Along the way, I discovered why the interlocks had been failing to auto-close the PSL shutter: The interlock was pointed to the channel C1:AUX-PSL_ShutterRqst. During the recent c1psl upgrade, we renamed this channel C1:PSL-PSL_ShutterRqst. This has been fixed.
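A sketch of the corresponding interlock action using the channels named above (this assumes pyepics is available on c1vac and that writing 0 to the shutter request closes the shutter; that convention should be verified):
from epics import caput

def on_interlock_trip(description):
    caput('C1:Vac-interlock_flag', 1)      # raise the new interlock flag channel
    caput('C1:PSL-PSL_ShutterRqst', 0)     # close the PSL shutter (post-upgrade channel name)
    # ...followed by the mailer notification described above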
The main volume is being pumped down, for now still in a TP3-backed configuration. As of 8:30 pm the pressure had fallen back to the upper 1E-6 range. The interlock protection is fully restored. Any time an interlock is triggered in the future, the system will send an immediate notification to the 40m mailing list. 👍
Quote:
The vac system is going down at 11 am today for planned maintenance:
- Re-install the repaired TP2 and TP3 dry pumps [ELOG 15417]
- Incorporate an auto-mailer and flag channel into the controls code for signaling tripped interlocks [ELOG 15413]
Attachment 1: Pumpdown-6-22-20.png

15425 | Tue Jun 23 17:54:56 2020 | rana | Configuration | VAC | Vac maintenance complete
I propose we go for all CAPS for all channel names. The lower-case names are just a holdover from Steve/Alan from the 90's. All other systems are all CAPS.
It avoids us having to force them all to UPPER in the scripts and channel lists.

15446 | Wed Jul 1 18:03:04 2020 | Jon | Configuration | VAC | UPS replacements
I looked into how the new UPS devices suggested by Chub would communicate with the vac interlocks. There are several possible ways, listed in order of preference:
- Python interlock service directly queries the UPS via a USB link using the (unofficial) tripplite package. Direct communication would be ideal because it avoids introducing a dependency on third-party software outside the monitoring/control capability of the interlock manager. However the documentation warns this package does not work for all models...
- Configure Tripp Lite's proprietary software (PowerAlert Local) to send SYSLOG event messages (UDP packets) to a socket monitored by the Python interlock manager.
- Configure the proprietary software to execute a custom script upon an event occurring. The script would, e.g., set an EPICS flag channel which the interlock manager is continually monitoring.
I recommend we proceed with ordering the Tripp Lite 36HW20 for TP1 and Tripp Lite 1AYA6 for TP2 and TP3 (and other 120V electronics). As far as I can tell, the only difference between the two 120V options is that the 6FXN4 model is TAA-compliant. |
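As an illustration of option 2 above, here is a minimal sketch of a UDP listener that the Python interlock manager could poll for PowerAlert SYSLOG messages (port 514 is the standard syslog port and is an assumption about the PowerAlert configuration; binding it needs root, or a high port could be used instead):
import socket

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(('', 514))
sock.settimeout(0.1)   # short timeout so the interlock loop is not blocked

def poll_ups_events():
    """Return any UPS event messages received since the last poll."""
    events = []
    while True:
        try:
            data, _addr = sock.recvfrom(4096)
        except socket.timeout:
            break
        events.append(data.decode(errors='replace'))
    return events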

15465 | Thu Jul 9 18:00:35 2020 | Jon | Configuration | VAC | UPS replacements
Chub has placed the order for two new UPS units (115V for TP2/3 and a 220V version for TP1).
They will arrive within the next two weeks.
Quote:
I looked into how the new UPS devices suggested by Chub would communicate with the vac interlocks. There are several possible ways, listed in order of preference:
- Python interlock service directly queries the UPS via a USB link using the (unofficial) tripplite package. Direct communication would be ideal because it avoids introducing a dependency on third-party software outside the monitoring/control capability of the interlock manager. However the documentation warns this package does not work for all models...
- Configure Tripp Lite's proprietary software (PowerAlert Local) to send SYSLOG event messages (UDP packets) to a socket monitored by the Python interlock manager.
- Configure the proprietary software to execute a custom script upon an event occurring. The script would, e.g., set an EPICS flag channel which the interlock manager is continually monitoring.
I recommend we proceed with ordering the Tripp Lite 36HW20 for TP1 and Tripp Lite 1AYA6 for TP2 and TP3 (and other 120V electronics). As far as I can tell, the only difference between the two 120V options is that the 6FXN4 model is TAA-compliant.

15510 | Sat Aug 8 07:36:52 2020 | Sanika Khadkikar | Configuration | Calibration-Repair | BS Seismometer - Multi-channel calibration
Summary :
I have been working on analyzing the seismic data obtained from the 3 seismometers present in the lab. I noticed while looking at the combined time series and the gain plots of the 3 seismometers that there is some error in the calibration of the BS seismometer. The EX and the EY seismometers seem to be well-calibrated as opposed to the BS seismometer.
The calibration factors have been determined to be :
BS-X Channel: 
BS-Y Channel: 
BS-Z Channel: 
Details :
The seismometers each have 3 channels, i.e., X, Y, and Z, for measuring the displacements in all three directions. The X channels of the three seismometers should be more or less coherent in the absence of any seismic excitation, with the gain between similar channels being 1; the same holds for the Y and Z channels. After analyzing multiple datasets, it was observed that all three channels of the BS seismometer differed significantly from the corresponding channels of the EX and EY seismometers, and they were not properly calibrated even in the region where they were found to be coherent.
Method :
Note: All the frequency-domain plots have been calculated for a sampling rate of 32 Hz. The plots were found to be extremely coherent in a certain frequency range, ~0.1 Hz to 2 Hz, so this range is used to understand the relative calibration errors. The spread around the function is due to the error caused by coherence values differing from unity and by the averaging performed in the Welch estimate. 9 averages have been performed for the following analysis, keeping in mind the needed frequency resolution (~0.01 Hz) and the accuracy of the power calculated at every frequency.
- I first analyzed the regions in which the similar channels were found to be coherent, in order to have a proper gain analysis. The EY seismometer was found to be the most stable one, so it has been used as a reference. I looked at the coherence between similar channels of the two seismometers together with the bode plots. A transfer function estimator was used to analyze the relative calibration between all 3 pairs of seismometers. In the given frequency range EX and EY have a gain of 1, so their relative calibration is proper. The relative calibration between the BS and the EY seismometers is not proper, as the resultant gain is not 1. The attached plots show the discrepancies clearly:
- BS-X & EY-X Transfer Function : Attachment #1
- BS-Y & EY-Y Transfer Function : Attachment #2
The gain in the given frequency range is ~3. The phase plot also shows a 180-degree phase as opposed to 0, so a negative sign would also be required in the calibration factor. Thus the calibration factor for the Y channel of the BS seismometer should be around -3.
- BS-Z & EY-Z Transfer Function : Attachment #3
The mean value of the gain in the given frequency range is the desired calibration factor, and its error is the mean of the error over the chosen gain dataset, which arises from the factors mentioned above (a sketch of this estimate follows).
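A sketch of this transfer-function estimate (the channel pairing, 32 Hz sampling, and 0.1-2 Hz band are taken from this entry; nperseg here is an arbitrary choice):
import numpy as np
from scipy.signal import csd, welch

def relative_gain(bs, ey, fs=32.0, nperseg=4096, band=(0.1, 2.0)):
    # H1 estimate of the BS channel relative to the matching EY reference channel.
    f, p_ey = welch(ey, fs=fs, nperseg=nperseg)
    _, p_cross = csd(ey, bs, fs=fs, nperseg=nperseg)
    tf = p_cross / p_ey                              # transfer function estimate BS/EY
    sel = (f >= band[0]) & (f <= band[1])
    gain = np.mean(np.abs(tf[sel]))                  # ~3 for BS-Y vs EY-Y
    sign = -1.0 if np.mean(np.abs(np.angle(tf[sel]))) > np.pi / 2 else 1.0   # ~180 deg => minus sign
    return sign * gain                               # ~ -3 for the BS-Y channel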
Note: The standard error envelope plotted in the attached graphs is calculated as follows (a code sketch follows the steps):
1. Divide the data into n segments according to the resolution wanted for the Welch averaging to be performed later.
2. Calculate PSD for every segment (no averaging).
3. Calculate the standard error for every frequency bin by looking at the distribution formed by the n values obtained by taking that bin's value from every segment.
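A minimal sketch of these three steps (assuming fs = 32 Hz and n = 9 segments, as quoted above):
import numpy as np
from scipy.signal import welch

def psd_with_stderr(x, fs=32.0, n_seg=9):
    x = np.asarray(x, dtype=float)
    x = x[: (len(x) // n_seg) * n_seg]               # trim so all segments have equal length
    segs = x.reshape(n_seg, -1)                      # step 1: split into n segments
    psds = []
    for seg in segs:
        f, p = welch(seg, fs=fs, nperseg=len(seg))   # step 2: per-segment PSD, no averaging
        psds.append(p)
    psds = np.array(psds)
    mean_psd = psds.mean(axis=0)
    stderr = psds.std(axis=0, ddof=1) / np.sqrt(n_seg)   # step 3: standard error per frequency bin
    return f, mean_psd, stderr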
Discussions :
The BS seismometer is a different model than the EX and the EY seismometers, which might be a major reason why we need a special calibration for the BS seismometer while EX and EY are fine. The sign flip in the BS-Y channel may cause a lot of errors in future data acquisitions. The time series plots in Attachment #4 show an evident DC offset present in the data. All of the information mentioned above indicates that there is some electrical or mechanical defect present in the seismometer, and it may require a reset. Kindly let me know if and when the seismometer is reset so that I can calibrate it again.
Attachment 1: BS_X-EY_X.png
Attachment 2: BS_Y-EY_Y.png
Attachment 3: BS_Z-EY_Z.png
Attachment 4: timeseries.png

15526 | Fri Aug 14 10:10:56 2020 | Jon | Configuration | VAC | Vacuum repairs today
The vac system is going down now for planned repairs [ELOG 15499]. It will likely take most of the day. Will advise when it's back up. |

15527 | Sat Aug 15 02:02:13 2020 | Jon | Configuration | VAC | Vacuum repairs today
Vacuum work is completed. The TP2 and TP3 interlocks have been overhauled as proposed in ELOG 15499 and seem to be performing reliably. We're now back in the nominal system state, with TP2 again backing for TP1 and TP3 pumping the annuli. I'll post the full implementation details in the morning.
I did not get to setting up the new UPS units. That will have to be scheduled for another day.
Quote:
The vac system is going down now for planned repairs [ELOG 15499]. It will likely take most of the day. Will advise when it's back up.

15528 | Sat Aug 15 15:12:22 2020 | Jon | Configuration | VAC | Overhaul of small turbo pump interlocks
Summary
Yesterday I completed the switchover of small turbo pump interlocks as proposed in ELOG 15499. This overhaul altogether eliminates the dependency on RS232 readbacks, which had become unreliable (glitchy) in both controllers. In their place, the V4(5) valve-close interlocks are now predicated on an analog controller output whose voltage goes high when the rotation speed is >= 80% of the nominal setpoint. The critical speed is 52.8 krpm for TP2 and 40 krpm for TP3. There already exist hardware interlocks of V4(5) using the same signals, which I have also tested.
Interlock signal
Unlike the TP1 controller, which exposes simple relays whose open/closed states are sensed by Acromags, what the TP2(3) controllers output is an energized 24V signal for controlling such a relay (output circuit pictured below). I hadn't appreciated this difference and it cost me time yesterday. The ultimate solution was to route the signals through a set of new 24V Phoenix Contact relays installed inside the Acromag chassis. However, this required removing the chassis from the rack and bringing it to the electronics bench (rather than doing the work in situ, as I had planned). The relays are mounted to the second DIN rail opposite the Acromags. Each TP2(3) signal controls the state of a relay, which in turn is sensed using an Acromag XT1111.

Signal routing
The TP2(3) "normal-speed" signals are already in use by hardware interlocks of V4(5). Each signal is routed into the main AC relay box, where it controls an "interrupter" relay through which the Acromag control signal for the main V4(5) relay is passed. These signals are now shared with the digital controls system using a passive DB15 Y-splitter. The signal routing is shown below.

Interlock conditions
The new turbo-pump-related interlock conditions and their channel predicates are listed below. The full up-to-date channel list and wiring assignments for c1vac are maintained here.
Channel            | Type | New? | Interlock-triggering condition
C1:Vac-TP1_norm    | BI   | No   | Rotation speed < 90% nominal setpoint (29 krpm)
C1:Vac-TP1_fail    | BI   | No   | Critical fault occurrence
C1:Vac-TP1_current | AI   | No   | Current draw > 4 A
C1:Vac-TP2_norm    | BI   | Yes  | Rotation speed < 80% nominal setpoint (52.8 krpm)
C1:Vac-TP3_norm    | BI   | Yes  | Rotation speed < 80% nominal setpoint (40 krpm)
There are two new channels, both of which provide a binary indication of whether the pump speed is outside its nominal range. I did not have enough 24V relays to also add the C1:Vac-TP2(3)_fail channels listed in ELOG 15499. However, these signals are redundant with the existing interlocks, and the existing serial "Status" readback will already print failure messages to the MEDM screens. All of the TP2(3) serial readback channels remain, which monitor voltage, current, operational status, and temperature. The pump on/off and low-speed mode on/off controls remain implemented with serial signals as well.
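A sketch of the corresponding software predicate for the V4(5) close interlock (this assumes pyepics is available and that the _norm binary channels read 1 while the pump is at or above the 80% threshold; the polarity should be checked against the actual wiring):
from epics import caget

def tp2_at_speed():
    return caget('C1:Vac-TP2_norm') == 1   # >= 80% of setpoint (52.8 krpm critical speed)

def tp3_at_speed():
    return caget('C1:Vac-TP3_norm') == 1   # >= 80% of setpoint (40 krpm critical speed)

def v4_v5_may_stay_open():
    """Both small turbos at normal speed -> do not trigger the valve-close interlock."""
    return tp2_at_speed() and tp3_at_speed()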
The new analog readbacks have been added to the MEDM controls screens, circled below:

Other incidental repairs
- I replaced the (dead) LED monitor at the vac controls console. In the process of finding a replacement, I came across another dead spare monitor as well. Both have been labeled "DEAD" and moved to Jordan's desk for disposal.
- I found the current TP3 Varian V70D controller to be just as glitchy in the analog outputs as well. That likely indicates there is a problem with the microprocessor itself, not just the serial communications card as I thought might be the case. I replaced the controller with the spare unit which was mounted right next to it in the rack [ELOG 13143]. The new unit has not glitched since the time I installed it around 10 pm last night.
Attachment 1: small_tp_signal_routing.png
Attachment 3: small_tp_signal_routing.png
Attachment 4: medm_screen.png

15738 | Fri Dec 18 22:59:12 2020 | Jon | Configuration | CDS | Updated CDS upgrade plan
Attached is the layout for the "intermediate" CDS upgrade option, as was discussed on Wednesday. Under this plan:
- Existing FEs stay where they are (they are not moved to a single rack)
- Dolphin IPC remains PCIe Gen 1
- RFM network is entirely replaced with Dolphin IPC
Please send me any omissions or corrections to the layout. |
Attachment 1: CDS_2020_Dec.pdf
Attachment 2: CDS_2020_Dec.graffle

15742 | Mon Dec 21 09:28:50 2020 | Jamie | Configuration | CDS | Updated CDS upgrade plan
Quote:
Attached is the layout for the "intermediate" CDS upgrade option, as was discussed on Wednesday. Under this plan:
- Existing FEs stay where they are (they are not moved to a single rack)
- Dolphin IPC remains PCIe Gen 1
- RFM network is entirely replaced with Dolphin IPC
Please send me any omissions or corrections to the layout.
I just want to point out that if you move all the FEs to the same rack they can all be connected to the Dolphin switch via copper, and you would only have to string a single fiber to every IO rack, rather than the multiple now (for network, dolphin, timing, etc.). |

15746 | Wed Dec 23 23:06:45 2020 | gautam | Configuration | CDS | Updated CDS upgrade plan
- The diagram should clearly show the host machines and the expansion chassis and the interconnects between them.
- We no longer have any Gentoo bootserver or diskless FEs.
- The "c1lsc" host is in 1X4 not 1Y3.
- The connection between c1lsc and Dolphin switch is copper not fiber. I don't know how many Gbps it is. But if the switch is 10 Gbps, are they really selling interface cables that have lower speed? The datasheet says 10 Gbps.
- The control room workstations - Debian 10 (rossa) is the way forward, I believe. It is true pianosa remains SL7 (and we should continue to keep it so until all other machines have been upgraded and tested on Debian 10).
- There is no "IOO/OAF". The host is called "c1ioo".
- The interconnect between Dolphin switch and c1ioo host is via fiber not copper.
- It'd be good to have an accurate diagram of the current situation as well (with the RFM network).
- I'm not sure if the 1Y1 rack can accommodate 2 FEs and 2 expansion chassis. Maybe if we clear everything else there out...
- There are 2 "2GB/s" Copper traces. I think the legend should make clear what's going on - i.e. which cables are ethernet (Cat 6? Cat 5? What's the speed limitation? The cable? Or the switch?), which are PCIe cables etc etc.
I don't have omnigraffle - what about uploading the source doc in a format that the excellent (and free) draw.io can handle? I think we can do a much better job of making this diagram reflect reality. There should also be a corresponding diagram for the Acromag system (but that doesn't have to be tied to this task). Megatron (scripts machine) and nodus should be added to that diagram as well.
Please send me any omissions or corrections to the layout.

15771 | Tue Jan 19 14:05:25 2021 | Jon | Configuration | CDS | Updated CDS upgrade plan
I've produced updated diagrams of the CDS layout, taking the comments in 15476 into account. I've also converted the 40m's diagrams from Omnigraffle ($150/license) to the free, cloud-based platform draw.io. I had never heard of draw.io, but I found that it has most all the same functionality. It also integrates nicely with Google Drive.
Attachment 1: The planned CDS upgrade (2 new FEs, fully replace RFM network with Gen 1 Dolphin IPC)
Attachment 2: The current 40m CDS topology
The most up-to-date diagrams are hosted at the following links:
Please send me any further corrections or omissions. Anyone logged in with LIGO.ORG credentials can also directly edit the diagrams. |
Attachment 1: 40m_CDS_Network_-_Planned.pdf
Attachment 2: 40m_CDS_Network_-_Current.pdf

15772 | Tue Jan 19 15:43:24 2021 | gautam | Configuration | CDS | Updated CDS upgrade plan
Not sure if 1Y1 can accommodate both c1sus2 and c1bhd as well as the various electronics chassis that will have to be installed. There may need to be some distribution between 1Y1 and 1Y3. Does Koji's new wiring also specify which racks hold which chassis?
Some minor improvements to the diagram:
- The GPS receiver in 1X7 should be added. All the timing in the lab is synced to the 1pps from this.
- We should add hyperlinks to the various parts datasheets (e.g. Dolphin switch, RFM switch, etc etc) so that the diagram will be truly informative and self-contained.
- Megatron and nodus, but especially chiara (NFS server), should be added to the diagram.

15921 | Mon Mar 15 20:40:01 2021 | rana | Configuration | Computers | installed QTgrace on donatella for dataviewer
I installed QTgrace using yum on donatella. Both Grace and XMgrace are broken due to some boring fight between the Fedora package maintainers and the (non existent) Grace support team. So I have symlinked it:
controls@donatella|bin> sudo mv xmgrace xmgrace_bak
controls@donatella|bin> sudo ln -s qtgrace xmgrace
controls@donatella|bin> pwd
/usr/bin
I checked that dataviewer works now for realtime and playback. Although the middle click paste on the mouse doesn't work yet. |
Attachment 1: cutiegrace.png

15928 | Wed Mar 17 09:05:01 2021 | Paco, Anchal | Configuration | Computers | 40m Control Room Changes
- Switched positions of allegra and donatella.
- While doing so, the HDMI cable previously used by donatella snapped. We replaced this cable with another unused cable we found connected only at one end to rossa. We should get more HDMI cables if that cable was in use for some other purpose.
- Paco bought a bluetooth speaker/mic that is placed in front of allegra, and its USB adapter is connected to the iMac's keyboard at the bottom. With the new camera installed, the 40m video call environment is now complete.
- Again, we have placed allegra's monitor for place holder but it is not working and we need new monitors for it in future whenever it is going to be used.

16027 | Wed Apr 14 13:16:20 2021 | Anchal | Configuration | Computers | 40m Control Room Changes
- I have confirmed that the two old monitors' backlighting is not working. One can see the impression of the display without any brightness on them. Both old monitors are on the shelf behind.
- Today we got a monitor and mouse from Mike. I had to change GRUB_GFXMODE in /etc/default/grub to 1920x1200@30 on allegra for it to work with the (any) monitor.
- Allegra is Debian 10 with latest cds-workstation installed on it. It is a good test station to migrate our existing scripts to start using updated cds-workstation configuration.
Quote:
- Again, we have placed allegra's monitor for place holder but it is not working and we need new monitors for it in future whenever it is going to be used.

16163 | Wed May 26 11:45:57 2021 | Anchal, Paco | Configuration | IMC | MC2 analog camera
[Anchal, Paco]
We went near the MC2 area and opened the lid to inspect the GigE and analog video monitors for MC2. It looked like whatever image is coming through the viewport is split between the GigE (for beam tracking) and the analog monitor. We hooked up the monitor found on the floor nearby and tweaked the analog video camera around to get a feel for how the "ghost" image of the transmission moves around. It looks like in order to try and remove these "extra spots" we would need to tweak the beam tracking BS. We will consult the beam tracking authorities and return to this.

16302 | Thu Aug 26 10:30:14 2021 | Jamie | Configuration | CDS | front end time synchronization fixed?
I've been looking at why the front end NTP time synchronization did not seem to be working. I think it might not have been working because the NTP server the front ends were pointing to, fb1, was not actually responding to synchronization requests.
I cleaned up some things on fb1 and the front ends, which I think unstuck things.
On fb1:
- stopped/disabled the default client (systemd-timesyncd), and properly installed the full NTP server (ntp)
- the ntp server package for debian jessie is old-style sysVinit, not systemd. In order to make it more integrated I copied the auto-generated service file to /etc/systemd/system/ntp.service, and added an "[Install]" section that specifies that it should be available during the default "multi-user.target".
- "enabled" the new service to auto-start at boot ("sudo systemctl enable ntp.service")
- made sure ntp was configured to serve the front end network ('broadcast 192.168.123.255') and then restarted the server ("sudo systemctl restart ntp.service")
For the front ends:
- on fb1 I chroot'd into the front-end diskless root (/diskless/root) and manually specified that systemd-timesyncd should start on boot by creating a symlink to the timesyncd service in the multi-user.target directory:
$ sudo chroot /diskless/root
$ cd /etc/systemd/system/multi-user.target.wants
$ ln -s /lib/systemd/system/systemd-timesyncd.service
- on the front end itself (c1iscex as a test) I did a "systemctl daemon-reload" to force it to reload the systemd config, and then restarted the client ("systemctl restart systemd-timesyncd")
- checked the NTP synchronization with timedatectl:
controls@c1iscex:~ 0$ timedatectl
Local time: Thu 2021-08-26 11:35:10 PDT
Universal time: Thu 2021-08-26 18:35:10 UTC
RTC time: Thu 2021-08-26 18:35:10
Time zone: America/Los_Angeles (PDT, -0700)
NTP enabled: yes
NTP synchronized: yes
RTC in local TZ: no
DST active: yes
Last DST change: DST began at
Sun 2021-03-14 01:59:59 PST
Sun 2021-03-14 03:00:00 PDT
Next DST change: DST ends (the clock jumps one hour backwards) at
Sun 2021-11-07 01:59:59 PDT
Sun 2021-11-07 01:00:00 PST
controls@c1iscex:~ 0$
Note that it is now reporting "NTP enabled: yes" (the service is enabled to start at boot) and "NTP synchronized: yes" (synchronization is happening), neither of which it was reporting previously. I also note that the systemd-timesyncd client service is now loaded and enabled, is no longer reporting that it is in an "Idle" state and is in fact reporting that it synchronized to the proper server, and it is logging updates:
controls@c1iscex:~ 0$ sudo systemctl status systemd-timesyncd
● systemd-timesyncd.service - Network Time Synchronization
Loaded: loaded (/lib/systemd/system/systemd-timesyncd.service; enabled)
Active: active (running) since Thu 2021-08-26 10:20:11 PDT; 1h 22min ago
Docs: man:systemd-timesyncd.service(8)
Main PID: 2918 (systemd-timesyn)
Status: "Using Time Server 192.168.113.201:123 (ntpserver)."
CGroup: /system.slice/systemd-timesyncd.service
└─2918 /lib/systemd/systemd-timesyncd
Aug 26 10:20:11 c1iscex systemd[1]: Started Network Time Synchronization.
Aug 26 10:20:11 c1iscex systemd-timesyncd[2918]: Using NTP server 192.168.113.201:123 (ntpserver).
Aug 26 10:20:11 c1iscex systemd-timesyncd[2918]: interval/delta/delay/jitter/drift 64s/+0.000s/0.000s/0.000s/+26ppm
Aug 26 10:21:15 c1iscex systemd-timesyncd[2918]: interval/delta/delay/jitter/drift 128s/-0.000s/0.000s/0.000s/+25ppm
Aug 26 10:23:23 c1iscex systemd-timesyncd[2918]: interval/delta/delay/jitter/drift 256s/+0.001s/0.000s/0.000s/+26ppm
Aug 26 10:27:40 c1iscex systemd-timesyncd[2918]: interval/delta/delay/jitter/drift 512s/+0.003s/0.000s/0.001s/+29ppm
Aug 26 10:36:12 c1iscex systemd-timesyncd[2918]: interval/delta/delay/jitter/drift 1024s/+0.008s/0.000s/0.003s/+33ppm
Aug 26 10:53:16 c1iscex systemd-timesyncd[2918]: interval/delta/delay/jitter/drift 2048s/-0.026s/0.000s/0.010s/+27ppm
Aug 26 11:27:24 c1iscex systemd-timesyncd[2918]: interval/delta/delay/jitter/drift 2048s/+0.009s/0.000s/0.011s/+29ppm
controls@c1iscex:~ 0$
So I think this means everything is working.
I then went ahead and reloaded and restarted the timesyncd services on the rest of the front ends.
We still need to confirm that everything comes up properly the next time we have an opportunity to reboot fb1 and the front ends (or the opportunity is forced upon us).
There was speculation that the NTP clients on the front ends (systemd-timesyncd) would not work on a read-only filesystem, but this doesn't seem to be true. You can't trust everything you read on the internet. |

16614 | Mon Jan 24 12:33:41 2022 | rana | Configuration | Wiki | AIC Wiki: txz files allowed
I updated the mime.local.conf file for the AIC Wiki so as to allow attachments with the .txz format. This should be persistent over upgrades, since it's a local file.

16874 | Wed May 25 16:56:44 2022 | Paco | Configuration | BHD | IFO recovery - IMC alignment
[Yuta, Paco]
We aligned IMC to recover the IFO progressively. First step was to center the MC REFL beamspot on the camera as well as the WFS DC. Then slide MC2 and MC3 together. Below are the alignment slider positions before/after.
    | MC1 (before --> after) | MC2 (before --> after) | MC3 (before --> after)
PIT | -0.3398 --> -0.4768    | 4.1217 --> 4.0737      | -1.9808 --> -1.9308
YAW | -0.8947 --> -0.7557    | -1.2350 --> -1.3350    | 1.5598 --> 1.5638

16875 | Wed May 25 17:34:47 2022 | yuta | Configuration | BHD | IFO recovery - IFO alignment
IFO aligned to maximize flashings, except for GRY and LO-AS.
What we did:
0. After recovering the IMC, C1:IOO-MC_TRANS_SUM was ~1300 with C1:IOO-MC_RFPD_DCMON of ~0.11 (~10% better than what we had during the vent). Xarm and Yarm were already flashing, and we could see the beam on the AS and POP cameras.
1. Aligned ETMX and ITMX to green X input beam to maximize C1:ALS-TRX_OUT, to ~0.19.
2. Aligned TT2-PR3 to get C1:SUS-ETMX_TRX_OUT flashing at 0.09 at max
3. Aligned ITMY to have nice POP blinking of MICH at POP camera
4. Aligned ETMY-PR3 to have C1:SUS-ETMY_TRY_OUT flashing at 0.06 at max
5. Misaligned ITMY (with +2 in C1:SUS-ITMY_PIT_COMM), and aligned PRM to have PRX (PRM-ITMX cavity) flashing at C1:LSC-ASDC_IN1 at ~20 (offset -70) at max
6. Misaligned PRM, and aligned SRM to have SRX (SRM-ITMX cavity) flashing at C1:LSC-ASDC_IN1 at ~20 (offset -70) at max
7. Restored all the alignment. ITMY didn't quite come back, so I need to tweak the alignment to maximize TRY flashing.
Result:
Current alignment is as attached. The IR beam at the AS, REFL, MCR cameras and the green beam at the GTRX camera all seem slightly to the left on the monitors, but look as they did before the pump down. GTRY is still clipped, but green Y locks stably. Oplevs were not so useful to recover the alignment. ETMX/Y oplevs did not drift too much, probably because we don't have in-vac steering mirrors.
Next:
- Tweak alignment of green Y input to follow Yarm
- Do LO-AS alignment
- REFL DC is not receiving beam. Re-alignment necessary
- Oplev centering
- BHD PDs need to be replaced to lower gain PDs and need to be connected to CDS |
Attachment 1: Screenshot_2022-05-25_17-47-57.png

16877 | Thu May 26 19:55:43 2022 | yuta | Configuration | BHD | Oplevs centered, BHD DCPDs are now online
[Paco, Yuta]
We have aligned the IFO (except for LO-AS and GRY), and centered all the oplevs.
We have also restored Gautam's in-air BHD DCPD setup and placed it to ITMY table.
BHD DC PD signals are now online at C1:X04-MADC1_EPICS_CH4 and CH5.
Oplevs:
Aligned the IFO following the steps in elog 40m/16875.
When we were working on the BHD DCPDs, we lost the REFL beam on the camera and the flashing of both arms. Alignment was restored mostly with TT2 pitch.
We centered all the oplevs after the recovery (see attached).
BHD DCPDs:
1. We removed a circuit box with the M2 ISS photodetector readout board from the AP table, and the in-air BHD photodiodes from the optics graveyard. (see LIGO-E2000436 and elog 40m/15493 for the wiring diagram)
2. Took out the two temporary Thorlabs PDA100A PDs used for aligning LO-AS during the vent from the ITMY table, and placed the BHD setup on the ITMY table (see attached).
3. DB9 cable (15ft+10ft) was connected from M2 ISS box to anti-aliasing chassis for ADC1 of C1X04 at 1Y2 rack (see attached).
4. +/-18V power for M2 ISS box was supplied from 1Y1 rack.
5. BHD DCPD signals are now available at C1:X04-MADC1_EPICS_CH4 and CH5 (see attached).
Next:
- Tweak alignment of green Y input to follow Yarm
- Do LO-AS alignment
- Centering of PDs everywhere with IFO aligned
- Update RTS model for BHD |
Attachment 1: elog_1Y2.JPG
Attachment 2: elog_BHD.JPG
Attachment 3: elog_box.JPG
Attachment 4: Screenshot_2022-05-26_17-37-27_IFOaligned_OplevCentered.png
Attachment 5: Screenshot_2022-05-26_20-35-02.png

16880 | Fri May 27 17:45:53 2022 | yuta | Configuration | BHD | BHD camera installed, GRY aligned
[JC, Paco, Yuta]
After the IFO recovery (elog 40m/16881), we installed an analog camera for BHD fringe using a BNC cable for old SRMF camera so that we can see it from the control room.
We also aligned AS-LO using LO1,LO2 and AS4.
We then aligned GRY injection to get maximum GTRY.
Maximum TEM00s right now are
C1:SUS-ETMX_TRX_OUT_DQ ~0.1
C1:SUS-ETMY_TRY_OUT_DQ ~0.05
C1:ALS-TRX_OUT_DQ ~0.20
C1:ALS-TRY_OUT_DQ ~0.18 |

16886 | Thu Jun 2 20:05:37 2022 | yuta | Configuration | PSL | IMC input power recovered to 1W, some alignment works
[Paco, Yuta]
We have increased the output power from the PSL table to 951 mW (it was 96.7 mW).
IMC was recovered including WFS, and both arms are flashing nicely in IR.
We tweaked the alignment of GRX and GRY injection to align them with IR, but it was hard.
Right now IR beams are not centered on TMs. We should center them first.
What we did:
Power increase and IMC recovery
- Replaced a beam splitter which splits the beam into IMC REFL RF PD path and WFS path from R=98% to R=10% one. Reflection goes to RF PD.
- Put a R=98% beam splitter back into WFS path.
- We also tried to put a window in front of IMC REFL camera to recover the arrangement in 40m wiki, but the beam reflected from the window was too weak for us to align. So, we decided not to place a window in front of the camera.
- Attached photos are the IMC REFL path before and after the work.
- Measured the PSL output power as Koji did in elog 40m/16672. It was measured to be 96.7+/- 0.5 mW.
- Rotated the HWP using the Universal Motion Controller (it was not possible for us to do it from the MEDM screen). The position was changed from 73.99 deg to 36.99 deg. Output power was measured to be 951 +/- 1 mW
- IMC locked without any other changes.
- Changed C1:IOO-WFS_TRIGGER_THRESH_ON to 5000 (was 500). IMC WFS also worked.
- After running MC WFS relief script, WFS DC offsets and RF offsets are adjusted following the steps in elog 40m/16835. Below are the results.
C1:IOO-WFS1_SEG1_DC.AOFF => -0.0008882080010759334
C1:IOO-WFS1_SEG2_DC.AOFF => -0.0006527877490346629
C1:IOO-WFS1_SEG3_DC.AOFF => -0.0005847311617496113
C1:IOO-WFS1_SEG4_DC.AOFF => -0.0010395992663688955
C1:IOO-WFS2_SEG1_DC.AOFF => -0.0025944841559976334
C1:IOO-WFS2_SEG2_DC.AOFF => -0.003191715502180159
C1:IOO-WFS2_SEG3_DC.AOFF => -0.0036688060499727726
C1:IOO-WFS2_SEG4_DC.AOFF => -0.004011172490815322
IOO-WFS1_I1 : +1977.7 -> +2250 (Significant change)
IOO-WFS1_I2 : +3785.8 -> +3973.2
IOO-WFS1_I3 : +2014.2 -> +2277.7 (Significant change)
IOO-WFS1_I4 : -208.83 -> +430.96 (Significant change)
IOO-WFS1_Q1 : +2379.5 -> +1517.4 (Significant change)
IOO-WFS1_Q2 : +2260.4 -> +2172.6
IOO-WFS1_Q3 : +588.86 -> +978.98 (Significant change)
IOO-WFS1_Q4 : +1654.8 -> +195.38 (Significant change)
IOO-WFS2_I1 : -1619.9 -> -534.25 (Significant change)
IOO-WFS2_I2 : +1610.4 -> +1619.8
IOO-WFS2_I3 : +1919.6 -> +2179.8 (Significant change)
IOO-WFS2_I4 : +1557 -> +1426.6
IOO-WFS2_Q1 : -62.58 -> +345.56 (Significant change)
IOO-WFS2_Q2 : +777.01 -> +805.41
IOO-WFS2_Q3 : -6183.6 -> -5365.8 (Significant change)
IOO-WFS2_Q4 : +4457.2 -> +4397.
IFO Alignment
- Aligned both arms using IR. Both arms flash at the following values, which is consistent with the power increase.
C1:SUS-ETMX_TRX_OUT_DQ ~1.1
C1:SUS-ETMY_TRY_OUT_DQ ~0.6
- With this, we tried to tweak GRX and GRY injection. The following is after the work. We could increase GTRX to 0.204 when the Xarm is aligned to green. This suggests that GRX injection is not aligned nicely yet. But the beams are also not centered on TMs. We should center them first.
C1:ALS-TRX_OUT_DQ ~0.13
C1:ALS-TRY_OUT_DQ ~0.07
- GTRX and GTRY cameras are adjusted to have nicer images. In GRX path, the second and last lens before the PD and CCD was pulled ~ 1 cm behind its original position and both beams realigned. Then, on GRY path, the beam was re-centered on the first and only lens, the whole assembly pushed forward by ~ 2 cm and the beams re-centered.
Next:
- Center the IR beam on TMs (first by our eyeballs; better to use A2L after arm locking is recovered and coils are balanced)
- Tweak GRX and GRY injection (restore GRY PZTs?)
- Install ETMXT camera (if it is easy)
- Lock Xarm and Yarm (C1:LSC-TRX/Y_OUT needs to be fixed for triggering. Can we use other PDs for triggering?)
- MICH locking (REFL and AS PDs might need to be re-aligned; they are not receiving much light)
- RTS model for BHD needs to be updated
Attachment 1: Before.JPG
Attachment 2: After.JPG

16887 | Fri Jun 3 12:13:58 2022 | Paco | Configuration | CDS | Fix RFM channels
[Paco, Yuta]
We tried fixing the issue of LSC_TRY and LSC_TRX channels not working. We first did some investigation, and just like previously reported by Chris, narrowed down the issue to the RFM channels coming from c1iscex/c1iscey.
First attempt : FAIL
In our first attempt, we
- Tripped ETMX/ETMY watchdogs, ssh to c1iscex/c1iscey and restart the rtcds models.
- Since the last step didn't fix things, we decided to do the same thing on c1lsc, c1sus, c1ioo.
- After hard rebooting c1ioo and c1lsc (because they died during the stopping of the rtcds models), and not experiencing any timing issues (nice), we still didn't fix the issue.
Second attempt: Success
A second attempt just followed Koji's previous fix explained here. Basic difference with our first attempt was a hard reboot of c1iscex/c1iscey in addition to the rtcds model restarting. RFM channels were then clear of errors and we recovered our IR transmission channels in the LSC model. |
Attachment 1: SoGreen.png

16893 | Mon Jun 6 16:09:23 2022 | rana | Configuration | DetChar | Summary Pages: seis BLRMS
I updated the config file c1pem.ini in /users/public_html/detcharsummary/ConfigFiles, and committed it so I hope it works, but I did not have git push permissions. Does anyone know what the idea is here? Should we do our own personal git clone and modify that way, or should we do it with the control account?
The Wiki needs to clear out all the outdated information on this workflow.
The changes are to make the y-scales useful. Currently, all of the past seis BLRMS plots are not so useful because the scales have not been set based on the actual signal levels. Let's see if this works, and we can re-evaluate after a few weeks. |

16924 | Thu Jun 16 18:23:15 2022 | Paco | Configuration | BHD | Recovering LO beam in BHD DCPDs
[Paco, Yuta]
We recovered the LO beam on the BHD port. To do this, we first tried reverting to a previously "good" alignment but couldn't see LO beam hit the sensor. Then we checked the ITMY table and couldn't see LO beam either, even though the AS beam was coming out fine. The misalignment is likely due to recent changes in both injection alignment on TT1, TT2, PR2, PR3, as well as ITMX, ITMY. We remembered that LO path is quite constrained in the YAW direction, so we started a random search by steering LO1 YAW around by ~ 1000 counts in the negative direction at which point we saw the beam come out of the ITMY chamber 
We proceeded to walk the LO1-LO2 in PIT mostly to try and offload the huge alignment offset from LO2 to LO1, but this resulted in the LO beam disappearing or becoming dimmer (from some clipping somewhere). This is WiP and we shall continue this alignment offload task at least tomorrow, but if we can't offload significantly we will have to move forward with this alignment. Attachment #1 shows the end result of today's alignment.
Attachment 1: Screenshot_2022-06-16_18-29-14_BHDLObeamISBACK.png

16932 | Tue Jun 21 14:17:50 2022 | yuta | Configuration | BHD | BHD DCPDs re-routed to c1sus2
After discussing with Anchal, we decided to route BHD related PD signals directly to ADC of c1sus2, which handles our new suspensions including LO1, LO2, AS1, AS4, so that we can control them directly.
BHD related PD signals will be sent to c1lsc for DARM control.
Re-cabling was done, and now they are online at C1:X07-MADC1_EPICS_CH16 (DC PD A) and CH17 (DC PD B) with 15ft DB9 cable.
Here, DC PD A is the transmission of BHD BS for AS beam, and DC PD B is the reflection of BHD BS for AS beam (see attached photo). |
Attachment 1: C1X07ADC1.JPG
Attachment 2: BHDDCPDs.JPG

17018 | Tue Jul 19 16:00:34 2022 | yuta | Configuration | BHD | Fast channels for BHD DCPDs now available in c1lsc but not in c1hpc
[Paco, Anchal-remote-support, Yuta]
We added fast channels to BHD DC PDs.
C1:LSC-DCPD_(A|B)_IN1 are now available, but C1:HPC-DCPD_(A|B)_IN1 still gives us zero.
c1hpc situation -> not good
- We can see the slow signal at C1:X07-MADC1_EPICS_CH16 (DC PD A) and CH17 (DC PD B)
- C1:HPC-DCPD_(A|B)_IN1 is there, but zero.
- We have modified c1hpc model to add DCPD_(A|B) filters in front of the input matrix (see Attachment #1).
- After modifying the model, we run
ssh c1sus2
rtcds make c1hpc
rtcds install c1hpc
ssh fb1
sudo systemctl restart daqd_*
- After this, we got 0x2000 error. So, we ran the following. This removed 0x2000 error, but DCPD signals are still zero. They are also not available in C1HPC-MONITOR_ADC1.adl screen (see Attachment #3).
ssh c1sus2
rtcds restart c1hpc
c1lsc situation -> good
- We could see the slow signal at C1:X04-MADC1_EPICS_CH4 (DC PD A) and CH5 (DC PD B), and also C1:LSC-DCPD_(A|B)_NORM after making C1:LSC-DCPD_(A|B)_POW_NORM=1. The ADC channel and DCPD channel are exactly the same.
- After confirming the above, we modified the c1lsc model to add DCPD_(A|B) filters in front of the input matrix (see Attachment #2).
- After modifying the model, we run
ssh c1lsc
rtcds make c1lsc
rtcds install c1lsc
ssh fb1
sudo systemctl restart daqd_*
- After this, we also got 0x2000 error. We also noticed that, for example, C1:X04-MADC0_EPICS_CH31 and C1:LSC-ASDC_INMON are different, which used to be the same (ASDC_INMON was largely attenuated).
- In the end, we run the following to remove 0x2000 error, but it crashed c1lsc, as well as c1sus, c1ioo.
ssh c1lsc
rtcds restart c1lsc
- So, we did rebootC1LSC.sh. This made c1lsc, c1ioo and c1sus as green as before, except for RFM issue in TRX/TRY, like we saw in June. We followed the steps in 40m/16887 to hard reboot c1iscex/c1iscey and ran rebootC1LSC.sh again. This made C1CDS_FE_STATUS.adl screen as green as before (see Attachment #3).
- Fast channels C1:LSC-DCPD_(A|B)_IN1 are now available. They are also available in C1LSC-MONITOR_ADC1.adl screen (see Attachment #3). |
Attachment 1: Screenshot_2022-07-19_14-26-39_c1hpc.png
Attachment 2: Screenshot_2022-07-19_14-24-49_c1lsc.png
Attachment 3: Screenshot_2022-07-19_15-51-25_GreenGreen.png

17025 | Thu Jul 21 21:50:47 2022 | Tega | Configuration | BHD | c1sus2 IPC update
IPC issue still unresolved.
Updated shared memory tag so that 'SUS' -> 'SU2' in c1hpc, c1bac and c1su2. Removed obsolete 'HPC/BAC-SUS' references from IPC file, C1.ipc. Restarted the FE models but the c1sus2 machine froze, so I did a manual reboot. This brought down the vertex machines---which I restarted using /opt/rtcds/caltech/c1/scripts/cds/rebootC1LSC.sh---and the end machines which I restarted manually. Everything but the BHD optics now have their previous values. So need to burtrestore these.
# IPC file:
/opt/rtcds/caltech/c1/chans/ipc/C1.ipc
# Model file locations:
/opt/rtcds/userapps/release/isc/c1/models/isc/c1hpc.mdl
/opt/rtcds/userapps/release/sus/c1/models/c1su2.mdl
/opt/rtcds/userapps/release/isc/c1/models/isc/c1bac.mdl
# Log files:
/cvs/cds/rtcds/caltech/c1/rtbuild/3.4/c1hpc.log
/cvs/cds/rtcds/caltech/c1/rtbuild/3.4/c1su2.log
/cvs/cds/rtcds/caltech/c1/rtbuild/3.4/c1bac.log
SUS overview medm screen :
- Reduced the entire screen width
- Revert to old screen style watchdog layout

17026 | Fri Jul 22 15:05:26 2022 | Tega | Configuration | BHD | c1sus2 shared memory and ADC fix
[Tega, Yuta]
We were able to fix the shared memory issue by updating the receiver model name from 'SUS' to 'SU2', and the ADC zero issue by including both ADC0 and ADC1 in the c1hpc and c1bac models, as well as removing the grounding of the unused ADC channels (including chn#16 and chn#17, which are actually used in c1hpc) in c1su2. We also used shared memory to move the DCPD_A/B error signals (after signal conditioning and mixing A/B; now named A_ERR and B_ERR) from c1hpc to c1bac.
C1:HPC-DCPD_A_IN1 and C1:HPC-DCPD_B_IN1 are now available (they are essentially the same as C1:LSC-DCPD_A_IN1 and C1:LSC-DCPD_B_IN1, except that they are digitized by a different ADC; see elog 40m/16954 and Attachment #1).
The Dolphin IPC error in sending signals from c1hpc to c1lsc still remains.
Attachment 1: Screenshot_2022-07-22_15-04-33_DCPD.png
Attachment 2: Screenshot_2022-07-22_15-12-19_models.png
Attachment 3: Screenshot_2022-07-22_15-15-11_ERR.png
Attachment 4: Screenshot_2022-07-22_15-32-19_GDS.png

17028 | Fri Jul 22 17:46:10 2022 | yuta | Configuration | BHD | c1sus2 watchdog update and DCPD ERR channels
[Tega, Yuta]
We have added C1:HPC-DCPD_A_ERR and C1:HPC-DCPD_B_ERR testpoints, which can be used as A+B, A-B etc.
Restarting c1hpc crashed c1sus2, and also made c1lsc/ioo/sus models red.
We ran /opt/rtcds/caltech/c1/Git/40m/scripts/cds/restartAllModels.sh to restart all the machines. It worked perfectly without manually pressing power buttons! Wow!
We have also edited /opt/rtcds/caltech/c1/medm/c1su2/C1SU2_WATCHDOGS.adl so that it will use new /opt/rtcds/caltech/c1/Git/40m/scripts/SUS/medm/resetFromWatchdogTrip.sh instead of old /opt/rtcds/caltech/c1/scripts/SUS/damprestore.py. |
Attachment 1: Screenshot_2022-07-22_17-48-25.png

17033 | Mon Jul 25 17:58:10 2022 | Tega | Configuration | BHD | c1sus2 IPC dolphin issue update
From the 40m wiki, I was able to use the instructions here to map out what to do to get the IPC issue resolved. Here is a summary of my findings.
I updated the /etc/dis/dishost.conf file on the frame builder machine to include the c1sus2 machine which runs the sender model, c1hpc, see below. After this, the file becomes available on c1sus2 machine, see attachment 1, and the c1sus2 node shows up in the dxadmin GUI, see attachment 2. However, the c1sus2 machine was not active. I noticed that the log file for the dis_nodemgr service, see attachment 3, which is responsible for setting things up, indicated that the dis_irm service may not be up, so I checked and confirmed that this was indeed the case, see attachment 4. I tried restarting this service but was unsuccessful. I restarted the machine but this did not help either. I have reached out to Jonathan Hanks for assistance. |
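For reference, a minimal sketch of the checks described above, i.e. whether the dis_irm kernel module is loaded and what state the dis_nodemgr unit is in (the unit names are as referred to in this entry; the exact systemd unit names on c1sus2 may differ):
import subprocess

def module_loaded(name):
    with open('/proc/modules') as f:
        return any(line.split()[0] == name for line in f)

print('dis_irm module loaded:', module_loaded('dis_irm'))
state = subprocess.run(['systemctl', 'is-active', 'dis_nodemgr'],
                       stdout=subprocess.PIPE, universal_newlines=True).stdout.strip()
print('dis_nodemgr service state:', state)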
Attachment 1: Screen_Shot_2022-07-25_at_5.43.28_PM.png
Attachment 2: Screen_Shot_2022-07-25_at_5.21.10_PM.png
Attachment 3: Screen_Shot_2022-07-25_at_5.30.58_PM.png
Attachment 4: Screen_Shot_2022-07-25_at_5.35.19_PM.png

17034 | Mon Jul 25 18:09:41 2022 | Tega | Configuration | BHD | BHD Homodyne Phase control MEDM screen
[Paco, Tega, Yuta]
Today, we made a custom MEDM screen for the BHD Homodyne Phase Control, which is basically an overview of the c1hpc model. See Attachments 1 & 2 for details. |
Attachment 1: Screen_Shot_2022-07-25_at_6.12.08_PM.png
Attachment 2: Screen_Shot_2022-07-25_at_6.18.09_PM.png

17052 | Mon Aug 1 18:42:39 2022 | Tega | Configuration | BHD | c1sus2 IPC dolphin issue update
[Yuta, Tega]
We decided to give the dolphin debugging another go. Firstly, we noticed that c1sus2 was no longer recognising the dolphin card, which can be checked using
lspci | grep Stargen
or looking at the status light on the dolphin card of c1sus2, which was orange for both ports A and B.
We decided to do a hard reboot of c1sus2 and turned off the DAQ chassis for a few minutes, then restarted c1sus2. This solved the card recognition problem as well as the 'dis_irm' driver loading issue (I think the driver does not get loaded if the system does not recognise a valid card, as I also saw the missing dis_irm driver module on c1testand).

Next, we confirmed the status of all dolphin cards on fb1, using
controls@fb1$ /opt/DIS/sbin/dxadmin

It looks like the dolphin card on c1sus2 has now been configured and is available to all other nodes. We then restarted all the FE machines and models to see if we were in the clear. Unfortunately, we were not so lucky, since the problem persisted.
Looking at the output of 'dmesg', we could only identify two notable differences between the operational dolphin cards on c1sus/c1ioo/c1lsc and c1sus2, namely: the card number being equal to zero and the memory addresses which are also zero, see image below.

Anyways, at least we can now eliminate driver issues and would move on to debugging the models next. |
Attachment 1: c1sus2_dolphin.png
Attachment 2: fb1_dxamin_status.png
Attachment 3: dolphin_num_mem_init2.png

17054 | Tue Aug 2 17:25:18 2022 | Tega | Configuration | BHD | c1sus2 dolphin IPC issue solved
[Yuta, Tega, Chris]
We did it!
Following Chris's suggestion, we added "pciRfm=1" to the CDS parameter block in c1x07.mdl - the IOP model for c1sus2. Then we restarted the FE machines, and this solved the dolphin IPC problem on c1sus2. We no longer see the RT Netstat error for 'C1:HPC-LSC_DCPD_A' and 'C1:HPC-LSC_DCPD_B' on the LSC IPC status page, see attachment 1.
Attachment 2 shows the module dependencies before and after the change was made, which confirms that the IOP model was not using the dolphin driver before the change.
We encountered a burt restore problem with missing snapfiles from yesterday when we tried restoring the EPICS values after restarting the FE machines. Koji helped us debug the problem, but the summary is that restarting the FE models somehow fixed the issue.
Log files:
/opt/rtcds/caltech/c1/burt/burtcron.log
/opt/rtcds/caltech/c1/burt/autoburt/autoburtlog.log
Request File list:
/opt/rtcds/caltech/c1/burt/autoburt/requestfilelist
Snap files location:
/opt/rtcds/caltech/c1/burt/autoburt/today
/opt/rtcds/caltech/c1/burt/autoburt/snapshots
Autoburt crontab on megatron:
19 * * * * /opt/rtcds/caltech/c1/scripts/autoburt/autoburt.cron > /opt/rtcds/caltech/c1/burt/burtcron.log 2>&1 |
Attachment 1: c1lsc_IPC_status.png
Attachment 2: FE_lsmod_dependencies_c1sus2_b4_after_iop_unpdate.png

17126 | Thu Sep 1 09:00:02 2022 | JC | Configuration | Daily Progress | Locked both arms and aligned Op Levs
Each morning now, I am going to try to align both arms and lock. Along with that, sometime towards the end of each week, we should align the OpLevs. This is a good habit that should be practiced more often, not only by me. As for the Y Arm, Yehonathan and I had to adjust the gain to 0.15 in order to stabilize the lock.
Attachment 1: Daily.pdf
Attachment 2: Daily.pdf

17135 | Thu Sep 8 11:54:37 2022 | JC | Configuration | Lab Organization | Lab Organization
The arms in the 40m laboratory have now been sectioned off. Each arm has been divided up into 15 sections. Along the Y arm, the sections are labelled "Section Y1 - Section Y15". For the X arm, they are labelled "Section X1 - Section X15". Anything changed or moved will now be logged in the elog with its appropriate section.
Below is an example of Section X6. |
Attachment 1: 1A7026BC-82A9-49E9-BA22-1A700DFEC5D2.jpeg
Attachment 2: 2A904809-82F0-40C0-B907-B48C3A0E789E.jpeg
Attachment 3: CB4B8591-B769-454D-9A16-EE9176004099.jpeg

17136 | Thu Sep 8 12:01:02 2022 | JC | Configuration | Lab Organization | Lab Organization
The floor cable cover has been changed out for a new one. This is in Section X11. |
Attachment 1: F41AD1DA-29E9-4449-99CB-5F43AE527CA6_1_105_c.jpeg
Attachment 2: FF5F2CE8-85E8-4B6F-8F8A-9045D978F670.jpeg

17209 | Tue Oct 25 09:57:34 2022 | Paco | Configuration | PEM | Auto Z on trillium interface board
I pressed the Auto-Z(ero) button for ~ 3 seconds at ~9:55 local (pacific) time on the trillium interface on 1X5. |

17210 | Tue Oct 25 13:55:37 2022 | Koji | Configuration | PEM | Auto Z on trillium interface board
This nicely brought the sensing signal back to ~zero. See attachment
Some basic info:
- BS Seismometer is T240 (Trillium)
- The interface unit is at 1X5 Slot 26. D1002694
- aLIGO Trillium 240 Interface Quick Start Guide T1000742
|
Attachment 1: Screen_Shot_2022-10-25_at_13.56.46.png

17213 | Tue Oct 25 22:01:53 2022 | rana | Configuration | PEM | Auto Z on trillium interface board
thanks, this seems to have recentered well.
It looks like it started to act funny at 0400 UTC on 10/24, so that's 9 PM on Sunday at the 40m. What was happening then?
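(The conversion, as a quick worked example with Python 3.9+ zoneinfo: 04:00 UTC on 2022-10-24 is 21:00 Pacific on Sunday 2022-10-23.)
from datetime import datetime, timezone
from zoneinfo import ZoneInfo

t = datetime(2022, 10, 24, 4, 0, tzinfo=timezone.utc)
print(t.astimezone(ZoneInfo('America/Los_Angeles')))   # 2022-10-23 21:00:00-07:00 (Sunday, 9 PM PDT)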

Attachment 1: Screen_Shot_2022-10-26_at_4.45.30_PM.png

17264 | Mon Nov 14 14:52:56 2022 | Paco | Configuration | SUS | BHD SUS Coil output balance
[JC, Paco]
We installed a steering mirror intersecting the BHD beam path and put the AS beam on the ITMY Oplev QPD (see Attachment #1 for a photo of this temporary hack) . This is done to do coil balancing of AS1/AS4, LO1/LO2. QPD sees ~ 10000 counts when the beam is centered.
[Paco, Yuta]
We follow this procedure -- but with different sensors for all BHD suspension coil output balancing.
AS1/AS4
We dither BUTT first, lock the LO-AS fringe (DC lock), and look at the residual LO_PHASE spectrum to minimize POS coupling. We then unlock, misalign LO beam and look at the hijacked Oplev (ITMY) while dithering POS to minimize PIT and YAW couplings.
LO1/LO2
We dither BUTT first, lock the LO-AS fringe (DC lock), and look at the residual LO_PHASE spectrum to minimize POS coupling. We then unlock, misalign AS beam and look at the hijacked Oplev (ITMY) PIT/YAW residual noise while dithering POS to minimize PIT/YAW coupling.
Changeset summary
The new coil output gains are summarized in the table below:
Optic / Coil | UL      | UR     | LR      | LL
AS1          | -0.939  | 1.040  | -1.026  | 0.995
AS4          | -0.9785 | 0.9775 | -1.0695 | 0.9745
LO1          | -0.939  | 1.003  | -1.074  | 0.984
LO2          | -1.051  | 1.342  | -0.976  | 0.631
Finally, I reverted the hacked QPD setup to restore the ITMY OPLEV. |
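For the record, a sketch for loading the gains from the table above via EPICS (the C1:SUS-<OPTIC>_<COIL>COIL_GAIN channel naming is an assumption here and should be checked against the actual suspension screens before use):
from epics import caput

# Coil gain values are taken from the table above; channel naming is assumed.
coil_gains = {
    'AS1': {'UL': -0.939,  'UR': 1.040,  'LR': -1.026,  'LL': 0.995},
    'AS4': {'UL': -0.9785, 'UR': 0.9775, 'LR': -1.0695, 'LL': 0.9745},
    'LO1': {'UL': -0.939,  'UR': 1.003,  'LR': -1.074,  'LL': 0.984},
    'LO2': {'UL': -1.051,  'UR': 1.342,  'LR': -0.976,  'LL': 0.631},
}

for optic, gains in coil_gains.items():
    for coil, gain in gains.items():
        caput('C1:SUS-%s_%sCOIL_GAIN' % (optic, coil), gain)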
Attachment 1: PXL_20221114_224829547~2.jpg
Attachment 2: Screenshot_2022-11-14_16-07-26_AS1CoilBalancing.png
Attachment 3: Screenshot_2022-11-14_16-23-17_AS4CoilBalancing.png
Attachment 4: Screenshot_2022-11-14_16-38-16_LO1CoilBalancing.png
Attachment 5: Screenshot_2022-11-14_16-54-14_LO2CoilBalancing.png

17269 | Tue Nov 15 17:58:00 2022 | Paco | Configuration | Cameras | POP camera realignment after IFO alignment
[Paco, Yuta]
I swapped the 1 inch BS and lenses along the POP beam to clear the apertures and avoid clipping this beam. The results are illustrated by the attached pictures; this was done right after Yuta had optimized IFO alignment so it's hopefully a good reference from now on. Yuta also tuned the alignment of BHDC path in ITMY table, which mostly improved the alignment to DCPD A (90-ish counts improved to 100-ish counts with ITMY single bounce). |
Attachment 1: Screenshot_2022-11-15_16-22-26_AlignedBothArmLocked.png
Attachment 2: PXL_20221115_215851553.jpg
Attachment 3: PXL_20221115_233429500.jpg

17275 | Thu Nov 17 07:39:01 2022 | JC | Configuration | Cameras | ITMX Camera
Coming in this morning, I found ITMX Camera malfunctioning. |

17278 | Thu Nov 17 12:24:48 2022 | Paco | Configuration | Cameras | ITMX Camera -- attempt at fix
I found that an old BNC cable for ITMXF video existed so I first tried swapping both ends of the cable, one on the ITMX viewport and the other one in the video MUX input in the rear. This didn't fix the issue.
I searched around in the CCD cabinet by XARM and found an identical analog camera so I swapped it and got the same image ...
I then searched for an AC/DC supply cable, but couldn't find one.
Quote:
Coming in this morning, I found ITMX Camera malfunctioning.

17280 | Thu Nov 17 15:53:47 2022 | JC | Configuration | Cameras | ITMX Camera -- attempt at fix
The issue was the power supply.
Quote:
I found that an old BNC cable for ITMXF video existed so I first tried swapping both ends of the cable, one on the ITMX viewport and the other one in the video MUX input in the rear. This didn't fix the issue.
I searched around in the CCD cabinet by XARM and found an identical analog camera so I swapped it and got the same image ...
I then searched for a AC/DC supply cable, but couldn't find one.