ID | Date | Author | Type | Category | Subject
16647
|
Fri Feb 4 10:21:39 2022 |
Anchal | Summary | General | Complete lab shutdown |
Please edit this same entry throughout the day for the shutdown elogging.
I took a screenshot of C0VAC_MONITOR.adl to ensure that all pneumatic valves are in closed positions:

The status message says "All pneumatic valves closed" and the latest error message is about "V7 closed, N2 < 6.50e+01".
I found out that there was no autoburt happening for c1vac channels. I created an autoBurt.req file for the vac.db file and saved one snapshot. I also added the path of this file in autoburt/.requestfilelist . Let's see if autoburt starts picking up this file as well.
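For reference, an autoBurt.req file is essentially a list of channel names, one per line. A minimal sketch of how such a file could be generated from the EPICS database (the vac.db and output paths below are placeholders, not the actual locations on c1vac):

import re

# Placeholder paths -- substitute the actual vac.db and target autoBurt.req locations.
DB_FILE = "/path/to/vac.db"
REQ_FILE = "/path/to/autoBurt.req"

# EPICS .db files declare channels as: record(<type>, "<CHANNEL:NAME>") { ... }
record_re = re.compile(r'record\s*\(\s*"?\w+"?\s*,\s*"([^"]+)"\s*\)')

with open(DB_FILE) as db, open(REQ_FILE, "w") as req:
    for line in db:
        m = record_re.search(line)
        if m:
            req.write(m.group(1) + "\n")  # one channel name per line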
With this, I think we can safely shut down the acromag chassis. Hopefully, the relays are configured such that the valves are nominally closed in the absence of a control signal. After the chassis is shut down, we can shut down C1VAC by:
sudo shutdown
[Chub, Jordan]
At the 1x8 rack, the following were switched off on their respective front panels:
PTP2 & PTP3 Controller
MKS Gauge controller
PRP Gauge Controller
G2P316a & b Controllers
Sorenson
Serial Device Server
Both UPS's
Powered off from back of unit:
TP1 Controller
Acromag chassis
TP2 and 3 controllers were unplugged from respective power strips (labeled C2 and C3)
C1vac and the laptop at the workstation were shut down
Manual Gate valve was closed |
16648
|
Mon Feb 7 09:00:26 2022 |
Paco | Update | General | Scheduled power outage recovery |
[Paco]
Started recovering from scheduled (Feb 05) power outage. Basically, time-reversing through this list.
== Office area ==
- Power martian network switches, WiFi routers on the north-rack.
- Power windows (CAD) machine on.
== Main network stations ==
- Power on nodus, try ping (fail).
- Power on network switches, try ping (success), try ssh controls@nodus.ligo.caltech.edu (success).
- Power on chiara to serve names for other stations, try ssh chiara (success).
- Power on fb1, try ping (success), try ssh fb1 (success).
- Power on paola (xend laptop), viviana (yend laptop), optimus, megatron.
== Control workstations ==
- Power on zita (success)
- Power on giada (success), run system upgrade.
- Power on donatella (success)
- Power on allegra (fail) **
- Power on pianosa (success)
- Power on rossa (success)
- From nodus, started elog (success).
== PSL + Vertex instruments ==
- Turn on Newport PD power supplies on the PSL table.
- Turn on TC200 temp controller (setpoint --> 36.9 C)
- Turn on two oscilloscopes on the PSL table.
- Turn on PSL (current setpoint --> 2.1 A, other settings seem nominal)
- Turn on Thorlabs HV pzt supply.
- Turn on ITMX OpLev / laser instrument AC strip.
== YEND and XEND instruments ==
- Turn on XEND AUX pump (current setpoint --> 1.984 A)
- Turn on XEND AUX SHG oven (setpoint --> 37.1 C) (see green beam)
- Turn on XEND AUX shutter controller.
- Turn on DCPD supply and OpLev supply AC strip.
- Turn on YEND AUX pump (fail) *
- With the controller on STDBY, I tried setting the current but got HD FAULT (according to the manual, this is what the head reports when the diode temperature is too high...)
- Upon power cycling the controller, even the controller display stopped working... YAUX controller + head died? maybe just the diode? maybe just the controller?
- I borrowed a spare LW125 controller from the PSL table (Yehonathan pointed me to it) and swapped it in.
- Got YEND AUX to lase with this controller, so the old controller is busted but at least the laser head is fine.
- Even saw SHG light. We switched the laser head off to "STDBY" (so it remains warm) and took the faulty controller out of there.
- Turn on YEND AUX SHG oven (setpoint --> 35.7 C)
- Turn on YEND AUX shutter controller.
== YARM Electronic racks ==
== XARM Electronic racks ==
* Top priority, this needs to be fixed.
** Non-priority, but to be debugged |
16649
|
Mon Feb 7 15:32:48 2022 |
Yehonathan | Update | General | Y End laser controller |
I went to the Y end. The AUX laser was on Standby. I pushed the Standby button. The laser turned on and there was some green light. However, the controller displayed the message "CABLE?" which according to the manual means that the laser head is powered but there is no control over the laser (e.g. the control cable is disconnected). I turned off the controller and disconnected both the power and control cables. I put them back and turned the controller back on.
I pushed the Standby button; the laser turned on and this time the controller displayed the laser head's state. I was able to change the current/temperature. The problem seems to be resolved. |
16650
|
Mon Feb 7 16:14:37 2022 |
Tega | Update | Computers | realtime system reboot problem |
I was looking into plotting the temperature sensor data trend and why we currently do not have frame data written to file (on /frames) since Friday, and noticed that the FE models were not running. So I spoke to Anchal about it and he mentioned that we are currently unable to ssh into the FE machines, therefore we have been unable to start the models. I recalled that the last time we encountered this problem Koji resolved it on Chiara, so I searched the elog for Koji's fix and found it here, https://nodus.ligo.caltech.edu:8081/40m/16310. I followed the procedure and restarted the c1sus and c1lsc machines, and we are now able to ssh into these machines. I also restarted the remaining FE machines and confirmed that I can ssh into them. Then, to start the models, I sshed into each FE machine (c1lsc, c1sus, c1ioo, c1iscex, c1iscey, c1sus2) and ran the command
rtcds start --all
to start all models on the FE machine. This procedure worked for all the FE machines but failed for c1lsc. For some reason, after starting the first few models (the IOP model c1x04, then c1lsc and c1ass), the ssh connection to the machine drops. When we try to ssh into c1lsc after this event, we get the following error: "ssh: connect to host c1lsc port 22: No route to host". I reset the c1lsc machine and decided to start only the IOP model for now. I'll wait for Anchal or Paco to resolve this issue.
[Anchal, Tega]
I informed Anchal of the problem and asked if he could take a look. It turns out 9 FE models across 3 FE machines (c1lsc, c1sus, c1ioo) have a certain interdependence that requires careful consideration when starting the FE models. In a nutshell, we need to first start the IOP models on all three FE machines before we start the other models on these machines. So we turned off all the models and shut down the FE machines, mainly because of a DAQ issue, since the DC (data concentrator) indicator was not initialized. Anchal looked around in fb1 to figure out why this was happening and eventually discovered that it was the same as the mx_stream issue encountered earlier on the fb1 clone (https://nodus.ligo.caltech.edu:8081/40m/16372). So we restarted fb1 to see if things clear up, given that the chiara dhcp server is now working fine. Upon restart of fb1, we used the info in a previous elog that shows whether the DAQ network is working or not, i.e. we ran the command
$ /opt/mx/bin/mx_info
MX:fb1:mx_init:querying driver:error 5(errno=2):No MX device entry in /dev.
The output shows that the MX device was not initialized during the reboot, as can also be seen below.
$ sudo systemctl status daqd_dc -l
● daqd_dc.service - Advanced LIGO RTS daqd data concentrator
Loaded: loaded (/etc/systemd/system/daqd_dc.service; enabled)
Active: failed (Result: exit-code) since Mon 2022-02-07 18:02:02 PST; 12min ago
Process: 606 ExecStart=/usr/bin/daqd_dc_mx -c /opt/rtcds/caltech/c1/target/daqd/daqdrc.dc (code=exited, status=1/FAILURE)
Main PID: 606 (code=exited, status=1/FAILURE)
Feb 07 18:01:56 fb1 systemd[1]: Starting Advanced LIGO RTS daqd data concentrator...
Feb 07 18:01:56 fb1 systemd[1]: Started Advanced LIGO RTS daqd data concentrator.
Feb 07 18:02:00 fb1 daqd_dc_mx[606]: [Mon Feb 7 18:01:57 2022] Unable to set to nice = -20 -error Unknown error -1
Feb 07 18:02:00 fb1 daqd_dc_mx[606]: Failed to do mx_get_info: MX not initialized.
Feb 07 18:02:00 fb1 daqd_dc_mx[606]: 263596
Feb 07 18:02:02 fb1 systemd[1]: daqd_dc.service: main process exited, code=exited, status=1/FAILURE
Feb 07 18:02:02 fb1 systemd[1]: Unit daqd_dc.service entered failed state.
NOTE: We commented out the line
Restart=always
in the file "/etc/systemd/system/daqd_dc.service" in order to see the error, BUT MUST UNDO THIS AFTER THE PROBLEM IS FIXED! |
16651
|
Mon Feb 7 16:53:02 2022 |
Koji | Update | General | Scheduled power outage recovery |
I went to the X end and found it was warm. Turned out that not all the A/Cs were on. They were turned on now. |
16652
|
Wed Feb 9 11:56:24 2022 |
Anchal | Update | General | Bringing back CDS |
[Anchal, Paco]
Bringing back CDS took a lot of work yesterday. I'm gonna try to summarize the main points here.
mx_start_stop
For some reason, fb1 was not able to mount mx devices automatically on system boot. This was an issue I earlier faced in fb1(clone) too. The fix to this problem is to run the script:
controls@fb1:/opt/mx/sbin/mx_start_stop start
To make this persistent, I've configured a daemon (/etc/systemd/system/mx_start_stop.service) on fb1 to run once on system boot and mount the mx devices as mentioned above. We did not see this issue on later reboots yesterday.
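As a sanity check, something like the following could verify that the fix took effect on boot (a sketch; the /dev/mx* glob pattern is an assumption based on the "No MX device entry in /dev" error in elog 16650 above, and the script path is the one quoted above):

import glob
import subprocess

# Assumption: the MX driver exposes device nodes matching /dev/mx* once loaded.
if not glob.glob("/dev/mx*"):
    # Same command we run by hand, wrapped so a boot-time check could call it.
    subprocess.call(["/opt/mx/sbin/mx_start_stop", "start"])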
gpstime
Next was the issue of gpstime module out of date on fb1. This issue is also known in the past and requires us to do the following:
controls@fb1:~ 0$ sudo modprobe -r gpstime
controls@fb1:~ 1$ sudo modprobe gpstime
Again, to make this persistent, I've configured a daemon (/etc/systemd/system/re-add-gpstime.service) in fb1 to run the above commands once on system boot. This corrected gpstime automatically and we did not face these problems again.
time synchronization
Later we found that ntp time synchronization between fb1 and the FE computers was not working, and the main reason was that fb1 was unable to access the internet. As a rule of thumb, it is always a good idea to try pinging www.google.com on fb1 to ensure that it is connected to the internet. The issue had to do with fb1 not being able to find any name server. We fixed this issue by reloading the bind9 service on chiara a couple of times. We're not really sure why it wasn't working.
~>sudo service bind9 stop
~>sudo service bind9 start
~>sudo service bind9 status
* bind9 is running
After the above, we saw that the fb1 ntp server is working fine. You see the following output on fb1 when that is the case:
controls@fb1:~ 0$ ntpq -p
remote refid st t when poll reach delay offset jitter
==============================================================================
-table-moral.bnr 110.142.180.39 2 u 399 512 377 195.034 -14.618 0.122
*server1.quickdr .GPS. 1 u 67 64 377 130.483 -1.621 1.077
+ntp2.tecnico.ul 56.99.239.27 2 u 473 512 377 184.648 -0.775 2.231
+schattenbahnhof 129.69.1.153 2 u 365 512 377 144.848 3.841 1.092
192.168.123.255 .BCST. 16 u - 64 0 0.000 0.000 0.000
On the FE machines, timedatectl should show that the "NTP synchronized" field is yes. That wasn't happening even after we restarted the systemd-timesyncd service. After this, I just tried restarting all FE computers and it started working.
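A quick health check of this sort can be scripted; a minimal sketch (host names are examples from this log, and the exact wording of timedatectl's sync field differs between systemd versions):

import socket
import subprocess

# DNS sanity check: if chiara's bind9 is down, these lookups fail on fb1.
for host in ("www.google.com", "chiara", "nodus"):
    try:
        print(host, "->", socket.gethostbyname(host))
    except socket.gaierror as err:
        print(host, "-> DNS lookup FAILED:", err, "(check bind9 on chiara)")

# NTP sync status; look for "NTP synchronized: yes" (older systemd) or
# "System clock synchronized: yes" (newer systemd) in the output.
print(subprocess.check_output(["timedatectl"]).decode())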
CDS
We had removed all db9 enabling plugs on the new SOSs beforehand to keep coils off just in case CDS does not come back online properly.
Everything in CDS loaded properly except the c1oaf model, which kept showing a 0x2bad status. This means that some IPC flags are red on c1sus, c1mcs and c1lsc as well. But everything else is green. See attachment 1. I then burt-restored everything in the /opt/rtcds/caltech/c1/burt/autoburt/snapshots/2022/Feb/4/12:19 directory. This includes the snapshot of c1vac as well that I added to autoburt that day. All burt restore statuses were green OK. I think we are in a good state now to start watchdogs on the new SOSs and put back the db9 enabling plugs.
Future work:
When somebody gets time, we should make the custom service files in fb1:/etc/systemd/system/ symbolic links to a repo directory and version control these important services. We should also make sure that their dependencies and startup order are correctly configured. I might have done a half-assed job there since I recently learned how to make unit files. We should do the same on nodus and chiara too. Our hope is that one glorious day, the lab can be restarted without spending more than 20 min on booting up the computers and network.
|
16653
|
Wed Feb 9 13:55:05 2022 |
Koji | Update | General | Bringing back CDS |
Great recovery work and cleaning of the rebooting process.
I'm just curious: did you observe that the c1sus2 cards have a different numbering order than before, following the power outage/cycling? |
16654
|
Wed Feb 9 14:34:27 2022 |
Ian | Summary | Computer Scripts / Programs | SUS Plant Plan for New Optics |
Restarted the C1sim machine at about 12:30 to help diagnose a network problem. Everything is back up and running |
16655
|
Wed Feb 9 16:43:35 2022 |
Paco | Update | General | Scheduled power outage recovery - Locking mode cleaner(s) |
[Paco, Anchal]
- We went in and measured the power after the power splitting HWP at the PSL table. Almost right before the PSL shutter (which was closed), when the PMC was locked we saw ~ 598 mW (!!)
- Checking back on the ESP300, it seems the channel was not enabled even though the right angle was punched in, so we enabled it.
- The power adjustment MEDM screen is not really working...
- Going back to the controller, press HOME on the Axis 1 (our HWP) and see it go to zero...
- Now the power measured is ~ 78 mW.
- Not sure why the MEDM screen didn't really work (this needs to be fixed later)
We proceeded to align the MC optics because all offsets in the MC_ALIGN screen were zeroed. After opening the PSL shutter, we used values from last year as a reference and tried to steadily recover the alignment. The IMC lock remains at large. |
16656
|
Thu Feb 10 14:39:31 2022 |
Koji | Summary | Computers | Network security issue resolved |
[Mike P / Koji / Tega / Anchal]
IMSS/LIGO IT notified us that "ILOM ports" of one of our hosts on the "114" network are open. We tried to shut down obvious machines but could not identify the host in question. So we decided to do a more systematic search for the host.
[@Network Rack]
- First of all, we disconnected the optical cables coming to the GC router while a ping was running on the AIRLIGO-connected laptop (i.e. outside of the 40m network). This made the ping stop. This means that the issue was definitely in the 40m.
- Secondly, we started to disconnect (and reconnect) the ethernet cables from the GC router one by one. We found that the ping response stops when the cable named "NODUS" was disconnected.
[@40m IFO lab]
- So we tracked the cable down in the 40m lab. After a while, we identified that the cable was really connected to nodus.
- Nodus was supposed to have one network connection to the martian network since the introduction of the bidirectional NAT router (rather than the old configuration with a single direction NAT router).
- In fact, the cable was connected to a "non-networking" port of nodus. (Attachment 1). I guess the cable had been connected like this for a long time, but somehow the ILOM (IPMI) port was activated along with the recent power cycling.
- The cable was disconnected at nodus too. (Attachment 2) And a tape was attached to the port so that we don't connect anything to the port anymore. |
16657
|
Thu Feb 10 15:41:00 2022 |
Anchal | Update | General | Scheduled power outage recovery - Locking mode cleaner(s) |
I found out that the ESP300 service needs to be run in root mode for it to be able to connect to the USB port of the HWP motor controller. While doing this change, I noticed that the channels hosted by c1psl might have a duplication conflict with some other channel-hosting computer, because a lot of them show the warning "Identical process variable names on multiple servers", which is not good. Someone should look into this conflict.
I added instructions on the power control MEDM screen as it was very non-trivial to use. I have set the power such that the C1:IOO-MC_RFPD_DCMON is 5.6 and this happened at C1:IOO-HWP_POS_SET 2.29. |
16658
|
Thu Feb 10 17:57:48 2022 |
Anchal | Update | General | Scheduled power outage recovery - Locking mode cleaner(s) |
Something is wrong with the Video MUX. The system did not turn back on with full functionality. Even though we see the screens as they were before the power shutdown, we have lost the ability to switch any of the videos. I went to check the wiki page about the Video MUX, which told me we should be able to see the configuration screen at this link, but the page wasn't opening. I went and removed the power cable and put it back in. That brought back the configuration page. Still, I could not change any of the video feeds; however, this time I could see the EPICS channel values (like C1:VID-QUAD1_4) change. I tried to go to the configuration page and change the matrix values from the control tab there. I found out that the matrix was mislabeled, and while making the changes, I started seeing a blue screen on QUAD1_3 (where MC2T was set before). I set the QUAD1_3 (output 23) to MC2T (input 16), but no change. The EPICS values are also set properly, so I don't understand the reason behind the blue screen. The same happened when I tried to use:
~>/opt/rtcds/caltech/c1/scripts/general/videoscripts videoswitch3 QUAD1_3 MC2T
Weirdly, this caused the QUAD1_4 screen to go blue. Running following had no effect:
~>/opt/rtcds/caltech/c1/scripts/general/videoscripts videoswitch3 QUAD1_4 MCR
So, I'm not sure what to do. This really needs to be fixed! I wanted to see the MC2F camera so that I can align the IMC; that was the whole reason for this rabbit hole. Help needed. |
16659
|
Thu Feb 10 19:03:23 2022 |
Koji | Update | General | Scheduled power outage recovery - Locking mode cleaner(s) |
I came back to the 40m and started the investigation.
If I ping 192.168.113.92, it responds. But telnet (port 23) was rejected. I somehow tried ssh and it responds! I could even log in to the host using the usual password. Here is the prompt.
controls@nodus|~> ssh 192.168.113.92
controls@192.168.113.92's password:
...
controls@c1sus2:~ 0$
Oh no...
Looks like c1sus2 and the videomux have the IP address conflict.
Here are the useful ELOG links:
https://nodus.ligo.caltech.edu:8081/40m/4498
https://nodus.ligo.caltech.edu:8081/40m/4529 |
16660
|
Thu Feb 10 19:46:37 2022 |
Koji | Update | General | Scheduled power outage recovery - Locking mode cleaner(s) |
== Assign new IP address to c1sus2 ==
cf: [40m ELOG 16398] [40m ELOG 16396]
- Shutdown c1sus2 (Oh, no. This killed c1lsc/c1sus/c1ioo... This should be taken care of later)
- Confirmed 192.168.113.87 is not alive
- Go to chiara
- Modify /diskless/root/etc/hosts
192.168.113.87 c1sus2 c1sus2.martian
- Modify /etc/dhcp/dhcpd.conf
host c1sus2 {
hardware ethernet 00:25:90:06:69:C2;
fixed-address 192.168.113.87;
}
- Modify /var/lib/bind/martian.hosts
c1sus2 A 192.168.113.87
videomux A 192.168.113.92
- Modify /var/lib/bind/martian.hosts/rev.113.168.192.in-addr.arpa
87 PTR c1sus2.martian
92 PTR videomux.martian
- Reload/restart bind9 / dhcpd. Run the following command
sudo service bind9 reload
sudo service isc-dhcp-server restart
- Restart c1sus2 and confirm if the IP address was actually changed
controls@c1sus2:~ 0$ /sbin/ifconfig
eth0 Link encap:Ethernet HWaddr 00:25:90:06:69:c2
inet addr:192.168.113.87 Bcast:192.168.113.255 Mask:255.255.255.0
...
== Restart c1lsc / c1sus /c1ioo ==
- Reboot c1lsc/c1sus/c1ioo
- Go to scripts/cds
- Run startC1LSC.sh and follow the instruction
|
16661
|
Thu Feb 10 21:10:43 2022 |
Koji | Update | General | Video Mux setting reset |
Now the video matrix is responding correctly and the web interface shows up. (Attachment 1)
Also the video buttons respond as usual. I pushed Locking Template button to bring the setting back to nominal. (Attachment 2) |
16662
|
Thu Feb 10 21:16:27 2022 |
Koji | Summary | CDS | chiara resolv.conf weirdo |
During the videomux debug, I noticed that the host name resolving on chiara didn't behave well. Basically I could not login to anything from chiara using host names.
I found that there was no /etc/resolv.conf. Instead, there is /etc/resolvconf directory.
According to my research, the live resolv.conf is placed in /run/resolvconf/resolv.conf .
# Dynamic resolv.conf(5) file for glibc resolver(3) generated by resolvconf(8)
# DO NOT EDIT THIS FILE BY HAND -- YOUR CHANGES WILL BE OVERWRITTEN
nameserver 192.168.113.20
nameserver 131.215.125.1
nameserver 8.8.8.8
This 113.20 points to the old "linux1" machine, which is long obsolete. If I modify this file as
# Dynamic resolv.conf(5) file for glibc resolver(3) generated by resolvconf(8)
# DO NOT EDIT THIS FILE BY HAND -- YOUR CHANGES WILL BE OVERWRITTEN
nameserver 192.168.113.104
nameserver 131.215.125.1
nameserver 8.8.8.8
search martian
Then the name resolving became reasonable. However, during rebooting / service resetting / etc., the resolvconf -u command is executed and /run/resolvconf/resolv.conf is overridden, as indicated in the file.
I have modified /etc/resolvconf/resolv.conf.d/base to include 192.168.113.104 and search martian . The latter was included but the former did not show up.
Finally I figured out that, after the resolv.conf is constructed from the base and head files in /etc/resolvconf/resolv.conf.d/ , NetworkManager overrides the nameserver addresses.
The configuration was found in /etc/NetworkManager/system-connections/Wired\ connection\ 1 .
Here is the modified setting (dns entry was modified)
>sudo cat /etc/NetworkManager/system-connections/Wired\ connection\ 1
[sudo] password for controls:
[802-3-ethernet]
duplex=full
mac-address=68:05:CA:36:4E:B4
[connection]
id=Wired connection 1
uuid=ed177e70-d10e-42be-8165-3bf59f8f199d
type=802-3-ethernet
timestamp=1438810765
[ipv6]
method=auto
[ipv4]
method=manual
dns=192.168.113.104;131.215.125.1;8.8.8.8;
addresses1=192.168.113.104;24;192.168.113.2;
And
>cat /etc/resolvconf/resolv.conf.d/base
search martian
# See Also /etc/NetworkManager/system-connections/Wired\ connection\ 1
So complicated...
|
16663
|
Thu Feb 10 21:51:02 2022 |
Koji | Update | CDS | [Solved] Huge random numbers flowing into ETMX/ETMY ASC PIT/YAW |
Huge random numbers are flowing into ETMX/ETMY ASC PIT/YAW. Because of this, I could not damp the ETMX/ETMY suspension at the beginning during the recovery from rebooting. (Attachment 1)
By turning off the output of the ASC filters, the mirrors were successfully damped.
Looking at the FE model view of the end RTSs, there were two possibilities: (Attachment 2)
- They are coming from RFM connection
- They are coming from ASXASY
ASX/ASY are not active and I could not see anything producing these numbers. Burtrestore didn't help.
The possibility was something at the other side of the RFM, or corruption of the RFM signal.
- Looking at the RFM model (Attachment 3), the ASC signals are coming from ASS and IOO. The ASS path has the filter module (C1:RFM-ETMX_PIT and etc). This FM is quiet and not guilty.
- Why do we have the RFM from IOO? I went to IOO and found the new ASC (WFS) model is there. I didn't realize the presence of this model. In fact ASC screen showed that these random numbers are flowing into the end SUSs.
So I did burtrestore of c1iooepics. Alas! they are gone.
Now I can go home. |
16664
|
Fri Feb 11 10:56:38 2022 |
Anchal | Update | CDS | [Solved] Huge random numbers flowing into ETMX/ETMY ASC PIT/YAW |
Yeah, this is a known issue actually. We go to the ASC screen and manually switch off all the outputs after every reboot. We haven't been able to find a way to set a default so that when the model comes online, these outputs remain switched off. We should find a way to do this.
|
16665
|
Fri Feb 11 11:17:00 2022 |
Anchal | Update | General | Scheduled power outage recovery |
I found that two computers are not powering up in the control room, Ottavia and Allegra. Allegra was important for us as it had the current version of the LIGO CDS workstation installed on it, providing us with options to use the latest packages written by the LIGO CDS team. I think the power issue should be resolvable if someone opens it up and knows what they are doing. Do we have any way of getting fuse repairs on such computers? Both these computers are Dell XPS 420.
|
16666
|
Fri Feb 11 12:22:19 2022 |
rana | Update | CDS | [Solved] Huge random numbers flowing into ETMX/ETMY ASC PIT/YAW |
you can hand edit the autoBurt file which the FE uses to set the values after boot up. Just make a python script that amends all of the OFF or ZERO that are needed to make things safe. This would be the autoBurt snap used on boot up only, and not the hourly snaps.
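A minimal sketch of such a script is below; the snapshot path, the channel list, and the assumed "channel name / element count / value" line layout are placeholders to be checked against a real boot-time snapshot before use:

# Rewrite a boot-time autoBurt .snap so that selected channels come up at zero.
SNAP = "/path/to/boot_autoBurt.snap"                           # placeholder path
ZERO_THESE = ("C1:ASC-ETMX_PIT_GAIN", "C1:ASC-ETMX_YAW_GAIN")  # extend as needed

lines = open(SNAP).readlines()
with open(SNAP, "w") as snap:
    for line in lines:
        parts = line.split()
        # Assumed line layout: <channel name> <element count> <value>
        if len(parts) >= 3 and parts[0] in ZERO_THESE:
            snap.write("%s %s 0\n" % (parts[0], parts[1]))
        else:
            snap.write(line)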
|
Yeah, this is a known issue actually. We go to the ASC screen and manually switch off all the outputs after every reboot. We haven't been able to find a way to set a default so that when the model comes online, these outputs remain switched off. We should find a way to do this.
|
|
16667
|
Fri Feb 11 16:09:11 2022 |
Anchal | Update | General | Scheduled power outage recovery - Input power increased |
We increased the input power to the IMC by replacing the 98% transmission BS with a 10% transmission BS on the detection table (reverse of what is mentioned in 40m/16408, see attachments 8-9). We then realigned the BS so that the MC RFPD is centered. Then we realigned the two steering mirrors to get the beam centered on the WFS1 and WFS2 QPDs. Then we increased the power of the input beam to get a 5.307 reading on the C1:IOO-MC_RFPD_DCMON channel. We did this so that we can align the IMC. Once we have it aligned, we'll go back to low power for doing chamber work.
Beware, there is about 1W beam on the detection table right now.
|
16668
|
Fri Feb 11 17:07:19 2022 |
Anchal | Update | CDS | [Solved] Huge random numbers flowing into ETMX/ETMY ASC PIT/YAW |
The autoBurt file for the FE already has C1:ASC-ETMX_PIT_SW2 (and the other channels for ETMY, ITMX, ITMY, BS and for YAW) present, and I checked the last snapshot file from Feb 7th, 2022, which has 0 for these channels. So I'm not sure why, when the FE boots up, it does not follow the switch configuration. For safety, I changed all the gains of these filter modules, named like C1:ASC-XXXX_YYY_GAIN (where XXXX is ETMX, ETMY, ITMX, ITMY, or BS, and YYY is PIT or YAW), to 0.0. Now, even if the FE loads with the switches in the ON configuration, nothing should happen. In the future, if we use this model for anything, we can change the gain values, which won't be hard to track as the reason why no signal moves forward. Note, the BS connections from this model to the BS suspension model do not work.
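For the record, a loop like the following could zero these gains programmatically (a sketch assuming pyepics is installed on the workstation; the channel pattern is the one described above):

from epics import caput  # pyepics

# Channels follow the C1:ASC-XXXX_YYY_GAIN pattern described above.
for optic in ("ETMX", "ETMY", "ITMX", "ITMY", "BS"):
    for dof in ("PIT", "YAW"):
        chan = "C1:ASC-%s_%s_GAIN" % (optic, dof)
        caput(chan, 0.0)  # set the filter-module gain to zero
        print("Set", chan, "to 0.0")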
Quote: |
you can hand edit the autoBurt file which the FE uses to set the values after boot up. Just make a python script that amends all of the OFF or ZERO that are needed to make things safe. This would be the autoBurt snap used on boot up only, and not the hourly snaps.
|
Yeah, this is a known issue actually. We go to the ASC screen and manually switch off all the outputs after every reboot. We haven't been able to find a way to set a default so that when the model comes online, these outputs remain switched off. We should find a way to do this.
|
|
|
16669
|
Mon Feb 14 18:31:50 2022 |
Paco | Update | General | Scheduled power outage recovery - IMC recovery progress |
[Paco, Anchal, Tega]
We have been realigning the IMC since last Friday (02/11). Today we made some significant progress (still at high input power), but the IMC autolocker is unable to engage a stable mode lock. We have made some changes to reach this point, including re-centering of the MC1 REFL beam on the CCD, centering of the MC2 trans QPD (using flashes), and centering of the MC REFL RFPD beam. The IMC is flashing to a peak transmission of > 50% of its max (near 14,000 counts average in 2021), and all PDs seem to be working ok... We will keep the PSL shutter closed (especially with high input power) for now. |
16670
|
Mon Feb 14 18:43:49 2022 |
Paco | Summary | General | SOS materials clean room cleared |
[Yehonathan, Paco]
We put away most items used / involved in SOS assembly and characterization. Many were stored in the left-most cabinet in the clean area. The OpLev test setup and optics were stored in the upper cabinets above the microscope area, and several screws and other general components were collected in clean bags or wrapped in foil, labeled and put away. |
16671
|
Mon Feb 14 21:03:25 2022 |
Koji | Update | General | Scheduled power outage recovery |
I opened the boxes. Allegra has obvious venting of at least 4 caps. And the power supply did not respond even when a paper clip test was performed. https://www.silverstonetek.com/downloads/QA/PSU/PSU-Paper%20Clip-EN.pdf (Paper Clip Test)
=> The mother board and the PSU are dead.
Then Ottavia was also checked. The mother board looked OK, but the PSU did not respond. I quickly opened the PSU and it had a bunch of bulged capacitors in it. => PSU dead
Conclusion: Save the cards/memory etc as much as possible. Migrate the allegra HDD to any other healthy PC or obtain a new used PC from Larry. Otherwise, we just want to buy another WS and copy the disk in it.
|
16672
|
Tue Feb 15 19:32:50 2022 |
Koji | Update | General | Scheduled power outage recovery - IMC recovery progress |
Reduced the IMC power to 100mW
Setup: The power meter was placed right before the final aperture (Attachment 1)
Before the adjustment: the HWP position was 37.29 deg and the input power was 987 mW (Attachments 2/3)
After the adjustment: the HWP position was 74.00 deg and the input power was 100 mW (Attachments 4/5)
This made the MCREFL reading 0.549.
The MC refl path optics has not been modified. |
16673
|
Tue Feb 15 19:40:02 2022 |
Koji | Update | General | IMC locking |
IMC is locking now. There was nothing wrong: just a careful alignment + proper gain adj
=== Primary Alignment ===
- I used the WFS error signals as the indicator of the PDH error signals. Checked C1:IOO-WFS1_(I/Q)n_ERR and ended up using C1:IOO-WFS1_I4_ERR as it showed the largest PDH error peak-to-peak.
- Then used MC2 and MC3 to align the IMC by maximizing the PDH error and the MC trans (C1:IOO-MC_TRANS_SUM_ERR)
=== Locking procedure ===
Note that the MC REFL path is still configured for the full power input
- (Only at the beginning) Run scripts/MC/mcdown for initialization / Run scripts/MC/MC2tickleOFF just in case
- Enable IOO-MC-SW1 (MC SERVO switch right after "IN1 Gain (dB)").
- Disable 40:4000 boost
- Increase VCO Gain from -15 to 0
- Jiggle IN1 Gain from low to +31 until the lock is achieved
- As soon as the lock is acquired, enable 40:4000
- Increase VCO Gain to +10
- Turn up "SUPER BOOST" from 0 to 3
=== Lock loss procedure ===
Note that the MC REFL path is still configured for the full power input
- Disable IOO-MC-SW1
- Disable 40:4000 boost
- Reduce VCO Gain 0
- Turn down "SUPER BOOST" to 0
- Then jiggle IN1 Gain again to lock the IMC
=== MC2 spot ===
- It was obvious that the MC2F spot was not on the center of the optic.
- I tried to move the spot on the camera as much as possible, but this did not bring the trans beam to the center of the MC end QPD.
- I had the impression that the trans beam started to be clipped when the beam was moved towards the end QPD.
We need to reestablish the reasonable/consistent MC2 spot on the mirror, the MC end optics, and the QPD.
We will need to use MC2 dithering and A2L coupling to determine the center of the mirror
But as long as the transmission is maximized, the transmitted beam thru MC1 and MC3 follows the input beam. So we can continue the vent work
The current maximized transmission was ~1300. MC1 refl CCD view was largely off -> The camera path was adjusted.
=== MC2 alignment note ===
During the alignment, I noticed a sudden change of the MC2 alignment. There might be some hysteresis in the MC2 suspension. If you are locking the IMC and notice significant misalignment, the first thing to try is to touch the MC2 alignment. |
16674
|
Wed Feb 16 15:19:41 2022 |
Anchal | Update | General | Reconfigured MC reflection path for low power |
I reconfigured the MC reflection path for low power. This meant the following changes:
- Replaced the 10% reflection BS by 98% reflection beam splitter
- Realigned the BS angle to get maximum on C1:IOO-MC_RFPD_DCMON when cavity is unlocked.
- Then realigned the steering mirrors for WFS1 and WFS2.
- I tried to align the light for MC reflection CCD but then I realized that the pickoff for the camera is too low for it to be able to see anything.
Note, even the pick-off for WFS1 and WFS2 is too low I think. The IOO WFS alignment does not work properly for such low levels of light. I tried running the WFS loop for IMC and it just took the cavity out of the lock. So for low power scenario, we would keep the WFS loops OFF.
|
16675
|
Tue Feb 22 18:47:51 2022 |
Ian MacMillan | Update | SUS | ETMY SUS Electronics Replacement |
[Ian, Koji]
In preparation for the replacement of the suspension electronics that control the ETMY, I took measurements of the system excluding the CDS system. I took transfer functions from the inputs of the coil drivers to the outputs of the OSEMs for each sensor: UL, UR, LL, LR, and SIDE. These graphs are shown below, as well as all data in the compressed file.
We also had to replace the oplev laser power supply down the Y arm. The previous one was not turning on; the leading theory is that its failure was caused by the power outage. We replaced it with one Koji brought from the fiber display setup.
I also am noting the values for the OSEM DC output
OSEM | Value
UL | 557
UR | 568
LR | 780
LL | 385
SIDE | 328
In addition the oplev position was:
OPLEV_POUT | 4.871
OPLEV_YOUT | -0.659
OPLEV_PERROR | -16.055
OPLEV_YERROR | -6.667
(KA ed) We only care about PERROR and YERROR (because P/YOUT are servo output)
Edit: corrected DC Output values |
16676
|
Wed Feb 23 15:08:57 2022 |
Anchal | Update | General | Removed extra beamsplitter in MC WFS path |
As discussed in the meeting, I removed the extra beam splitter that dumps most of the beam going towards WFS photodiodes. This beam splitter needs to be placed back in position before increasing the input power to IMC at nominal level. This is to get sufficient light on the WFS photodiodes so that we can keep IMC locked for more than 3 days. Currently IMC is unlocked and misaligned. I have marked the position of this beam splitter on the table, so putting it back in should be easy. Right now, I'm trying to align the mode cleaner back and start the WFS loops once we get it locked. |
16677
|
Thu Feb 24 14:32:57 2022 |
Anchal | Update | General | MC RFPD DCMON channel got stuck to 0 |
I found a peculiar issue today. The C1:IOO-MC_RFPD_DCMON remains constantly 0. I wonder if the RFPD output is being read properly. I opened the table and used an oscilloscope to confirm that the DC output from the MC REFL photodiode is coming consistently, but our EPICS channel is not reading it. I tried restarting the modbusIOC service but that did not affect anything. I power cycled the acromag chassis while keeping the modbusIOC service off, and then restarted the modbusIOC service. After this, I saw more channels got stuck and became unresponsive, including the PMC channels. So then I rebooted c1psl without doing anything to the acromag chassis, and finally things came back online. Everything looks normal to me now, but I'm not sure if one of the many channels is not in the right state. Anyway, the problem is solved now.
|
16678
|
Thu Feb 24 18:05:58 2022 |
Yehonathan | Update | BHD | Re-suspension of AS1 |
{Yehonathan, Anchal, Paco}
Yesterday, Anchal and Paco removed AS1 from the vacuum chamber and moved it into the cleanroom. The suspension wires were cut and the AS1 optic was put on the table.
Two things were noticed:
1. One of the wires was not sitting inside the side block groove (attachment 1)
2. One of the face magnets was grossly tilted (attachment 2). Probably due to uneven polishing of the dumbbell.
We put new wires into the side blocks making sure they sit in their grooves and we removed the tilted magnet. A different, more straight magnet was picked from the remaining spare magnets. The dumbbell and adapter were cleaned from glue residues and a batch of glue was prepared.
In the process of gluing a different magnet was knocked off. We cleaned that magnet too. The 2 magnets were glued on the adapter.
Today I came and saw that the gluing failed completely. One of the magnets was completely away from its socket and the other one wasn't glued at all.
I prepared a new batch of glue and glued the two magnets. |
16679
|
Thu Feb 24 19:26:32 2022 |
Anchal | Update | General | IMC Locking |
I think I have aligned the cavity, including MC1, such that we are seeing flashing of the fundamental mode and a significant transmission sum value as well. However, I'm unable to catch lock following Koji's method in 40m/16673. The autolocker could not catch lock either. Maybe I am doing something wrong; I'll pick up again tomorrow, hopefully the cavity won't drift too much in this time. |
16680
|
Fri Feb 25 14:00:08 2022 |
Ian MacMillan | Update | SUS | ETMY SUS Electronics Replacement |
[Koji, Ian]
We looked at a few power supplies and we found one that was marked "CHECK IF THIS WORKS" in yellow. We found that the power supply worked but the indicator light didn't. I tried two other lights from other power supplies, but they did not work either. Koji ordered a new one. |
16681
|
Fri Feb 25 14:48:53 2022 |
Ian MacMillan | Update | SUS | ETMY SUS Electronics Replacement |
I moved the network-enabled power strip from above the power supplies on rack 1y4 to below. Nothing was powered through the strip when I unplugged everything and I connected everything to the same port after. |
16682
|
Sat Feb 26 01:01:40 2022 |
Tega | Update | VAC | Ongoing work to get the FRG gauge readouts to EPICS channels |
I will make a detailed elog later today giving an outline of the connection from the Agilent gauge controller to the vacuum subnet and the work I have been doing over the past two days to get data from the unit into EPICS channels. I just want to mention that I have plugged the XGS-600 gauge controller into the serial server on the vacuum subnet. I checked the vacuum MEDM screen and can confirm that the other sensors did not experience any issues as a result of this. I also currently have two of the FRG-700 gauges connected to the controller, but I have powered the unit down after the checks. |
16683
|
Sat Feb 26 15:45:14 2022 |
Tega | Update | VAC | Ongoing work to get the FRG gauge readouts to EPICS channels |
I have attached a flow diagram of my understanding of how the gauges are connected to the network.
Earlier today, I connected the XGS-600 gauge controller to the IOLAN Serial Device Server on port 192.168.114.22 .
The plan is as follows:
1. Update the serial device yaml file to include this new ip entry for the XGS-600 gauge controller
2. Create a serial gauge class "serial_gauge_xgs.py" for the XGS-600 gauge controller that inherits from the serial gauge parent class for EPICS communication with a serial device via TCP sockets (see the sketch after this list).
- Might be better to initially use the current channels of the devices that are being replaced, i.e.:
New FRG channel | Existing channel
C1:Vac-FRG1_pressure | C1:Vac-CC1_pressure
C1:Vac-FRG2_pressure | C1:Vac-CCMC_pressure
C1:Vac-FRG3_pressure | C1:Vac-PTP1_pressure
C1:Vac-FRG4_pressure | C1:Vac-CC4_pressure
C1:Vac-FRG5_pressure | C1:Vac-IG1_pressure
3. Modify the launcher file to include the XGS gauge controller. Following the same pattern used to start the service for the other serial gauges, we can start the communication between the XGS-600 gauge controller and the IOLAN serial server and write data to EPICS channels using
controls@c1vac> python launcher.py XGS600
If we are able to establish communication with the XGS-600 gauge controller and write its gauge data to EPICS channels, go on to step 4.
4. Create a serial service file "serial_XGS600.service" and place it in the service folder
5. Add the new EPICS channels to the database file
6. Add the "serial_XGS600.service" to line 10 and 11 of modbusIOC.service
7. Later on, when we are ready, we can restart the updated modbusIOC service
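The core of the serial_gauge_xgs readout in step 2 would look roughly like the sketch below. The TCP port number and the command/termination strings are placeholders to be taken from serial_devices.yaml and the XGS-600 manual; only the IP address comes from this entry.

import socket

XGS_ADDR = ("192.168.114.22", 10001)  # TCP port number is an assumption
READ_ALL_PRESSURES = b"#0002\r"       # placeholder command; check the XGS-600 manual

# Open a TCP socket to the IOLAN port assigned to the XGS-600, send one query,
# and read back the reply (expected to be a list of pressures, one per gauge),
# which would then be written to the C1:Vac-FRG*_pressure EPICS channels.
sock = socket.create_connection(XGS_ADDR, timeout=2.0)
try:
    sock.sendall(READ_ALL_PRESSURES)
    reply = sock.recv(1024).decode(errors="replace")
finally:
    sock.close()

print("raw reply:", reply.strip())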
For vacuum signal flow and Acromag channel assignments see [1] and [2] respectively. For the 16 port IOLAN SDS (Serial Device Server) ethernet connections, see [3].
[1] https://wiki-40m.ligo.caltech.edu/Vacuum-Upgrade-2018?action=AttachFile&do=view&target=40m_Vacuum_System_Signal_Flow.pdf
[2] https://wiki-40m.ligo.caltech.edu/Vacuum-Upgrade-2018?action=AttachFile&do=view&target=AcromagChannelAssignment.pdf
[3] https://git.ligo.org/40m/vac/-/blob/master/python/serial/serial_devices.yaml |
16684
|
Sat Feb 26 23:48:14 2022 |
Koji | Update | SUS | ETMY SUS Electronics Replacement |
[Ian, Koji] - Activity on 25th (Fri)
We continued working on the ETMY electronics replacement.
- The units were fixed on the rack according to the rack plan.
- Unnecessary Eurocard modules were removed from the crate.
- Unnecessary IDC cables and the sat amp were removed from the wiring chain. The side cross-connects became obsolete and were also removed.
- An 18V DC power strip was attached to one of the side DIN rails.
Warning:
- Right now the ETMY suspension is free and not damped. We are relying on the EQ stops.
Next things to do:
- Lay out the coil driving cables from the vacuum feedthru to the sat amp (2x D2100675-01 30ft) [40m wiki]
- Lay out DB cables between the units
- Lay out the DC power cables from the power strip to the units
- Reassign ADC/DAC channels in the iscey model.
- Recover the optic damping
- Measure the change of the PD gains and the actuator gains. |
16685
|
Sun Feb 27 00:37:00 2022 |
Koji | Update | General | IMC Locking Recovery |
Summary:
- IMC was locked.
- Some alignment change in the output optics.
- The WFS servos working fine now.
- You need to follow the proper alignment procedure to recover the good alignment condition.
Locking:
- Basically followed the previous procedure 40m/16673.
- The autolocker was turned off. Used MC2 and MC3 for the alignment.
- Once I hit the low order modes, increased the IN1 gain to acquire the lock. This helped me to bring the alignment to TEM00
- Found the MC2 spot was way too off in pitch and yaw.
- Moved MC1/2/3 to bring the MC2 spot around the center of the mirror.
- Found a reasonably good visibility (<90%) at a MC2 spot. Decided this to be the reference (at least for now)
SP Table Alignment Work
- Went to the SP table and aligned the WFS1/2 spots.
- I saw no spot on the camera. Found that the beam for the camera was way too weak and a PO mirror was useless to bring the spot on the CCD.
- So, instead, I decided to catch an AR reflection of the 90% mirror. (See Attachment 1)
- This made the CCD vulnerable to the stronger incident beam to the IMC. Work on the CCD path before increasing the incident power.
MC2 end table alignment work
- I knew that the focusing lens there and the end QPD had inconsistent alignment.
- The true MC2 spot needs to be optimized with A2L (and noise analysis / transmitted beam power analysis / etc)
- So, just aligned the QPD spot using today's beam as the temporary target of the MC alignment. (See Attachment 2)
Resulting CCD image on the quad display (Attachment 3)
WFS Servo
- To activate the WFS with the low transmitted power, the trigger threshold was reduced from 5000 to 500. (See Attachment 4)
- WFS offset was reset with /opt/rtcds/caltech/c1/scripts/MC/WFS/WFS_RF_offsets
- Resulting working state looks like Attachment 5 |
16686
|
Sun Feb 27 01:12:46 2022 |
Koji | Update | General | IMC manual alignment procedure |
We expect that the MC suspensions are susceptible to temperature changes and that the alignment drifts away with time.
Here is the proper alignment procedure.
0) Assume there is no TEM00 flash or locking, but the IMC is still flashing with higher-order modes.
1) Use the CCD camera and WFS DC spots to bring the beam to the nominal position.
2) Use only MC2 and MC3 to align the cavity to have low-order modes (TEM00,01,02 etc)
3) You should be able to lock the cavity on one of these modes. Minimize the reflection (maximize the transmission) for that mode.
4) This should allow you to jump to a better lower-order mode. Continue alignment optimization only with MC2/3 until you get TEM00.
5) Optimize the TEM00 alignment only with MC2/3
6) Look at the MC end QPD. Use one of the scripts in scripts/MC/moveMC2 . Note that the spot moves opposite to the name of the script, i.e. MC2_spot_down moves the spot up, MC2_spot_right moves the spot left, etc...
These scripts move MC1/2/3 and try to keep the good MC transmission.
7) The moveMC2 scripts are not perfect. As you use them, they gradually degrade the MC alignment. Use MC2 and MC3 to recover good transmission.
8) If MC2 spot is satisfactory, you are done.
-------------
Step 6-8 can be done with the WFS on. This way, you can skip step 7 as the WFS servo takes care of it. But if the spot move is too fast, the servo can't keep up with the change. If so, you have to wait for the settling of the servo. Once the spot position is satisfactory, MC servo relief should be run so that the servo offset (in actuation) can be offloaded to the bias slider.
|
16687
|
Mon Feb 28 15:51:07 2022 |
Ian MacMillan | Update | SUS | ETMY 1Y4 Electronics Replacement |
[Paco, Ian]
Paco helped me wire the ETMY 1Y4 rack. We wired the following (copied from Koji's email):
- Use DB9-DB9 to complete the wiring between
- 16bit DAC AI Chassis - End DAC Adapter (4 cables)
- End DAC Adapter - HAM-A Coil Driver (2 cables)
- AA Chassis - End ADC Adapter (2 cables)
- Koji already brought two special DB9-DB15 cables (in plastic bags) to the end. They connect the HAM-A coil drivers to the satellite amp. At this time, we skip Low Noise HV Bias Driver.
- Bring two 30ft DB25 (called #1, aka D2100675-01) cables from the office area to the end. I collected one end and left them there.
- All the new units have +/-18V DC supply in the back. Find the orange cables behind the 40m vacuum duct around Y-end and connect the units and the DC power strip. Use short cables if possible to save the longer ones.
The cables we used:
Number Used | Type of Cable | Length
8 | DB9 to DB9 | 2.5 ft
2 | DB9 to DB9 | 5 ft
2 | DB9-DB15 |
2 | DB25 (called #1, aka D2100675-01) | 30 ft
9 | Orange Power Cables | ~ 3 ft
I attached pictures below. |
16688
|
Mon Feb 28 19:15:10 2022 |
Tega | Update | VAC | Ongoing work to get the FRG gauge readouts to EPICS channels |
I decided to create an independent service for the XGS data readout so we can get this to work first before trying to integrate it into the current system. After starting the service, I noticed that the EPICS channels were not updating as expected. So I started to debug the problem and managed to track it down to an IP socket connect() error, i.e. we get a connection error for the IP address assigned to the LAN port to which the XGS box was connected. After trying a few things and searching the internet, I think the error indicates that this particular LAN port is not yet configured. I reached this conclusion after noting that only a select number of LAN ports connected without issues, and these are the ports that already had devices connected. So it must be the case that those LAN ports were somehow configured. The next step is to look at the IOLAN manual to figure out how to configure the IP port for the XGS controller. Fingers crossed. |
16689
|
Tue Mar 1 16:01:14 2022 |
Paco | Update | Electronics | RFSoC 2x2 board -- setup for remote work & BALUN saga |
[Tommy, Paco]
Since last week I've worked with Tommy on getting the RFSoC 2x2 board to get some TFs from simple minicircuits-type filters. The first thing I did was set up the board (which is in the office area) for remote access. I hooked up the TCP/IP port to a wall ethernet socket (LIGO-04) and the caltech network assigned some IP address to our box. I guess eventually we can put this behind the lab network for internal use only.
After fiddling around with the tone-generators and spectrum analyzer tools in loopback configuration (DAC --> ADC direct connection), we noticed that lower frequency (~ 1 MHz) signals were hardly making it out/back into the board... so we looked at some of the schematics found here and saw that both RF data converters (ADC & DAC) interfaces are AC coupled through a BALUN network in the 10 - 8000 MHz band (see Attachment #1). This is in principle not great news if we want to get this board ready for audio-band DSP.
We decided that while Tommy works on measuring TFs for the SHP-200 all the way up to ~ 2 GHz (which is possible with the board as is), I will design and put together an analog modulation/demodulation frontend so we can upconvert all our "slow" (< 1 MHz) signals for fast, wideband DSP and demodulate them back into the audio band. The BALUN network is pictured in Attachment #2 on the board; I'm afraid it's not very simple to bypass without damaging the PCB or causing some other unwanted effect on the high-speed DSP. |
16690
|
Tue Mar 1 19:26:24 2022 |
Koji | Update | SUS | ETMY SUS Electronics Replacement |
The replacement key switches and Ne Indicators came in. They were replaced and work fine now.
The power supply units were tested with the X end HeNe display. It turned out that one unit has the supply module for 1350V 4.9mA while the other two do 1700V 4.9mA.
In any case, these two ignited the HeNe Laser (1103P spec 1700V 4.9mA).
The 1350V one is left at the HeNe display and the others were stored in the cabinet together with spare key SWs and Ne lamps. |
16691
|
Tue Mar 1 20:38:49 2022 |
Tega | Update | VAC | Ongoing work to get the FRG gauge readouts to EPICS channels |
During my investigation, I inadvertently overwrote the serial port configuration for the connected devices. So I am now working to get it all back. I have attached screenshots of the config settings that brought back communication that is not garbled. There is no physical connection to port 6, which I guess was initially used for the UPS serial communication but not anymore. Also, ports 9 and 10 are connected to Hornet and SuperBee, both of which have not been communicating for a while and are to be replaced, so there is no way to confirm communication with them. Otherwise, the remaining devices seem to be communicating as before.
I still could not establish communication with the XGS-600 controller using the serial port settings given in the manual, which do work via the Serial to USB adapter, so I will revisit the problem later. My immediate plan is to do a Serial to Ethernet, then Ethernet to Serial, and then Serial to USB connection to see if the USB code still works. If it does, then at least I know the problem is not coming from the Serial to Ethernet adapters. Then I guess I will replace the controller with my laptop and see what signal comes through when I send a message to the controller via the IOLAN serial device server. Hopefully, I can discover what's wrong by this point.
Note to self: Before doing anything, do a sanity check by comparing the settings on the IOLAN SDS and the config settings that worked for the Serial to USB communication and post an elog for this for reference. |
16692
|
Wed Mar 2 11:50:39 2022 |
Tega | Update | VAC | Ongoing work to get the FRG gauge readouts to EPICS channels |
Here is the IOLAN SDS TCP socket setting and the USBserial setting for comparison.
I have also included the python script and output from the USBserial test from earlier. |
16693
|
Wed Mar 2 12:40:08 2022 |
Tega | Update | VAC | Ongoing work to get the FRG gauge readouts to EPICS channels |
Connector Test:
A quick test to rule out any issue with the Ethernet to Serial adapter was done using the setup shown in Attachment 1. The results rule out any connector problem.
IOLAN COMM test (as per Koji's suggestion):
The next step is to swap the controller with a laptop set up to receive serial commands using the same settings as the XGS-600 controller. Basically, run a slightly modified version of the python script where we go into listening mode. Then send commands to the TCP socket on the IOLAN SDS unit using c1vac and check what data makes its way to the laptop USB-serial terminal. After working on this for a bit, I realized that we do not need to do anything on the c1vac machine. We only need to start the service as it would normally run. So I wrote a small python script for a basic XGS-600 controller emulator, see Attachment 4. The outputs from the laptop and c1vac terminals are Attachments 5 and 6 respectively.
These results show that we can communicate via the assigned IP address "192.168.114.22" and that the commands sent from c1vac reach the laptop in the correct format. Furthermore, the serial_XGS service, part of the modbusIOC_XGS service, which usually exits with an error, seems fine now after successfully communicating with the laptop. I don't know why it did not die after the tests. I also found a bug in my code as a result of the test, where the status field for the fourth gauge didn't get written to.
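For context, a bare-bones version of such an emulator could look like the sketch below (the actual script is Attachment 4; the serial port name, baud rate, and reply string here are placeholders, not the controller's real protocol):

import serial  # pyserial

PORT = "/dev/ttyUSB0"                   # laptop's USB-serial adapter
CANNED_REPLY = b">7.60E+02,7.60E+02\r"  # fake pressures for two gauges

# Listen on the laptop's serial port and answer every command coming in from
# c1vac (via the IOLAN) with a canned pressure string.  Stop with Ctrl-C.
ser = serial.Serial(PORT, baudrate=9600, timeout=1.0)
try:
    while True:
        cmd = ser.readline()
        if cmd:
            print("received:", cmd)
            ser.write(CANNED_REPLY)
finally:
    ser.close()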
Pressure reading issue:
I noticed that the pressure reading was not giving the atmospheric value of ~760 Torr as expected. Looking through my previous readouts, it seems the unit showed this atm value of ~761 Torr when the first gauge was attached. However, a closer look at the issue revealed a transient behavior, i.e. when the unit is turned on the reading dips to the atm value but eventually rises up to 1000 Torr. I don't think this is a calibration problem because the value of 1000 Torr is the maximum value for the gauge range. I also found out that when the XGS controller has been running for a while, a power cycle does not have this transient behavior. So maybe a faulty capacitor somewhere? I have attached a short video clip that shows what happens when the XGS controller unit is turned on. |
16694
|
Wed Mar 2 14:02:43 2022 |
Yehonathan | Update | BHD | Re-suspension of AS1 |
Yesterday, I rebuilt the OpLev setup in the cleanroom in order to suspend AS1. It took me a while to find all the necessary parts but I found them in the end.
The HeNe laser was placed on the optical table and turned on. The beam was aimed to bounce off a folding mirror to the SOS tower.
The beam's height was controlled by the HeNe laser stage and made to be 5+14/32". The beam from the folding mirror was made parallel to the table, first with an iris and then with the QPD connected to a scope.
Preparing the SOS tower for the suspension I noticed that the wire clamp is scratched on both sides from previous suspensions. I discarded that wire clamp but couldn't find the spares. Time ran out and I had to stop. |
16695
|
Thu Mar 3 04:11:36 2022 |
Koji | Update | SUS | ETMY 1Y4 Electronics Replacement |
For the Y-end electronics replacement, we want to remove unused power supplies. In fact, we already removed the +/-5V supplies from the stack. I was checking which supply voltages are used by the Eurocard modules. I found that the D990399 QPD whitening board possibly uses +/-5V.
The 40m Y-end version can be found here D1400415. The +/-5V supply voltages are used at the input stage AD620 and the QPD bias voltage of -5V.
AD620 can work with +/-15V. Also, the bias voltage can easily be -15V. So I decided to cut the connector legs and connect the +5V line to +15V and the -5V line to -15V.
With this modification, I can say that the eurocards only use the +/-15V voltages and nothing else.
The updated schematics can be found as D1400415-v6 |
16696
|
Thu Mar 3 04:24:23 2022 |
Koji | Update | SUS | ETMY 1Y4 Electronics Replacement |
The DC power strip at Y-end was connected to the bottom two Sorensen power supplies. They are configured to provide +/-18V.
|