  16652   Wed Feb 9 11:56:24 2022   Anchal | Update | General | Bringing back CDS

[Anchal, Paco]

Bringing back CDS took a lot of work yesterday. I'm gonna try to summarize the main points here.


mx_start_stop

For some reason, fb1 was not able to mount the mx devices automatically on system boot. This was an issue I had faced earlier on fb1 (clone) too. The fix is to run the script:

controls@fb1:/opt/mx/sbin/mx_start_stop start

To make this persistent, I've configured a daemon (/etc/systemd/system/mx_start_stop.service) on fb1 to run once on system boot and mount the mx devices as described above. We did not see this issue on later reboots yesterday.
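
The exact contents of that unit file are not recorded in this entry; a minimal oneshot sketch, assuming nothing more than the mx_start_stop command quoted above, would look something like this (the real unit on fb1 may differ, e.g. in its ordering/dependency settings):

# /etc/systemd/system/mx_start_stop.service (sketch only)
[Unit]
Description=Start mx devices at boot

[Service]
Type=oneshot
ExecStart=/opt/mx/sbin/mx_start_stop start
RemainAfterExit=yes

[Install]
WantedBy=multi-user.target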


gpstime

Next was the issue of the gpstime module being out of date on fb1. This issue has also come up in the past and requires us to do the following:

controls@fb1:~ 0$ sudo modprobe -r gpstime
controls@fb1:~ 1$ sudo modprobe gpstime

Again, to make this persistent, I've configured a daemon (/etc/systemd/system/re-add-gpstime.service) on fb1 to run the above commands once on system boot. This corrected gpstime automatically and we did not face these problems again.
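
As above, the actual unit contents are not recorded here; a minimal oneshot sketch of such a service (assuming only the two modprobe commands above, run in order) is:

# /etc/systemd/system/re-add-gpstime.service (sketch only)
[Unit]
Description=Reload the gpstime kernel module at boot

[Service]
Type=oneshot
ExecStart=/sbin/modprobe -r gpstime
ExecStart=/sbin/modprobe gpstime
RemainAfterExit=yes

[Install]
WantedBy=multi-user.target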


time synchronization

Later we found that NTP time synchronization between fb1 and the FE computers was not working, and the main reason was that fb1 was unable to access the internet. As a rule of thumb, it is always a good idea to try pinging www.google.com on fb1 to make sure it is connected to the internet. The issue had to do with fb1 not being able to find any nameserver. We fixed this by reloading the bind9 service on chiara a couple of times. We're not really sure why it wasn't working.

~>sudo service bind9 stop
~>sudo service bind9 start
~>sudo service bind9 status
* bind9 is running
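
A quick way to confirm that fb1 can actually resolve names again (assuming the standard host utility is installed on fb1) is:

controls@fb1:~ 0$ host www.google.com
controls@fb1:~ 0$ ping -c 3 www.google.com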

After the above, we saw that the fb1 NTP server was working fine. You see the following output on fb1 when that is the case:

controls@fb1:~ 0$ ntpq -p
     remote           refid      st t when poll reach   delay   offset  jitter
==============================================================================
-table-moral.bnr 110.142.180.39   2 u  399  512  377  195.034  -14.618   0.122
*server1.quickdr .GPS.            1 u   67   64  377  130.483   -1.621   1.077
+ntp2.tecnico.ul 56.99.239.27     2 u  473  512  377  184.648   -0.775   2.231
+schattenbahnhof 129.69.1.153     2 u  365  512  377  144.848    3.841   1.092
 192.168.123.255 .BCST.          16 u    -   64    0    0.000    0.000   0.000

On the FE machines, timedatectl should show that the "NTP synchronized" field is yes. That wasn't happening even after we restarted the systemd-timesyncd service. In the end, I just rebooted all the FE computers and it started working.
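
For reference, the checks on an FE amount to something like the following (the hostname here is just an example; systemd-timesyncd is assumed to be the NTP client, as above):

controls@c1sus:~ 0$ timedatectl                              # "NTP synchronized" should read yes
controls@c1sus:~ 0$ sudo systemctl restart systemd-timesyncd
controls@c1sus:~ 0$ timedatectl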


CDS

We had removed all db9 enabling plugs on the new SOSs beforehand to keep coils off just in case CDS does not come back online properly.

Everything in CDS loaded properly except the c1oaf model, which kept showing 0x2bad status. This means some IPC flags are red on c1sus, c1mcs and c1lsc as well, but everything else is green. See attachment 1. I then burt-restored everything in the /opt/rtcds/caltech/c1/burt/autoburt/snapshots/2022/Feb/4/12:19 directory. This includes the snapshot of c1vac as well, which I added to autoburt that day. All burt restore statuses were green OK. I think we are now in a good state to start the watchdogs on the new SOSs and put back the db9 enabling plugs.
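
For the record, if restoring from the command line rather than burtgooey, restoring one of those snapshots looks roughly like this (the file name is illustrative, and the standard EPICS BURT tools are assumed to be in the path):

burtwb -f /opt/rtcds/caltech/c1/burt/autoburt/snapshots/2022/Feb/4/12:19/c1susepics.snap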


Future work:

When somebody gets time, we should make the custom service files in fb1:/etc/systemd/system/ symbolic links to a repo directory and version control these important services. We should also make sure that their dependencies and startup order are correctly configured. I might have done a half-assed job there since I only recently learned how to write unit files. We should do the same on nodus and chiara too. Our hope is that on one glorious day, the lab can be restarted without spending more than 20 min on booting up the computers and network.
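
One possible pattern for that (the repo path is hypothetical; this is a sketch, not something that has been set up yet):

# keep the unit file in a version-controlled repo and symlink it into systemd
sudo ln -s /opt/rtcds/caltech/c1/scripts/systemd/mx_start_stop.service /etc/systemd/system/mx_start_stop.service
sudo systemctl daemon-reload
sudo systemctl enable mx_start_stop.service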

 

  16653   Wed Feb 9 13:55:05 2022   Koji | Update | General | Bringing back CDS

Great recovery work and cleaning of the rebooting process.

I'm just curious: did you observe that the c1sus2 cards have a different numbering order than before, along with the power outage/cycling?

  16655   Wed Feb 9 16:43:35 2022   Paco | Update | General | Scheduled power outage recovery - Locking mode cleaner(s)

[Paco, Anchal]

  • We went in and measured the power after the power-splitting HWP on the PSL table. Just before the PSL shutter (which was closed), with the PMC locked, we saw ~ 598 mW (!!)
  • Checking back on the ESP300, it seems the channel was not enabled even though the right angle was punched in, so we enabled it.
    • No change.
  • The power adjustment MEDM screen is not really working...
  • Going back to the controller, we pressed HOME on Axis 1 (our HWP) and saw it go to zero...
    • Now the power measured is ~ 78 mW.
  • Not sure why the MEDM screen didn't really work (this needs to be fixed later)

We proceeded to align the MC optics because all offsets on the MC_ALIGN screen were zeroed. After opening the PSL shutter, we used values from last year as a reference and tried to steadily recover the alignment. The IMC lock remains at large.

  16657   Thu Feb 10 15:41:00 2022   Anchal | Update | General | Scheduled power outage recovery - Locking mode cleaner(s)

I found out that the ESP300 service needs to be run as root for it to be able to connect to the USB port of the HWP motor controller. While making this change, I noticed that the channels hosted by c1psl might have a duplication conflict with some other channel-hosting computer, because a lot of them show the warning "Identical process variable names on multiple servers", which is not good. Someone should look into this conflict.

I added instructions to the power control MEDM screen as it was very non-trivial to use. I have set the power such that C1:IOO-MC_RFPD_DCMON is 5.6, which happened at C1:IOO-HWP_POS_SET = 2.29.
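
For the record, the same setting can be made from a terminal with the standard EPICS command-line tools, assuming (as the MEDM screen does) that the HWP follows the setpoint channel:

caput C1:IOO-HWP_POS_SET 2.29
caget C1:IOO-MC_RFPD_DCMON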

  16658   Thu Feb 10 17:57:48 2022   Anchal | Update | General | Scheduled power outage recovery - Locking mode cleaner(s)

Something is wrong with the Video MUX. The system did not turn back on with full functionality. Even though we see the screens as they were before the power shutdown, we have lost the ability to switch any of the videos. I went to check the wiki page about the Video MUX, which told me we should be able to see the configuration screen at this link, but the page wasn't opening. I went and removed the power cable and put it back in. That brought back the configuration page. Still, I could not change any of the video feeds; however, this time I could see the EPICS channel values (like C1:VID-QUAD1_4) change. I tried to go to the configuration page and change the matrix values from the control tab there. I found out that the matrix was mislabeled, and while making the changes, I started seeing a blue screen on QUAD1_3 (where MC2T was set before). I set QUAD1_3 (output 23) to MC2T (input 16), but no change. The EPICS values are also set properly, so I don't understand the reason behind the blue screen. The same happened when I tried to use:

~>/opt/rtcds/caltech/c1/scripts/general/videoscripts videoswitch3 QUAD1_3 MC2T

Weirdly, this caused the QUAD1_4 screen to go blue. Running the following had no effect:

~>/opt/rtcds/caltech/c1/scripts/general/videoscripts videoswitch3 QUAD1_4 MCR

So, I'm not sure what to do. This really needs to be fixed! I wanted to see the MC2F camera so that I could align the IMC; that was the whole reason for this rabbit hole. Help needed.

  16659   Thu Feb 10 19:03:23 2022   Koji | Update | General | Scheduled power outage recovery - Locking mode cleaner(s)

I came back to the 40m and started the investigation.

If I ping 192.168.113.92, it responds, but telnet (port 23) was rejected. I tried ssh anyway and it responded! I could even log in to the host using the usual password. Here is the prompt.

controls@nodus|~> ssh 192.168.113.92
controls@192.168.113.92's password:

...
controls@c1sus2:~ 0$

Oh no...

Looks like c1sus2 and the videomux have an IP address conflict.

Here are the useful ELOG links:

https://nodus.ligo.caltech.edu:8081/40m/4498

https://nodus.ligo.caltech.edu:8081/40m/4529

  16660   Thu Feb 10 19:46:37 2022   Koji | Update | General | Scheduled power outage recovery - Locking mode cleaner(s)

== Assign new IP address to c1sus2 ==

cf: [40m ELOG 16398] [40m ELOG 16396]

- Shutdown c1sus2 (Oh, no. This killed c1lsc/c1sus/c1ioo... This should be taken care of later)

- Confirmed 192.168.113.87 is not alive

- Go to chiara
- Modify /diskless/root/etc/hosts

192.168.113.87  c1sus2 c1sus2.martian

- Modify /etc/dhcp/dhcpd.conf

host c1sus2 {
  hardware ethernet 00:25:90:06:69:C2;
  fixed-address 192.168.113.87;
}

- Modify /var/lib/bind/martian.hosts

c1sus2          A    192.168.113.87
videomux        A    192.168.113.92

- Modify /var/lib/bind/martian.hosts/rev.113.168.192.in-addr.arpa

87            PTR    c1sus2.martian
92            PTR    videomux.martian

- Reload/restart bind9 / dhcpd. Run the following command

sudo service bind9 reload
sudo service isc-dhcp-server restart

- Restart c1sus2 and confirm if the IP address was actually changed

controls@c1sus2:~ 0$ /sbin/ifconfig
eth0      Link encap:Ethernet  HWaddr 00:25:90:06:69:c2
          inet addr:192.168.113.87  Bcast:192.168.113.255  Mask:255.255.255.0
...
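
A couple of additional checks from a martian workstation (assuming the standard host/ping utilities) to confirm that the new DNS records are being served:

host c1sus2
host videomux
ping -c 3 c1sus2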

== Restart c1lsc / c1sus /c1ioo ==

- Reboot c1lsc/c1sus/c1ioo

- Go to scripts/cds

- Run startC1LSC.sh and follow the instruction

 

  16661   Thu Feb 10 21:10:43 2022   Koji | Update | General | Video Mux setting reset

Now the video matrix is responding correctly and the web interface shows up. (Attachment 1)

The video buttons also respond as usual. I pushed the Locking Template button to bring the settings back to nominal. (Attachment 2)

  16663   Thu Feb 10 21:51:02 2022   Koji | Update | CDS | [Solved] Huge random numbers flowing into ETMX/ETMY ASC PIT/YAW

Huge random numbers were flowing into ETMX/ETMY ASC PIT/YAW. Because of this, I could not damp the ETMX/ETMY suspensions at the beginning of the recovery from rebooting. (Attachment 1)
By turning off the output of the ASC filters, the mirrors were successfully damped.

Looking at the FE model view of the end RTSs, there were two possibilities: (Attachment 2)

- They are coming from RFM connection
- They are coming from ASXASY

ASX/ASY are not active and I could not see anything producing these numbers. Burtrestore didn't help.

The remaining possibilities were something on the other side of the RFM, or corruption of the RFM signal.

- Looking at the RFM model (Attachment 3), the ASC signals come from ASS and IOO. The ASS path has filter modules (C1:RFM-ETMX_PIT etc.). These FMs are quiet and not guilty.

- Why do we have the RFM from IOO? I went to IOO and found that the new ASC (WFS) model is there. I didn't realize the presence of this model. In fact, the ASC screen showed that these random numbers were flowing into the end SUSs.
So I did a burtrestore of c1iooepics. Alas! They are gone.

Now I can go home.

  16664   Fri Feb 11 10:56:38 2022   Anchal | Update | CDS | [Solved] Huge random numbers flowing into ETMX/ETMY ASC PIT/YAW

Yeah, this is a known issue actually. We go to the ASC screen and manually switch off all the outputs after every reboot. We haven't been able to find a way to set defaults so that when the model comes online, these outputs remain switched off. We should find a way to do this.

 

  16665   Fri Feb 11 11:17:00 2022   Anchal | Update | General | Scheduled power outage recovery

I found that two computers in the control room are not powering up: Ottavia and Allegra. Allegra was important for us as it had the current version of the LIGO CDS workstation environment installed, giving us the option to use the latest packages written by the LIGO CDS team. I think the power issue should be resolvable if someone opens it up and knows what they are doing. Do we have any way of getting fuse repairs on such computers? Both these computers are Dell XPS 420.

 

  16666   Fri Feb 11 12:22:19 2022   rana | Update | CDS | [Solved] Huge random numbers flowing into ETMX/ETMY ASC PIT/YAW

You can hand-edit the autoBurt file which the FE uses to set the values after boot up. Just make a python script that amends all of the OFFs or ZEROs that are needed to make things safe. This would be the autoBurt snap used on boot up only, not the hourly snaps.
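
As a minimal shell/sed illustration of the same idea (a stand-in for the python script suggested above; the one-line-per-channel format of the .snap file is an assumption, so check a real autoBurt snapshot before using anything like this):

SNAP=/path/to/model/autoBurt.snap                      # hypothetical path to the boot-time snapshot
for ch in C1:ASC-ETMX_PIT_GAIN C1:ASC-ETMX_YAW_GAIN; do
    # force the recorded value of each listed channel to 0
    sed -i "s|^\(${ch} [0-9]*\) .*|\1 0|" "$SNAP"
done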

 

Yeah, this is a known issue actually. We go to the ASC screen and manually switch off all the outputs after every reboot. We haven't been able to find a way to set defaults so that when the model comes online, these outputs remain switched off. We should find a way to do this.

 

  16667   Fri Feb 11 16:09:11 2022   Anchal | Update | General | Scheduled power outage recovery - Input power increased

We increased the input power to the IMC by replacing the 98% transmission BS with a 10% transmission BS on the detection table (reverse of what is mentioned in 40m/16408, see attachments 8-9). We then realigned the BS so that the MC RFPD is centered. Then we realigned the two steering mirrors to get the beam centered on the WFS1 and WFS2 QPDs. Then we increased the power of the input beam to get a reading of 5.307 on the C1:IOO-MC_RFPD_DCMON channel. We did this so that we can align the IMC. Once we have it aligned, we'll go back to low power for doing the chamber work.

Beware, there is about 1W beam on the detection table right now.

 

  16668   Fri Feb 11 17:07:19 2022   Anchal | Update | CDS | [Solved] Huge random numbers flowing into ETMX/ETMY ASC PIT/YAW

The autoBurt file for the FE already has C1:ASC-ETMX_PIT_SW2 (and the other channels for ETMY, ITMX, ITMY, BS and for YAW) present, and I checked the last snapshot file from Feb 7th, 2022, which has 0 for these channels. So I'm not sure why, when the FE boots up, it does not follow the switch configuration. For safety, I changed all the gains of these filter modules, named like C1:ASC-XXXX_YYY_GAIN (where XXXX is ETMX, ETMY, ITMX, ITMY, or BS, and YYY is PIT or YAW), to 0.0. Now, even if the FE loads with the switches in the ON configuration, nothing should happen. In the future, if we use this model for anything, we can change the gain values back; the zeroed gains won't be hard to track down as the reason why no signal moves forward. Note that the BS connections from this model to the BS suspension model do not work.

Quote:

you can hand edit the autoBurt file which the FE uses to set the values after boot up. Just make a python script that amends all of the OFF or ZERO that are needed to make things safe. This would be the autoBurt snap used on boot up only, and not the hourly snaps.

 

Yeah, this is a known issue actually. We go to the ASC screen and manually switch off all the outputs after every reboot. We haven't been able to find a way to set defaults so that when the model comes online, these outputs remain switched off. We should find a way to do this.

 

 

  16669   Mon Feb 14 18:31:50 2022   Paco | Update | General | Scheduled power outage recovery - IMC recovery progress

[Paco, Anchal, Tega]

We have been realigning the IMC since last Friday (02/11). Today we made some significant progress (still at high input power), but the IMC autolocker is unable to engage a stable mode lock. We have made some changes to reach this point, including re-centering of the MC1 REFL beam on the CCD, centering of the MC2 trans QPD (using flashes), and centering of the MC REFL RFPD beam. The IMC is flashing to a peak transmission of > 50% of its max (which averaged near 14,000 counts in 2021), and all PDs seem to be working ok... We will keep the PSL shutter closed (especially with high input power) for now.

  16671   Mon Feb 14 21:03:25 2022   Koji | Update | General | Scheduled power outage recovery

I opened the boxes. Allegra has obvious venting of at least 4 capacitors, and the power supply did not respond even when a paper clip test was performed. https://www.silverstonetek.com/downloads/QA/PSU/PSU-Paper%20Clip-EN.pdf (Paper Clip Test)
=> The mother board and the PSU are dead.

Then Ottavia was also checked. The mother board looked OK, but the PSU did not respond. I quickly opened the PSU and it had a bunch of bulged capacitors in it. => PSU dead

Conclusion: Save the cards/memory etc as much as possible. Migrate the allegra HDD to any other healthy PC or obtain a new used PC from Larry. Otherwise, we just want to buy another WS and copy the disk in it.

 

  16672   Tue Feb 15 19:32:50 2022   Koji | Update | General | Scheduled power outage recovery - IMC recovery progress

Reduced the IMC power to 100mW

Setup: The power meter was placed right before the final aperture (Attachment 1)

Before the adjustment: the HWP position was 37.29 deg and the input power was 987 mW (Attachments 2/3)

After the adjustment: the HWP position was 74.00 deg and the input power was 100 mW (Attachments 4/5)

This made the MCREFL reading 0.549.

The MC refl path optics have not been modified.

  16673   Tue Feb 15 19:40:02 2022   Koji | Update | General | IMC locking

The IMC is locking now. There was nothing wrong: just careful alignment + proper gain adjustment.

=== Primary Alignment ===

- I used the WFS error signals as an indicator of the PDH error signal. Checked C1:IOO-WFS1_(I/Q)n_ERR and ended up using C1:IOO-WFS1_I4_ERR, as it showed the largest peak-to-peak PDH error.

- Then used MC2 and MC3 to align the IMC by maximizing the PDH error and the MC trans (C1:IOO-MC_TRANS_SUM_ERR)

=== Locking procedure ===

Note that the MC REFL path is still configured for the full power input

- (Only at the beginning) Run scripts/MC/mcdown for initialization / Run scripts/MC/MC2tickleOFF just in case

- Enable IOO-MC-SW1 (MC SERVO switch right after "IN1 Gain (dB)").
- Disable 40:4000 boost
- Increase VCO Gain from -15 to 0
- Jiggle IN1 Gain from low to +31 until the lock is achieved

- As soon as the lock is acquired, enable 40:4000
- Increase VCO Gain to +10
- Turn up "SUPER BOOST" from 0 to 3

=== Lock loss procedure ===

Note that the MC REFL path is still configured for the full power input

- Disable IOO-MC-SW1
- Disable 40:4000 boost
- Reduce VCO Gain to 0
- Turn down "SUPER BOOST" to 0

- Then jiggle IN1 Gain again to lock the IMC

=== MC2 spot ===

- It was obvious that the MC2F spot was not on the center of the optic.
- I tried to move the spot on the camera as much as possible, but this did not bring the trans beam to the center of the MC end QPD.
- I had the impression that the trans beam started to be clipped when the beam was moved towards the end QPD.

We need to reestablish a reasonable/consistent MC2 spot on the mirror, the MC end optics, and the QPD.
We will need to use MC2 dithering and A2L coupling to determine the center of the mirror.

But as long as the transmission is maximized, the transmitted beam through MC1 and MC3 follows the input beam. So we can continue the vent work.

The current maximized transmission was ~1300. The MC1 REFL CCD view was largely off -> the camera path was adjusted.

=== MC2 alignment note ===

During the alignment, I noticed a sudden change of the MC2 alignment. There might be some hysteresis in the MC2 suspension. If you are locking the IMC and notice significant misalignment, the first thing to try is to touch the MC2 alignment.

  16674   Wed Feb 16 15:19:41 2022   Anchal | Update | General | Reconfigured MC reflection path for low power

I reconfigured the MC reflection path for low power. This meant the following changes:

  • Replaced the 10% reflection BS with the 98% reflection beamsplitter.
  • Realigned the BS angle to get a maximum on C1:IOO-MC_RFPD_DCMON when the cavity is unlocked.
  • Then realigned the steering mirrors for WFS1 and WFS2.
  • I tried to align the light for the MC reflection CCD, but then I realized that the pickoff for the camera is too weak for it to see anything.

Note, I think even the pick-off for WFS1 and WFS2 is too weak. The IOO WFS alignment does not work properly at such low light levels. I tried running the WFS loop for the IMC and it just took the cavity out of lock. So for the low power scenario, we will keep the WFS loops OFF.

 

  16675   Tue Feb 22 18:47:51 2022   Ian MacMillan | Update | SUS | ETMY SUS Electronics Replacement

[Ian, Koji]

In preparation for the replacement of the suspension electronics that control ETMY, I took measurements of the system excluding the CDS system. I took transfer functions from the input of the coil drivers to the output of the OSEMs for each sensor: UL, UR, LL, LR, and SIDE. These graphs are shown below, and all data is in the compressed file.

We also had to replace the oplev laser power supply down the Y arm. The previous one was not turning on; the leading theory is that its failure was caused by the power outage. We replaced it with one Koji brought from the fiber display setup.

I also am noting the values for the OSEM DC output

 OSEM   Value
 UL     557
 UR     568
 LR     780
 LL     385
 SIDE   328

In addition the oplev position was:

 OPLEV_POUT    4.871
 OPLEV_YOUT    -0.659
 OPLEV_PERROR  -16.055
 OPLEV_YERROR  -6.667

(KA ed) We only care about PERROR and YERROR (because P/YOUT are servo output)

Edit: corrected DC Output values

  16676   Wed Feb 23 15:08:57 2022   Anchal | Update | General | Removed extra beamsplitter in MC WFS path

As discussed in the meeting, I removed the extra beam splitter that dumps most of the beam going towards WFS photodiodes. This beam splitter needs to be placed back in position before increasing the input power to IMC at nominal level. This is to get sufficient light on the WFS photodiodes so that we can keep IMC locked for more than 3 days. Currently IMC is unlocked and misaligned. I have marked the position of this beam splitter on the table, so putting it back in should be easy. Right now, I'm trying to align the mode cleaner back and start the WFS loops once we get it locked.

  16677   Thu Feb 24 14:32:57 2022   Anchal | Update | General | MC RFPD DCMON channel got stuck to 0

I found a peculiar issue today. C1:IOO-MC_RFPD_DCMON remains constantly 0. I wondered if the RFPD output was being read properly. I opened the table and used an oscilloscope to confirm that the DC output from the MC REFL photodiode is present, but our EPICS channel is not reading it. I tried restarting the modbusIOC service but that did not change anything. I power cycled the Acromag chassis while keeping the modbusIOC service off, and then restarted the modbusIOC service. After this, I saw more channels get stuck and become unresponsive, including the PMC channels. So then I rebooted c1psl without doing anything to the Acromag chassis, and finally things came back online. Everything looks normal to me now, but I'm not sure whether one of the many channels is in the wrong state. Anyway, the problem is solved now.

 

  16678   Thu Feb 24 18:05:58 2022   Yehonathan | Update | BHD | Re-suspension of AS1

{Yehonathan, Anchal, Paco}

Yesterday, Anchal and Paco removed AS1 from the vacuum chamber and moved it into the cleanroom. The suspension wires were cut and the AS1 optic was put on the table.

Two things were noticed:

1. One of the wires was not sitting inside the side block groove (attachment 1)

2. One of the face magnets was grossly tilted (attachment 2). Probably due to uneven polishing of the dumbbell.

We put new wires into the side blocks, making sure they sit in their grooves, and we removed the tilted magnet. A different, straighter magnet was picked from the remaining spare magnets. The dumbbell and adapter were cleaned of glue residue and a batch of glue was prepared.

In the process of gluing, a different magnet was knocked off. We cleaned that magnet too. The 2 magnets were glued on the adapter.

Today I came and saw that the gluing failed completely. One of the magnets was completely away from its socket and the other one wasn't glued at all.

I prepared a new batch of glue and glued the two magnets.

  16679   Thu Feb 24 19:26:32 2022   Anchal | Update | General | IMC Locking

I think I have aligned the cavity, including MC1, such that we are seeing flashing of the fundamental mode and a significant transmission sum value as well. However, I'm unable to catch lock following Koji's method in 40m/16673. The autolocker could not catch lock either. Maybe I am doing something wrong; I'll pick up again tomorrow. Hopefully the cavity won't drift too much in the meantime.

  16680   Fri Feb 25 14:00:08 2022   Ian MacMillan | Update | SUS | ETMY SUS Electronics Replacement

[Koji, Ian]

We looked at a few power supplies and found one that was marked "CHECK IF THIS WORKS" in yellow. We found that the power supply worked but the indicator light didn't. I tried two other lights from other power supplies, but they did not work either. Koji ordered a new one.

  16681   Fri Feb 25 14:48:53 2022   Ian MacMillan | Update | SUS | ETMY SUS Electronics Replacement

I moved the network-enabled power strip from above the power supplies on rack 1y4 to below them. Nothing was powered through the strip when I unplugged everything, and I reconnected everything to the same ports afterwards.

  16682   Sat Feb 26 01:01:40 2022   Tega | Update | VAC | Ongoing work to get the FRG gauges readouts to EPICs channels

I will write a detailed elog later today giving an outline of the connection from the Agilent gauge controller to the vacuum subnet and the work I have been doing over the past two days to get data from the unit into EPICS channels. I just want to mention that I have plugged the XGS-600 gauge controller into the serial server on the vacuum subnet. I checked the vacuum MEDM screen and I can confirm that the other sensors did not experience any issues as a result of this. I also currently have two of the FRG-700 gauges connected to the controller, but I have powered the unit down after the checks.

  16683   Sat Feb 26 15:45:14 2022   Tega | Update | VAC | Ongoing work to get the FRG gauges readouts to EPICs channels

I have attached a flow diagram of my understanding of how the gauges are connected to the network.

Earlier today, I connected the XGS-600 gauge controller to the IOLAN Serial Device Server at 192.168.114.22.

The plan is as follows:

1. Update the serial device yaml file to include this new ip entry for the XGS-600 gauge controller

2. Create a serial gauge class "serial_gauge_xgs.py" for the XGS-600 gauge controller that inherits from the serial gauge parent class for EPICS communication with a serial device via TCP sockets.

  • It might be better to initially use the current channels of the devices being replaced, i.e.:
        New FRG channel         Existing channel to reuse
        C1:Vac-FRG1_pressure    C1:Vac-CC1_pressure
        C1:Vac-FRG2_pressure    C1:Vac-CCMC_pressure
        C1:Vac-FRG3_pressure    C1:Vac-PTP1_pressure
        C1:Vac-FRG4_pressure    C1:Vac-CC4_pressure
        C1:Vac-FRG5_pressure    C1:Vac-IG1_pressure

3. Modify the launcher file to include the XGS gauge controller. Following the same pattern used  to start the service for the other serial gauges, we can start the communication between the XGS-600 gauge controller and the IOLAN serial server and write data to EPICS channels using

controls@c1vac> python launcher.py XGS600

If we are able to establish communication with the XGS-600 gauge controller and write its gauge data to EPICS channels, go on to step 4.

4. Create a serial service file "serial_XGS600.service" and place it in the service folder

5. Add the new EPICS channels to the database file

6. Add the "serial_XGS600.service" to line 10 and 11 of modbusIOC.service

7. Later on, when we are ready, we can restart the updated modbusIOC service
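
Once the launcher in step 3 (or the service in step 7) is running, a quick sanity check of the readbacks with the standard EPICS tools would be (use the existing CC/CCMC/PTP1/IG1 channels instead if the mapping from step 2 is adopted):

caget C1:Vac-FRG1_pressure C1:Vac-FRG2_pressure C1:Vac-FRG3_pressure C1:Vac-FRG4_pressure C1:Vac-FRG5_pressure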

 

For vacuum signal flow and Acromag channel assignments see [1]  and [2] respectively. For the 16 port IOLAN SDS (Serial Device Server) ethernet connections, see [3]. 

[1] https://wiki-40m.ligo.caltech.edu/Vacuum-Upgrade-2018?action=AttachFile&do=view&target=40m_Vacuum_System_Signal_Flow.pdf

[2] https://wiki-40m.ligo.caltech.edu/Vacuum-Upgrade-2018?action=AttachFile&do=view&target=AcromagChannelAssignment.pdf

[3] https://git.ligo.org/40m/vac/-/blob/master/python/serial/serial_devices.yaml

  16684   Sat Feb 26 23:48:14 2022   Koji | Update | SUS | ETMY SUS Electronics Replacement

[Ian, Koji] - Activity on 25th (Fri)

We continued working on the ETMY electronics replacement.

- The units were fixed on the rack along with the rack plan.

- Unnecessary Eurocard modules were removed from the crate.

- Unnecessary IDC cables and the sat amp were removed from the wiring chain. The side cross-connects became obsolete and they also were removed.

- An 18V DC power strip was attached to one of the side DIN rails.

Warning:

- Right now the ETMY suspension is free and not damped. We are relying on the EQ stops.

Next things to do:

- Lay out the coil driving cables from the vacuum feedthru to the sat amp (2x D2100675-01, 30 ft) [40m wiki]

- Lay out the DB cables between the units

- Lay out the DC power cables from the power strip to the units

- Reassign ADC/DAC channels in the iscey model.

- Recover the optic damping

- Measure the change of the PD gains and the actuator gains.

  16685   Sun Feb 27 00:37:00 2022   Koji | Update | General | IMC Locking Recovery

Summary:

- IMC was locked.
- Some alignment change in the output optics.
- The WFS servos working fine now.
- You need to follow the proper alignment procedure to recover the good alignment condition.

Locking:
- Basically followed the previous procedure 40m/16673.
- The autolocker was turned off. Used MC2 and MC3 for the alignment.
- Once I hit the low-order modes, I increased the IN1 gain to acquire the lock. This helped me bring the alignment to TEM00.
- Found the MC2 spot was way too off in pitch and yaw.
- Moved MC1/2/3 to bring the MC2 spot around the center of the mirror.
- Found reasonably good visibility (<90%) at an MC2 spot. Decided to make this the reference (at least for now).

SP Table Alignment Work
- Went to the SP table and aligned the WFS1/2 spots.
- I saw no spot on the camera. Found that the beam for the camera was way too weak, and a PO mirror was not enough to bring the spot onto the CCD.
- So, instead, I decided to catch an AR reflection of the 90% mirror. (See Attachment 1)
- This made the CCD vulnerable to the stronger incident beam to the IMC. Work on the CCD path before increasing the incident power.

MC2 end table alignment work
- I knew that the focusing lens there and the end QPD had inconsistent alignment.
- The true MC2 spot needs to be optimized with A2L (and noise analysis / transmitted beam power analysis / etc)
- So, just aligned the QPD spot using today's beam as the temporary target of the MC alignment. (See Attachment 2)

Resulting CCD image on the quad display (Attachment 3)

WFS Servo
- To activate the WFS with the low transmitted power, the trigger threshold was reduced from 5000 to 500. (See Attachment 4)
- WFS offset was reset with /opt/rtcds/caltech/c1/scripts/MC/WFS/WFS_RF_offsets
- Resulting working state looks like Attachment 5

  16686   Sun Feb 27 01:12:46 2022   Koji | Update | General | IMC manual alignment procedure

We expect that the MC sus are susceptible to the temperature change and the alignment drifts away with time.

Here is the proper alignment procedure.

0) Assume there is no TEM00 flash or locking, but the IMC is still flashing with higher-order modes.

1) Use the CCD camera and WFS DC spots to bring the beam to the nominal position.

2) Use only MC2 and MC3 to align the cavity to have low-order modes (TEM00,01,02 etc)

3) You should be able to lock the cavity on one of these modes. Minimize the reflection (maximize the transmission) for that mode.

4) This should allow you to jump to a better lower-order mode. Continue alignment optimization only with MC2/3 until you get TEM00.

5) Optimize the TEM00 alignment only with MC2/3

6) Look at the MC end QPD. Use one of the scripts in scripts/MC/moveMC2. Note that the spot moves opposite to the name of the script, i.e. MC2_spot_down moves the spot up, MC2_spot_right moves the spot left, etc.
These scripts move MC1/2/3 and try to keep the good MC transmission.

7) The moveMC2 scripts are not perfect. As you use them, they gradually degrade the MC alignment. Use MC2 and MC3 to recover good transmission.

8) If MC2 spot is satisfactory, you are done.

-------------

Steps 6-8 can be done with the WFS on. This way, you can skip step 7 as the WFS servo takes care of it. But if the spot moves too fast, the servo can't keep up with the change; if so, you have to wait for the servo to settle. Once the spot position is satisfactory, the MC servo relief should be run so that the servo offset (in actuation) can be offloaded to the bias sliders.

 

  16687   Mon Feb 28 15:51:07 2022   Ian MacMillan | Update | SUS | ETMY 1Y4 Electronics Replacement

[Paco, Ian]

Paco helped me wire the ETMY 1Y4 rack. We wired the following (copied from Koji's email):

  1. Use DB9-DB9 to complete the wiring between
    1. 16bit DAC AI Chassis - End DAC Adapter (4 cables)
    2. End DAC Adapter - HAM-A Coil Driver (2 cables)
    3. AA Chassis - End ADC Adapter (2 cables)
  2. Koji already brought two special DB9-DB15 cables (in plastic bags) to the end. They connect the HAM-A coil drivers to the satellite amp. At this time, we skip Low Noise HV Bias Driver.
  3. Bring two 30ft DB25 (called #1, aka D2100675-01) cables from the office area to the end. I collected one end and left them there.
  4. All the new units have +/-18V DC supply in the back. Find the orange cables behind the 40m vacuum duct around Y-end and connect the units and the DC power strip. Use short cables if possible to save the longer ones.

the cables we used:

Number Used   Type of Cable                         Length
8             DB9 to DB9                            2.5 ft
2             DB9 to DB9                            5 ft
2             DB9-DB15
2             DB25 (called #1, aka D2100675-01)     30 ft
9             Orange Power Cables                   ~3 ft

I attached pictures below.

  16688   Mon Feb 28 19:15:10 2022   Tega | Update | VAC | Ongoing work to get the FRG gauges readouts to EPICs channels

I decided to create an independent service for the XGS data readout so we can get this to work first before trying to integrate it into the current system. After starting the service, I noticed that the EPICS channels were not updating as expected. So I started to debug the problem and managed to track it down to an IP socket connect() error, i.e. we get a connection error for the IP address assigned to the LAN port to which the XGS box was connected. After trying a few things and searching the internet, I think the error indicates that this particular LAN port is not yet configured. I reached this conclusion after noting that only a select number of LAN ports connected without issues, and these are the ports that already had devices connected, so those ports must have been configured at some point. The next step is to look at the IOLAN manual to figure out how to configure the IP port for the XGS controller. Fingers crossed.

  16689   Tue Mar 1 16:01:14 2022   Paco | Update | Electronics | RFSoC 2x2 board -- setup for remote work & BALUN saga

[Tommy, Paco]

Since last week I've worked with Tommy on getting the RFSoC 2x2 board to measure some TFs of simple Mini-Circuits type filters. The first thing I did was set up the board (which is in the office area) for remote access. I hooked up the TCP/IP port to a wall ethernet socket (LIGO-04) and the Caltech network assigned some IP address to our box. I guess eventually we can put this behind the lab network for internal use only.

After fiddling around with the tone-generator and spectrum analyzer tools in loopback configuration (DAC --> ADC direct connection), we noticed that lower frequency (~ 1 MHz) signals were hardly making it out of / back into the board... so we looked at some of the schematics found here and saw that both RF data converter (ADC & DAC) interfaces are AC coupled through a BALUN network in the 10 - 8000 MHz band (see Attachment #1). This is in principle not great news if we want to get this board ready for audio-band DSP.

We decided that while Tommy works on measuring TFs for the SHP-200 all the way up to ~ 2 GHz (which is possible with the board as is), I will design and put together an analog modulation/demodulation frontend so we can upconvert all our "slow" signals (< 1 MHz) for fast, wideband DSP and then demodulate them back into the audio band. The BALUN network is pictured in Attachment #2 on the board; I'm afraid it's not very simple to bypass without damaging the PCB or causing some other unwanted effect on the high-speed DSP.

  16690   Tue Mar 1 19:26:24 2022   Koji | Update | SUS | ETMY SUS Electronics Replacement

The replacement key switches and Ne Indicators came in. They were replaced and work fine now.

The power supply units were tested with the X end HeNe display. It turned out that one unit has a 1350V 4.9mA supply module while the other two have 1700V 4.9mA modules.
In any case, these two ignited the HeNe Laser (1103P spec 1700V 4.9mA).

The 1350V one is left at the HeNe display and the others were stored in the cabinet together with spare key SWs and Ne lamps.

  16691   Tue Mar 1 20:38:49 2022   Tega | Update | VAC | Ongoing work to get the FRG gauges readouts to EPICs channels

During my investigation, I inadvertently overwrote the serial port configuration for the connected devices. So I am now working to get it all back. I have attached screenshots of the config settings that brought back communication that is not garbled. There is no physical connection to port 6, which I guess was initially used for the UPS serial communication but not anymore. Also, ports 9 and 10 are connected to Hornet and SuperBee, both of which have not been communicating for a while and are to be replaced, so there is no way to confirm communication with them. Otherwise, the remaining devices seem to be communicating as before.

I still could not establish communication with the XGS-600 controller using the serial port settings given in the manual, which also happen to work via a Serial to USB adapter, so I will revisit the problem later. My immediate plan is to do a Serial to Ethernet, then Ethernet to Serial, and then Serial to USB connection to see if the USB code still works. If it does, then at least I know the problem is not coming from the Serial to Ethernet adapters. Then I guess I will replace the controller with my laptop and see what signal comes through when I send a message via the IOLAN serial device server. Hopefully, I can discover what's wrong by this point.

 

Note to self: Before doing anything, do a sanity check by comparing the settings on the IOLAN SDS and the config settings that worked for the Serial to USB communication and post an elog for this for reference.

  16692   Wed Mar 2 11:50:39 2022   Tega | Update | VAC | Ongoing work to get the FRG gauges readouts to EPICs channels

Here is the IOLAN SDS TCP socket setting and the USBserial setting for comparison.

I have also included the python script and output from the USBserial test from earlier.

  16693   Wed Mar 2 12:40:08 2022   Tega | Update | VAC | Ongoing work to get the FRG gauges readouts to EPICs channels

Connector Test:

A quick test to rule out any issue with the Ethernet to Serial adapter was done using the setup shown in Attachment 1. The results rule out any connector problem.

 

IOLAN COMM test (as per Koji's suggestion):

The next step is to swap the controller with a laptop set up to receive serial commands using the same settings as the XGS600 controller. Basically, run a slightly modified version of the python script where we go into listening mode. Then send commands to the TCP socket on the IOLAN SDS unit from c1vac and check what data makes its way to the laptop's USB-serial terminal. After working on this for a bit, I realized that we do not need to do anything on the c1vac machine; we only need to start the service as it would normally run. So I wrote a small python script implementing a basic XGS-600 controller emulator, see Attachment 4. The outputs from the laptop and c1vac terminals are Attachments 5 and 6 respectively.

These results show that we can communicate via the assigned IP address "192.168.114.22" and that the commands sent from c1vac reach the laptop in the correct format. Furthermore, the serial_XGS service, part of the modbusIOC_XGS service, which usually exits with an error, seems fine now after successfully communicating with the laptop. I don't know why it did not die after the tests. I also found a bug in my code as a result of the test, where the status field for the fourth gauge didn't get written to.
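
For reference, a bare-bones way to watch the laptop's USB-serial side during a test like this, without a custom script, is pyserial's terminal; the device node and serial settings below are assumptions, so use the settings from the USBserial test above:

python -m serial.tools.miniterm /dev/ttyUSB0 9600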

 

Pressure reading issue:

I noticed that the pressure reading was not giving the atmospheric value of ~760 Torr as expected. Looking through my previous readouts, it seems the unit showed this atm value of ~761 Torr when the first gauge was attached. However, a closer look at the issue revealed a transient behavior, i.e. when the unit is turned on the reading dips to the atm value but eventually rises up to 1000 Torr. I don't think this is a calibration problem because 1000 Torr is the maximum value of the gauge range. I also found out that when the XGS controller has been running for a while, a power cycle does not show this transient behavior. So maybe a faulty capacitor somewhere? I have attached a short video clip that shows what happens when the XGS controller unit is turned on.

  16694   Wed Mar 2 14:02:43 2022   Yehonathan | Update | BHD | Re-suspension of AS1

Yesterday, I rebuilt the OpLev setup in the cleanroom in order to suspend AS1. It took me a while to find all the necessary parts but I found them in the end.

The HeNe laser was placed on the optical table and turned on. The beam was aimed to bounce off a folding mirror to the SOS tower.

The beam's height was controlled by the HeNe laser stage and made to be 5+14/32". The beam from the folding mirror was made parallel to the table, first with an iris and then with the QPD connected to a scope.

While preparing the SOS tower for the suspension, I noticed that the wire clamp was scratched on both sides from previous suspensions. I discarded that wire clamp but couldn't find the spares. Time ran out and I had to stop.

  16695   Thu Mar 3 04:11:36 2022   Koji | Update | SUS | ETMY 1Y4 Electronics Replacement

For the Y-end electronics replacement, we want to remove unused power supplies. In fact, we already removed the +/-5V supplies from the stack. I was checking which supply voltages are used by the Eurocard modules, and found that the D990399 QPD whitening board possibly uses +/-5V.

The 40m Y-end version can be found here D1400415. The +/-5V supply voltages are used at the input stage AD620 and the QPD bias voltage of -5V.

The AD620 can work with +/-15V, and the bias voltage can easily be -15V. So I decided to cut the connector legs and connect the +5V line to +15V and the -5V line to -15V.

With this modification, I can say that the eurocards only use the +/-15V voltages and nothing else.

The updated schematics can be found as D1400415-v6

  16696   Thu Mar 3 04:24:23 2022   Koji | Update | SUS | ETMY 1Y4 Electronics Replacement

The DC power strip at Y-end was connected to the bottom two Sorensen power supplies. They are configured to provide +/-18V.

 

  16698   Thu Mar 3 17:09:46 2022   Paco | Update | BHD | Re-suspension of AS1

[Anchal, Paco]

A spare wire clamp was installed. Furthermore, AS1 was reinstalled on its adapter, the wire clamps were attached, and the assembly was cleaned using the ionized air gun. Finally, we suspended it in the SOS tower and left it resting on the bottom earthquake stops, ready for balancing.

Quote:

Yesterday, I rebuilt the OpLev setup in the cleanroom in order to suspend AS1. It took me a while to find all the necessary parts but I found them in the end.

The HeNe laser was placed on the optical table and turned on. The beam was aimed to bounce off a folding mirror to the SOS tower.

The beam's height was controlled by the HeNe laser stage and made to be 5+14/32". The beam from the folding mirror was made parallel to the table, first with an iris and then with the QPD connected to a scope.

Preparing the SOS tower for the suspension I noticed that the wire clamp is scratched on both sides from previous suspensions. I discarded that wire clamp but couldn't find the spares. Time ran out and I had to stop.

 

  16699   Thu Mar 3 17:21:11 2022   Ian MacMillan | Update | SUS | ETMY 1Y4 Electronics Replacement

[Koji, Ian]

1) We attached the 30 ft coil driving cables from the vacuum feedthrough to the sat amp [40m wiki]; they run along the cable tray, then up and down into the rack.

2) We checked all DB and power cables. We found that the anti-imaging filter had a short and got very hot when plugged in: the back power indicator lights turned on fine but the front panel stayed off. We removed it and replaced it with the one that was on the test stand marked for the BHD. This means we need to fix the broken one, and Koji mentioned getting another one.

3) We reassigned the ADC and DAC channels in the iscey model and the asy model. We committed a version before we made any changes.

4) Finally, we tested the setup to make sure the ETM was being damped.

Next step:

1) Measure the change of the PD gains and the actuator gains. See the previous elog.

  16701   Fri Mar 4 18:12:44 2022   Koji | Update | VAC | RGA pumping down

1. Jordan reported that the newly installed Pirani gauge for P2 shows 850 Torr while PTP2 shows 680 Torr. Because of this, the vacuum interlock fails when we try to open V4.

2. Went to c1vac. Copied the interlock setting file interlock_conditions.yaml to interlock_conditions_220304.yaml
3. Deleted diffpressure line and pump_underspeed line for V4
4. Restarted the interlock service

controls@c1vac:/opt/target/python/interlocks$ sudo systemctl status interlock.service  
controls@c1vac:/opt/target/python/interlocks$ sudo systemctl restart interlock.service
controls@c1vac:/opt/target/python/interlocks$ sudo systemctl status interlock.service

5. Steps 2-4 above were unnecessary. Start over.


Let RP1/3 pump down TP1 section through the pump spool. Then let TP2 pump down TP1 and RGA.

1. Opened V7. This made P2 (P2 is alive) and P3 a bit lower.
2. Connected the main RP tube to the RP port.
3. Started RP1/3. PRP quickly reached 0.4 Torr.
4. Opened V6; this brought P3 and O2 below 1 Torr.
5. Closed V6. Shut down RP1/3. Disconnected the RP tube.
6. Turned on auxRP at the wall power.
7. Turned on TP2. Waited for it to start up.
8. Opened V4. Once the pressure was below the Pirani range, opened VM3.
9. Keep it running over the weekend.

10. Once TP2 reached the nominal speed, the "StandBy" button was clicked to lower the rotation speed (for longer life of TP2).

  16703   Sat Mar 5 02:03:46 2022   Koji | Update | SUS | ETMY 1Y4 Electronics Replacement

Oplev saga

Summary

- The new coil driver did not have enough alignment range to bring the oplev beam back to the QPD center
- The coil driver output R was reduced from 1.2k to 1.2k//100 = 92.3 +/- 0.4 Ohm
- Now the oplev spot could be moved to the center of the QPD

- The damping gains (POS/PIT/YAW) and the oplev gains were reduced to 1/10 of their previous values.
- The damping and the oplev servos work now. Fine gain tuning is necessary.

To Do:
- DC value / TF measurements
- Adjust damping gains
- RFM issue
- Connection check
- Cable labeling


== Alignment Range ==

- Since c1auxey was removed, we no longer have C1:SUS-ETMY_PIT_COMM and C1:SUS-ETMY_YAW_COMM. At this moment, all the alignment is taken with the offset input from the fast real-time system via C1:SUS-ETMY_PIT_OFFSET and C1:SUS-ETMY_YAW_OFFSET.

- The oplev spot could not be moved to the center of the QPD without exceeding the DAC output range (~±32000) for the coils. (Attachment 1)

- This is because the old system had a slow but large current range (Rout = 100) and a small current range for the fast control. Until we commission the new HV BIAS Driver, we have to deal with the large DC current with the HAM-A coil driver.

== Modification to the output resistances ==

The following units and channels were modified. Each channel had a differential current driver and two output resistances of 1.2k. 100 Ohm (OHMITE 43F100, 3W) wire-wound resistors were added to them in parallel, making the resulting output R ~92 Ohm.

- ETMY HAM A Coil Driver 1: S2100622 (Attachments 2/3) CH1/2/3
- ETMY HAM A Coil Driver 2: S2100621 (Attachments 4/5) CH3

- This modification allowed me to align the oplev spot to the center of the QPD. C1:SUS-ETMY_PIT_OFFSET and C1:SUS-ETMY_YAW_OFFSET are +2725 (8% FS) and -2341 (7% FS), respectively.
- The previous alignment slider values were -0.9392 and 0.7615 (out of 10). These are reasonable numbers, considering the change of the Rout from 100 to 92 Ohm, and the sign flip.
(By the way, autoBurt files for c1auxex and c1auxey were not properly configured and the history of C1:SUS-ETM*_*_COMM was not recorded.)

== Damping Servos ==

- Now, the POS/PIT/YAW servos experience ~x10 gains. So temporarily these gains were reduced (POS 20->2, PIT 6->0.6, YAW 4->0.4) and the loops are stable when engaged.
- Also the gains of the OPLEV servos were reduced from -4.5 to -0.45. The loops are stable when engaged.

== Snapshot of the working condition ==

Attachment 6 is a screenshot of the working condition.


To Do

- The damping servos were tested without proper PD whitening compensation.
  -> It turned out this is not necessary as our modified PD whitening has the pole and zero at the same freqs as before.

- Compare the DC values of the OSEM outputs and compensate for the gain increase by the "cts2um" filter.

- The end RTS suffers from the RFM issue. There is no data transmitted from the vertex to the end. I suspect we need to restart the c1rfm process. But this will likely suspend all the vertex real-time machines. Careful execution is necessary.

- c1iscey has all the necessary analog connections. But they are not tested. When we lock the green/IR cavity, we'll need them.

- The cable labeling is only half done.

  16704   Sun Mar 6 18:14:45 2022   Tega | Update | VAC | Ongoing work to get the FRG gauges readouts to EPICs channels

Following repeated failures to establish communication between c1vac and the XGS600 controller via the Perle IOLAN serial device server, I decided to monitor the signal voltage of the communication channels (pin #2, pin #3 and pin #5) using an oscilloscope. The result of this investigation is presented in the attached pdf document. In summary, it seems I have used a cross-wired RS232 serial cable instead of a normal RS232 serial cable, so the c1vac read request command is being relayed on the wrong comm channel (pin #2 instead of pin #3). I will swap out the cable to see if this resolves the problem.

  16705   Mon Mar 7 10:06:32 2022   Yehonathan | Update | IOO | IMC unlocked again, completely misaligned

Came this morning and saw that the IMC is unlocked.

Went into the MC Lock screen and saw that the watchdog is down and the PSL shutter is closed. I tried to open the shutter but nothing happened - no REFL signal or beam on the MC REFL camera.

Thinking this had something to do with the watchdog, I re-enabled the watchdog:

ezcawrite C1:SUS-MC2_LATCH_OFF 1

The watchdog on the MEDM screen became green but the shutter still seemed unresponsive. I went to the PSL table and made sure that the shutter was working. I opened the AS table and saw no MC REFL beam anywhere.

Thinking that MC1 must be completely misaligned, I opened the MC align screen to find that indeed all the alignment values had been zeroed! (attachment)

I burt-restored c1iooepics from Mar 4th 00:19. Didn't help.

I tried to burt-restore c1susepics from Mar 1st 13:19. Still zero.

I tried to burt-restore c1susaux from Mar 1st 00:19 -> it seems the alignment values have been restored.

I open the shutter. Beam is flying! MC watchdogs tripped! I close the shutter. OK, I need to wait until the MCs are damped enough. MC2 and MC3 have relaxed so I enable their watchdogs. MC1 is still swinging a bit. I turn on damping for MC1 as well.

 

The MC locked immediately but REFL is still high, around 1.2. Is that normal?

I turn on the WFSs and REFL goes down to 0.3, nice. I run the MC WFS relief script.

  16706   Mon Mar 7 13:53:40 2022   Tega | Update | VAC | Ongoing work to get the FRG gauges readouts to EPICs channels

So it appears that my deduction from the pictures that a cable swap was needed was correct; however, it turns out that the installed cable was actually a normal RS232 cable and what we need instead is an RS232 null-modem cable. After the swap was done, the communication between c1vac and the XGS600 controller became active. Although the data makes it all the way to c1vac without any issues, the scope view shows that it mainly uses the upper half of the voltage range, which is just over 50% of the available range. I don't know what to make of this.

 

I guess the only remaining issue now is the incorrect atmospheric pressure reading of 1000 Torr.

 

Quote:

Following repeated failure to establish communication between c1vac and the XGS600 controller via Perle IOLAN serial device server, I decided to monitor the signal voltage of the communication channels (pin#2, pin#3 and pin#5) using an oscilloscope. The result of this investigation is presented in the attached pdf document. In summary, it seems I have used a crossed wired RS232 serial cable instead of a normal RS232 serial cable, so the c1vac read request command is being relayed on the wrong comm channel (pin#2 instead of pin#3). I will swap out the cable to see if this resolves the problem.  

 

  16707   Mon Mar 7 14:52:34 2022   Koji | Update | VAC | Ongoing work to get the FRG gauges readouts to EPICs channels

Great troubleshooting!

> I guess, the only remaining issue now is the incorrect atmospheric pressure reading of 1000 Torrs. 

This is just a calibration issue. The controller should have the calibration function.
(The other Pirani showing 850 Torr was also a calibration issue, although I didn't bother to correct it. I think Piranis typically have a large distribution of calibration values and require individual calibration.)

  16708   Mon Mar 7 14:55:33 2022   Koji | Update | IOO | IMC unlocked again, completely misaligned

Hmm, the bias values were reset at 2022-03-03-20:01UTC which is 2022-03-03-12:01 PST with no apparent disruption of the data acquisition (= no resetting of the RTS). Not sure how this could happen.

 
