ID   Date   Author   Type   Category   Subject
  16488   Tue Nov 30 17:11:06 2021   Paco   Update   General   Moved white rack to 1X3.5

[Paco, Ian, Tega]

We moved the white rack (formerly unused, along the YARM) to a position between 1X3 and 1X4. For this task we temporarily removed the HEPAs near the enclosures, but have since restored them.

  16532   Wed Dec 22 14:57:05 2021   Koji   Update   General   chiara local backup

The chiara local backup of /cvs/cds has not been running since chiara was moved on Nov 19. The remote backup has not been taken since 2017.
The local backup was missing because of a misconfiguration of /etc/fstab.

This was fixed and the backup disk is now mounted. We'll see whether the backup script runs tomorrow morning.
The backup disk is smaller than the main disk, so sooner or later we will face the backup capacity problem again.


The localbackup script was complaining because the backup disk was not mounted.

backup>pwd
/opt/rtcds/caltech/c1/scripts/backup
backup>tail localbackup.log
2021-12-18 07:00:02,002 INFO       Updating backup image of /cvs/cds
2021-12-18 07:00:02,002 ERROR      External drive not mounted!!!
2021-12-19 07:00:01,146 INFO       Updating backup image of /cvs/cds
2021-12-19 07:00:01,146 ERROR      External drive not mounted!!!
2021-12-20 07:00:01,255 INFO       Updating backup image of /cvs/cds
2021-12-20 07:00:01,255 ERROR      External drive not mounted!!!
2021-12-21 07:00:01,361 INFO       Updating backup image of /cvs/cds
2021-12-21 07:00:01,361 ERROR      External drive not mounted!!!
2021-12-22 07:00:01,469 INFO       Updating backup image of /cvs/cds
2021-12-22 07:00:01,470 ERROR      External drive not mounted!!!

fstab had no entry for the backup disk.

backup>cat /etc/fstab
# /etc/fstab: static file system information.
#
# Use 'blkid -o value -s UUID' to print the universally unique identifier
# for a device; this may be used with UUID= as a more robust way to name
# devices that works even if disks are added and removed. See fstab(5).
#
# <file system> <mount point>   <type>  <options>       <dump>  <pass>
proc            /proc           proc    nodev,noexec,nosuid 0       0
# / was on /dev/sda1 during installation
UUID=972db769-4020-4b74-b943-9b868c26043a /               ext4    errors=remount-ro 0       1
# swap was on /dev/sda5 during installation
UUID=a3f5d977-72d7-47c9-a059-38633d16413e none            swap    sw              0       0

# OLD BACKUP DISK
#UUID="90a5c98a-22fb-4685-9c17-77ed07a5e000"    /media/40mBackup       ext4      defaults,relatime,commit=60       0         0

# CURRENT BACKUP DISK as of 2021/09/02
#UUID="1843f813-872b-44ff-9a4e-38b77976e8dc"    /media/40mBackup       ext4      defaults,relatime,commit=60       0         0

#fb:/frames      /frames nfs     ro,bg

# CURRENT MAIN DISK as of 2021/09/02
# UUID=92dc7073-bf4d-4c58-8052-63129ff5755b   /home/cds    ext4    defaults,relatime,commit=60    0   0
UUID="1843f813-872b-44ff-9a4e-38b77976e8dc"   /home/cds    ext4   defaults,relatime,commit=60    0   0

Checked the device names of the disks and their UUIDs:

backup>sudo lsblk
[sudo] password for controls:
NAME   MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
sda      8:0    0 465.8G  0 disk
├─sda1   8:1    0 446.9G  0 part /
├─sda2   8:2    0     1K  0 part
└─sda5   8:5    0  18.9G  0 part [SWAP]
sdb      8:16   0   5.5T  0 disk
└─sdb1   8:17   0   5.5T  0 part /home/cds
sdc      8:32   0   3.7T  0 disk
└─sdc1   8:33   0   3.7T  0 part
sr0     11:0    1  1024M  0 rom
backup> sudo blkid
/dev/sda1: UUID="972db769-4020-4b74-b943-9b868c26043a" TYPE="ext4"
/dev/sda5: UUID="a3f5d977-72d7-47c9-a059-38633d16413e" TYPE="swap"
/dev/sdb1: UUID="1843f813-872b-44ff-9a4e-38b77976e8dc" TYPE="ext4"
/dev/sdc1: UUID="92dc7073-bf4d-4c58-8052-63129ff5755b" TYPE="ext4"

Added the fstab entry for the backup disk

media>cat /etc/fstab
# /etc/fstab: static file system information.
#
# Use 'blkid -o value -s UUID' to print the universally unique identifier
# for a device; this may be used with UUID= as a more robust way to name
# devices that works even if disks are added and removed. See fstab(5).
#
# <file system> <mount point>   <type>  <options>       <dump>  <pass>
proc            /proc           proc    nodev,noexec,nosuid 0       0
# / was on /dev/sda1 during installation
UUID=972db769-4020-4b74-b943-9b868c26043a /               ext4    errors=remount-ro 0       1
# swap was on /dev/sda5 during installation
UUID=a3f5d977-72d7-47c9-a059-38633d16413e none            swap    sw              0       0

# OLD BACKUP DISK
#UUID="90a5c98a-22fb-4685-9c17-77ed07a5e000"    /media/40mBackup       ext4      defaults,relatime,commit=60       0         0

# OLD BACKUP DISK as of 2021/09/02
#UUID="1843f813-872b-44ff-9a4e-38b77976e8dc"    /media/40mBackup       ext4      defaults,relatime,commit=60       0         0

# Current backup disk as of 2021/12/22
UUID="92dc7073-bf4d-4c58-8052-63129ff5755b"    /media/40mBackup       ext4      defaults,relatime,commit=60       0         0

#fb:/frames      /frames nfs     ro,bg

# CURRENT MAIN DISK as of 2021/09/02
# UUID=92dc7073-bf4d-4c58-8052-63129ff5755b   /home/cds    ext4    defaults,relatime,commit=60    0   0
UUID="1843f813-872b-44ff-9a4e-38b77976e8dc"   /home/cds    ext4   defaults,relatime,commit=60    0   0
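For reference, with the corrected entry in place the backup disk can be mounted and checked without waiting for a reboot; a minimal sketch, using the mount point from the fstab above:

sudo mount /media/40mBackup
df -h /media/40mBackup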

  16535   Thu Dec 23 16:38:21 2021   Koji   Update   General   Is megatron down? (Re: chiara local backup)

The local backup seems to be working fine again. But I found that megatron is down, and this is a real issue. It should be fixed at the earliest opportunity.


It seems that the local backup has been successfully taken this morning.

controls@nodus|backup> tail /opt/rtcds/caltech/c1/scripts/backup/localbackup.log
2021-12-19 07:00:01,146 INFO       Updating backup image of /cvs/cds
2021-12-19 07:00:01,146 ERROR      External drive not mounted!!!
2021-12-20 07:00:01,255 INFO       Updating backup image of /cvs/cds
2021-12-20 07:00:01,255 ERROR      External drive not mounted!!!
2021-12-21 07:00:01,361 INFO       Updating backup image of /cvs/cds
2021-12-21 07:00:01,361 ERROR      External drive not mounted!!!
2021-12-22 07:00:01,469 INFO       Updating backup image of /cvs/cds
2021-12-22 07:00:01,470 ERROR      External drive not mounted!!!
2021-12-23 07:00:01,594 INFO       Updating backup image of /cvs/cds
2021-12-23 07:19:55,560 INFO       Backup rsync job ran successfully, transferred 338425 files.

However, I noticed that the autoburt has been stalled since Dec 6 (I usually check whether the backup is up to date using the autoburt snapshots).

Dec>pwd
/opt/rtcds/caltech/c1/burt/autoburt/snapshots/2021/Dec
Dec>ls -l
total 24
drwxr-xr-x 26 controls controls 4096 Dec  1 23:07 1
drwxr-xr-x 26 controls controls 4096 Dec  2 23:07 2
drwxr-xr-x 26 controls controls 4096 Dec  3 23:07 3
drwxr-xr-x 26 controls controls 4096 Dec  4 23:07 4
drwxr-xr-x 26 controls controls 4096 Dec  5 23:07 5
drwxr-xr-x 19 controls controls 4096 Dec  6 16:07 6

There are a bunch of errors in the log file as follows, but maybe this is not an issue

controls@nodus|burt> pwd
/opt/rtcds/caltech/c1/burt
controls@nodus|burt> tail burtcron.log
!!!  ERROR !!! Target c1supepics Snapshot file inconsistent with Request file
!!!  ERROR !!! Target c1tstepics Snapshot file inconsistent with Request file
!!!  ERROR !!! Target c1x10epics Snapshot file inconsistent with Request file
!!!  ERROR !!! Target c1aux Snapshot file inconsistent with Request file
!!!  ERROR !!! Target c1dcuepics Snapshot file inconsistent with Request file
!!!  ERROR !!! Target c1iscaux Snapshot file inconsistent with Request file
!!!  ERROR !!! Target c1iscepics Snapshot file inconsistent with Request file
!!!  ERROR !!! Target c1losepics Snapshot file inconsistent with Request file
!!!  ERROR !!! Target c1psl Snapshot file inconsistent with Request file
!!!  ERROR !!! Target c1susaux Snapshot file inconsistent with Request file

The real issue seems to be that megatron is down. It runs a lot of housekeeping jobs on cron, including the N2 pressure alert.
https://wiki-40m.ligo.caltech.edu/Computers_and_Scripts/CRON
This needs to be fixed at the earliest opportunity.

  16536   Fri Dec 24 16:49:41 2021   Koji   Update   General   Is megatron down? (Re: chiara local backup)

It turned out that the UPS installed on Nov 22 failed (cf https://nodus.ligo.caltech.edu:8081/40m/16479 ). In fact, it was alive for just 2 weeks!

The APC UPS unit indicated F06. According to the manual (https://www.apc.com/shop/us/en/products/APC-Power-Saving-Back-UPS-Pro-1000VA/P-BR1000G), F06 means "Relay Welding" and cannot be fixed by a user. Resetting the UPS eliminated the error, but since I didn't want to have the same issue while no one is in the lab, I moved the megatron power source from the UPS to the power strip on 1Y7. So megatron is currently vulnerable to a power glitch.

After the power cords were restored, megatron eventually recovered ssh terminals. I manually ran autoburt.cron at 16:50 so that the latest snapshot was taken.

  16646   Fri Feb 4 10:04:47 2022   Chub   Update   General   dish soap and clean scrub sponges!

Bought dish soap and scrub sponges today and placed them under the sink with the other dish supplies.

  16647   Fri Feb 4 10:21:39 2022   Anchal   Summary   General   Complete lab shutdown

Please edit this same entry throughout the day for the shutdown elogging.

I took a screenshot of C0VAC_MONITOR.adl to ensure that all pneumatic valves are in closed positions:

The status message says "All pneumatic valves closed" and the latest error message is about "V7 closed, N2 < 6.50e+01".

I found out that there was no autoburt happening for the c1vac channels. I created an autoBurt.req file for the vac.db file and saved one snapshot. I also added the path of this file to autoburt/.requestfilelist . Let's see if autoburt starts picking up this file as well.

With this, I think we can safely shut down the Acromag chassis. Hopefully, the relays are configured such that the valves are nominally closed in the absence of a control signal. After the chassis is shut down, we can shut down c1vac by:

sudo shutdown

[Chub, Jordan]

At the 1x8 rack, the following were switched off on their respective front panels:

PTP2 & PTP3 Controller
MKS Gauge controller
PRP Gauge Controller
G2P316a & b Controllers
Sorenson
Serial Device Server
Both UPS's

Powered off from back of unit:

TP1 Controller
Acromag chassis

TP2 and 3 controllers were unplugged from respective power strips (labeled C2 and C3)

C1vac and the laptop at the workstation were shut down

Manual Gate valve was closed

  16648   Mon Feb 7 09:00:26 2022   Paco   Update   General   Scheduled power outage recovery

[Paco]

Started recovering from scheduled (Feb 05) power outage. Basically, time-reversing through this list.


== Office area ==

  • Power martian network switches, WiFi routers on the north-rack.
  • Power windows (CAD) machine on.

== Main network stations ==

  • Power on nodus, try ping (fail).
  • Power on network switches, try ping (success), try ssh controls@nodus.ligo.caltech.edu (success).
  • Power on chiara to serve names for other stations, try ssh chiara (success).
  • Power on fb1, try ping (success), try ssh fb1 (success).
  • Power on paola (xend laptop), viviana (yend laptop), optimus, megatron.

== Control workstations ==

  • Power on zita (success)
  • Power on giada (success), run system upgrade.
  • Power on donatella (success)
  • Power on allegra (fail)  **
  • Power on pianosa (success)
  • Power on rossa (success)
  • From nodus, started elog (success).

== PSL + Vertex instruments ==

  • Turn on the Newport PD power supplies on the PSL table.
  • Turn on the TC200 temp controller (setpoint --> 36.9 C)
  • Turn on the two oscilloscopes on the PSL table.
  • Turn on the PSL (current setpoint --> 2.1 A, other settings seem nominal)
  • Turn on the Thorlabs HV pzt supply.
  • Turn on the ITMX OpLev / laser instrument AC strip.

== YEND and XEND instruments ==

  • Turn on the XEND AUX pump (current setpoint --> 1.984 A)
  • Turn on the XEND AUX SHG oven (setpoint --> 37.1 C) (see green beam)
  • Turn on the XEND AUX shutter controller.
  • Turn on the DCPD supply and OpLev supply AC strip.
  • Turn on the YEND AUX pump (fail) *
    • With the controller on STDBY, I tried setting the current but got HD FAULT (according to the manual, this is what the head reports when the diode temperature is too high...)
    • Upon power cycling the controller, even the controller display stopped working... YAUX controller + head died? maybe just the diode? maybe just the controller?
      • I borrowed a spare LW125 controller from the PSL table (Yehonathan pointed me to it) and swapped it in.
      • Got YEND AUX to lase with this controller, so the old controller is busted but at least the laser head is fine.
      • Even saw SHG light. We switched the laser head to "STDBY" (so it remains warm) and took the faulty controller out of there.
  • Turn on the YEND AUX SHG oven (setpoint --> 35.7 C)
  • Turn on the YEND AUX shutter controller.

== YARM Electronic racks ==

== XARM Electronic racks ==

 


* Top priority, this needs to be fixed.

** Non-priority, but to be debugged

  16649   Mon Feb 7 15:32:48 2022   Yehonathan   Update   General   Y End laser controller

I went to the Y end. The AUX laser was on Standby. I pushed the Standby button. The laser turned on and there was some green light. However, the controller displayed the message "CABLE?" which according to the manual means that the laser head is powered but there is no control over the laser (e.g. the control cable is disconnected). I turned off the controller and disconnected both the power and control cables. I put them back and turned the controller back on.

I pushed the Standby button, the laser turned on and this time the controller displayed the laserhead's state. I was able to change the current/temperature. The problem seems to be resolved.

  16651   Mon Feb 7 16:53:02 2022   Koji   Update   General   Scheduled power outage recovery

I went to the X end and found it was warm. It turned out that not all the A/Cs were on. They have now been turned back on.

  16652   Wed Feb 9 11:56:24 2022   Anchal   Update   General   Bringing back CDS

[Anchal, Paco]

Bringing back CDS took a lot of work yesterday. I'm gonna try to summarize the main points here.


mx_start_stop

For some reason, fb1 was not able to mount the mx devices automatically on system boot. This was an issue I had faced earlier on fb1(clone) too. The fix for this problem is to run the script:

controls@fb1:/opt/mx/sbin/mx_start_stop start

To make this persistent, I've configured a systemd service (/etc/systemd/system/mx_start_stop.service) on fb1 to run once on system boot and mount the mx devices as mentioned above. We did not see this issue on later reboots yesterday.
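For reference, a minimal sketch of what such a oneshot unit could look like (the actual file on fb1 may differ; only the ExecStart path comes from the command above):

[Unit]
Description=Start Open-MX devices at boot
After=network.target

[Service]
Type=oneshot
ExecStart=/opt/mx/sbin/mx_start_stop start
RemainAfterExit=yes

[Install]
WantedBy=multi-user.target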


gpstime

Next was the issue of the gpstime module being out of date on fb1. This issue has also been seen in the past and requires us to do the following:

controls@fb1:~ 0$ sudo modprobe -r gpstime
controls@fb1:~ 1$ sudo modprobe gpstime

Again, to make this persistent, I've configured a systemd service (/etc/systemd/system/re-add-gpstime.service) on fb1 to run the above commands once on system boot. This corrected gpstime automatically and we did not face these problems again.
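A minimal sketch of such a unit, analogous to the one above (again, the actual file on fb1 may differ; the two modprobe commands are the ones run above):

[Unit]
Description=Reload gpstime kernel module at boot
After=systemd-modules-load.service

[Service]
Type=oneshot
ExecStart=/sbin/modprobe -r gpstime
ExecStart=/sbin/modprobe gpstime
RemainAfterExit=yes

[Install]
WantedBy=multi-user.target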


time synchronization

Later we found that NTP time synchronization between fb1 and the FE computers was not working, and the main reason was that fb1 was unable to access the internet. As a rule of thumb, it is always a good idea to try pinging www.google.com on fb1 to ensure that it is connected to the internet. The issue had to do with fb1 not being able to find any nameserver. We fixed this issue by reloading the bind9 service on chiara a couple of times. We're not really sure why it wasn't working.

~>sudo service bind9 stop
~>sudo service bind9 start
~>sudo service bind9 status
* bind9 is running

After the above, we saw that the fb1 NTP server was working fine. You see the following output on fb1 when that is the case:

controls@fb1:~ 0$ ntpq -p
     remote           refid      st t when poll reach   delay   offset  jitter
==============================================================================
-table-moral.bnr 110.142.180.39   2 u  399  512  377  195.034  -14.618   0.122
*server1.quickdr .GPS.            1 u   67   64  377  130.483   -1.621   1.077
+ntp2.tecnico.ul 56.99.239.27     2 u  473  512  377  184.648   -0.775   2.231
+schattenbahnhof 129.69.1.153     2 u  365  512  377  144.848    3.841   1.092
 192.168.123.255 .BCST.          16 u    -   64    0    0.000    0.000   0.000

On the FE machines, timedatectl should show that the "NTP synchronized" field is yes. That wasn't happening even after we restarted the systemd-timesyncd service. In the end, I just tried restarting all FE computers and it started working.


CDS

We had removed all DB9 enabling plugs on the new SOSs beforehand to keep the coils off, just in case CDS did not come back online properly.

Everything in CDS loaded properly except the c1oaf model, which kept showing a 0x2bad status. This means that some IPC flags are red on c1sus, c1mcs and c1lsc as well, but everything else is green. See attachment 1. I then burt-restored everything in the /opt/rtcds/caltech/c1/burt/autoburt/snapshots/2022/Feb/4/12:19 directory. This includes the snapshot of c1vac that I added to autoburt that day. All burt restore statuses were green OK. I think we are in a good state now to start the watchdogs on the new SOSs and put back the DB9 enabling plugs.


Future work:

When somebody gets time, we should make the custom service files in fb1:/etc/systemd/system/ symbolic links to a repo directory and version control these important services. We should also make sure that their dependencies and startup order are correctly configured. I might have done a half-assed job there since I recently learned how to make unit files. We should do the same on nodus and chiara too. Our hope is that on one glorious day, the lab can be restarted without spending more than 20 min on booting up the computers and network.
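One possible way to do this, as a sketch (the repo path below is hypothetical):

cd /opt/rtcds/caltech/c1/scripts/services    # hypothetical version-controlled directory holding the unit files
sudo ln -s $PWD/mx_start_stop.service /etc/systemd/system/mx_start_stop.service
sudo systemctl daemon-reload
# then enable the unit as usual; on some systemd versions, "systemctl link <path>" is the cleaner way to register an out-of-tree unit file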

 

  16653   Wed Feb 9 13:55:05 2022   Koji   Update   General   Bringing back CDS

Great recovery work and cleaning of the rebooting process.

I'm just curious: did you observe that the c1sus2 cards have a different numbering order than before, following the power outage/cycling?

  16655   Wed Feb 9 16:43:35 2022   Paco   Update   General   Scheduled power outage recovery - Locking mode cleaner(s)

[Paco, Anchal]

  • We went in and measured the power after the power splitting HWP at the PSL table. Almost right before the PSL shutter (which was closed), when the PMC was locked we saw ~ 598 mW (!!)
  • Checking back on ESP300, it seems the channel was not enabled even though the right angle was punched in, so it got enabled.
    • No change.
  • The power adjustment MEDM screen is not really working...
  • Going back to the controller, press HOME on the Axis 1 (our HWP) and see it go to zero...
    • Now the power measured is ~ 78 mW.
  • Not sure why the MEDM screen didn't really work (this needs to be fixed later)

We proceeded to align the MC optics because all offsets in the MC_ALIGN screen were zeroed. After opening the PSL shutter, we used values from last year as a reference and tried to steadily recover the alignment. The IMC lock remains at large.

  16657   Thu Feb 10 15:41:00 2022   Anchal   Update   General   Scheduled power outage recovery - Locking mode cleaner(s)

I found out that the ESP300 service needs to be run as root for it to be able to connect to the USB port of the HWP motor controller. While making this change, I noticed that the channels hosted by c1psl might have a duplication conflict with some other channel-hosting computer, because a lot of them show the warning "Identical process variable names on multiple servers", which is not good. Someone should look into this conflict.

I added instructions to the power control MEDM screen as it was very non-trivial to use. I have set the power such that C1:IOO-MC_RFPD_DCMON is 5.6, which happened at C1:IOO-HWP_POS_SET 2.29.

  16658   Thu Feb 10 17:57:48 2022   Anchal   Update   General   Scheduled power outage recovery - Locking mode cleaner(s)

Something is wrong with the Video MUX. The system did not turn back on with full functionality. Even though we see the screens as they were before the power shutdown, we have lost the ability to switch any of the videos. I went to check the wiki page about the Video MUX, which told me we should be able to see the configuration screen at this link, but the page wasn't opening. I went and removed the power cable and put it back in. That brought back the configuration page. Still, I could not change any of the video feeds; however, this time I could see the EPICS channel values (like C1:VID-QUAD1_4) change. I tried to go to the configuration page and change the matrix values from the control tab there. I found out that the matrix was mislabeled, and while making the changes, I started seeing a blue screen on QUAD1_3 (where MC2T was set before). I set QUAD1_3 (output 23) to MC2T (input 16), but no change. The EPICS values are also set properly, so I don't understand the reason behind the blue screen. The same happened when I tried to use:

~>/opt/rtcds/caltech/c1/scripts/general/videoscripts videoswitch3 QUAD1_3 MC2T

Weirdly, this caused the QUAD1_4 screen to go blue. Running the following had no effect:

~>/opt/rtcds/caltech/c1/scripts/general/videoscripts videoswitch3 QUAD1_4 MCR

So, I'm not sure what to do. This really needs to be fixed! I wanted to see the MC2F camera so that I can align the IMC; that was the whole reason for this rabbit hole. Help needed.

  16659   Thu Feb 10 19:03:23 2022   Koji   Update   General   Scheduled power outage recovery - Locking mode cleaner(s)

I came back to the 40m and started the investigation.

If I ping 192.168.113.92, it responds. But telnet (port 23) was rejected. I somehow tried ssh and it responds! I could even log in to the host using the usual password. Here is the prompt.

controls@nodus|~> ssh 192.168.113.92
controls@192.168.113.92's password:

...
controls@c1sus2:~ 0$

Oh no...

Looks like c1sus2 and the videomux have an IP address conflict.

Here are the useful ELOG links:

https://nodus.ligo.caltech.edu:8081/40m/4498

https://nodus.ligo.caltech.edu:8081/40m/4529

  16660   Thu Feb 10 19:46:37 2022   Koji   Update   General   Scheduled power outage recovery - Locking mode cleaner(s)

== Assign new IP address to c1sus2 ==

cf: [40m ELOG 16398] [40m ELOG 16396]

- Shutdown c1sus2 (Oh, no. This killed c1lsc/c1sus/c1ioo... This should be taken care of later)

- Confirmed 192.168.113.87 is not alive

- Go to chiara
- Modify /diskless/root/etc/hosts

192.168.113.87  c1sus2 c1sus2.martian

- Modify /etc/dhcp/dhcpd.conf

host c1sus2 {
  hardware ethernet 00:25:90:06:69:C2;
  fixed-address 192.168.113.87;
}

- Modify /var/lib/bind/martian.hosts

c1sus2          A    192.168.113.87
videomux        A    192.168.113.92

- Modify /var/lib/bind/martian.hosts/rev.113.168.192.in-addr.arpa

87            PTR    c1sus2.martian
92            PTR    videomux.martian

- Reload/restart bind9 / dhcpd. Run the following command

sudo service bind9 reload
sudo service isc-dhcp-server restart

- Restart c1sus2 and confirm if the IP address was actually changed

controls@c1sus2:~ 0$ /sbin/ifconfig
eth0      Link encap:Ethernet  HWaddr 00:25:90:06:69:c2
          inet addr:192.168.113.87  Bcast:192.168.113.255  Mask:255.255.255.0
...

== Restart c1lsc / c1sus /c1ioo ==

- Reboot c1lsc/c1sus/c1ioo

- Go to scripts/cds

- Run startC1LSC.sh and follow the instruction

 

  16661   Thu Feb 10 21:10:43 2022   Koji   Update   General   Video Mux setting reset

Now the video matrix is responding correctly and the web interface shows up. (Attachment 1)

Also the video buttons respond as usual. I pushed Locking Template button to bring the setting back to nominal. (Attachment 2)

  16665   Fri Feb 11 11:17:00 2022   Anchal   Update   General   Scheduled power outage recovery

I found that two computers in the control room are not powering up: Ottavia and Allegra. Allegra was important for us as it had the current version of the LIGO CDS workstation environment installed on it, providing us with the option to use the latest packages written by the LIGO CDS team. I think the power issue should be resolvable if someone opens it up and knows what they are doing. Do we have any way of getting fuse repairs on such computers? Both these computers are Dell XPS 420.

 

  16667   Fri Feb 11 16:09:11 2022   Anchal   Update   General   Scheduled power outage recovery - Input power increased

We increased the input power to the IMC by replacing the 98% transmission BS with a 10% transmission BS on the detection table (the reverse of what is mentioned in 40m/16408; see attachments 8-9). We then realigned the BS so that the beam on the MC RFPD is centered. Then we realigned two steering mirrors to get the beam centered on the WFS1 and WFS2 QPDs. Then we increased the power of the input beam to get a 5.307 reading on the C1:IOO-MC_RFPD_DCMON channel. We did this so that we can align the IMC. Once we have it aligned, we'll go back to low power for doing chamber work.

Beware, there is about 1W beam on the detection table right now.

 

  16669   Mon Feb 14 18:31:50 2022   Paco   Update   General   Scheduled power outage recovery - IMC recovery progress

[Paco, Anchal, Tega]

We have been realigning the IMC since last Friday (02/11). Today we made some significant progress (still at high input power), but the IMC autolocker is unable to engage a stable mode lock. We have made some changes to reach this point, including re-centering of the MC1 REFL beam on the CCD, centering of the MC2 trans QPD (using flashes), and centering of the MC REFL RFPD beam. The IMC is flashing to a peak transmission of > 50% of its max (near 14,000 counts average in 2021), and all PDs seem to be working ok... We will keep the PSL shutter closed (especially with high input power) for now.

  16670   Mon Feb 14 18:43:49 2022   Paco   Summary   General   SOS materials clean room cleared

[Yehonathan, Paco]

We put away most items used / involved in SOS assembly and characterization. Many were stored in the left-most cabinet in the clean area. The OpLev test setup and optics were stored in the upper cabinets above the microscope area, and several screws and other general components were collected in clean bags or wrapped in foil, labeled and put away.

  16671   Mon Feb 14 21:03:25 2022   Koji   Update   General   Scheduled power outage recovery

I opened the boxes. Allegra has obvious venting of at least 4 caps, and the power supply did not respond even when a paper-clip test was performed. https://www.silverstonetek.com/downloads/QA/PSU/PSU-Paper%20Clip-EN.pdf (Paper Clip Test)
=> The motherboard and the PSU are dead.

Then Ottavia was also checked. The motherboard looked OK, but the PSU did not respond. I quickly opened the PSU and it had a bunch of bulged capacitors in it. => PSU dead

Conclusion: Save the cards/memory etc. as much as possible. Migrate the Allegra HDD to any other healthy PC or obtain another used PC from Larry. Otherwise, we just want to buy another WS and copy the disk into it.

 

  16672   Tue Feb 15 19:32:50 2022   Koji   Update   General   Scheduled power outage recovery - IMC recovery progress

Reduced the IMC power to 100mW

Setup: The power meter was placed right before the final aperture (Attachment 1)

Before the adjustment: Initial position of the HWP was 37.29deg and the input power was 987mW (Attachments 2/3)

After the adjustment: The position of the HWP was 74.00deg and the input power was 100mW (Attachments 4/5)

This made the MCREFL reading 0.549.

The MC REFL path optics have not been modified.

  16673   Tue Feb 15 19:40:02 2022   Koji   Update   General   IMC locking

The IMC is locking now. There was nothing wrong: just a careful alignment + proper gain adjustment.

=== Primary Alignment ===

- I used the WFS error signals as the indicator of the PDH error signal. Checked C1:IOO-WFS1_(I/Q)n_ERR and ended up using C1:IOO-WFS1_I4_ERR as it showed the largest PDH error peak-to-peak.

- Then used MC2 and MC3 to align the IMC by maximizing the PDH error and the MC trans (C1:IOO-MC_TRANS_SUM_ERR)

=== Locking procedure ===

Note that the MC REFL path is still configured for the full power input

- (Only at the beginning) Run scripts/MC/mcdown for initialization / Run scripts/MC/MC2tickleOFF just in case

- Enable IOO-MC-SW1 (MC SERVO switch right after "IN1 Gain (dB)").
- Disable 40:4000 boost
- Increase VCO Gain from -15 to 0
- Jiggle IN1 Gain from low to +31 until the lock is achieved

- As soon as the lock is acquired, enable 40:4000
- Increase VCO Gain to +10
- Turn up "SUPER BOOST" from 0 to 3

=== Lock loss procedure ===

Note that the MC REFL path is still configured for the full power input

- Disable IOO-MC-SW1
- Disable 40:4000 boost
- Reduce VCO Gain to 0
- Turn down "SUPER BOOST" to 0

- Then jiggle IN1 Gain again to lock the IMC

=== MC2 spot ===

- It was obvious that the MC2F spot was not on the center of the optic.
- I tried to move the spot on the camera as much as possible, but this did not bring the trans beam to the center of the MC end QPD.
- I had the impression that the trans beam started to be clipped when the beam was moved towards the end QPD.

We need to reestablish the reasonable/consistent MC2 spot on the mirror, the MC end optics, and the QPD.
We will need to use MC2 dithering and A2L coupling to determine the center of the mirror

But as long as the transmission is maximized, the transmitted beam thru MC1 and MC3 follows the input beam. So we can continue the vent work

The current maximized transmission was ~1300. MC1 refl CCD view was largely off -> The camera path was adjusted.

=== MC2 alignment note ===

During the alignment, I noticed a sudden change of the MC2 alignment. There might be some hysteresis in the MC2 suspension. If you are locking the IMC and notice significant misalignment, the first thing to try is touching the MC2 alignment.

  16674   Wed Feb 16 15:19:41 2022   Anchal   Update   General   Reconfigured MC reflection path for low power

I reconfigured the MC reflection path for low power. This meant the following changes:

  • Replaced the 10% reflection BS by 98% reflection beam splitter
  • Realigned the BS angle to get maximum on C1:IOO-MC_RFPD_DCMON when cavity is unlocked.
  • Then realigned the steering mirrors for WFS1 and WFS2.
  • I tried to align the light for MC reflection CCD but then I realized that the pickoff for the camera is too low for it to be able to see anything.

Note: I think even the pick-off for WFS1 and WFS2 is too low. The IOO WFS alignment does not work properly for such low light levels. I tried running the WFS loop for the IMC and it just took the cavity out of lock. So for the low-power scenario, we will keep the WFS loops OFF.

 

  16676   Wed Feb 23 15:08:57 2022   Anchal   Update   General   Removed extra beamsplitter in MC WFS path

As discussed in the meeting, I removed the extra beam splitter that dumps most of the beam going towards the WFS photodiodes. This beam splitter needs to be placed back in position before increasing the input power to the IMC to the nominal level. The removal is to get sufficient light on the WFS photodiodes so that we can keep the IMC locked for more than 3 days. Currently the IMC is unlocked and misaligned. I have marked the position of this beam splitter on the table, so putting it back in should be easy. Right now, I'm trying to realign the mode cleaner and will start the WFS loops once we get it locked.

  16677   Thu Feb 24 14:32:57 2022   Anchal   Update   General   MC RFPD DCMON channel got stuck to 0

I found a peculiar issue today. C1:IOO-MC_RFPD_DCMON remains constantly at 0. I wondered if the RFPD output was being read properly. I opened the table and used an oscilloscope to confirm that the DC output from the MC REFL photodiode is coming out consistently, but our EPICS channel is not reading it. I tried restarting the modbusIOC service but that did not affect anything. I power cycled the Acromag chassis while keeping the modbusIOC service off, and then restarted the modbusIOC service. After this, I saw more channels get stuck and become unresponsive, including the PMC channels. So then I rebooted c1psl without doing anything to the Acromag chassis, and finally things came back online. Everything looks normal to me now, but I'm not sure if one of the many channels is not in the right state. Anyway, the problem is solved now.
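For reference, the restart sequence described above amounts to something like the following on c1psl (a sketch; the exact unit name is whatever the modbusIOC service is called on that machine):

sudo systemctl stop modbusIOC.service
# power cycle the Acromag chassis here (or skip that and simply reboot c1psl)
sudo systemctl start modbusIOC.service
systemctl status modbusIOC.service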

 

  16679   Thu Feb 24 19:26:32 2022   Anchal   Update   General   IMC Locking

I think I have aligned the cavity, including MC1, such that we are seeing flashing of the fundamental mode and a significant transmission sum value as well. However, I'm unable to catch lock following Koji's method in 40m/16673. The autolocker could not catch lock either. Maybe I am doing something wrong; I'll pick up again tomorrow. Hopefully the cavity won't drift too much in the meantime.

  16685   Sun Feb 27 00:37:00 2022   Koji   Update   General   IMC Locking Recovery

Summary:

- IMC was locked.
- Some alignment change in the output optics.
- The WFS servos working fine now.
- You need to follow the proper alignment procedure to recover the good alignment condition.

Locking:
- Basically followed the previous procedure 40m/16673.
- The autolocker was turned off. Used MC2 and MC3 for the alignment.
- Once I hit the low order modes, increased the IN1 gain to acquire the lock. This helped me to bring the alignment to TEM00
- Found the MC2 spot was way too off in pitch and yaw.
- Moved MC1/2/3 to bring the MC2 spot around the center of the mirror.
- Found a reasonably good visibility (<90%) at a MC2 spot. Decided this to be the reference (at least for now)

SP Table Alignment Work
- Went to the SP table and aligned the WFS1/2 spots.
- I saw no spot on the camera. Found that the beam for the camera was way too weak, and a PO mirror was not enough to bring the spot onto the CCD.
- So, instead, I decided to catch an AR reflection of the 90% mirror. (See Attachment 1)
- This made the CCD vulnerable to the stronger incident beam to the IMC. Work on the CCD path before increasing the incident power.

MC2 end table alignment work
- I knew that the focusing lens there and the end QPD had inconsistent alignment.
- The true MC2 spot needs to be optimized with A2L (and noise analysis / transmitted beam power analysis / etc)
- So, just aligned the QPD spot using today's beam as the temporary target of the MC alignment. (See Attachment 2)

Resulting CCD image on the quad display (Attachment 3)

WFS Servo
- To activate the WFS with the low transmitted power, the trigger threshold was reduced from 5000 to 500. (See Attachment 4)
- WFS offset was reset with /opt/rtcds/caltech/c1/scripts/MC/WFS/WFS_RF_offsets
- Resulting working state looks like Attachment 5

  16686   Sun Feb 27 01:12:46 2022   Koji   Update   General   IMC manual alignment procedure

We expect that the MC sus are susceptible to the temperature change and the alignment drifts away with time.

Here is the proper alignment procedure.

0) Assume there is no TEM00 flash or locking, but the IMC is still flashing with higher-order modes.

1) Use the CCD camera and WFS DC spots to bring the beam to the nominal position.

2) Use only MC2 and MC3 to align the cavity to have low-order modes (TEM00,01,02 etc)

3) You should be able to lock the cavity on one of these modes. Minimize the reflection (maximize the transmission) for that mode.

4) This should allow you to jump to a better lower-order mode. Continue alignment optimization only with MC2/3 until you get TEM00.

5) Optimize the TEM00 alignment only with MC2/3

6) Look at the MC end QPD. Use one of the scripts in scripts/MC/moveMC2 . Note that the spot moves opposite to the name of the script, i.e. MC2_spot_down moves the spot up, MC2_spot_right moves the spot left, etc...
These scripts move MC1/2/3 and try to keep the good MC transmission.

7) The moveMC2 scripts are not perfect. As you use them, the MC alignment gradually degrades. Use MC2 and MC3 to recover good transmission.

8) If MC2 spot is satisfactory, you are done.

-------------

Steps 6-8 can be done with the WFS on. This way, you can skip step 7 as the WFS servo takes care of it. But if the spot moves too fast, the servo can't keep up with the change; if so, you have to wait for the servo to settle. Once the spot position is satisfactory, the MC servo relief should be run so that the servo offset (in actuation) can be offloaded to the bias sliders.

 

  16725   Tue Mar 15 10:45:31 2022   Paco   Update   General   Assembled small in-vac optics

[Paco]

This morning I assembled LO3, LO4 and AS3 (all mirrors) onto Polaris K1 mounts. The mounts stand, as per this elog, on 4.5" posts with 0.5" Al spacers to match the beam height of 5.5". I also assembled ASL by adding a 0.14" Al spacer, and finally, recycled two DLC mounts (from the XEND flowbench) and posts to mount the 2 inch diameter beamsplitters BHDBS and AS2 (T=10%). I stored the previous 2" optics in the CVI and Lambda optic cases and labeled them appropriately.

  16775   Wed Apr 13 16:23:54 2022   Ian MacMillan   Update   General   Smell in 40m

[Ian, Paco, JC]

There is a strange smell in the 40m. It smells like a chemical burning smell, maybe like a shorted component. I went around with the IR camera to see if anything was unusually hot, but I didn't see anything. The smell seems to be concentrated at the vertex and down the Y-arm.

  16784   Mon Apr 18 15:17:31 2022   Jancarlo   Update   General   Tool box and Work Station Organization

I cleaned up around the 40 m lab. All the Laser Safety Glasses have been picked up and placed on the rack at the entrance.

Some miscellaneous BNC Connector cables have been arranged and organized along the wall parallel to the Y-Tunnel.

Nitrogen tanks have been swapped out. Current tank is at 1200 psi and the other is at 1850 psi.

The tool box has been organized with each tool in its specified area.

  16787   Mon Apr 18 23:22:39 2022   Koji   Update   General   Tool box and Work Station Organization

Whoa! Thanks!

  16808   Mon Apr 25 14:19:51 2022   JC   Update   General   Nitrogen Tank

Coming in this morning, I checked the level of the nitrogen tanks. One of the tanks was empty, so I went ahead and swapped it out. One tank is at 946 PSI, the other is at 2573 PSI. I checked for leaks and found none.

  16809   Mon Apr 25 14:49:02 2022   Koji   Update   General   Nitrogen Tank

For your (and my) info:

N2 pressure can be monitored on the 40m summary page: https://nodus.ligo.caltech.edu:30889/detcharsummary/day/20220425/vacuum/
(you need to hit "today" to go to the current status)

 

  16901   Wed Jun 8 16:33:26 2022   Koji   Update   General   Power Outage 220608: HVAC restored

I found the HVACs for the ends were off. They were turned back on.

  16921   Wed Jun 15 17:12:39 2022   Cici   Summary   General   Preparation for AUX Loop Characterization

[Deeksha, Cici]

We went to the end Xarm station and looked at the green laser setup and electronics. We fiddled with the SR-785 and experimented with low-pass filters, and will be exploring the Python script tomorrow.

  16926   Thu Jun 16 19:49:48 2022   Cici   Update   General   Using the SR785

[Deeksha, Cici]

We used a python script to collect data from the SR785 remotely. The SR785 is now connected to the wifi network via Ethernet port 7.

  16933   Tue Jun 21 14:59:22 2022   Cici   Summary   General   AUX Transfer Function Loop Exploration

[Deeksha, Cici]

We learned about the auxiliary laser control loop, and then went into the lab to identify the components and cables represented by our transfer functions. We connected to the SR785 inside the lab so that we can use it to inject noise next time, and measure the output at various points in the control loop.

  16944   Fri Jun 24 13:29:37 2022   Yehonathan   Update   General   OSEMs from KAGRA

The box was given to Juan Gamez (SURF)

Quote:

I put the box containing the untested OSEMs from KAGRA near the south flow bench on the floor.

 

  16950   Mon Jun 27 13:25:50 2022   Cici   Update   General   Characterizing the Transfer Loop

[Deeksha, Cici]

We first took data of a simple low-pass filter, and attempted to fit both the magnitude and phase in order to find the impedance Z of the components. Once we felt confident in our ability to measure transfer functions, we took data and plotted the transfer function of the existing control loop of the AUX laser. What we found generally followed the trend of, but was lower than, 10^4/f, which is what we hoped to match, and also had a strange unexplained notch at ~1.3 kHz. The magnitude and phase data both got worse above around 40-50 kHz, which we believe is because the laser came out of lock near the end of the run.

Edit: 

Attachments 2 and 3 are the frequency response of the low-pass filter, with curves fitted using least squares in Python.

Attachments 1 and 4 are the same kind of measurement, i.e. the OLTF of the actual AUX loop, and the control diagram pointing out the locations of the excitation and test point.
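A minimal sketch of the kind of least-squares fit described above, for a single-pole low-pass response H(f) = 1/(1 + i f/fc); the data file name and column layout are assumptions:

import numpy as np
from scipy.optimize import curve_fit

# columns assumed: frequency [Hz], magnitude [dB], phase [deg]
f, mag_db, ph_deg = np.loadtxt("lowpass_tf.txt", unpack=True)   # hypothetical file
H_meas = 10**(mag_db / 20) * np.exp(1j * np.deg2rad(ph_deg))

def model(f, fc):
    # real and imaginary parts stacked, so magnitude and phase are both used in the fit
    H = 1.0 / (1.0 + 1j * f / fc)
    return np.concatenate([H.real, H.imag])

popt, pcov = curve_fit(model, f, np.concatenate([H_meas.real, H_meas.imag]), p0=[1e3])
print("fitted pole frequency: %.1f Hz +/- %.1f Hz" % (popt[0], np.sqrt(pcov[0, 0])))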

  16953   Tue Jun 28 09:03:58 2022   JC   Update   General   Organizing and Cleaning

The plan for the tools in 40m

As of right now, there are 4 toolboxes: X-end, Y-end, Vertex, and the main toolbox along the X-arm. The plan is to give each toolbox its own set of tools. The X-end, Y-end, and Vertex toolboxes will be very similar, containing basic tools such as pliers, screwdrivers, and Allen ball drivers. Along with this, each toolbox will have a tape measure, caliper, level, and other measuring tools we find convenient.

As for the new toolbox, I have done some research and found a few good selections. The only problem I have run into is how the width of the toolbox corresponds with the price. The tool cabinet we have now is 41" wide. The issue is not in finding another toolbox of the same width, but that for a similar price we can find a 54" wide tool cabinet. Would anyone object to making a bit more space for this?

How the tools will stay organized.

The original idea I had was to use a specific color of electrical tape for each toolbox, and then wrap the corresponding tools with the same color tape. But it was brought to my attention that the electrical tape would become sticky over time. So I think using the label maker would be the best idea, with the labels being 'X' for X-end, 'Y' for Y-end, 'V' for Vertex, and 'M' for the main toolbox.

An idea for the optical tables:

Anchal brought up to me that it is a hassle to go back and forth searching for the correct sizes of hex keys and Allen wrenches. The idea of a pouch on the outside of each optical table was mentioned, so I brought this up to Paco. Paco also gave me the idea of a 3D-printed stand we could make for Allen ball drivers. Does anyone have a preference or an idea of what would be the best choice and why?


A few sidenotes: 

Anchal mentioned to me a while back that there are many cables lying on the racks that are not being used. Is there a way we could identify which ones are being used?

I noticed, while we were vented, that a few of the chamber doors were leaning up against the wall and not on a wooden stand like the others. Also, the seats for the chamber doors are pretty spacious and do not give us much clearance. For the future ones, could we make something sleeker and put the wider seats at the end chambers?

The cabinets along the Y-arm are labelled, but the labels do not correspond with all the materials inside, or the cabinets are too full to take in more items. Could I organize these?
 

  16955   Tue Jun 28 16:26:58 2022   Cici   Summary   General   Vector fitting open loop transfer function/Audio cancellation of optical table enclosure

[Deeksha, Cici]

We attempted to use vectfit to fit our earlier transfer function data, and were generally unsuccessful (see vectfit_firstattempt.png), but we are much closer to understanding vectfit than before. A couple of problems to address: finding the right set of initial poles to start with has been very hard, and the way vectfit plots the phase data unwraps it, which makes it generally unreadable. Still working on how to adjust the plots vectfit generates automatically. In general, our data is very messy (this is old data of the transfer function from last week), so we took more data today to see if our coherence was the problem (see TFSR785_28-06-2022_161937.pdf). As is visible from the graph, our coherence is terrible, and above 1 kHz it is almost entirely below 0.5 (or even 0.2) on both channels. Figuring out why this is and fixing it is our first priority.

In the process of taking new data, we also found out that the optical table enclosure at the end of the X-arm does a decent job of sound isolation (see enclosure_open.mp4 and enclosure_closed.mp4). The clicking from the shutter is visible on a spectrogram at high frequencies when the enclosure is open, but not when it is closed. We also discovered that the script to toggle the shutter can run indefinitely, which can break the shutter, so we need to fix that problem!

  16982   Fri Jul 8 23:10:04 2022   Koji   Summary   General   July 9th, 2022 Power Outage Prep

The 40m team worked on the power outage preparation. The details are summarized on this wiki page. We will still be able to access the wiki page during the power outage, as it is hosted somewhere in Downs.

https://wiki-40m.ligo.caltech.edu/Complete_power_shutdown_2022_07

  16988   Mon Jul 11 19:29:23 2022   Paco   Summary   General   Finalizing recovery -- timing issues, cds, MC1

[Yuta, Koji, Paco]

Restarting CDS

We were having some trouble restarting all the models on the FEs. The error was the famous 0x4000 DC error, which has to do with time de-synchronization between fb1 and a given FE. We tried a combination of things haphazardly, such as reloading the gpstime process using

controls@fb1:~ 0$ sudo systemctl stop daqd_*
controls@fb1:~ 0$ sudo modprobe -r gpstime
controls@fb1:~ 0$ sudo modprobe gpstime
controls@fb1:~ 0$ sudo systemctl start daqd_*
controls@fb1:~ 0$ sudo systemctl restart open-mx.service

without much success, even when doing this again after hard rebooting FE + IO chassis combinations around the lab. Koji prompted us to check the local times as reported by the gpstime module, and comparing it to network reported times we saw the expected offset of ~ 3.5 s. On a given FE ("c1***") and fb1 separately, we ran:

controls@c1***:~ 0$ timedatectl
  Local time: Mon 2022-07-11 16:22:39 PDT
  Universal time: Tue 2022-07-11 23:22:39 UTC
       Time zone: America/Los_Angeles (PDT, -0700)
       NTP enabled: yes
       NTP synchronized: no
 RTC in local TZ: no
       DST active: yes
 Last DST change: DST began at
                  Sun 2022-03-13 01:59:59 PST
                  Sun 2022-03-13 03:00:00 PDT
 Next DST change: DST ends (the clock jumps one hour backwards) at
                  Sun 2022-11-06 01:59:59 PDT
                  Sun 2022-11-06 01:00:00 PST
controls@fb1:~ 0$ ntpq -p
     remote           refid      st t when poll reach   delay   offset  jitter
==============================================================================
 192.168.123.255 .BCST.          16 u    -   64    0    0.000    0.000   0.000

which meant a couple of things:

  1. fb1 was serving its time (broadcast to local (martian) network)
  2. fb1 was not getting its time from the internet
  3. c1*** was not synchronized even though fb1 was serving the time

By looking at previous elogs with similar issues, we tried two things;

  1. First, from the FEs, run sudo systemctl restart systemd-timesyncd to get the FE in sync; this didn't immediately solve anything.
  2. Then, from fb1, we tried pinging google.com and failed! The fb1 was not connected to the internet!!!

We tried rebooting fb1 to see if it connected, but eventually what solved this was restarting the bind9 service on chiara! Now we could ping google, and saw this output

controls@fb1:~ 0$ ntpq -p
     remote           refid      st t when poll reach   delay   offset  jitter
==============================================================================
+tor.viarouge.ne 85.199.214.102   2 u  244 1024  377  144.478    0.761   0.566
*ntp.exact-time. .GPS.            1 u   93 1024  377  174.450   -1.741   0.613
 time.nullrouten .STEP.          16 u    - 1024    0    0.000    0.000   0.000
+ntp.as43588.net 129.6.15.28      2 u  39m 1024  314  189.152    4.244   0.733
 192.168.123.255 .BCST.          16 u    -   64    0    0.000    0.000   0.000 

meaning fb1 was getting its time served. Going back to the FEs, we still couldn't see the NTP synchronized flag up, but it just took time: after a few minutes we saw the FEs in sync! This also meant that we could finally restart all FE models, which we successfully did following the script described in the wiki. Then we had to reload the modbusIOC service on all the slow machines (sometimes this required us to call sudo systemctl daemon-reload) and performed a burt restore to last Friday's snap file collection.


IMC realign and MC1 glitch?

With Koji's help, the PMC locked, and then Yuta and Paco manually increased the input power to the IFO by rotating the waveplate picomotor to 37.0 deg. After this, we noticed that the MC REFL spot was not hitting the camera, so maybe MC1 was misaligned. Paco checked the AP table and saw the spot horizontally misaligned on the camera, which gave us the initial YAW correction on MC1. After some IMC recovery, we saw that only MC1 got spontaneously kicked along both PIT and YAW, making our alignment futile. Though not hard to recover, we wondered why this happened.

We went into the 1X4 rack and pushed the MC1 suspension cables in to rule out loose connections, but as we came back into the control room we again saw it being kicked randomly! We even turned damping off for a little while and this random kicking didn't stop. There was no significant seismic motion at the time, so it is still unclear what is happening.

  16993   Tue Jul 12 18:35:31 2022   Cici Hanna   Summary   General   Finding Zeros/Poles With Vectfit

I am still working on using vectfit to find the zeros/poles of a transfer function. I now have a more specific project in mind, which is to have a Red Pitaya use the zero/pole data of the transfer function to find the UGF, so we can check what the UGF is at any given time and plot it as a function of time to see if it drifts (hopefully it doesn't). Wrestled with vectfit more in MATLAB, and found out I was converting from dB incorrectly (should be 10^(dB/20)...). I intend to read a bit of the book by Bendat and Piersol to learn a bit more about how I should be weighting my vectfit. May also check out an algorithm called AAA for fitting instead.

  17003   Thu Jul 14 19:09:51 2022   rana   Update   General   EQ recovery

There was an EQ in Ridgecrest (approximately 200 km north of Caltech). It was around 6:20 PM local time.

All the suspensions tripped. I have recovered them (after some struggle with the weird profusion of multiple conflicting scripts/ directories that have appeared in the recent past...)

ETMY is still giving me some trouble. Maybe because of the HUGE bias on it within the fast CDS system, it had some trouble damping. Also, the 'reenable watchdog' script in one of the many scripts directories seems to do a bad job. It re-enables optics, but doesn't make sure that the beams are on the optical lever QPDs, and so the OL servo can smash the optic around. This is not good.

Also what's up with the bashrc.d/ in some workstations and not others? Was there something wrong with the .bashrc files we had for the past 15 years? I will revert them unless someone puts in an elog with some justification for this "upgrade".

This new SUS screen is coming along well, but some of the fields are white. Are they omitted or is there something non-functional in the CDS? Also, the PD variances should not be in the line between the servo outputs and the coil. It may mislead people into thinking that the variances are of the coils. Instead, they should be placed elsewhere as we had it in the old screens.

  17006   Fri Jul 15 16:20:16 2022   Cici Hanna   Update   General   Finding UGF

I have temporarily abandoned vectfit and AAA since I've been pretty unsuccessful with them, and I don't need poles/zeros to find the unity gain frequency. Instead I'm just fitting the transfer function linearly (on a log-log scale). I've found the UGF at about 5.5 kHz right now, using old data; the next step is to get the Red Pitaya working so I can take data with that. I also need to move this code from MATLAB to Python. Uncertainty is propagated using the 95% confidence bounds given by the fit (using curve fit), so just from the standard error, and all points are weighted equally. Ideally I would like to propagate uncertainty accounting for the coherence data too, but I haven't figured out how to do that correctly yet. A sketch of this kind of log-log fit is included at the end of this entry.

 

[UPDATE 7/22/2022: added raw data files]
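A minimal sketch of the log-log linear fit described above, in Python (the data arrays are illustrative placeholders for the measured OLTF, not real measurements):

import numpy as np
from scipy.optimize import curve_fit

# placeholder OLTF data: frequency [Hz] and magnitude [dB]
f = np.logspace(2, 4, 50)                  # 100 Hz - 10 kHz
mag_db = 20 * np.log10(5.5e3 / f)          # fake ~1/f loop with a UGF near 5.5 kHz
mag_db += np.random.normal(0, 0.5, f.size) # add some noise so the fit errors are non-trivial

def line(logf, slope, intercept):
    return slope * logf + intercept

# fit log10|H| vs log10(f)
popt, pcov = curve_fit(line, np.log10(f), mag_db / 20.0)
slope, intercept = popt
log_ugf = -intercept / slope               # where log10|H| = 0, i.e. |H| = 1
ugf = 10**log_ugf

# first-order error propagation from the fit standard errors (slope-intercept covariance ignored)
dslope, dintercept = np.sqrt(np.diag(pcov))
dlog_ugf = np.sqrt((dintercept / slope)**2 + (intercept * dslope / slope**2)**2)
print("UGF = %.0f Hz (+/- %.0f Hz, 1 sigma)" % (ugf, ugf * np.log(10) * dlog_ugf))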

  17021   Wed Jul 20 11:58:45 2022   Paco   Summary   General   Jenne laser kaput?

[Paco, Yehonathan, JC]

We were trying to set up the Jenne laser to characterize the response of three 1811s that Yehonathan is using for his WOPA experiment (in QIL). We hooked up a ~5 VDC power supply to the bias tee and looked to see if there was any DC response in the REF PD. We used a DB9 breakout board and a DB9 cable, and saw some current being drawn. The DC current was a bit too high (500 mA), so we turned the DC voltage off, and realized the VDC power was reversed, probably along the DB9 cable, which we hadn't checked before. As we flipped the power supply leads and turned the power back on, we could no longer see any current even though the voltage was now right (or was it???). We would like to debug this laser and continue using it if it still works (!), but there is negligible documentation either here or in the wiki, so if there are any known places to look, it would be helpful to know them.
