ID Date Author Type Category Subject
  16660   Thu Feb 10 19:46:37 2022   Koji   Update   General   Scheduled power outage recovery - Locking mode cleaner(s)

== Assign new IP address to c1sus2 ==

cf: [40m ELOG 16398] [40m ELOG 16396]

- Shut down c1sus2 (Oh, no. This killed c1lsc/c1sus/c1ioo... This will have to be taken care of later)

- Confirmed 192.168.113.87 is not alive

- Go to chiara
- Modify /diskless/root/etc/hosts

192.168.113.87  c1sus2 c1sus2.martian

- Modify /etc/dhcp/dhcpd.conf

host c1sus2 {
  hardware ethernet 00:25:90:06:69:C2;
  fixed-address 192.168.113.87;
}

- Modify /var/lib/bind/martian.hosts

c1sus2          A    192.168.113.87
videomux        A    192.168.113.92

- Modify /var/lib/bind/rev.113.168.192.in-addr.arpa

87            PTR    c1sus2.martian
92            PTR    videomux.martian

- Reload/restart bind9 / dhcpd by running the following commands:

sudo service bind9 reload
sudo service isc-dhcp-server restart
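
Before reloading, the edited configs can be sanity-checked. A minimal sketch, assuming the standard bind9/isc-dhcp-server tools (named-checkzone, dhcpd) are installed:

named-checkzone martian /var/lib/bind/martian.hosts
named-checkzone 113.168.192.in-addr.arpa /var/lib/bind/rev.113.168.192.in-addr.arpa
sudo dhcpd -t -cf /etc/dhcp/dhcpd.conf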

- Restart c1sus2 and confirm that the IP address actually changed:

controls@c1sus2:~ 0$ /sbin/ifconfig
eth0      Link encap:Ethernet  HWaddr 00:25:90:06:69:c2
          inet addr:192.168.113.87  Bcast:192.168.113.255  Mask:255.255.255.0
...

== Restart c1lsc / c1sus / c1ioo ==

- Reboot c1lsc/c1sus/c1ioo

- Go to scripts/cds

- Run startC1LSC.sh and follow the instructions

 

  16659   Thu Feb 10 19:03:23 2022   Koji   Update   General   Scheduled power outage recovery - Locking mode cleaner(s)

I came back to the 40m and started the investigation.

If I ping 192.168.113.92, it responds, but telnet (port 23) was rejected. On a whim I tried ssh, and it responded! I could even log in to the host using the usual password. Here is the prompt.

controls@nodus|~> ssh 192.168.113.92
controls@192.168.113.92's password:

...
controls@c1sus2:~ 0$

Oh no...

Looks like c1sus2 and the videomux have an IP address conflict.
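
A hypothetical check that would confirm such a conflict (not run at the time): ping the address, then look at which MAC answers for it.

ping -c 1 192.168.113.92
arp -n 192.168.113.92

If the MAC shown is 00:25:90:06:69:c2 (the c1sus2 NIC, per ELOG 16660 above) rather than the videomux's, the conflict is confirmed.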

Here are the useful ELOG links:

https://nodus.ligo.caltech.edu:8081/40m/4498

https://nodus.ligo.caltech.edu:8081/40m/4529

  16658   Thu Feb 10 17:57:48 2022   Anchal   Update   General   Scheduled power outage recovery - Locking mode cleaner(s)

Something is wrong with the Video MUX. The system did not turn back on with full functionality. Even though we see the screens as they were before the power shutdown, we have lost the ability to switch any of the videos. I went to check the wiki page about the Video MUX, which told me we should be able to see the configuration screen at this link, but the page wasn't opening. I went and removed the power cable and put it back in. That brought back the configuration page. I still could not change any of the video feeds; however, this time I could see the EPICS channel values (like C1:VID-QUAD1_4) change. I tried to go to the configuration page and change the matrix values from the control tab there. I found out that the matrix was mislabeled, and while making the changes, I started seeing a blue screen on QUAD1_3 (where MC2T was set before). I set QUAD1_3 (output 23) to MC2T (input 16), but no change. The EPICS values are also set properly, so I don't understand the reason behind the blue screen. The same happened when I tried to use:

~>/opt/rtcds/caltech/c1/scripts/general/videoscripts videoswitch3 QUAD1_3 MC2T

Weirdly, this caused the QUAD1_4 screen to go blue. Running the following had no effect:

~>/opt/rtcds/caltech/c1/scripts/general/videoscripts videoswitch3 QUAD1_4 MCR
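
A way to cross-check the EPICS side independently of the script (hypothetical; channel name guessed by analogy with C1:VID-QUAD1_4, assuming the EPICS CLI tools are on the path):

caget C1:VID-QUAD1_3

If the readback matches the requested input, the problem is downstream of EPICS, in the MUX hardware itself.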

So, I'm not sure what to do. This really needs to be fixed! I wanted to see the MC2F camera so that I can align the IMC; that was the whole reason for this rabbit hole. Help needed.

Attachment 1: PXL_20220211_021509819.jpg
  16657   Thu Feb 10 15:41:00 2022   Anchal   Update   General   Scheduled power outage recovery - Locking mode cleaner(s)

I found out that the ESP300 service needs to run as root for it to be able to connect to the USB port of the HWP motor controller. While making this change, I noticed that the channels hosted by c1psl might have a duplication conflict with some other channel-hosting computer, because a lot of them show the warning "Identical process variable names on multiple servers", which is not good. Someone should look into this conflict.

I added instructions on the power control MEDM screen, as it was very non-trivial to use. I have set the power such that C1:IOO-MC_RFPD_DCMON is 5.6, which happened at C1:IOO-HWP_POS_SET = 2.29.

  16656   Thu Feb 10 14:39:31 2022   Koji   Summary   Computers   Network security issue resolved

[Mike P / Koji / Tega / Anchal]

IMSS/LIGO IT notified us that the "ILOM ports" of one of our hosts on the "114" network are open. We tried to shut down the obvious machines but could not identify the host in question. So we decided to do a more systematic search for the host.
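
In retrospect, a port scan could have located the ILOM responder directly. A minimal sketch, assuming nmap on a GC-side machine and that the "114" network is 192.168.114.0/24:

nmap -sU -p 623 --open 192.168.114.0/24    # IPMI-over-LAN listens on 623/udp
nmap -p 22,80,443 --open 192.168.114.0/24  # ILOM ssh/web consoles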

[@Network Rack]
- First of all, we disconnected the optical cables coming into the GC router while a ping was running on the AIRLIGO-connected laptop (i.e. outside of the 40m network). This made the ping stop, which means the issue was definitely inside the 40m.
- Secondly, we started to disconnect (and reconnect) the ethernet cables from the GC router one by one. We found that the ping response stopped when the cable named "NODUS" was disconnected.

[@40m IFO lab]
- So we tracked the cable down in the 40m lab. After a while, we confirmed that the cable was indeed connected to nodus.

- Nodus was supposed to have only one network connection (to the martian network) since the introduction of the bidirectional NAT router (rather than the old configuration with a single-direction NAT router).

- In fact, the cable was connected to a "non-networking" port of nodus (Attachment 1). I guess the cable had been connected like this for a long time, but somehow the ILOM (IPMI) port was activated along with the recent power cycling.

- The cable was disconnected at nodus too (Attachment 2), and tape was placed over the port so that nothing gets connected to it anymore.

Attachment 1: PXL_20220210_220816955.jpg
Attachment 2: PXL_20220210_220827167.jpg
  16655   Wed Feb 9 16:43:35 2022   Paco   Update   General   Scheduled power outage recovery - Locking mode cleaner(s)

[Paco, Anchal]

  • We went in and measured the power after the power-splitting HWP at the PSL table. Almost right before the PSL shutter (which was closed), with the PMC locked, we saw ~ 598 mW (!!)
  • Checking back on the ESP300, it seems the channel was not enabled even though the right angle was punched in, so we enabled it.
    • No change.
  • The power adjustment MEDM screen is not really working...
  • Going back to the controller, we pressed HOME on Axis 1 (our HWP) and saw it go to zero...
    • Now the power measured is ~ 78 mW.
  • Not sure why the MEDM screen didn't really work (this needs to be fixed later).

We proceeded to align the MC optics because all offsets in the MC_ALIGN screen were zeroed. After opening the PSL shutter, we used values from last year as a reference and tried to steadily recover the alignment. The IMC lock remains at large.

  16654   Wed Feb 9 14:34:27 2022   Ian   Summary   Computer Scripts / Programs   SUS Plant Plan for New Optics

Restarted the c1sim machine at about 12:30 to help diagnose a network problem. Everything is back up and running.

Attachment 1: SummaryMdemScreen.png
  16653   Wed Feb 9 13:55:05 2022   Koji   Update   General   Bringing back CDS

Great recovery work and cleaning of the rebooting process.

I'm just curious: did you observe that the c1sus2 cards have a different numbering order than before, along with the power outage/cycling?

  16652   Wed Feb 9 11:56:24 2022   Anchal   Update   General   Bringing back CDS

[Anchal, Paco]

Bringing back CDS took a lot of work yesterday. I'm gonna try to summarize the main points here.


mx_start_stop

For some reason, fb1 was not able to mount the mx devices automatically on system boot. This is an issue I had faced earlier on fb1 (clone) too. The fix is to run the script:

controls@fb1:/opt/mx/sbin/mx_start_stop start

To make this persistent, I've configured a daemon (/etc/systemd/system/mx_start_stop.service) on fb1 to run once on system boot and mount the mx devices as mentioned above. We did not see this issue on later reboots yesterday.
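
For reference, a minimal sketch of what such a oneshot unit might look like (the actual file on fb1 may differ):

sudo tee /etc/systemd/system/mx_start_stop.service > /dev/null <<'EOF'
[Unit]
Description=Load MX devices at boot
After=network.target

[Service]
Type=oneshot
ExecStart=/opt/mx/sbin/mx_start_stop start
RemainAfterExit=yes

[Install]
WantedBy=multi-user.target
EOF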


gpstime

Next was the issue of the gpstime module being out of date on fb1. This issue has also occurred in the past and requires us to do the following:

controls@fb1:~ 0$ sudo modprobe -r gpstime
controls@fb1:~ 1$ sudo modprobe gpstime

Again, to make this persistent, I've configured a daemon (/etc/systemd/system/re-add-gpstime.service) on fb1 to run the above commands once on system boot. This corrected gpstime automatically and we did not face these problems again.
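
Enabling both units so they run at boot, and verifying that they did, is standard systemd usage:

sudo systemctl enable mx_start_stop.service re-add-gpstime.service
systemctl status mx_start_stop.service re-add-gpstime.service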


time synchronization

Later we found that NTP time synchronization between fb1 and the FE computers was not working, and the main reason was that fb1 was unable to access the internet. As a rule of thumb, it is always a good idea to try pinging www.google.com from fb1 to ensure that it is connected to the internet. The issue had to do with fb1 not being able to find any nameserver. We fixed this by reloading the bind9 service on chiara a couple of times. We're not really sure why it wasn't working.

~>sudo service bind9 stop
~>sudo service bind9 start
~>sudo service bind9 status
* bind9 is running

After the above, we saw that the fb1 NTP server is working fine. You see the following output on fb1 when that is the case:

controls@fb1:~ 0$ ntpq -p
     remote           refid      st t when poll reach   delay   offset  jitter
==============================================================================
-table-moral.bnr 110.142.180.39   2 u  399  512  377  195.034  -14.618   0.122
*server1.quickdr .GPS.            1 u   67   64  377  130.483   -1.621   1.077
+ntp2.tecnico.ul 56.99.239.27     2 u  473  512  377  184.648   -0.775   2.231
+schattenbahnhof 129.69.1.153     2 u  365  512  377  144.848    3.841   1.092
 192.168.123.255 .BCST.          16 u    -   64    0    0.000    0.000   0.000

On the FE machines, timedatectl should show that the "NTP synchronized" field is "yes". That wasn't happening even after we restarted the systemd-timesyncd service. After this, I just tried restarting all FE computers and it started working.
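
The usual checks on a machine (standard systemd tooling) are:

timedatectl                                # look for "NTP synchronized: yes"
sudo systemctl restart systemd-timesyncd   # then re-check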


CDS

We had removed all DB9 enabling plugs on the new SOSs beforehand to keep the coils off, just in case CDS did not come back online properly.

Everything in CDS loaded properly except the c1oaf model, which kept showing 0x2bad status. This meant that some IPC flags were red on c1sus, c1mcs and c1lsc as well, but everything else was green. See Attachment 1. I then burt-restored everything in the /opt/rtcds/caltech/c1/burt/autoburt/snapshots/2022/Feb/4/12:19 directory. This includes the snapshot of c1vac as well, which I added to autoburt that day. All burt restore statuses were green OK. I think we are in a good state now to start the watchdogs on the new SOSs and put back the DB9 enabling plugs.


Future work:

When somebody gets time, we should make the custom service files in fb1:/etc/systemd/system/ symbolic links into a repo directory and version-control these important services. We should also make sure that their dependencies and startup order are correctly configured. I might have done a half-assed job there since I only recently learned how to write unit files. We should do the same on nodus and chiara too. Our hope is that one glorious day, the lab can be restarted without spending more than 20 min on booting up the computers and network.

 

Attachment 1: Screenshot_2022-02-09_12-11-33.png
  16651   Mon Feb 7 16:53:02 2022   Koji   Update   General   Scheduled power outage recovery

I went to the X end and found it was warm. It turned out that not all the A/Cs were on. They have now been turned on.

Attachment 1: PXL_20220208_001646282.jpg
Attachment 2: PXL_20220208_001657871.jpg
  16650   Mon Feb 7 16:14:37 2022   Tega   Update   Computers   realtime system reboot problem

I was looking into plotting the temperature sensor data trend and into why no frame data has been written to file (on /frames) since Friday, and noticed that the FE models were not running. So I spoke to Anchal about it and he mentioned that we were currently unable to ssh into the FE machines, and therefore had been unable to start the models. I recalled that the last time we encountered this problem Koji resolved it on chiara, so I searched the elog for Koji's fix and found it here: https://nodus.ligo.caltech.edu:8081/40m/16310. I followed the procedure, restarted the c1sus and c1lsc machines, and we are now able to ssh into them. I also restarted the remaining FE machines and confirmed that we can ssh into them. Then, to start the models, I ssh'd into each FE machine (c1lsc, c1sus, c1ioo, c1iscex, c1iscey, c1sus2) and ran the command

rtcds start --all

to start all models on the FE machine. This procedure worked for all the FE machines but failed for c1lsc. For some reason, after starting the IOP model (c1x04) followed by c1lsc and c1ass, the ssh connection to the machine drops. When we try to ssh into c1lsc after this event, we get the following error: "ssh: connect to host c1lsc port 22: No route to host". I reset the c1lsc machine and decided to start only the IOP model for now. I'll wait for Anchal or Paco to resolve this issue.


[Anchal, Tega]

I informed Anchal of the problem and asked if he could take a look. It turns out 9 FE models across 3 FE machines (c1lsc, c1sus, c1ioo) have a certain interdependence that requires careful consideration when starting the FE models. In a nutshell, we need to first start the IOP models on all three FE machines before we start the other models on these machines. So we turned off all the models and shut down the FE machines, mainly because of a DAQ issue, since the DC (data concentrator) indicator was not initialized. Anchal looked around on fb1 to figure out why this was happening and eventually discovered that it was the same as the mx_stream issue encountered earlier on the fb1 clone (https://nodus.ligo.caltech.edu:8081/40m/16372). So we restarted fb1 to see if things would clear up, given that the chiara DHCP server is now working fine. Upon restart of fb1, we used the info in a previous elog that shows whether the DAQ network is working or not, i.e. we ran the command

$ /opt/mx/bin/mx_info
MX:fb1:mx_init:querying driver:error 5(errno=2):No MX device entry in /dev.

The output shows that the MX device was not initialized during the reboot, as can also be seen below.

$ sudo systemctl status daqd_dc -l

● daqd_dc.service - Advanced LIGO RTS daqd data concentrator
   Loaded: loaded (/etc/systemd/system/daqd_dc.service; enabled)
   Active: failed (Result: exit-code) since Mon 2022-02-07 18:02:02 PST; 12min ago
  Process: 606 ExecStart=/usr/bin/daqd_dc_mx -c /opt/rtcds/caltech/c1/target/daqd/daqdrc.dc (code=exited, status=1/FAILURE)
 Main PID: 606 (code=exited, status=1/FAILURE)

Feb 07 18:01:56 fb1 systemd[1]: Starting Advanced LIGO RTS daqd data concentrator...
Feb 07 18:01:56 fb1 systemd[1]: Started Advanced LIGO RTS daqd data concentrator.
Feb 07 18:02:00 fb1 daqd_dc_mx[606]: [Mon Feb  7 18:01:57 2022] Unable to set to nice = -20 -error Unknown error -1
Feb 07 18:02:00 fb1 daqd_dc_mx[606]: Failed to do mx_get_info: MX not initialized.
Feb 07 18:02:00 fb1 daqd_dc_mx[606]: 263596
Feb 07 18:02:02 fb1 systemd[1]: daqd_dc.service: main process exited, code=exited, status=1/FAILURE
Feb 07 18:02:02 fb1 systemd[1]: Unit daqd_dc.service entered failed state.


NOTE: We commented out the line

Restart=always

in the file "/etc/systemd/system/daqd_dc.service" in order to see the error, BUT THIS MUST BE UNDONE AFTER THE PROBLEM IS FIXED!
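
Once the MX issue is fixed and the line is put back, reloading is standard systemd usage:

sudo systemctl daemon-reload
sudo systemctl restart daqd_dc.service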

  16649   Mon Feb 7 15:32:48 2022   Yehonathan   Update   General   Y End laser controller

I went to the Y end. The AUX laser was on Standby. I pushed the Standby button. The laser turned on and there was some green light. However, the controller displayed the message "CABLE?" which according to the manual means that the laser head is powered but there is no control over the laser (e.g. the control cable is disconnected). I turned off the controller and disconnected both the power and control cables. I put them back and turned the controller back on.

I pushed the Standby button, the laser turned on and this time the controller displayed the laserhead's state. I was able to change the current/temperature. The problem seems to be resolved.

  16648   Mon Feb 7 09:00:26 2022   Paco   Update   General   Scheduled power outage recovery

[Paco]

Started recovering from scheduled (Feb 05) power outage. Basically, time-reversing through this list.


== Office area ==

  • Power martian network switches, WiFi routers on the north-rack.
  • Power windows (CAD) machine on.

== Main network stations ==

  • Power on nodus, try ping (fail).
  • Power on network switches, try ping (success), try ssh controls@nodus.ligo.caltech.edu (success).
  • Power on chiara to serve names for other stations, try ssh chiara (success).
  • Power on fb1, try ping (success), try ssh fb1 (success).
  • Power on paola (xend laptop), viviana (yend laptop), optimus, megatron.

== Control workstations ==

  • Power on zita (success)
  • Power on giada (success), run system upgrade.
  • Power on donatella (success)
  • Power on allegra (fail)  **
  • Power on pianosa (success)
  • Power on rossa (success)
  • From nodus, started elog (success).

== PSL + Vertex instruments ==

  • Turn on newport PD power supplies on PSL table.
  • Turn on TC200 temp controller (setpoint --> 36.9 C)
  • Turn on two oscilloscopes on the PSL table.
  • Turn on PSL (current setpoint --> 2.1 A, other settings seem nominal)
  • Turn on Thorlabs HV pzt supply.
  • Turn on ITMX OpLev / laser instrument AC strip.

== YEND and XEND instruments ==

  • Turn on XEND AUX pump (current setpoint --> 1.984 A)
  • Turn on XEND AUX SHG oven (setpoint --> 37.1 C) (see green beam)
  • Turn on XEND AUX shutter controller.
  • Turn on DCPD supply and OpLev supply AC strip.
  • Turn on YEND AUX pump (fail) *
    • With the controller on STDBY, I tried setting up the current but got HD FAULT (according to the manual, this is what the head reports when the diode temperature is too high...)
    • Upon power cycling the controller, even the controller display stopped working... YAUX controller + head died? Maybe just the diode? Maybe just the controller?
      • I borrowed a spare LW125 controller from the PSL table (Yehonathan pointed me to it) and swapped it in.
      • Got YEND AUX to lase with this controller, so the old controller is busted but at least the laser head is fine.
      • Even saw SHG light. We switched the laser head to "STDBY" (so it remains warm) and took the faulty controller out of there.
  • Turn on YEND AUX SHG oven (setpoint --> 35.7 C)
  • Turn on YEND AUX shutter controller.

== YARM Electronic racks ==

== XARM Electronic racks ==

 


* Top priority, this needs to be fixed.

** Non-priority, but to be debugged

  16647   Fri Feb 4 10:21:39 2022   Anchal   Summary   General   Complete lab shutdown

Please edit this same entry throughout the day for the shutdown elogging.

I took a screenshot of C0VAC_MONITOR.adl to ensure that all pneumatic valves are in closed positions:

The status message says "All pneumatic valves closed" and the latest error message is about "V7 closed, N2 < 6.50e+01".

I found out that there was no autoburt happening for the c1vac channels. I created an autoBurt.req file for the vac.db file and saved one snapshot. I also added the path of this file to autoburt/.requestfilelist. Let's see if autoburt starts picking up this file as well.

With this, I think we can safely shut down the Acromag chassis. Hopefully, the relays are configured such that the valves are nominally closed in the absence of a control signal. After the chassis is shut down, we can shut down c1vac with:

sudo shutdown

[Chub, Jordan]

At the 1x8 rack, the following were switched off on their respective front panels:

PTP2 & PTP3 Controller
MKS Gauge controller
PRP Gauge Controller
G2P316a & b Controllers
Sorenson
Serial Device Server
Both UPS's

Powered off from back of unit:

TP1 Controller
Acromag chassis

TP2 and 3 controllers were unplugged from respective power strips (labeled C2 and C3)

C1vac and the laptop at the workstation were shut down

Manual Gate valve was closed

  16646   Fri Feb 4 10:04:47 2022   Chub   Update   General   dish soap and clean scrub sponges!

Bought dish soap and scrub sponges today and placed them under the sink with the other dish supplies.

Attachment 1: 40m_supplies.jpg
  16645   Thu Feb 3 17:15:23 2022   Tega   Summary   Computer Scripts / Programs   SUS Plant Plan for New Optics

Finally got the SIMPLANT damping to work following Rana's suggestion to try damping one DoF at a time, woo-hoo!

At first, things didn't look good even when we focused only on the POS DoF. I then noticed that the input value (X1:SUS-OPT_PLANT_TM_RESP_1_1_IN1) to the plant was always zero. This was odd because it meant the control signal was not making its way to the plant. So I decided to look at the sensor data

(X1:SUS-OPT_PLANT_COIL_IN_UL_OUTPUT, X1:SUS-OPT_PLANT_COIL_IN_UR_OUTPUT, X1:SUS-OPT_PLANT_COIL_IN_LR_OUTPUT, X1:SUS-OPT_PLANT_COIL_IN_LL_OUTPUT)

that adds up via the C2DOF matrix to give the POS DoF, and I noticed that these interior nodes can take on large values but always sum to zero, because the pair (UL, LL) was always the negative of (UR, LR). These should have the same sign, at least in our case where only the POS DoF is excited, so I tracked the issue back to the alternating (-,+,-,+,-) convention for the gains

(X1:SUS-OPT_CTRL_ULCOIL_GAIN, X1:SUS-OPT_CTRL_URCOIL_GAIN, X1:SUS-OPT_CTRL_LRCOIL_GAIN, X1:SUS-OPT_CTRL_LLCOIL_GAIN, X1:SUS-OPT_CTRL_SDCOIL_GAIN)

of the Coil Output filters used in the real system, which we had adopted in the hope that all was well. Anyway, I changed them all back to +1. This also means that we needed to change the sign of the gain for the SIDE filter, which I have done as well (and checked that it damps OK). I decided to reduce the magnitude of the SIDE damping gain from 1 to 0.1 so that we can see the residuals, since the value of -1 quickly sends the error to zero. I also increased the gain magnitude for the other DoFs to 4.
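
For reference, gain changes like these can be made from the shell with the EPICS CLI tools (a sketch; the channel names are those quoted above, the values illustrative):

for coil in UL UR LR LL SD; do
    caput X1:SUS-OPT_CTRL_${coil}COIL_GAIN 1    # all coil gains back to +1
done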

When looking at the plot, remember that the values actually represent counts with a scaling of 2^15 (or 32768) from the ADC. I switched back to the original filters in FM1 (e.g. pit_pit) without the damping coefficients present in the FM2 filters (e.g. pit_pit_damp).


FYI, Rana used the ETMY suspension MEDM screen to illustrate the working of a single suspension to me, and may have changed the POS and PITCH gains while doing so.


Also, the Medify purifier 'replace filter' indicator issue occurred because the moonlight button should have been pressed for 3 seconds to reset the indicator after filter replacement.

Attachment 1: Screen_Shot_2022-02-03_at_8.23.07_PM.png
  16644   Thu Feb 3 14:47:12 2022   Chub   Update   Electronics   new UPS in place

Received the new 1100VA APC UPS today and placed it at the bottom of the valve rack. I connected the battery and plugged the unit into the AC outlet, but did not turn it on due to the power outage this weekend.

  16643   Thu Feb 3 10:25:59 2022   Jordan   Update   VAC   TP1 and Manual Gate Valve Install

Jordan, Chub

Chub and I installed the new manual gate valve (Nor-Cal GVM-6002-CF-K79) and reinstalled TP1. The new gate valve was placed with the sealing side towards the main 40m volume; then TP1 was installed on top and the foreline reattached to TP1.

This valve has a hard stop in the actuator to prevent over torquing.

 

Attachment 1: 20220203_101455.jpg
Attachment 2: 20220203_094831.jpg
Attachment 3: 20220203_094823.jpg
  16641   Wed Feb 2 21:16:23 2022   Koji   Summary   BHD   Optomechanics configuration for the in-vacuum aux small optics

Asked Jordan to clean 2x 1.5" posts (0.5" dia) and a washer with 1" dia.

 

Attachment 1: PXL_20220203_050156031.jpg
  16640   Wed Feb 2 18:55:58 2022   Koji   Summary   BHD   Optomechanics configuration for the in-vacuum aux small optics

Here is more detail of the POP_SM4 mount assembly.

It's a combination of BA2V + PLS-T238 + BA1V + TR-1.5 + LMR1V + Mirror: CM254-750-E03
Between BA1V and PLS-T238, we have to use a washer action to fix the post (8-32) into a 1/4-20 slot. Maybe we can use a 1" post shim from Thorlabs/Newport.
Otherwise, we should be able to fasten the other joints with the silver-plated screws we already have/ordered.

I think TR-1.5 (and a shim) has not been given to Jordan for C&B. I'll take a look at these.

Attachment 1: Screen_Shot_2022-02-02_at_6.46.46_PM.png
  16639   Wed Feb 2 16:32:02 2022   Anchal   Summary   BHD   Part IX of BHR upgrade - AS1 free swing test still not good

We still did not get a good AS1 free-swinging spectrum. The side OSEM is reporting coupling to too many DOFs, and some extra resonances beyond what we expect, so we did not upload the new input matrix calculated from this test. I'm attaching the peak fitting (Attachment 1) and the diagonalization attempt (Attachment 2) to give some idea of what happened. Note how different this free-swinging spectrum looks from those of the other optics. Given that Yehonathan also felt that something might be off with this optic, we need to reevaluate whether the suspension has a wire out of its groove, is imbalanced, or whether something else is still touching it.

 

Attachment 1: AS1_FreeSwingData_PeakFitting_20220125.pdf
Attachment 2: AS1_SUS_InpMat_Diagnolization_20220202.pdf
  16638   Wed Feb 2 16:27:57 2022   Anchal   Summary   BHD   Part III of BHR upgrade - PR3 input matrix diagonalized

The free swinging test was successful. I ran the input matrix diagonalization code (/opt/rtcds/caltech/c1/Git/40m/scripts/SUS/InMAtCalc/sus_diagonalization.py) on the PR3 free-swinging data collected last night. The logfile and results are stored in the /opt/rtcds/caltech/c1/Git/40m/scripts/SUS/InMAtCalc/PR3 directory. Attachment 1 shows the power spectral density of the DOF-basis data (POS, PIT, YAW, SIDE) before and after the diagonalization. Attachment 2 shows the fitted peaks. I chose not to type out the matrix values now; one can find them in the repo links above.
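
A hypothetical invocation of the analysis (the actual arguments may differ; see the repo for real usage):

cd /opt/rtcds/caltech/c1/Git/40m/scripts/SUS/InMAtCalc
python sus_diagonalization.py PR3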

 

Attachment 1: PR3_FreeSwingData_PeakFitting_20220125.pdf
Attachment 2: PR3_SUS_InpMat_Diagnolization_20220202.pdf
  16637   Wed Feb 2 16:22:00 2022   Anchal   Summary   BHD   Part III of BHR upgrade - PR2 input matrix diagonalized

The free swinging test was successful. I ran the input matrix diagonalization code (/opt/rtcds/caltech/c1/Git/40m/scripts/SUS/InMAtCalc/sus_diagonalization.py) on the PR2 free-swinging data collected last night. The logfile and results are stored in the /opt/rtcds/caltech/c1/Git/40m/scripts/SUS/InMAtCalc/PR2 directory. Attachment 1 shows the power spectral density of the DOF-basis data (POS, PIT, YAW, SIDE) before and after the diagonalization. Attachment 2 shows the fitted peaks. I chose not to type out the matrix values now; one can find them in the repo links above.

 

Attachment 1: PR2_FreeSwingData_PeakFitting_20220125.pdf
Attachment 2: PR2_SUS_InpMat_Diagnolization_20220202.pdf
  16636   Tue Feb 1 20:16:09 2022   Tega   Update   BHD   PR2 candidate mirror analysis

git repo: git@git.ligo.org:tega-edo/charmirrormap.git

The analysis code takes in a set of raw images (10 in our case) for each mirror and calculates the Zernike aberration coefficients for each image, then takes their average. This average is used to reconstruct the mirror height map. Finally, the residual error between the reconstructed image and the raw data is calculated.

We repeat the analysis for different fields of view (FoV), namely 10mm, 20mm, 30mm, 40mm and 46.5mm, and save the results in the output folder of the repo.

The analysis output for a 10mm FoV aperture at the mirror center is shown in the attachments. These three images show the input data, the reconstructed mirror surface map, and the residual error.

Attachment 1: PR2_M2_data.png
Attachment 2: PR2_M2_recon_FoV_10mm.png
Attachment 3: PR2_M2_residual_FoV_10mm.png
  16635   Tue Feb 1 15:33:28 2022   Koji   Summary   BHD   Optomechanics configuration for the in-vacuum aux small optics

I've summarized the optomechanics configuration for the in-vacuum aux small optics.

It's not obvious here, but the post for POP_SM4 is the stack of BA2V, Newport 9953, PLS-T238, and LMR1V. The mirror is a CM254-750-E03 curved mirror. This should go on the suspension base. I hope I did not make a mistake there...

Attachment 1: Invac_opto_mechanics_v2.pdf
  16634   Mon Jan 31 10:39:19 2022   Jordan   Update   VAC   TP1 and Manual Gate Valve Removal

Jordan, Chub

Today, Chub and I removed TP1 and the failed manual gate valve off of the pumping spool.

First, P2 needed to be vented in order to remove TP1. TP1 has a purge valve on the side of the pump, which we slowly opened, bringing the P2 volume up to atmosphere. Although this was not vented using dry air/N2, using the purge valve eliminated the need to vent the RGA volume.

Then we disconnected the TP1 foreline, removed TP1 plus the 8" flange reducer, then the gate valve. All of the removed hardware looked good, so there is no need to replace bolts/nuts; only new gaskets are needed. TP1 and the failed valve are sitting on a cart wrapped in foil next to the pumping station.

Attachment 1: 20220131_100637.jpg
Attachment 2: 20220131_102807.jpg
Attachment 3: 20220131_102818.jpg
Attachment 4: 20220131_100647.jpg
  16632   Fri Jan 28 16:35:18 2022   Anchal   Summary   BHD   AS1, PR2 and PR3 set to trigger free swing test

AS1, PR2 and PR3 are set to go through a free-swinging test at 3 am. We have used this script (Git/40m/scripts/SUS/InMatCalc/freeSwing.py) reliably in the past, so we expect no issues; it has an error-catching block to restore all changes at the end of the test or if something goes wrong.

To access the test, on allegra, type:

tmux a -t AS1
tmux a -t PR2
tmux a -t PR3

Then you can kill the script if required by Ctrl-C; it will restore all changes while exiting.
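
For reference, detached sessions like these are typically created along the following lines (hypothetical; the exact arguments to freeSwing.py may differ):

tmux new -d -s PR3
tmux send-keys -t PR3 'python freeSwing.py PR3' Enter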

 

  16631   Fri Jan 28 11:30:52 2022   Koji   Summary   BHD   Part III of BHR upgrade - PR2 OSEM finalized, reinstall LO1 OSEMs

All green! Great work, Team!

  16630   Fri Jan 28 10:37:59 2022   Paco   Summary   BHD   Part III of BHR upgrade - PR2 OSEM finalized, reinstall LO1 OSEMs

[Paco]

Thanks to Koji's hotfix on the PR2 SatAmp box last evening, this morning I was able to finish the OSEM installation for PR2. PR2 is now fully damped. Then I realized that, with the extreme rebalancing done in the ITMX chamber, the LO1 OSEMs needed to be reinstalled, so I proceeded to do that. I verified that all the degrees of freedom remained damped.

I think all SOS are nominally damped, so we are 90% done with suspension installation!


SUSPENSION STATUS UPDATED HERE

  16629   Thu Jan 27 20:46:38 2022   Koji   Summary   BHD   Part III of BHR upgrade - Install PR2, balance, and attempt OSEM tuning
  • Started debugging D1002818 / S2100737 (8:30PM)
  • Confirmed the issue of the negative output of the UR sensor with the dummy OSEM connected at the air side of the ITMY chamber. Both PD Out and PD Mon have negative outputs.
  • The same issue remains when the dummy OSEM box is connected to the chassis with a short DB25M/F cable at the rack.
  • Started debugging the setup at the workbench.
    • CH1 TIA Output=-3.0V / CH2 (in question) TIA Output =-2.7V => No Problem
    • CH1 Whitening Out=+3.0V / CH2 Whitening Out=-1.4V => Problem
    • Resolder the components around whitening CH2 => no change
    • Remove AD822 and replace with a new one => CH2 Whitening Out = +2.7V ==> Problem solved
    • PD1~3 channels of the left and right PCBs tested with the OSEM box ==> nearly +3V/-3V differential output (All Clear)
    • Chassis closing
  • Chassis restored in Rack 1Y1 and the normal output with the dummy OSEM box confirmed
  • Mission Completed (9:30PM)
  • Elog finished (9:40PM)
  • Case closed
Attachment 1: Screen_Shot_2022-01-27_at_20.33.56.png
Attachment 2: Screen_Shot_2022-01-27_at_21.30.16.png
  16628   Thu Jan 27 18:03:36 2022   Paco   Summary   BHD   Part III of BHR upgrade - Install PR2, balance, and attempt OSEM tuning

[Paco, Anchal, Tega]

After installing the short OSEMs into PR2, we moved it into the ITMX chamber. While Tega loaded some of the damping filters and other settings, we took time to balance the heavily tilted ITMX chamber. After running out of counterweights, Anchal had to go into the cleanroom and bring the SOS stands, two of which had to be stacked near the edge of the breadboard. Finally, we connected the OSEMs following the canonical order

LL -> UR-> UL

LR -> SD

But we found that UR was reading -14000 counts. So we did a quick swap of the UR and UL sensors and verified that the OSEM itself is working, just in a different channel... So it's time to debug the electronics (probably the PR2 Sat Amp?)...


PR2 Sat Amp preliminary investigation:

  • The UR channel (CH1 or CH4 on the PR2 Sat Amp) gives a negative value as soon as an OSEM is connected.
  • Without anything connected, the channel reports the usual 0V output.
  • With the PDA inputs of PR2 Sat Amp shorted, we again see 0V output.
  • So it seems like when a non-zero photodiode current flows, the circuit sees a reversal of gain polarity, and the gain is roughly half the correct magnitude.
  16627   Thu Jan 27 17:57:35 2022   Anchal   Summary   BHD   Part III of BHR upgrade - Replaced small suspended PR3 with new SOS PR3 and OSEM tuning

[Anchal, Paco]

We removed the old PR3, housed in a tip-tilt style suspension, and put it on the North flow bench in the cleanroom. I put PR3 in an accessible position near the North-West edge of the BS chamber and balanced the table again with many weights. The OSEM tuning was very uneventful and easy. Following are the full-brightness ADC counts for the OSEMs (arrows show the half-brightness values they were set to):
UL 25693 -> 12846
UR 24905 -> 12452
LL 23298 -> 11649
LR 24991 -> 12495
SD 26357 -> 13178

I was able to damp the optic easily after the OSEM installation with no issues.


Photos: https://photos.app.goo.gl/jcAqwFJoboeUuR7F9

  16626   Thu Jan 27 16:40:57 2022   Tega   Summary   Computer Scripts / Programs   SUS Plant Plan for New Optics

[Ian, Paco, Tega]

Last night we set up the four main matrices that handle the conversion between the degrees-of-freedom basis and the sensor basis. We also wrote a bash script to automatically set up the system. The script sets the four change-of-basis matrices and activates the filters that control the plant; this should fully set up the plant in its most basic form. The script also turns off all of the built-in noise generators.

After this, we tried damping the optic. The easiest part of the system to damp is the side (y) motion of the optic, because it is separate from the other degrees of freedom in both bases. We were able to damp that easily; in Attachment 1 you can see in the last graph of the ndscope screen that the side motion of the optic is damped. Today we decided to revisit the problem.

Anyway, looking at the problem with fresh eyes today, I noticed that the pit2pit coupling has the largest swing of all the plant filters and thought this might be the reason why the inputs (UL, UR, LR, LL) to the controller were hitting the rails for the PIT DoF. I reduced the gain of the pit2pit filter, then slowly increased it back to one. I also reduced the gain in the OSEM input filter from 1 to 1/100. The attached image (Attachment 2) is the output from this trial. This did not solve the problem. The output with all OSEM input filter gains set to one is also shown in Attachment 2.

We will continue to tweak the coefficients. We will probably ask Anchal and Paco to sit down with us and really hone in on the right coefficients; they have more experience and should be able to get the right values.

Attachment 1: simplant_control_1.png
Attachment 2: simplant_control_0.png
  16625   Thu Jan 27 08:27:34 2022   Paco   Summary   BHD   Part IX of BHR upgrade - AS1 inspection and correction

[Paco]

This morning, I went into the ITMY chamber to inspect AS1 after the free-swinging test failed. Indeed, as forecasted by Anchal, the top front EQ stop was slightly touching, which means AS1 was not properly installed before. I backed it off well beyond any chance of touching the optic, and did the same for all the other stops, most of which were already recessed significantly. The OSEM levels changed accordingly, producing a pitched optic (the top front EQ stop had been slightly biasing the pitch angle), so I redid the installation until the levels were around the 14000-count region. After damping AS1 relatively quickly, I closed the ITMY chamber.

Quote:

For some reason the free swing test showed only one resonance peak (see Attachment 1). This probably happened because one of the earthquake stops is touching the optic. Maybe after the table balancing, the table moved a little over its long relaxation time, and by the time the free swing test was performed at 3 am, one of the earthquake stops was touching the optic. We need to check this when we open the chamber next.

 

  16624   Tue Jan 25 18:37:12 2022   Yehonathan   Update   BHD   PR2 Suspension

PR2's side magnet height was adjusted and its roll was balanced (Attachments 1, 2). I verified that the OpLev beam is still aligned. The pitch was balanced: first using an iris for rough adjustment, then with the QPD. I locked the counterweight setscrew.

I turned off the HEPAs, damped PR2, and measured the QPD spectra (Attachment 3). Major peaks are at 690 mHz, 953 mHz, and 1.05 Hz. I screwed back the lower OSEM plate. The wires were clamped to the suspension block and were cut, and the winch adapter plate was removed. I wanted to push the OSEMs into the OSEM plates, but the wiki is down so I can't tell what the plan was. This will have to wait for tomorrow. Also, here, as with AS1, we need to apply glue to the counterweights.

Attachment 1: PR2_magnet_height.png
Attachment 2: PR2_roll_balance.png
Attachment 3: FreeSwingingSpectra_div_50mV.pdf
  16623   Tue Jan 25 16:42:03 2022   Anchal   Summary   BHD   Reduced filter gains in all damped new SOS

I noticed that our current suspension damping loops for the new SOS were railing the DAC outputs. The reason is that the cts2um filter module has not been updated for most optics, and thus the OSEM signal (with the new Sat Amps) is about 30 times stronger. That means our usual intuition for damping gains is too high unless the correct cts2um conversion filter module is applied. I reduced all these gains today and nothing is overflowing the c1su2 chassis now. I also added two options to the "!" (command-running drop-down menu) in the sus_single MEDM screens for opening an ndscope that monitors the coil outputs or OSEM inputs of the optic whose sus screen is in use.

 

  16622   Tue Jan 25 11:02:54 2022   Anchal   Summary   BHD   Part IX of BHR upgrade - AS1 free swing test failed

For some reason the free swing test showed only one resonance peak (see Attachment 1). This probably happened because one of the earthquake stops is touching the optic. Maybe after the table balancing, the table moved a little over its long relaxation time, and by the time the free swing test was performed at 3 am, one of the earthquake stops was touching the optic. We need to check this when we open the chamber next.

Attachment 1: AS1_freeSwingTestFailed.pdf
  16621   Tue Jan 25 10:55:02 2022   Anchal   Summary   BHD   Part VIII of BHR upgrade - LO2 input matrix diagonalization performed.

The free swinging test was successful. I ran the input matrix diagonalization code (/opt/rtcds/caltech/c1/Git/40m/scripts/SUS/InMAtCalc/sus_diagonalization.py) on the LO2 free-swinging data collected last night. The logfile and results are stored in /opt/rtcds/caltech/c1/Git/40m/scripts/SUS/InMAtCalc/LO2 directory. Attachment 1 shows the power spectral density of the DOF basis data (POS, PIT, YAW, SIDE) before and after the diagonalization. Attachment 2 shows the fitted peaks.


Free Swinging Resonances Peak Fits
DOF     Resonant Frequency [Hz]   Q      A
POS     0.981                     297    3967
PIT     0.677                     202    1465
YAW     0.775                     2434   1057
SIDE    1.001                     244    4304

LO2 New Input Matrix
        UL       UR       LR       LL       SIDE
POS     0.46     1.237    1.094    0.318    0.98
PIT     1.091    0.252    -1.512   -0.672   -0.088
YAW     0.722    -1.014   -0.217   1.519    0.314
SIDE    -0.747   1.523    1.737    -0.534   3.134

The new matrix was loaded into the LO2 input matrix, and at the least this resulted in no control loop oscillations. I'll compare the performance of the loops soon.
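
For the record, individual elements of an input matrix can also be written from the shell with the EPICS CLI tools (channel name assumed from the usual C1:SUS-<OPTIC>_INMATRIX_<row>_<col> convention; verify before use):

caput C1:SUS-LO2_INMATRIX_1_1 0.46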

  16620   Mon Jan 24 21:18:40 2022   Paco   Summary   BHD   Part IX of BHR upgrade - AS1 placed and OSEM tuned

[Paco]

AS1 was installed in the ITMY chamber today. For this, I moved AS4 to its nominal final placement and clamped it down with a single dog clamp. Then I placed AS1 near the center of the table and quickly checked that AS4 could still be damped. After this, I leveled the table using a heavier/lighter counterweight pair.

Once things were leveled, I proceeded to install the AS1 OSEMs. The LL, UL, and UR OSEMs had a bright level of 27000 counts, while SD and LR were at 29500 and 29900 respectively. After a while, I managed to damp all degrees of freedom with the OSEMs sitting around half brightness, and decided to stop.

UL 27000 -> 16000
UR 27000 -> 13800
LL 27000 -> 14600
LR 29900 -> 14900
SD 29500 -> 12900


Free swinging test set to trigger

AS1 is set to go through a free-swinging test at 3 am tonight. We have used this script (Git/40m/scripts/SUS/InMatCalc/freeSwing.py) reliably in the past, so we expect no issues; it has an error-catching block to restore all changes at the end of the test or if something goes wrong.

To access the test, on allegra, type:

tmux a -t AS1

Then you can kill the script if required by Ctrl-C; it will restore all changes while exiting.


SUSPENSION STATUS UPDATED HERE

  16619   Mon Jan 24 20:48:48 2022   Anchal   Update   BHD   AS4 Input Matrix Diagonalization performed.

I agree, that's an interesting idea. But does that mean that there is an always-working inverse matrix solution, or that any solution will be vulnerable to the alignment biases?

I think we can also calculate the matrix rotation required as a function of the DC biases and do that rotation in the Simulink model.

Quote:

I think our suspension input matrix diagonalization is usually not so robust because we only choose an inverting matrix which gives the best separation for a single suspension alignment.

i.e. we have seen in the past that adjusting the bias for the alignment makes the matrix inversion not work well. Sometimes people turn OFF the alignment bias before making the ringdown, and that makes the whole measurement invalid.

This is because the sensitivity of the OSEMs to longitudinal and/or transverse motion is significantly different for different alignments.

I wonder if there's a way we can choose a better matrix by putting random gain errors on the shadow sensor signals and then finding the matrix which gives the best diagonalization under an ensemble of gain errors.

 

  16618   Mon Jan 24 18:53:09 2022   Anchal   Summary   BHD   Part VIII of BHR upgrade - LO2 placed and OSEMs tuned

I placed LO2 in its planned position in the BS chamber, inserted the OSEMs, and tuned their positions to half brightness. At the end of the work, I was able to damp the optic successfully. The full-open (full-brightness) OSEM ADC counts, with the half-brightness values they were set to, are:

UL 25743 -> 12876
UR 27384 -> 13692
LL 25550 -> 12775
LR 27395 -> 13697
SD 28947 -> 14473

Today's OSEM tuning was relatively uneventful. I have only the following two remarks:

  • The BS table has 3 SOS near the East end and PRM parked in the center, so the table is very unevenly balanced. I had to use all available counterweights to make it flat near the LO2 suspension.
  • The side OSEM for LO2 is not exactly centered (probably due to table imbalance). I was able to balance the table to a point, though, where the side OSEM was responsive over its full range and I was able to damp the optic.

Free swinging test set to trigger

LO2 is set to go through a free-swinging test at 10 pm tonight. We have used this script (Git/40m/scripts/SUS/InMatCalc/freeSwing.py) reliably in the past, so we expect no issues; it has an error-catching block to restore all changes at the end of the test or if something goes wrong.

To access the test, on allegra, type:

tmux a -t LO2

Then you can kill the script if required by Ctrl-C; it will restore all changes while exiting.


Photos: https://photos.app.goo.gl/Ff3yGBprj9xgPbnLA

SUSPENSION STATUS UPDATED HERE

  16617   Mon Jan 24 17:58:21 2022   Yehonathan   Update   BHD   PR2 Suspension

I picked up the PR2 mirrors (labeled M1, M2) from Anchal's table and took them to the cleanroom. On inspection, I spotted some dust particles on M1. I wasn't able to remove them with clean air, so I decided to use M2, which looked much cleaner. I wasn't able to discern any wedge angle on the optic. I inserted the optic into a thin-optic adapter. The optic is thicker than I expected, so I used long screws for the mirror clamping. I expect that the pitch balance will shift towards the front of the mirror, so I assembled only 1 counterweight for now. The side blocks with wires in them were installed.

I engraved the SOS and installed the winches on it. Paco came in and helped me hang the optic. Looking at the wire hanging angle, I realized that 1 counterweight at the front is not enough. I installed a second counterweight at the back and observed that I can cross the balancing point.

I locked the EQ stops. Suspension work continues tomorrow...

  16616   Mon Jan 24 17:12:27 2022   rana   Update   BHD   AS4 Input Matrix Diagonalization performed.

I think our suspension input matrix diagonalization is usually not so robust because we only choose an inverting matrix which gives the best separation for a single suspension alignment.

i.e. we have seen in the past that adjusting the bias for the alignment makes the matrix inversion not work well. Sometimes people turn OFF the alignment bias before making the ringdown, and that makes the whole measurement invalid.

This is because the sensitivity of the OSEMs to longitudinal and/or transverse motion is significantly different for different alignments.

I wonder if there's a way we can choose a better matrix by putting random gain errors on the shadow sensor signals and then finding the matrix which gives the best diagonalization under an ensemble of gain errors.

  16615   Mon Jan 24 17:10:25 2022   Tega   Summary   Computer Scripts / Programs   SUS Plant Plan for New Optics

[Ian, Tega]

Connected the new SUS screens to the controller for the simplant model. Because of hard-coded paths in the MEDM screen links, it was necessary to create the following path on the c1sim computer, where the new MEDM screen files are located:

/opt/rtcds/userapps/trunk/sus/c1/medm/templates/NEW_SUS_SCREENS

 

We noticed a few problems:

1. Some of the MEDM files still had C1 hard-coded, so we need to replace that with $IFO for the custom damping filter screen to be useful.

2. The "Load coefficient" button was initially blank on the new sus screen, but we were able to figure out that the problem came from setting the top-level DCU_ID to 63.

medm -x -macro "IFO=X1,OPTIC=OPT_CTRL,DCU_ID=63" SUS_SINGLE_OVERVIEW.adl

 

[TODO]

Get the data showing the controller damping the pendulum. This will involve tweaking some gains and such to fine-tune the settings in the controller medm screen. Then we will be able to post some data of the working controller.

 

[Useful aside]

We should have a single place with all the instructions that are currently spread over multiple elogs so that we can better navigate the simplant computer.

Attachment 1: Screen_Shot_2022-01-24_at_5.33.15_PM.png
  16614   Mon Jan 24 12:33:41 2022   rana   Configuration   Wiki   AIC Wiki: txz files allowed

I updated the mime.local.conf file for the AIC Wiki so as to allow attachments with the .txz format. This should be persistent over upgrades, since it's a local file.
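
The added rule is presumably a MIME-type mapping of roughly this form (hypothetical; the exact type string and syntax depend on the wiki's config format):

application/x-xz-compressed-tar  txz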

  16613   Fri Jan 21 16:40:10 2022   Anchal   Update   BHD   AS4 Input Matrix Diagonalization performed.

The free swinging test was successful. I ran the input matrix diagonalization code (/opt/rtcds/caltech/c1/Git/40m/scripts/SUS/InMAtCalc/sus_diagonalization.py) on the AS4 free-swinging data collected last night. The logfile and results are stored in /opt/rtcds/caltech/c1/Git/40m/scripts/SUS/InMAtCalc/AS4 directory. Attachment 1 shows the power spectral density of the DOF basis data (POS, PIT, YAW, SIDE) before and after the diagonalization. Attachment 2 shows the fitted peaks.


Free Swinging Resonances Peak Fits
DOF     Resonant Frequency [Hz]   Q      A
POS     1.025                     337    3647
PIT     0.792                     112    3637
YAW     0.682                     227    1228
SIDE    0.993                     164    3094

AS4 New Input Matrix
        UL       UR       LR       LL       SIDE
POS     0.844    0.707    0.115    0.253    -1.646
PIT     0.122    0.262    -1.319   -1.459   0.214
YAW     1.087    -0.901   -0.874   1.114    0.016
SIDE    0.622    0.934    0.357    0.045    3.822

The new matrix was loaded into the AS4 input matrix, and at the least this resulted in no control loop oscillations. I'll compare the performance of the loops soon.

 

Attachment 1: AS4_SUS_InpMat_Diagnolization_20220121.pdf
Attachment 2: AS4_FreeSwingData_PeakFitting_20220121.pdf
  16612   Fri Jan 21 14:51:00 2022   Koji   Update   BHD   V6-704/705 Mirror now @Downs

Camille at Downs measured the surfaces of these M1 and M2 mirrors using the Zygo.

Result of the ROC measurements:
M1: ROC = 2076 m (convex)
M2: ROC = 2118 m (convex)
Here are screenshots. One file shows the entire surface and the other shows the central 30mm.
Attachment 1: M1.PNG
Attachment 2: M1_30mm.PNG
Attachment 3: M2.PNG
Attachment 4: M2_30mm.PNG
  16611   Fri Jan 21 12:46:31 2022   Tega   Update   CDS   SUS screen debugging

All done (almost)! I still have not sorted out the issue of the pitch and yaw gains growing together when modified with a ramping time. An image of the custom ADC and DAC panel is attached.

 

Quote:

Seen. Thanks.

 
Quote:

Indicated by the red arrow:
Even when the side damping servo is off, the number appears at the input of the output matrix

Indicated by the green arrows:
The face magnets and the side magnets use different ADCs. How about opening a custom ADC panel that accommodates all ADCs at once? Same for the DAC.

Indicated by the blue arrows:
This button opens a custom FM window. When the pitch gain was modified with a ramping time, the pitch and yaw gains grew at the same time even though only the pitch gain was modified.

Indicated by the orange circle:
The numbers are not indicated here, but they are input-related numbers (for watchdogging) rather than output-related numbers. It is confusing to place them here.

 

 

Attachment 1: Custom_ADC_DAC_monitors.png
  16610   Fri Jan 21 11:24:42 2022   Anchal   Summary   BHD   SR2 Input Matrix Diagonalization performed.

The free swinging test was successful. I ran the input matrix diagonalization code (/opt/rtcds/caltech/c1/Git/40m/scripts/SUS/InMAtCalc/sus_diagonalization.py) on the SR2 free-swinging data collected last night. The logfile and results are stored in the /opt/rtcds/caltech/c1/Git/40m/scripts/SUS/InMAtCalc/SR2 directory. Attachment 1 shows the power spectral density of the DOF-basis data (POS, PIT, YAW, SIDE) before and after the diagonalization. Attachment 2 shows the fitted peaks.


Free Swinging Resonances Peak Fits
DOF     Resonant Frequency [Hz]   Q      A
POS     0.982                     340    3584
PIT     0.727                     186    1522
YAW     0.798                     252    912
SIDE    1.005                     134    3365

SR2 New Input Matrix
        UL       UR       LR       LL       SIDE
POS     1.09     0.914    0.622    0.798    -0.977
PIT     1.249    0.143    -1.465   -0.359   0.378
YAW     0.552    -1.314   -0.605   1.261    0.118
SIDE    0.72     0.403    0.217    0.534    3.871

The new matrix was loaded into the SR2 input matrix, and at the least this resulted in no control loop oscillations. I'll compare the performance of the loops soon.

 

Attachment 1: SR2_SUS_InpMat_Diagnolization_20220121.pdf
Attachment 2: SR2_FreeSwingData_PeakFitting_20220121.pdf
  16609   Thu Jan 20 18:41:55 2022   Anchal   Summary   BHD   SR2 set to trigger free swing test

SR2 is set to go through a free-swinging test at 3 am tonight. We have used this script (Git/40m/scripts/SUS/InMatCalc/freeSwing.py) reliably in the past, so we expect no issues; it has an error-catching block to restore all changes at the end of the test or if something goes wrong.

To access the test, on allegra, type:

tmux a -t SR2

Then you can kill the script if required by Ctrl-C; it will restore all changes while exiting.

 
