For some reason, the free swing test showed only one resonance peak (see attachment 1). This probably happened because one of the earthquake stops is touching the optic. Maybe after the table balancing, the table moved a little over its long relaxation time, and by the time the free swing test was performed at 3 am, one of the earthquake stops was touching the optic. We need to check this when we open the chamber next.
I noticed that our current suspension damping loops for the new SOS were railing the DAC outputs. The reason is that the cts2um module has not been updated for most optics, so the OSEM signal (with the new Sat Amps) is about 30 times stronger. That means our usual intuition for damping gains is too high unless the correct cts2um conversion filter module is applied. I reduced all these gains today and nothing is overflowing the c1su2 chassis now. I also added two options in the "!" (command-running drop down menu) in the sus_single medm screens for opening an ndscope to monitor the coil outputs or OSEM inputs of the optic whose sus screen is being used.
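For reference, this kind of bulk gain change is easy to script. Below is a minimal sketch (not the actual commands I ran), assuming pyepics and the usual C1:SUS-<OPTIC>_SUS<DOF>_GAIN channel naming:

# Hypothetical sketch: scale down SUS damping gains by the ~30x factor
# that the new Sat Amp signals introduced. Channel naming is an assumption.
from epics import caget, caput

optics = ['LO1', 'LO2', 'AS1', 'AS4', 'SR2', 'PR2', 'PR3']  # new SOS
dofs = ['POS', 'PIT', 'YAW', 'SIDE']
scale = 1.0 / 30.0  # new OSEM signals are roughly 30x stronger

for optic in optics:
    for dof in dofs:
        ch = f'C1:SUS-{optic}_SUS{dof}_GAIN'
        old = caget(ch)
        if old is not None:
            caput(ch, old * scale)
            print(f'{ch}: {old:.3g} -> {old * scale:.3g}')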
We removed the old PR3, housed in a tip-tilt style suspension, and put it on the North flow bench in the cleanroom. I put PR3 in an accessible position near the northwest edge of the BS chamber and balanced the table again with many weights. The OSEM tuning was very uneventful and easy. Following are the full-brightness ADC counts for the OSEMs (the arrow shows the half-light value):
UL 25693 -> 12846
UR 24905 -> 12452
LL 23298 -> 11649
LR 24991 -> 12495
SD 26357 -> 13178
I was able to damp the optic easily after the OSEM installation with no issues.
AS1, PR2 and PR3 are set to go through a free swinging test at 3 am. We have used this script (Git/40m/scripts/SUS/InMatCalc/freeSwing.py) reliably in the past, so we expect no issues; it has an error catching block to restore all changes at the end of the test or if something goes wrong.
To access the test, on allegra, type:
You can then kill the script if required with Ctrl-C; it will restore all changes while exiting.
The free swinging test was successful. I ran the input matrix diagonalization code (/opt/rtcds/caltech/c1/Git/40m/scripts/SUS/InMatCalc/sus_diagonalization.py) on the PR2 free-swinging data collected last night. The logfile and results are stored in the /opt/rtcds/caltech/c1/Git/40m/scripts/SUS/InMatCalc/PR2 directory. Attachment 1 shows the power spectral density of the DOF basis data (POS, PIT, YAW, SIDE) before and after the diagonalization. Attachment 2 shows the fitted peaks. I chose not to type out the matrix values here; one can find them in the repo links above.
The free swinging test was successful. I ran the input matrix diagonalization code (/opt/rtcds/caltech/c1/Git/40m/scripts/SUS/InMatCalc/sus_diagonalization.py) on the PR3 free-swinging data collected last night. The logfile and results are stored in the /opt/rtcds/caltech/c1/Git/40m/scripts/SUS/InMatCalc/PR3 directory. Attachment 1 shows the power spectral density of the DOF basis data (POS, PIT, YAW, SIDE) before and after the diagonalization. Attachment 2 shows the fitted peaks. I chose not to type out the matrix values here; one can find them in the repo links above.
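For anyone reproducing the peak fits, the basic idea is to fit a resonance model to each peak in the PSD. Here is a minimal sketch assuming a simple Lorentzian model (the actual sus_diagonalization.py may fit something different):

# Sketch only: fit one resonance peak in a free-swing PSD.
import numpy as np
from scipy.optimize import curve_fit

def lorentzian(f, f0, gamma, a, c):
    """Peak of height ~a at f0 with half-width gamma, on a flat offset c."""
    return a * gamma**2 / ((f - f0)**2 + gamma**2) + c

def fit_peak(freq, psd, f_guess, half_width=0.1):
    """Fit a single peak near f_guess [Hz]; freq and psd are numpy arrays."""
    sel = (freq > f_guess - half_width) & (freq < f_guess + half_width)
    p0 = [f_guess, 0.01, psd[sel].max(), psd[sel].min()]
    popt, _ = curve_fit(lorentzian, freq[sel], psd[sel], p0=p0)
    return popt  # best-fit [f0, gamma, a, c]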
We still did not get a good AS1 free swinging spectrum. It seems the Side OSEM is reporting coupling to too many DOFs, and there are some extra resonances that we do not expect. So we did not upload the new input matrix calculated from this test. I'm attaching the peak fitting (attachment 1) and the diagonalization attempt (attachment 2) to give some idea of what happened. Note how different this free swinging spectrum looks from the other optics. Given that Yehonathan also felt that something might be off with this optic, we need to reevaluate whether the suspension has a wire not seated in its groove, whether it is imbalanced, or whether something else is still touching it.
Please edit this same entry throughout the day for the shutdown elogging.
I took a screenshot of C0VAC_MONITOR.adl to ensure that all pneumatic valves are in closed positions:
The status message says "All pneumatic valves closed" and the latest error message is about "V7 closed, N2 < 6.50e+01".
I found out that there was no autoburt happening for the c1vac channels. I created an autoBurt.req file for the vac.db file and saved one snapshot. I also added the path of this file in autoburt/.requestfilelist . Let's see if autoburting starts happening for this file as well.
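Since a .req file is essentially just a list of PV names, generating it from the db file is easy to script. A hedged sketch (the paths below are my assumptions; adjust to the real c1vac target directory):

# Sketch: build an autoBurt.req by pulling every record name out of vac.db.
import re

db_file = '/cvs/cds/caltech/target/c1vac/vac.db'         # assumed path
req_file = '/cvs/cds/caltech/target/c1vac/autoBurt.req'  # assumed path

pv_pattern = re.compile(r'record\s*\(\s*\w+\s*,\s*"([^"]+)"\s*\)')
with open(db_file) as f:
    pvs = pv_pattern.findall(f.read())
with open(req_file, 'w') as f:
    f.write('\n'.join(pvs) + '\n')
print(f'Wrote {len(pvs)} channels to {req_file}')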
With this, I think we can safely shut down the acromag chassis. Hopefully, the relays are configured such that the valves are nominally closed in the absence of a control signal. After the chassis is shut down, we can shut down C1VAC by:
At the 1x8 rack, the following were switched off on their respective front panels:
PTP2 & PTP3 Controller
MKS Gauge controller
PRP Gauge Controller
G2P316a & b Controllers
Serial Device Server
Powered off from back of unit:
TP2 and 3 controllers were unplugged from respective power strips (labeled C2 and C3)
C1vac and the laptop at the workstation were shut down
Manual Gate valve was closed
Bringing back CDS took a lot of work yesterday. I'm gonna try to summarize the main points here.
For some reason, fb1 was not able to mount the mx devices automatically on system boot. This was an issue I faced earlier on fb1 (clone) too. The fix to this problem is to run the script:
To make this persistent, I've configured a daemon (/etc/systemd/system/mx_start_stop.service) on fb1 to run once on system boot and mount the mx devices as mentioned above. We did not see this issue on later reboots yesterday.
Next was the issue of the gpstime module being out of date on fb1. This issue is also known from the past and requires us to do the following:
controls@fb1:~ 0$ sudo modprobe -r gpstime
controls@fb1:~ 1$ sudo modprobe gpstime
Again, to make this persistent, I've configured a daemon (/etc/systemd/system/re-add-gpstime.service) in fb1 to run the above commands once on system boot. This corrected gpstime automatically and we did not face these problems again.
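For the record, the unit file presumably looks something along these lines (a sketch based on the commands above, not a copy of the actual file):

[Unit]
Description=Reload gpstime kernel module once at boot
After=network.target

[Service]
Type=oneshot
# the leading '-' ignores failure, in case gpstime was not loaded yet
ExecStart=-/sbin/modprobe -r gpstime
ExecStart=/sbin/modprobe gpstime
RemainAfterExit=yes

[Install]
WantedBy=multi-user.target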
Later we found that ntp time synchronization between fb1 and the FE computers was not working, and the main reason was that fb1 was unable to access the internet. As a rule of thumb, it is always a good idea to try pinging www.google.com on fb1 to ensure that it is connected to the internet. The issue had to do with fb1 not being able to reach any name server. We fixed this issue by reloading the bind9 service on chiara a couple of times. We're not really sure why it wasn't working.
After the above, we saw that the fb1 ntp server was working fine. You see the following output on fb1 when that is the case:
On the FE machines, timedatectl should show that the "NTP synchronized" field is yes. That wasn't happening even after we restarted the systemd-timesyncd service. I then just restarted all the FE computers and it started working.
We had removed all the DB9 enabling plugs on the new SOSs beforehand to keep the coils off, just in case CDS did not come back online properly.
Everything in CDS loaded properly except the c1oaf model, which kept showing a 0x2bad status. This means that some IPC flags are red on c1sus, c1mcs and c1lsc as well, but everything else is green. See attachment 1. I then burtrestored everything in the /opt/rtcds/caltech/c1/burt/autoburt/snapshots/2022/Feb/4/12:19 directory. This includes the snapshot of c1vac that I added to autoburt that day. All burt restore statuses were green OK. I think we are in a good state now to start the watchdogs on the new SOSs and put back the DB9 enabling plugs.
When somebody gets time, we should make the custom service files in fb1:/etc/systemd/system/ symbolic links to a repo directory and version control these important services. We should also make sure that their dependencies and startup order are correctly configured. I might have done a half-assed job there since I only recently learned how to make unit files. We should do the same on nodus and chiara too. Our hope is that one glorious day the lab can be restarted without spending more than 20 min on booting up the computers and network.
I found out that the ESP300 service needs to run as root for it to be able to connect to the USB port of the HWP motor controller. While making this change, I noticed that the channels hosted by c1psl might have a duplication conflict with some other channel-hosting computer, because a lot of them show the warning "Identical process variable names on multiple servers", which is not good. Someone should look into this conflict.
I added instructions on the power control MEDM screen as it was very non-trivial to use. I have set the power such that C1:IOO-MC_RFPD_DCMON is 5.6, which occurred at C1:IOO-HWP_POS_SET = 2.29.
Something is wrong with the Video MUX. The system did not turn back on with full functionality. Even though we see the screens as they were before the power shutdown, we have lost the ability to switch any of the videos. I went to check the wiki page about the Video MUX, which told me we should be able to see the configuration screen at this link, but the page wasn't opening. I went and removed the power cable and put it back in. That brought back the configuration page. Still, I could not change any of the video feeds; however, this time I could see the EPICS channel values (like C1:VID-QUAD1_4) change. I tried to go to the configuration page and change the matrix values from the control tab there. I found out that the matrix was mislabeled, and while making the changes, I started seeing a blue screen on QUAD1_3 (where MC2T was set before). I set QUAD1_3 (output 23) to MC2T (input 16), but no change. The EPICS values are also set properly, so I don't understand the reason behind the blue screen. The same happened when I tried to use:
Weirdly, this caused the QUAD1_4 screen to go blue. Running the following had no effect:
So, I'm not sure what to do. This really needs to be fixed! I wanted to see the MC2F camera so that I could align the IMC; that was the whole reason for this rabbit hole. Help needed.
Yeah, this is a known issue actually. We go to the ASC screen and manually switch off all the outputs after every reboot. We haven't been able to find a way to set defaults so that when the model comes online, these outputs remain switched off. We should find a way to do this.
I found that two computers in the control room are not powering up: Ottavia and Allegra. Allegra was important for us as it had the current version of the LIGO CDS workstation installed on it, providing us with the option to use the latest packages written by the LIGO CDS team. I think the power issue should be resolvable if someone opens it up and knows what they are doing. Do we have any way of getting fuse repairs on such computers? Both of these computers are Dell XPS 420.
We increased the input power to the IMC by replacing the 98% transmission BS with a 10% transmission BS on the detection table (the reverse of what is mentioned in 40m/16408; see attachments 8-9). We then realigned the BS so that the MC RFPD is centered. Then we realigned two steering mirrors to get the beam centered on the WFS1 and WFS2 QPDs. Then we increased the power of the input beam to get a reading of 5.307 on the C1:IOO-MC_RFPD_DCMON channel. We did this so that we can align the IMC. Once we have it aligned, we'll go back to low power for doing chamber work.
Beware: there is about a 1 W beam on the detection table right now.
The autoBurt file for the FE already has C1:ASC-ETMX_PIT_SW2 (and the corresponding channels for ETMY, ITMX, ITMY, BS, and for YAW) present, and I checked the last snapshot file from Feb 7th, 2022, which has 0 for these channels. So I'm not sure why, when the FE boots up, it does not follow the switch configuration. For safety, I changed all the gains of these filter modules, named like C1:ASC-XXXX_YYY_GAIN (where XXXX is ETMX, ETMY, ITMX, ITMY, or BS, and YYY is PIT or YAW), to 0.0. Now, even if the FE loads with the switches in the ON configuration, nothing should happen. In the future, if we use this model for anything, we can change the gain values back; this won't be hard to track down as the reason why no signal moves forward. Note: the BS connections from this model to the BS suspension model do not work.
You can hand edit the autoBurt file which the FE uses to set the values after boot up. Just make a python script that amends all of the OFFs or ZEROs that are needed to make things safe (see the sketch below). This would be the autoBurt snap used on boot up only, not the hourly snaps.
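A minimal version of such a script could look like this, assuming the usual burt snap line format of "<channel> <count> <value>" after the header:

# Sketch: force chosen channels to zero in a boot-time autoBurt snap.
# Verify the snap line format before trusting this on a real file.
import sys

UNSAFE_SUFFIXES = ('_SW2', '_GAIN')  # example channel suffixes to zero

def make_safe(snap_in, snap_out):
    with open(snap_in) as fin, open(snap_out, 'w') as fout:
        for line in fin:
            parts = line.split()
            if len(parts) >= 3 and parts[0].endswith(UNSAFE_SUFFIXES):
                parts[2] = '0.0'
                line = ' '.join(parts) + '\n'
            fout.write(line)

if __name__ == '__main__':
    make_safe(sys.argv[1], sys.argv[2])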
I reconfigured the MC reflection path for low power. This meant the following changes:
Note: even the pick-off power for WFS1 and WFS2 is too low, I think. The IOO WFS alignment does not work properly at such low light levels. I tried running the WFS loop for the IMC and it just took the cavity out of lock. So for the low power scenario, we will keep the WFS loops OFF.
As discussed in the meeting, I removed the extra beam splitter that dumps most of the beam going towards WFS photodiodes. This beam splitter needs to be placed back in position before increasing the input power to IMC at nominal level. This is to get sufficient light on the WFS photodiodes so that we can keep IMC locked for more than 3 days. Currently IMC is unlocked and misaligned. I have marked the position of this beam splitter on the table, so putting it back in should be easy. Right now, I'm trying to align the mode cleaner back and start the WFS loops once we get it locked.
I found a peculiar issue today. C1:IOO-MC_RFPD_DCMON remains constantly 0. I wondered if the RFPD output was being read properly. I opened the table and used an oscilloscope to confirm that the DC output from the MC REFL photodiode is coming out consistently, but our EPICS channel is not reading it. I tried restarting the modbusIOC service but that did not affect anything. I power cycled the acromag chassis while keeping the modbusIOC service off, and then restarted the modbusIOC service. After this, more channels got stuck and became unresponsive, including the PMC channels. So then I rebooted c1psl without doing anything to the acromag chassis, and finally things came back online. Everything looks normal to me now, but I'm not sure if one of the many channels is in a wrong state. Anyway, the problem is solved now.
I think I have aligned the cavity, including MC1, such that we are seeing flashing of the fundamental mode and a significant transmission sum value as well. However, I'm unable to catch lock following Koji's method in 40m/16673. The autolocker could not catch lock either. Maybe I am doing something wrong. I'll pick up again tomorrow; hopefully the cavity won't drift too much in the meantime.
c1teststand has been restructured. There is no longer a computer called 'c1teststand'. When you ssh into the c1teststand network using ssh c1teststand from inside martian, or from an outside network using the method mentioned in this wiki page, you land on the chiara (clone) computer and can navigate to any teststand network computer from there.
I'll be repurposing the 1U c1teststand computer into the new c1susaux2 slow machine now. All files from the home directory and from the /etc directory of the former c1teststand have been zipped and stored in /home/controls of chiara (clone). Just as an aside, the network configuration of the teststand can be done from inside the teststand network by opening a browser on either fb1 (clone) or chiara (clone) and going to the address 10.0.1.1. The login and password are the same as our usual workstation username and password.
I took the c1teststand computer from the teststand and converted it into c1susaux2. To do so, I installed a fresh copy of Debian 10 on it and followed the steps on this wiki page. I did some parts slightly differently though. The directory /cvs/cds/caltech/c1susaux2 is a repository and contains the service unit file modbusIOC.service as well. A symbolic link is created at /etc/systemd/system to use this service file for creating the modbusIOC service. All db files are generated by parsing the acromag chassis wiring file using this python script.
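The generation step is conceptually like the sketch below; the real wiring-file format and the modbus field values are not reproduced here, so treat all names in it as placeholders:

# Illustrative only: turn a wiring list into EPICS db record stanzas.
import csv

template = (
    'record(bi, "{pv}")\n'
    '{{\n'
    '    # DTYP/INP fields for the modbus/acromag driver go here; see\n'
    '    # the actual generator script in the repo for the real values\n'
    '}}\n\n'
)

with open('wiring.csv') as f, open('c1susaux2.db', 'w') as db:
    for row in csv.DictReader(f):   # assumes the file has a 'pv' column
        db.write(template.format(pv=row['pv']))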
The service file is running without any errors now and all channels are available. The leftmost bench in the EE shop at the 40m is now ready for LO1 slow controls and monitor testing. If someone gets time today, they can hook up an unused coil driver to the chassis and verify ENABLE switching and monitoring through the optical isolators. We can also drive some voltage on the PD monitors and verify the functioning of our ADCs. Once this test passes, it is straightforward to finish the remaining 6 SOS wiring and we would be good to install the chassis.
Attaching the wiring diagram of the c1susaux2 acromag chassis. Any comments or modification suggestions should come soon, as we'll go ahead and wire it shortly.
Note: While accessing channels using caget on c1susaux2, you might get a warning: "Identical process variable names on multiple servers". You can safely ignore it. It just means that the channel is accessible on that particular computer via two different network interfaces (martian network eno1 and the acromag subnetwork eno2) and it will just pick one of them.
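For example, reading one of these channels with pyepics (the channel name is just an example of the format):

from epics import caget
# the duplicate-server warning goes to stderr; the read itself is fine
print(caget('C1:SUS-LO1_UR_ENABLEMon'))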
I tried to perform a simple enabling test of the coils using the c1susaux2 modbus channels but failed. I'm able to enable the coils using the Windows GUI of the acromag card, but I cannot do it when the cards are connected to the computer subnetwork. The issue is two-fold:
There's also an issue in reading back the ENABLE_MON channels. Here we suspect that one of the optical isolator boxes that we have been using might have a short in one of its output channels. I'll investigate this more tomorrow. Again, the issue is two-fold: the EPICS channel values do not really change, so there is clearly some issue communicating with the acromag cards.
[Anchal, Yehonathan, Ian]
We installed the c1susaux2 acromag chassis in 1Y0 with the c1susaux2 computer. We connected the PD monitors, binary inputs, binary outputs, and Run/Acquire RTS signals for 6 of the 7 suspensions. We ran out of DB9 cables to connect PR3. Of the ones that were connected, LO2, AS1, AS4, SR2, and PR2 are showing no issues in the functionality from the chassis. For LO1, everything is working except the UR EnableMon channel. The enable monitor does not show an ON state for the coil even though the coil driver chassis shows via its LED lights that it is ON. A possible reason could be that a wire got disconnected when we closed the chassis (there are a lot of wires pushing against each other). Another reason could be that the optical isolator ISO10 has developed a bad channel on channel 2. The circuit was tested before closing the chassis, so we are not sure what went wrong after closing it.
PR2 is showing a non-acromag chassis related issue. As soon as we close the loop by enabling the coils, the watchdog triggers because the loop is unstable. Not sure what has changed for PR2, but someone should take a look at it.
For the issue with LO1, I suggest we keep a note that the C1:SUS-LO1_UR_ENABLEMon channel is faulty and don't take its value seriously. We should diagnose and fix this issue once we have more reasons to disconnect the chassis and open it.
I routed the XXX_COIL_DW signals from the 7 SOS blocks in c1su2.mdl (located at /cvs/cds/rtcds/userapps/trunk/sus/c1/models/c1su2.mdl) to the binary outputs from the FE model. The routing is done such that when these binary outputs are routed through the binary interface card mounted on 1Y0, they go to the acromag chassis just installed and from there they go to the binary inputs of the coil drivers together with the acromag controlled coil outputs.
I have not restarted the rtcds models yet. This needs more care and we need to follow the instructions from 40m/16533. I will do that sometime later, or Koji can follow up on this work.
I have restarted the c1su2 model with the connections of the Run/Acquire switch to the analog filters on the coil drivers. The following steps were taken:
First ssh to c1sus2 and then:
controls@c1sus2:~ 0$ rtcds make c1su2
### building c1su2...
Parsing the model c1su2...
Building EPICS sequencers...
Building front-end Linux kernel module c1su2...
RCG source code directory:
The following files were used for this build:
Successfully compiled c1su2
Compile Warnings, found in c1su2_warnings.log:
WARNING *********** No connection to subsystem output named SUS_DAC1_12
WARNING *********** No connection to subsystem output named SUS_DAC1_13
WARNING *********** No connection to subsystem output named SUS_DAC1_14
WARNING *********** No connection to subsystem output named SUS_DAC1_15
WARNING *********** No connection to subsystem output named SUS_DAC2_7
WARNING *********** No connection to subsystem output named SUS_DAC2_8
WARNING *********** No connection to subsystem output named SUS_DAC2_9
WARNING *********** No connection to subsystem output named SUS_DAC2_10
WARNING *********** No connection to subsystem output named SUS_DAC2_11
WARNING *********** No connection to subsystem output named SUS_DAC2_12
WARNING *********** No connection to subsystem output named SUS_DAC2_13
WARNING *********** No connection to subsystem output named SUS_DAC2_14
WARNING *********** No connection to subsystem output named SUS_DAC2_15
controls@c1sus2:~ 0$ rtcds install c1su2
### installing c1su2...
Installing system=c1su2 site=caltech ifo=C1,c1
Installing start and stop scripts
Updating testpoint.par config file
/opt/rtcds/rtscore/branches/branch-3.4/src/epics/util/updateTestpointPar.pl -par_file=/opt/rtcds/caltech/c1/target/gds/param/archive/testpoint_220315_135808.par -gds_node=26 -site_letter=C -system=c1su2 -host=c1sus2
Installing GDS node 26 configuration file
Installing auto-generated DAQ configuration file
Installing Epics MEDM screens
Running post-build script
/opt/rtcds/userapps/release/cds/common/scripts/generate_KisselButton.py 4 5 C1:SUS-AS1_INMATRIX > /opt/rtcds/caltech/c1/medm/c1su2/C1SUS_AS1_INMATRIX_KB.adl
/opt/rtcds/userapps/release/cds/common/scripts/generate_KisselButton.py 2 4 C1:SUS-AS1_LOCKIN_INMTRX > /opt/rtcds/caltech/c1/medm/c1su2/C1SUS_AS1_LOCKIN_INMTRX_KB.adl
/opt/rtcds/userapps/release/cds/common/scripts/generate_KisselButton.py 5 6 C1:SUS-AS1_TO_COIL --fi > /opt/rtcds/caltech/c1/medm/c1su2/C1SUS_AS1_TO_COIL_KB.adl
/opt/rtcds/userapps/release/cds/common/scripts/generate_KisselButton.py 4 5 C1:SUS-AS4_INMATRIX > /opt/rtcds/caltech/c1/medm/c1su2/C1SUS_AS4_INMATRIX_KB.adl
/opt/rtcds/userapps/release/cds/common/scripts/generate_KisselButton.py 2 4 C1:SUS-AS4_LOCKIN_INMTRX > /opt/rtcds/caltech/c1/medm/c1su2/C1SUS_AS4_LOCKIN_INMTRX_KB.adl
/opt/rtcds/userapps/release/cds/common/scripts/generate_KisselButton.py 5 6 C1:SUS-AS4_TO_COIL --fi > /opt/rtcds/caltech/c1/medm/c1su2/C1SUS_AS4_TO_COIL_KB.adl
/opt/rtcds/userapps/release/cds/common/scripts/generate_KisselButton.py 4 5 C1:SUS-LO1_INMATRIX > /opt/rtcds/caltech/c1/medm/c1su2/C1SUS_LO1_INMATRIX_KB.adl
/opt/rtcds/userapps/release/cds/common/scripts/generate_KisselButton.py 2 4 C1:SUS-LO1_LOCKIN_INMTRX > /opt/rtcds/caltech/c1/medm/c1su2/C1SUS_LO1_LOCKIN_INMTRX_KB.adl
/opt/rtcds/userapps/release/cds/common/scripts/generate_KisselButton.py 5 6 C1:SUS-LO1_TO_COIL --fi > /opt/rtcds/caltech/c1/medm/c1su2/C1SUS_LO1_TO_COIL_KB.adl
/opt/rtcds/userapps/release/cds/common/scripts/generate_KisselButton.py 4 5 C1:SUS-LO2_INMATRIX > /opt/rtcds/caltech/c1/medm/c1su2/C1SUS_LO2_INMATRIX_KB.adl
/opt/rtcds/userapps/release/cds/common/scripts/generate_KisselButton.py 2 4 C1:SUS-LO2_LOCKIN_INMTRX > /opt/rtcds/caltech/c1/medm/c1su2/C1SUS_LO2_LOCKIN_INMTRX_KB.adl
/opt/rtcds/userapps/release/cds/common/scripts/generate_KisselButton.py 5 6 C1:SUS-LO2_TO_COIL --fi > /opt/rtcds/caltech/c1/medm/c1su2/C1SUS_LO2_TO_COIL_KB.adl
/opt/rtcds/userapps/release/cds/common/scripts/generate_KisselButton.py 4 5 C1:SUS-PR2_INMATRIX > /opt/rtcds/caltech/c1/medm/c1su2/C1SUS_PR2_INMATRIX_KB.adl
/opt/rtcds/userapps/release/cds/common/scripts/generate_KisselButton.py 2 4 C1:SUS-PR2_LOCKIN_INMTRX > /opt/rtcds/caltech/c1/medm/c1su2/C1SUS_PR2_LOCKIN_INMTRX_KB.adl
/opt/rtcds/userapps/release/cds/common/scripts/generate_KisselButton.py 5 6 C1:SUS-PR2_TO_COIL --fi > /opt/rtcds/caltech/c1/medm/c1su2/C1SUS_PR2_TO_COIL_KB.adl
/opt/rtcds/userapps/release/cds/common/scripts/generate_KisselButton.py 4 5 C1:SUS-PR3_INMATRIX > /opt/rtcds/caltech/c1/medm/c1su2/C1SUS_PR3_INMATRIX_KB.adl
/opt/rtcds/userapps/release/cds/common/scripts/generate_KisselButton.py 2 4 C1:SUS-PR3_LOCKIN_INMTRX > /opt/rtcds/caltech/c1/medm/c1su2/C1SUS_PR3_LOCKIN_INMTRX_KB.adl
/opt/rtcds/userapps/release/cds/common/scripts/generate_KisselButton.py 5 6 C1:SUS-PR3_TO_COIL --fi > /opt/rtcds/caltech/c1/medm/c1su2/C1SUS_PR3_TO_COIL_KB.adl
/opt/rtcds/userapps/release/cds/common/scripts/generate_KisselButton.py 4 5 C1:SUS-SR2_INMATRIX > /opt/rtcds/caltech/c1/medm/c1su2/C1SUS_SR2_INMATRIX_KB.adl
/opt/rtcds/userapps/release/cds/common/scripts/generate_KisselButton.py 2 4 C1:SUS-SR2_LOCKIN_INMTRX > /opt/rtcds/caltech/c1/medm/c1su2/C1SUS_SR2_LOCKIN_INMTRX_KB.adl
/opt/rtcds/userapps/release/cds/common/scripts/generate_KisselButton.py 5 6 C1:SUS-SR2_TO_COIL --fi > /opt/rtcds/caltech/c1/medm/c1su2/C1SUS_SR2_TO_COIL_KB.adl
Then, on rossa, run activateSUS2DQ.py, which creates a file C1SU2.ini.NEW. Remove the old backup file C1SU2.ini.bak, rename C1SU2.ini to C1SU2.ini.bak, and rename C1SU2.ini.NEW to C1SU2.ini:
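The actual commands are not reproduced here, but the swap amounts to the following sketch (the C1SU2.ini location under /opt/rtcds/caltech/c1/chans/daq/ is my assumption):

# Sketch of the C1SU2.ini rotation described above; path is assumed.
import os

ini = '/opt/rtcds/caltech/c1/chans/daq/C1SU2.ini'
if os.path.exists(ini + '.bak'):
    os.remove(ini + '.bak')      # drop the old backup
os.rename(ini, ini + '.bak')     # current file becomes the backup
os.rename(ini + '.NEW', ini)     # newly generated file takes over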
Then ssh back to c1sus2 and restart the rtcds model:
Then restart daqd services from rossa and burtrestore to latest snap of c1su2epics.snap:
All suspensions are back online and everything is the same as before. We will test the Run/Acquire switch functionality later.
[Anchal, Yehonathan] (on Y Arm)
We first checked whether the PZT mirrors M1 and M2 can be controlled. They show no motion even after being connected to a power supply. So the green injection path cannot be aligned using CDS controls right now.
We also noticed that all the ETMY slow controls and monitors are offline. That's because the electronics upgrade did not include an acromag chassis, which means the DC bias adjustment is not accessible for ETMY.
[Paco, Ian] (on X Arm)
We installed the c1auxey1 computer and the acromag chassis in 1Y4. The computer has been configured properly for nfs mounts and we have initialized a git repo for the /cvs/cds/caltech/target/c1auxey1 directory, which stores all files for running the modbusIOC service on this computer. We connected the 18V power source but have not connected the 24V power yet, as we need to make a new connector for it. Following Koji's recommendation, we'll connect the 24V power input to the 18V strip as well, since the acromags can run on that voltage too.
I started the modbusIOC service on c1auxey1 and added PD variance channels for UR and SD as well. There are unfortunately two issues here:
I'm not sure why, but the PIT and YAW offset values of +2725 and -2341 were not sufficient for the reflected OPLEV beam to reach the OPLEV QPD. I had to change C1:SUS-ETMY_PIT_OFFSET to 5641 and C1:SUS-ETMY_YAW_OFFSET to -4820 to come back to the center of the OPLEV QPD. We aligned the oplevs to the center before venting, so hopefully this is our desired ETMY position.
On another note, the issue of ETMY being unable to damp was simple. The alignment offsets were on to begin with, with values above 1000. This meant that whenever we enabled the coil outputs, ETMY would necessarily get a kick. All I had to do was keep the alignment offsets off before starting the damping and then slowly increase the alignment offsets to the desired position.
- This modification allowed me to align the oplev spot to the center of the QPD. C1:SUS-ETMY_PIT_OFFSET and C1:SUS-ETMY_YAW_OFFSET are +2725 (8% FS) and -2341 (7% FS), respectively.
We did the following alignment steps on Yarm today:
After this, we came back to the control room. One peculiar thing to note is that the C1:LS-Y_REFL_DC_OUT channel is inverted, i.e., it shows negative values for the reflection DC voltage. On this signal, we see a little bit of higher-order mode flashing, but it is not bright enough to be seen on the face camera of the suspension. We'll continue aligning the cavity using CDS now to get the TEM00 mode.
After trying for a bit, we were able to get a flash of about 2000 counts, which is about 16% of the max value of 12200 counts. We adjusted the ITMY angular position using the IFO_ALIGN screen but used the OPTICAL_ALIGN offset screen to adjust the angular position of ETMY. The pitch and yaw values for ITMY are 1.0360 and -0.260 respectively, whereas the pitch and yaw values for ETMY are 5664.0 and -4937.0 respectively.
Another issue: the Green Y shutter cannot be controlled with the EPICS controls right now. This needs to be investigated.
Unelogged: yesterday, Paco and Ian tried locking the green laser on the Y end to the Y arm. Nothing noteworthy happened; no luck.
After Ian updated the cts2um filters for the OSEMs, shouldn't the damping gains be increased back by a factor of 10 to their previous values? Was the damping gain for SIDE ever changed? We found it at 250.
Can you explain why the gain_offset filter was required and why this wasn't done for the side coil?
I updated the gain of the cts2um filter on the OSEMs for ETMY, decreasing it by a factor of 9 from 0.36 to 0.04.
I added a filter called "gain_offset" to all the coils except for side, with a gain of 0.48.
Together, these should negate the added gain from the electronics replacement on ETMY.
[Anchal, Paco, Shruti]
[Anchal, JC, Ian, Paco]
We have now fixed all issues with the PD mons of the c1susaux2 chassis. The slow channels now read the same values as the fast channels and there is no arbitrary offset. The binary channels are all working now except for LO2 UL, which keeps showing ENABLE OFF. This was an issue earlier on LO1 UR; it magically disappeared there and has now appeared on LO2. I think the optical isolators aren't very robust. But anyway, our watchdog system is now fully functional for all BHD suspended optics.
[Paco, Anchal, Ian, JC]
We attempted the alignment of the IR beam into the arm cavities. We used PR2 and PR3 (moved manually as well as using cdsutils) and got the YAW aligned pretty well in both the X and Y directions. PIT alignment, however, turned out to be much harder. PR2 and PR3 didn't have much range, so we zeroed their offsets and tried to use TT1, TT2, MMT1, and MMT2 to align the PIT, but the beam would get clipped before reaching the BS table if we were to correct for PIT misalignment happening downstream. We concluded that one of the PR2/PR3 mirrors has too much PIT offset in its equilibrium position. We have asked Koji to change the output resistors in the coil drivers of PR2 and PR3 so that we can correct for their PIT offset directly using the coils and reduce the load on the upstream optics. We have tweaked the TT1, TT2, MMT1, and MMT2 positions today, so we do not have the previous reference anymore.
After Koji reduced the output resistors on the PR2/PR3 coil drivers, we got a much better actuation range. We aligned TT1 and TT2 again to get the beam centered on PR2 and PR3. Then we used only PR2 and PR3 to do the input beam alignment to the Y arm cavity. Using access in the ETMY chamber, we aligned the input beam parallel to the cavity axis. Slight changes were required in the ETMY alignment offsets to get the first roundtrip to hit the same spot on ITMY. The remaining alignment is finer and needs to be done with the help of a reflection photodiode and cameras in the control room. The immediate next step is to set up the POY path for locking the Y arm with IR.
Side note: Because of the large PIT correction required on PR3, we found that its upper OSEMs were hitting the fully bright limit and its lower OSEMs were hitting the fully dark limit. This also destabilized our damping loops. We pushed the upper OSEMs in slightly and pulled back the lower OSEMs slightly to get the PD signals into the half-shadow region again. This worked and our damping loops are stable again. However, we think we should repeat the free swing test in the future to diagonalize the input matrix for the new OSEM positions.
[Anchal, Paco, JC, Tega]
Today we aligned the Y arm cavity for IR, with 1.5 verified roundtrips. We also placed the following optics on the BS table.
We also cleared the in-air BS table of all previous optics. JC and Tega set up a HeNe laser for the oplevs, roughly for now. Tega also transported the POY RFPD from the ITMY in-air table to the BS in-air table. We aligned the POY path on the table, but we then had to move ITMY to get the 2nd roundtrip in the arm cavity, which misaligned our POY path again. The POY path will need to be modified tomorrow.
Nice. Please put this information on the DCC pages of the coil driver units. You'll find links to all the units in this document tree LIGO-E2100447. For each page, click on "Change Metadata" from the left panel and add the change made to the resistor (including the resistor name on PCB, previous and new value), and add a link to your previous elog post which has more details like photos, to "Notes and Changes", and upload an updated version of the circuit schematic by creating an annotation in the previous circuit schematic pdf. Every unit that has a serial number in the lab has a DCC page (if not, we should create one) where we should track all such hard changes.
If someone gets time, let's put in all the cable posts and clean up our cable routing on the tables.
[Anchal, Tega, JC]
We installed cable posts in ITMY, BS, and ITMX chambers for all the new suspensions. Now, there is no point where the OSEM connections are hanging freely.
In the BS chamber, we installed one post for LO2 near the north edge of the table and another post for PR3 on the western edge, with the blue cable running around the table on the floor.
In the ITMY chamber, we installed the cable post between AS1 and AS4, with the blue cables running around the table on the floor. This is to ensure that the useful part of the table remains empty for the future and none of the OSEM cables are taut in air.
I investigated this issue today. At first, it seemed that only the new suspension testpoints were inaccessible; I was able to use diaggui for a measurement on MC2. The DAQ network cable between 1X4 and 1Y1 was tied down and is very taut now (we should relieve this as soon as possible; the best solution is to lay a longer cable over the bridge). My hypothesis is that the DAQ network connection broke while this cable was being tied and probably did not come back since then.
The simplest solution would have been to restart the c1su2 models. As I restarted those models, though, I found that the c1lsc and c1sus models failed. This is very unusual, as the c1su2 models are independent and share nothing with the other vertex models. So I had to restart all the FE computers eventually. But this did not solve the issue. Worse, now the DAQ isn't working for the vertex machines either.
The next step was to try restarting fb1 itself. We switched off all the FE computers, restarted fb1, stopped the daqd_* processes, reloaded the gpstime module, and restarted the open-mx, mx, nds and daqd_* processes. But the mx.service failed to load with the following error message:
● mx.service - LSB: starts MX driver for Myrinet card
Loaded: loaded (/etc/init.d/mx)
Active: failed (Result: exit-code) since Mon 2022-04-25 17:18:02 PDT; 1s ago
Process: 4261 ExecStart=/etc/init.d/mx start (code=exited, status=1/FAILURE)
Apr 25 17:18:02 fb1 mx: Loading mx driver
Apr 25 17:18:02 fb1 mx: insmod: ERROR: could not insert module /opt/mx/sbin/mx_mcp.ko: File exists
Apr 25 17:18:02 fb1 mx: insmod: ERROR: could not insert module /opt/mx/sbin/mx_driver.ko: File exists
Apr 25 17:18:02 fb1 systemd: mx.service: control process exited, code=exited status=1
Apr 25 17:18:02 fb1 systemd: Failed to start LSB: starts MX driver for Myrinet card.
Apr 25 17:18:02 fb1 systemd: Unit mx.service entered failed state.
(Ignore the timestamp above, I ran the command again to capture the error message.)
However, I was able to start all the FE models without any errors, and the daqd processes are also all running without showing any errors. Everything is green in the CDS screen with no error messages. The only thing still wrong is mx.service, which is not running.
From my limited knowledge and experience, mx.service is a one-time script that mounts the mx devices in /dev and loads the mx driver. I tried running the script /opt/mx/sbin/mx_start_stop:
This gave the same error. From searching a little online, the "insmod: ERROR: could not insert module" error comes up when the kernel version of the driver does not match the running Linux kernel. (Though note that this particular "File exists" variant usually just means the module is already loaded.) Such deep issues should not appear out of nowhere in a previously perfectly running system. I'll check around more to see what changed in fb1, network cables, etc.
[Anchal, Paco, JC]
We had to open the ITMY and ETMY chamber doors to get the cavity aligned again. Once we did that, we regained cavity flashing and were able to align the input injection and cavity alignment to get the transmission flashing to 1.0 (C1:LSC-TRY_OUT_DQ). JC later centered both the ITMY and ETMY oplevs. The ITMY oplev had gone completely out of range.
We also opened the ITMX and ETMX chamber doors to get the X arm aligned. Again, it seems that ITMX had moved a lot due to the cable post installation.
To be continued
This will be a daily first task in the morning: check the status of the arm alignment and optimize it back to maximum for the rest of the day's work.
Today when I came in, on opening the PSL shutter, the IMC was well aligned and both arms were flashing, but the YARM maximum transmission count was around 0.7 (as opposed to 1 yesterday) and the XARM maximum transmission count was 0.5 (as opposed to 1 yesterday). I did not change the input alignment to the interferometer. I only used ITMY-ETMY to regain a flashing count of 1 on YARM, and used BS and then ITMX-ETMX to regain a flashing count of 0.9 to 1 on XARM.
Even though the oplevs were centered yesterday, I found they had drifted from the center, and the optimal position is also different for all optics except ETMY and BS. It is worth noting that at the optimal position, both PIT and YAW of ITMY and ITMX are off by 70-90 urad and the ETMX PIT oplev is off by 55 urad.
I tried out this stack today and found that some changes of plan are needed.
tl;dr: Jordan is preparing PLS-T238 and TR-1.5 with venting holes and C&B, and they should be ready by tomorrow. I have collected all the other parts for the assembly. I'm still looking for the mirror, but other lab members know where it is, so no big issue there.
The assembly of this mirror is complete. A slight change here as well: we were supposed to use the former POYM1 (Y1-2037-0) mirror for POP_SM5, but I could not find it. It was stored at the rightmost edge of the table (see 40m/16450), but it is not there anymore. I found another undocumented mirror on the flow bench at the left edge, marked "2010 July: Y1-LW1-2037-UV-0-AR", which means this mirror has a wedge of 1 degree and an AR coating as well. We do not need or care about the wedge or AR coating, so we can use this mirror for POP_SM5. Please let me know if someone was saving this mirror for some other purpose.
I'll finish the assembly of POP_SM4 tomorrow, install them in the ITMX chamber, and resurrect the POP path.
Here is more detail of the POP_SM4 mount assembly.
It's a combination of BA2V + PLS-T238 + BA1V + TR-1.5 + LMR1V + Mirror: CM254-750-E03
Between BA1V and PLS-T238, we have to do a washer action to fix the post (8-32) to a 1/4-20 slot. Maybe we can use a 1" post shim from Thorlabs/Newport.
Otherwise, we should be able to fasten the other joints with silver-plated screws we already have/ordered.
I think TR-1.5 (and a shim) has not been given to Jordan for C&B. I'll take a look at these.
We investigated the low power issue with POX11 photodiode.
Restored the arm alignment to get 0.8 max flashing on YARM and 1 max flashing on XARM. I had to move the input alignment using the TT2-PR3 pair and realign the YARM cavity axis using the ITMY-ETMY pair.
I would like to advertise this useful tool that I've been using for moving the cavity axis or input beam direction. It's a simple script that turns your terminal into a kind of videogame arena. It moves two optics together (either in the same direction or in opposite directions) with arrow key strokes (left/right for YAW, up/down for PIT). Since it moves two optics together, you actually control the cavity axis, or the input beam angle or position, depending on the mode.
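The core of such a tool is small. Here is a hedged sketch of the idea (not the actual script), using curses and pyepics, with the channel names and step size as assumptions:

# Sketch: arrow keys nudge the PIT/YAW offsets of two optics together.
import curses
from epics import caget, caput

STEP = 10.0  # counts per key press (assumed)

def move(optics, dof, sign, direction):
    """Step the <dof> offset of both optics; direction=-1 moves them oppositely."""
    for i, optic in enumerate(optics):
        ch = f'C1:SUS-{optic}_{dof}_OFFSET'
        caput(ch, caget(ch) + sign * STEP * (direction if i else 1))

def main(stdscr, optics=('ITMY', 'ETMY'), direction=+1):
    stdscr.addstr(0, 0, f'Steering {optics}; arrow keys move, q quits.')
    while True:
        key = stdscr.getch()
        if key == curses.KEY_UP:      move(optics, 'PIT', +1, direction)
        elif key == curses.KEY_DOWN:  move(optics, 'PIT', -1, direction)
        elif key == curses.KEY_LEFT:  move(optics, 'YAW', -1, direction)
        elif key == curses.KEY_RIGHT: move(optics, 'YAW', +1, direction)
        elif key == ord('q'):
            break

curses.wrapper(main)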
We found that one of the mirrors marked Y1-1037-45P that we used was actually curved. So we removed it, used a different Y1-1037-45P mirror, adjusted the position of the lens, and got the beam to land on the POX11 RFPD successfully.
Then, in the control room, we maximized the POX11_I_ERR PDH signal amplitude by changing C1:LSC-POX11_PHASE_R from -67.7 to 42.95. We kept C1:LSC-POX11_PHASE_D the same at 90. We were getting a +/- 200 count PDH signal on POX_I_ERR.
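For intuition on what that phase does: the R phase digitally rotates the demodulated I/Q pair, so tuning it rotates the full PDH signal into the I quadrature. Schematically:

# Demod phase rotation, just for intuition (the CDS implementation is the
# standard 2x2 rotation of the I/Q signals).
import numpy as np

def rotate_iq(I, Q, phase_deg):
    """Rotate the demodulated (I, Q) pair by phase_deg degrees."""
    phi = np.deg2rad(phase_deg)
    return (I * np.cos(phi) + Q * np.sin(phi),
            -I * np.sin(phi) + Q * np.cos(phi))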
Then in our attempt to lock the XARM, when we ran the "Restore XARM (POX)" script, YARM locked!
We are not sure why the YARM locked; we might have just gotten lucky today. So we ran ASS on YARM and got the transmission (TRY_OUT) stable at 1. The lock is very robust and retrievable.
Coming back to the XARM, we realized that the transmission photodiode used for the XARM was the low-gain QPD instead of the Thorlabs high-gain photodiode. The high-gain photodiode was outputting large negative counts for some reason. We went to the X end to investigate and found that the high-gain photodiode was disconnected. Does anyone know/remember why we disconnected this photodiode?
We connected the photodiode back and it seems to work normally. We changed the photodiode selection back to the high-gain photodiode for TRX, and with 40 dB attenuation we see flashing between 1.4 and 1.6. However, we were unable to lock the XARM. We tried changing the gain of the loop and played a little with the trigger levels, but couldn't get it to lock. Next shift team, please try to lock the XARM.
[Paco, Anchal, Yuta]
We opened the BSC and ITMX chamber in the morning (Friday) to investigate the POX11 beam clipping. We immediately found that the POX11 beam was being clipped by the recently installed cable posts, so luckily no major realignment had to be done after reinstalling the cable post in a better location.
Because we had the BSC open, we decided to steer the AS1 mirror to align the AS path from ITMY all the way to the vertex chamber. Relatively small AS1 offsets (of ~2000 counts each) were added on PIT/YAW to center the beam on ASL (there is slight clipping along PIT, potentially because of the AS2 aperture). We then opened the vertex chamber and located the AS beam with relative ease. We decided to work on this chamber, since major changes there propagate heavily downstream (simply by changing the IMC pointing).
Anchal removed old optics from the vertex chamber and we installed the steering pair of mirrors for the AS path. This changed the balance of the vertex table by a lot. Using the MC REFL camera beam spot, we managed to coarsely rebalance the counterweights and recover the nominal IMC injection pointing. Simply re-enabling the IMC autolocker gave us high transmission (~970 counts out of the typical 1200 these days).
The final IMC alignment was done by Anchal with delicate PIT motion on the IMC input injection mirror to maximize the transmission (to our satisfaction, Anchal's motion was fine enough to keep the IMC locked). The end result was quite satisfying, as we recovered ~1200 counts of MC transmission.
Finally, we looked at the arm cavity transmission to see if we were lucky enough to see flashing. After not seeing it, we adjusted TT1/TT2 to correct for any MMT1 pitch adjustment needed after the vertex table rebalancing. Surprisingly, it didn't take too long and we recovered the nominal arm cavity pointing after a little adjustment. We stopped here; the vertex table layout is now final, but the AS beam still needs to be aligned to the vertex in-air table.