We ran the F2A filter test for MC1, MC2, and MC3.
The new filters differ from the previous versions by adding a non-unity Q factor for the pole pairs as well.
In terms of zpk this is: [[zr + i zi, zr - i zi], [pr + i pi, pr - i pi], 1], where zr, zi and pr, pi are the real and imaginary parts of the complex zero and pole pairs, respectively.
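Such a filter can be sketched offline before loading it into foton; a minimal sketch in scipy (the pole/zero values here are placeholders, not the actual F2A parameters):

# Complex-zero-pair / complex-pole-pair filter in zpk form
# (placeholder values, not the real F2A parameters).
import numpy as np
from scipy import signal

zr, zi = -0.5, 2 * np.pi * 0.9    # placeholder zero pair [rad/s]
pr, pi_ = -1.5, 2 * np.pi * 1.0   # placeholder pole pair [rad/s]

zeros = [zr + 1j * zi, zr - 1j * zi]
poles = [pr + 1j * pi_, pr - 1j * pi_]
filt = signal.ZerosPolesGain(zeros, poles, 1.0)

q_pole = abs(poles[0]) / (2 * abs(pr))   # Q of the pole pair (non-unity)
w, mag, phase = signal.bode(filt, w=np.logspace(-1, 2, 300))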
We uploaded all these filters using foton into the last three FM slots of the POS-column coil output filter banks.
We ran tests on all suspended optics using the following (nominal) procedure:
- Set C1:IOO-WFS_GAIN to 0.05.
- Excitation: 0.05-3.5 Hz uniform noise, amplitude 100, gain 100.
I checked out what happened on c1vac. There are actually two independent monitoring codes running:
The interlocks did not trip because the low-pressure delivery line, downstream of the dual-tank regulator, never fell below the minimum pressure to operate the valves (65 PSI). This would have eventually occurred, had Jordan been slower to replace the tanks. So I see no problem with the interlocks.
On the other hand, the N2 mailer should have sent an email at 2021-04-18 15:00, which was the first time C1:Vac-N2T1_pressure dropped below the 600 PSI threshold. N2check.log shows these pressures were recorded at this time, but does not log that an email was sent. Why did this fail? Not sure, but I found two problems which I did fix:
The code then ran fine for me when I retested it. I don't see any further issues.
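For reference, the intended alert logic is simple; a minimal sketch (the function and addresses here are hypothetical, not the actual N2check code):

# Send one email the first time the pressure drops below threshold
# (hypothetical names; the real logic lives in the N2check script).
import smtplib
from email.message import EmailMessage

THRESHOLD_PSI = 600.0

def check_and_alert(pressure_psi, already_alerted):
    if pressure_psi < THRESHOLD_PSI and not already_alerted:
        msg = EmailMessage()
        msg["Subject"] = f"N2 tank low: {pressure_psi:.0f} PSI"
        msg["From"] = "vac@example.org"            # placeholder address
        msg["To"] = "40m-group@example.org"        # placeholder address
        msg.set_content("C1:Vac-N2T1_pressure dropped below 600 PSI.")
        with smtplib.SMTP("localhost") as s:
            s.send_message(msg)
        return True
    return already_alerted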
Installed T2 today and leak checked the entire line. No issues found; it could have been a bad valve on the tank itself. Monitored T2 pressure for ~2 hours to see if there was any change. All seems OK.
When I came into the lab this morning, I noticed that both N2 tanks were empty. I had swapped one on Friday (4-16-21) before I left the lab. Looking at the logs, the right tank (T2) sprung a leak shortly after install. I leak checked the tank coupling after install but did not see a leak. There could be a leak further down the line, possibly at the pressure transducer.
The left tank (T1) emptied normally over the weekend, and I quickly swapped it for a full one, which is currently at ~2700 PSI. It was my understanding that if both tanks emptied, V1 would close automatically and a mailer would be sent out to the 40m group. I did not receive an email over the weekend, and when I checked the Vac status just now, V1 was still open.
I will keep an eye on the tank pressure throughout the day and will try to leak check the T2 line this afternoon, but someone should check the vacuum interlocks and verify that they work.
I have uploaded all the new settings mentioned in 16066 and 16072. The settings were uploaded through a single script at anchal/20210428_IMC_Tuned_Suspension/uploadNewConfigIMC.py and can be reverted to the old settings with anchal/20210428_IMC_Tuned_Suspension/restoreOldConfigIMC.py. Both scripts must be run with python3 on donatella or allegra.
GPSTIME of new settings: 1303690144
New settings include:
We'll watch the performance through the summary pages and check back on Monday.
We took a Supermicro from the lab (along with a keyboard, a mouse, and a screen taken from a table on the Y arm) and placed it near the Acromag chassis.
We installed Debian 10 on the machine. I followed the steps on the slow machine wiki for setting up the host machine. Some steps had to be updated; most importantly, in the new Debian the network interfaces are given names like enp3s0 and enp4s0 instead of eth0 and eth1. I updated the wiki accordingly.
To operate the chassis using one 15V source I disconnected the +24V cable from the Acromag units and jumpered the +15V wire into the power input instead. I started up the Acromags. They draw 0.7A. I connected an Ethernet cable to the front interface. I checked that all the Acromags are connected to the local network of the host machine by pinging them one by one.
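The connectivity check can be scripted; a minimal sketch in python (the addresses below are placeholders for the actual Acromag IPs):

# Ping each Acromag on the chassis-local subnet (placeholder addresses).
import subprocess

acromags = ["192.168.114.11", "192.168.114.12", "192.168.114.13"]
for ip in acromags:
    ok = subprocess.run(["ping", "-c", "1", "-W", "1", ip],
                        capture_output=True).returncode == 0
    print(f"{ip}: {'up' if ok else 'NO RESPONSE'}")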
Yesterday I unpacked and installed the three 18-bit DAC cards received from Hanford. I then repeated the low-level PCIe testing outlined in T1900700, which is expanded upon below. I did not make it to DAC-ADC loopback testing because these tests in fact revealed a problem with the new hardware. After a combinatorial investigation that involved swapping cards around between known-to-be-working PCIe slots, I determined that one of the three 18-bit DAC cards is bad. Although its "voltage present" LED illuminates, the card is not detected by the host in either I/O chassis.
I installed one of the two working DACs in the c1bhd chassis. This now 100% completes this system. I installed the other DAC in the c1sus2 chassis, which still requires four more 18-bit DACs. Lastly, I reran the PCIe tests for the final configurations of both chassis.
For future reference, below is the set of command line tests to verify proper detection and initialization of ADC/DAC/BIO cards in I/O chassis. This summarizes the procedure described in T1900700 and also adds the tests for 18-bit DAC and 32-channel BO cards, which are not included in the original document.
Each command should be executed on the host machine with the I/O chassis powered on:
where xxxx is a four-digit device code given in the following table.
The command returns a two-line entry for each PCIe device of the specified type that is detected; for example, on a system with a single ADC, exactly one such entry should be returned.
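The detection checks can also be scripted; a minimal sketch, assuming the cards enumerate under the PLX bridge vendor ID (10b5), with the four-digit device codes left as placeholders:

# Count detected PCIe cards of each type. Fill in the real four-digit
# device codes from the table above; 10b5 (PLX) is an assumption.
import subprocess

cards = {"ADC": "xxxx", "18-bit DAC": "yyyy", "32-ch BO": "zzzz"}
for name, code in cards.items():
    out = subprocess.run(["lspci", "-v", "-d", f"10b5:{code}"],
                         capture_output=True, text=True).stdout
    entries = [b for b in out.strip().split("\n\n") if b]
    print(f"{name} (10b5:{code}): {len(entries)} detected")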
In 16087 we mentioned that we were unable to do a step response test on the WFS loops to estimate their UGF. The primary issue there was that we were not putting the step in the right place. It should go directly into the actuator, in this case C1:SUS-MC2_PIT_COMM and C1:SUS-MC2_YAW_COMM. These channels directly set an offset in the control loop, and we can see how the error signals first jump up and then decay back to zero. The 'half-time' of this decay is the inverse of the estimated UGF of the loop. For this test, the overall WFS loop gain, C1:IOO-WFS_GAIN, was set to its full value of 1. The test was performed with the changed settings uploaded in 16091.
I did this test twice, once giving a step in PIT and once in YAW.
Attachment 1 is the striptool screenshot for when PIT was given a step up and then step down by 0.01.
Attachment 2 is the striptool screenshot when YAW was given a step up and down by 0.01. Note the difference in x-scale in this plot.
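A minimal sketch of turning the measured decay into a UGF estimate, per the half-time relation above (the error-signal array and sample rate are placeholders):

# Estimate the loop UGF from the error-signal decay after a step,
# using UGF ~ 1 / t_half as stated above.
import numpy as np

def ugf_from_step(err, fs):
    """err: error signal after the step; fs: sample rate [Hz]."""
    dev = np.abs(err - err[-1])            # deviation from settled value
    half = np.argmax(dev < 0.5 * dev[0])   # first sample below half the jump
    t_half = half / fs                     # assumes the threshold is reached
    return 1.0 / t_half                    # estimated UGF [Hz]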
Tried locking the arms
Did the WFS step response test on IMC in between while waiting for help. See 16094.
Back to trying arm locking
PMC got unlocked
To add the required library, put the .mdl file that contains the library into the userapps/lib folder. That will allow it to compile correctly.
I got these errors:
I removed all the IPC parts (as seen in Attachment 1) and that did the trick. The IPC (Inter-Process Communication) parts were how this model was linked to the controller, so I don't know exactly how to link them now.
I also went through the model and grounded all unattached inputs and outputs. Now the model compiles.
Also, the computer seems to have been running very slowly over the past 24 hours. I know Jon was working on it, so I wonder if that had any impact. I think it has to do with the connection speed, because I am connected through X2Go client. One thing that has probably been said before, but that I want to note again: you don't need a campus VPN to access the docker.
The problem here was that the RFM errors cropped up again; it seems to have started around 4 am this morning, judging by the TRX trends. Of course, without the triggering signal the arm cavity couldn't lock. I rebooted everything (just restarting the RFM senders/receivers did not do the trick), and now arm locking works fine again. It's a bit disappointing that the Rogue Master setting did not eliminate this problem completely, but oh well...
It's kind of cool that in this trend view of the TRX signal you can see the drift of the ETMX suspension. The days are getting hot again, and the temperature at EX can fluctuate by >12 C between day and night (so the "air-conditioning" doesn't condition that much, I guess 😂), and I think that's what drives the drift (I don't know what the transfer function to the inside of the vacuum chamber is, but such a large swing isn't great in any case). Not plotted here, but I hypothesize that TRY levels will be more constant over the day (modulo TT drift, which affects both arms).
The IMC suspension team should double check their filters are on again. I am not familiar with the settings and I don't think they've been added to the SDF.
I installed the EPICS base, asyn, and modbus modules according to Jon's instructions.
Since the modbus configuration files were already written for c1auxey1 (see elog 15292), the only thing I did was change the IP addresses in ETMYaux.cmd to match the actual assigned IPs.
I followed the rest of the instructions as written.
The modbus service was activated successfully.
The only thing left to do is to change ETMYaux.db to reflect the new channels that were added. I believe these are BI channels named C1:SUS-ETMY_xx_ENABLEMon.
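Once the .db is updated, the new channels can be spot-checked with a short script; a minimal sketch using pyepics (the coil names substituted for 'xx' are an assumption):

# Spot-check the new binary-input channels after updating ETMYaux.db
# (assumes pyepics; the coil-name list is a guess at what 'xx' covers).
from epics import caget

for coil in ["UL", "UR", "LL", "LR", "SD"]:
    ch = f"C1:SUS-ETMY_{coil}_ENABLEMon"
    print(ch, caget(ch))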
The other day I felt hot at the X end. I wondered if the Xend A/C was off, but the switch right next to the SP table was ON (green light).
I could not confirm if the A/C was actually blowing or not.
I double checked today, and the F2A filters in the POS column of the output matrices of MC1, MC2 and MC3 are ON. I do not get what SDF means; did we need to add these filters elsewhere?
Both arms were locked simply by using IFO > Configure > ! (YARM) > Restore YARM. I had to use ASS to improve TRX/TRY to ~0.95.
I measured C1:LSC-XARM_IN1_DQ and C1:LSC-YARM_IN1_DQ while injecting band-limited noise into C1:IOO-WFS1_PIT_EXC, using uniform noise with amplitude 1000 shaped by the filter string cheby1("BandPass",4,1,80,100). I calibrated the arm control signals with the 2.44 nm/cts calibration factor picked up directly from 13984.
For the duration of this test, all LIMIT switches in the WFS loops were switched OFF.
I do not see any effect on the arm control signal power spectra with or without the noise injection. Attachment 1 shows the PSDs along with the PSD of the injection-site IN2 signal. I must be doing something wrong, so I would like feedback before I go further.
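For reference, the injected excitation can be reproduced offline; a sketch of the cheby1("BandPass",4,1,80,100) shaping in scipy (the sample rate is assumed, and foton's and scipy's order conventions for band-pass sections may differ):

# Uniform noise shaped by a Chebyshev type-1 band-pass
# (order 4, 1 dB ripple, 80-100 Hz; fs is an assumption).
import numpy as np
from scipy import signal

fs = 2048                                          # assumed sample rate [Hz]
noise = np.random.uniform(-1000, 1000, 60 * fs)    # amplitude-1000 noise
sos = signal.cheby1(4, 1, [80, 100], btype="bandpass", fs=fs, output="sos")
excitation = signal.sosfilt(sos, noise)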
With the input matrix, coil output gains, and F2A filters loaded as in 16091, I tested the suspension loops' step responses to offsets in the LSC, ASCPIT and ASCYAW channels, before and after applying the "new damping gains" mentioned in 16066 and 16072. If these look better, we should upload the new (higher) damping gains as well; this was not done in 16091.
Note that in the plots I have added offsets to the different channels to plot them together, hence the units are "au".
We received a stock of DB9 male feed-through connectors. That allowed me to complete the remaining wiring on the c1auxey Acromag chassis. The only thing left to be done is the splicing to the RTS.
This is the actuator calibration. For the error point calibration, you have to look at the filter in the calibration model. I think it's something like 8e-13 m/ct for POX, and similar for POY.
The SDF system is supposed to help with restoring the correct settings, complementary to burt. My personal opinion is that there is no need to commit these filters to SDF until we're convinced that they help with the locking / noise performance.
Now that the model finally compiles, I need to make a medm screen for it and put it in the c1sim:/home/controls/docker-cymac/userapps/medm/ directory.
But before doing that, I really want to test it using the autogenerated medm screens, which live in the virtual cymac under /opt/rtcds/tst/x1/medm/x1sup. In Jon's post he said I can use the virtual path for sitemap after running $ eval $(./env_cymac)
We finished the installation procedure on the c1auxey1 host machine. There were some adjustments that had to be made for Debian 10. The slow machine wiki page has been updated.
A test database file was made where all the channel names were changed from C1 to C2, in order not to interfere with the existing channels.
We started testing the channels one by one to check the wiring and the EPICS software. We found some miswirings and fixed them.
It's getting late; I'll continue with the rest of the channels on Monday.
Notice that for all the AI channels the RTN was disconnected while testing.
WFS1 noise injection
C1:LSC-XARM_IN1_DQ / C1:LSC-YARM_IN1_DQ
C1:SUS-ETMX_LSC_OUT_DQ / C1:SUS-ETMY_LSC_OUT_DQ
C1:SUS-MC1_**COIL_OUT / C1:SUS-MC2_**COIL_OUT / C1:SUS-MC3_**COIL_OUT
C1:IOO-WFS1_PIT_ERR / C1:IOO-WFS1_YAW_ERR
** denotes [UL, UR, LL, LR]; the output coils.
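The comparison was done offline; a minimal sketch of fetching one of these channels and computing its calibrated spectrum (the server address, GPS times, and sample rate are placeholders/assumptions; assumes the nds2 python client):

# Fetch a measured channel and compute its calibrated ASD.
import numpy as np
import nds2
from scipy import signal

conn = nds2.connection("nds.example.org", 31200)   # placeholder server
start, stop = 1303800000, 1303800060               # placeholder GPS span
buf = conn.fetch(start, stop, ["C1:LSC-XARM_IN1_DQ"])[0]
data = buf.data * 2.44      # nm/cts calibration from elog 13984
fs = 16384                  # assumed _DQ acquisition rate [Hz]
f, psd = signal.welch(data, fs=fs, nperseg=16 * fs)
asd = np.sqrt(psd)          # arm control signal [nm/rtHz]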
When the cymac is started, it gives me the list of channels shown below.
$ Initialized TP interface node=8, host=98e93ecffcca
$ Creating X1:DAQ-DC0_X1IOP_STATUS
$ Creating X1:DAQ-DC0_X1IOP_CRC_CPS
$ Creating X1:DAQ-DC0_X1IOP_CRC_SUM
$ Creating X1:DAQ-DC0_X1SUP_STATUS
$ Creating X1:DAQ-DC0_X1SUP_CRC_CPS
$ Creating X1:DAQ-DC0_X1SUP_CRC_SUM
But when I enter it into the Diaggui I get an error:
The following channel could not be found:
My guess is that I need to connect diaggui to something that can access those channels. I also need to figure out what those channels are.
We repeated the same test with the IMC unlocked. We had found these gains with the IMC unlocked, and their characterization needs to be done with no light in the cavity. Attached are the results; everything else is the same as before.
Edit Tue May 4 14:43:48 2021:
Overall, I would recommend setting the new gains in the suspension loops as well, to observe the long-term effects too.
I found a "vice" in the cleanroom (attachment 1). I used it to push dowel pins into the last suspension block using some alcohol as a lubricant.
I then assembled the 7th and last suspension tower (attachment 2).
Things that need to be done:
1. Push Viton tips into vented screws and assemble the earthquake stops.
2. Glue magnets to dumbbells.
Rana came and helped us figure out where to inject the noise. The following are the characteristics of the test we did:
Attachment 1 shows a screenshot with awggui and diaggui screens displaying the signal in both angular and longitudinal channels.
Attachment 2 shows the analogous screenshot for MC2.
For the past few days, a weird sound like a decaying gas leak has been coming from the southwest corner of the ceiling in the 40m control room. Attached is an audio capture. It recurs about every 10 minutes.
It seemed like the BIO channels were not working, both the inputs and the outputs. The inputs were working on the Windows machine, though. That is, when we shorted a BIO channel to the return, or put 0 V on it, we could see the LED turn on in the I/O testing screen, and when we ramped the voltage above 3 V the LED turned off. This is the expected behavior for a sinking digital input. However, EPICS caget didn't show any change; all the channels were stuck on Disabled.
We checked the digital outputs by connecting the channels to a Fluke. Initially, the Fluke showed 13 V. We tried to toggle the digital output channels with caput, and that didn't work. We then checked the outputs with the Windows software; for that, we needed to stop the modbus service. To our surprise, the Windows software was not able to flip the channels either. We concluded that this BIO Acromag unit is probably defective, replaced it with a different unit, and put a warning sticker on the defective one. Now the digital outputs worked as expected: when we turned them on, the voltage output dropped to 0 V. We then checked the channels with the EPICS software and realized that these channels were locked by the closed-loop definition. We turned on the channels tied to these output channels (watchdog and toggles), and it worked; the output channels can be flipped with the EPICS software. We checked all the digital output channels and fixed some wiring issues along the way.
The digital input channels were still not working. This is a software issue that we will have to deal with later.
(Yehonathan) Rana noticed that the BNC leads on the chassis front panel didn't have insulation on them, so I redid them with heat-shrink tubing.
I also noticed some sound in the control room. (didn't open the MP3 yet)
I'm afraid that the hard disk in the control room iMac is dying.
With all the PCIe issues now resolved, yesterday I proceeded to build an IOP model for each of the new FEs. I assigned them names and DCUIDs consistent with the 40m convention, listed below. These models currently exist only on the cloned copy of /opt/rtcds running on the test stand. They will be copied to the main network disk later, once the new systems are fully tested.
The models compile and install successfully. The RCG runtime diagnostics indicate that all is working except for the timing synchronization and DAQD data transmission. This is as expected, because neither of these has been set up yet.
The next step is to provide the 65 kHz clock signals from the timing fanout via LC optical fiber. I had overlooked the fact that an SFP optical transceiver is required to interface the fiber to the timing slave board; these were not provided with the timing slaves we received. The timing slaves require a particular type of transceiver, 100base-FX/OC-3, which we did not have on hand. (For future reference, there is a handy list of compatible transceivers in E080541, p. 14.) I placed a Digikey order for two Finisar FTLF1217P2BTL, which should arrive within two days.
We redid the WFS noise injection test and have compiled some results on the contribution of IMC angular noise to the arm cavity noise and the IMC frequency noise.
Attachment 1 shows the calibrated noise contribution from MC1 ASCPIT OUT to the arm cavity length noise and the IMC frequency noise.
After a helpful meeting with Jon, we realized that I have somehow corrupted the sitemap file. So I am going to use the code Chris wrote to regenerate it.
Also, I am going to connect the controller using the IPC parts. The error that I was having before had to do with the IPC parts not being connected properly.
I put the box containing the untested OSEMs from KAGRA near the south flow bench on the floor.
We have uploaded the new damping gains on all the IMC suspensions. This completes changing the full configuration to that mentioned in 16066 and 16072. The old settings can be restored by running python3 /users/anchal/20210505_IMC_Tuned_SUS_with_Gains/restoreOldConfigIMC.py from allegra or donatella.
Assembled chassis from De Leone placed in the 40 Meter Lab, along the west wall and under the display pedestal table. The leftover parts are in smaller Really Useful boxes, also on the parts pile along the west wall.
I added the IPC parts back to the plant model, so that should be done now. It looks like this again here.
I can't seem to find the control model, which should look like this. When I open sus_single_control.mdl, it just shows the C1_SUS_SINGLE_PLANT.mdl model, which should not be the case.
When using mdl2adl I was getting the error:
$ cd /home/controls/mdl2adl
$ ./mdl2adl x1sup.mdl
error: set $site and $ifo environment variables
To set these in the terminal, use the following commands:
$ export site=tst
$ export ifo=x1
On most systems a script that runs automatically when a terminal is opened sets these, but that hasn't been added here, so you must run these commands every time you open a terminal to use mdl2adl.
Here's my first attempt at the angular actuation calibration for the IMC mirrors, using the method described in /users/OLD/kakeru/oplev_calibration/oplev.pdf by Kakeru Takahashi. The key is to see how much the cavity mode is misaligned from the input beam mode as the mirrors are moved in PIT or YAW.
There are two possible kinds of mismatch:
Kakeru's document goes through the cases for linear cavities. For the IMC, the mode mismatches are a bit different. Here's my take on them:
Calibration factor at DC [µrad/cts]
I copied c1scx.mdl to the docker to attach to the plant using the commands:
$ ssh nodus.ligo.caltech.edu
$ cd /opt/rtcds/userapps/release/isc/c1/models/simPlant
$ scp c1scx.mdl controls@c1sim:/home/controls/docker-cymac/userapps
Today we measured the calibration factors for XARM_OUT and YARM_OUT in nm/cts and replotted our results from 16117 with the correct frequency dependence.
Calibration of XARM_OUT and YARM_OUT
Inferring noise contributions to arm cavities:
Edit Mon May 10 18:31:52 2021
See corrections in 16129.
A few corrections to the last analysis:
Today I brought and installed the new optical transceivers (Finisar FTLF1217P2BTL) for the two timing slaves. The timing slaves appear to phase-lock to the clocking signal from the master fanout. A few seconds after each timing slave is powered on, its status LED begins steadily blinking at 1 Hz, just as in the existing 40m systems.
However, some other timing issue remains unresolved. When the IOP model is started (on either FE), the DACKILL watchdog appears to start in a tripped state. Then after a few minutes of running, the TIM and ADC indicators go down as well. This makes me suspect the sample clocks are not really phase-locked. However, the models do start up with no error messages. Will continue to debug...
Did you match the local PC time with the GPS time?
Attached is the control loop diagram when main laser is locked to IMC and a single arm (XARM) is locked to the transmitted light from IMC.
We picked a few parameters from the 40m summary pages and plotted them to see the effect of the new settings. On April 4th the old settings were present. On April 28th (16091), the new input matrices and F2A filters were uploaded, but the suspension gains remained the same. On May 5th (16120), we uploaded the new (higher) suspension gains. We chose Sundays in UTC so that they fall on the weekend for us; most probably nobody entered the 40m, and it was calmer around the institute as well.
We can download the data and plot comparisons ourselves, and maybe calculate the spectra of MC_TRANS_PIT/YAW and MC_REFL_DC while the IMC was locked. But we want to know if anyone has better ways of characterizing the settings before we get into this large data handling, which might be time-consuming. From these preliminary summary-page plots, maybe it is already clear that we should go back to the old settings. Awaiting orders.
Working with Chris, we decided that it is probably better to use a simple filter module as a controller before making the model more complicated. I will use the plant model that I have already made (see attachment 1 of this post), then attach a single control filter module to it, as seen in attachment 1. Because I only want to work with one degree of freedom (position), I will average the four outputs, which should give me the position. Then, by feeding the same signal to all four inputs, I should isolate one degree of freedom while still using the premade plant model.
The model shown in attachment 2 is the one I made from this plan. And it compiles! Yay! I think there is a better way to do the average than the one I showed (a sketch of the arithmetic follows below). Also, since the model feeds back on itself, I think I need to add a delay, which Rana noted a while ago; I believe it was a UnitDelay (see page 41 of the RTS Developer's Guide). I will add that if we run into problems, but there is enough going on that it might already be delayed.
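For clarity, the single-DOF reduction I have in mind is just the equal-weight average of the four face sensors, with the same drive sent to all four coils (a sketch of the arithmetic, not the actual RTS code):

# POS-only reduction: average the four face-OSEM outputs, then send
# the same control signal to all four coils (PIT/YAW drive cancels).
def osem_to_pos(ul, ur, ll, lr):
    return (ul + ur + ll + lr) / 4.0

def pos_to_coils(ctrl):
    return [ctrl, ctrl, ctrl, ctrl]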
Since our model (x1sup_isolated.mdl) has compiled, we can open the medm screens for it. I provide a procedure below, based on Jon's post.
$ cd docker-cymac
$ eval $(./env_cymac)
$ medm -x /opt/rtcds/tst/x1/medm/x1sup_isolated/X1SUP_ISOLATED_GDS_TP.adl
To see a list of all medm screens use:
$ cd docker-cymac
# cd /opt/rtcds/tst/x1/medm/x1sup_isolated
Some of the other useful ones are:
See attachment 4. This screen shows the POS plant filter module, which will be filled with a filter representing the transfer function of a damped harmonic oscillator:
THIS TF HAS BEEN UPDATED SEE NEXT POST
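For reference, the general damped-harmonic-oscillator form meant to go there, sketched in scipy (the parameter values are placeholders; the actual TF was updated in the next post):

# Damped harmonic oscillator plant:
#   H(s) = w0^2 / (s^2 + (w0/Q) s + w0^2)
# (general form only; f0 and Q below are placeholders).
import numpy as np
from scipy import signal

f0, Q = 1.0, 5.0             # placeholder resonance [Hz] and quality factor
w0 = 2 * np.pi * f0
plant = signal.TransferFunction([w0**2], [1, w0 / Q, w0**2])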
The first of these screens that is of interest to us (shown in attachment 3) is the X1SUP_ISOLATED_GDS_TP.adl screen, which is the CDS runtime diagnostics screen. This screen tells us "the success/fail state of the model and all its dependencies." I am still figuring out these screens; the best guide is T1100625.
The next step is taking some data and seeing if I can see the position damp over time. To do this I need to:
No, this is the property of the whole suspension assembly. The mass reads 10 kg.
Could you do the same for the testmass assembly (only the suspended part)? The units are good, but I expect that the values will be small. I want to keep at least three significant digits.
Here are the mass properties for only the test mass assembly (optic, 3" ring, and wire block). (Updated with g*mm^2)
We came in the morning with the following scene on the zita monitor:
The MC1 watchdog was tripped, and it seemed like the IMC had struggled all night with misconfigured WFS offsets. After restoring the MC1 WD, clearing the WFS offsets, and seeing the suspension damp, the MC caught lock. But it wasn't long before the MC unlocked and the MC1 WD tripped again.
We tried a few things (not sure in what order):
Nothing worked. We kept seeing that the UL PD variance on MC1 showed kicks every few minutes, which jolted the suspension loops. So we decided to record some data with the PSL shutter closed and just the suspension loops on. Then we switched off the loops and recorded some data with the optic freely swinging. Even with the optic freely swinging, we could see impulses in the MC1 OSEM UL PD variance which were completely uncorrelated with any seismic activity. In fact, last night was one of the calmer nights, seismically speaking. See attachment 2 for the time series of the OSEM PD variance; the red region is when the coil outputs were disabled.
Edit Thu May 13 14:47:25 2021:
Added the OSEM sensor time series data to the plots as well. The UL OSEM sensor is the only channel that jumps haphazardly (even during the free-swinging time), varying by +/- 30. The other sensors show only some noise around a stable position, as should be the case for a freely suspended optic.
Koji and I did a few tests with an OSEM emulator on the satellite amplifier box used for MC1, which is housed on 1X4. This sat box unit is S2100029 (D1002812), recently characterized by me in 15803. We found that the differential output driver chip (AD8672ARZ, U2A section) for the UL PD was not working properly and had a fluctuating offset with no input current from the PD. This was the cause of the morning's ordeal. The chip was replaced with a new one from our stock, and a preliminary test with the OSEM emulator showed that the channel now has the correct DC value.
In further testing of the board, we found that the channel 8 LED driver was not working properly. Although this channel is never used in our current cable convention, it might be used in the future. While debugging the issue, we replaced the AD8672ARZ at U1 on channel 8; this did not solve the problem. So we opened the front panel, and as we flipped the board we found that a solder blob had shorted the legs of transistor Q1 (2N3904). This was replaced, and a test with the LED output shorted to GND indicated that the channel now properly provides a constant current of 35 mA (5 V at the monitor output).
After the debugging, the UL channel became the least noisy of the OSEM channels! The mode cleaner was able to lock and stay locked.
We should redo the MC1 input matrix optimization and the coil balancing afterward, as we did everything based on the noisy UL OSEM values.