With only a few trials so far (but robust ones), I'm happy to report that ASS on YARM and XARM is working now.
The issue is that PR3 is not placed in the correct position in the chamber. It is offset enough that a beam sent through the center of ITMY to ETMY has to reflect off the edge of PR3, leading to some clipping. Hence our usual ASS takes us to this point and results in loss of transmission due to clipping.
Solution: We cannot solve this issue without moving PR3 inside the chamber. Meanwhile, however, we can find new spot positions on ITMY and ETMY, off center in the YAW direction only, which allow us to mode match properly without clipping. This means there will be YAW suspension-noise-to-length coupling in this cavity, but thankfully the YAW degree of freedom stays relatively calm compared to PIT or POS for our suspensions. Similarly, we need to allow an offset in the ETMX beam spot position in YAW. We do not control the beam spot position on ITMX because we lack enough actuators to control all 8 DOFs involved in mode matching the input beam to a cavity. So instead I found the right offset for the ITMX transmission error signal in YAW that works well.
I empirically found these offsets to be:
These offsets have been saved in the burt snap file used for running ASS.
I'll reiterate the procedure to run ASS here.
We performed local damping loop tuning for the ETMs today to ensure that each damping loop's OLTF has a Q of 3-5. To do so, we applied a step on C1:SUS-ETMX/Y_POS/PIT/YAW_OFFSET, looked for oscillations in the error point of the damping loops (C1:SUS-ETMX/Y_SUSPOS/PIT/YAW_IN1), and increased or decreased gains until we saw 3-5 oscillations before the error signal settled to a new value. For the side loop, we applied an offset at C1:SUS-ETMX/Y_SDCOIL_OFFSET and looked at C1:SUS-ETMX/Y_SUSSIDE_IN1. Following are the changes in damping gains (a sketch of estimating the loop Q from such a step response follows the list):
C1:SUS-ETMX_SUSPOS_GAIN : 61.939 -> 61.939
C1:SUS-ETMX_SUSPIT_GAIN : 4.129 -> 4.129
C1:SUS-ETMX_SUSYAW_GAIN : 2.065 -> 7.0
C1:SUS-ETMX_SUSSIDE_GAIN : 37.953 -> 50
C1:SUS-ETMY_SUSPOS_GAIN : 150.0 -> 41.0
C1:SUS-ETMY_SUSPIT_GAIN : 30.0 -> 6.0
C1:SUS-ETMY_SUSYAW_GAIN : 30.0 -> 6.0
C1:SUS-ETMY_SUSSIDE_GAIN : 50.0 -> 300.0
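For reference, here is a minimal sketch of how the Q of such a step response can be estimated from the recorded error signal, assuming the data has already been fetched into a numpy array (e.g. with cdsutils or nds2); counting 3-5 visible oscillations corresponds roughly to Q of 3-5:

import numpy as np
from scipy.signal import find_peaks

def estimate_q(err, fs):
    """Estimate the effective Q of a damped step response from the
    logarithmic decrement of successive rectified peaks."""
    y = err - err[-int(fs):].mean()        # subtract the new settled value
    pk, _ = find_peaks(np.abs(y), height=0.05 * np.abs(y).max())
    if len(pk) < 3:
        return None                        # too few oscillations to estimate
    amps = np.abs(y[pk])
    # peaks of |y| occur every half cycle: delta = pi*zeta/sqrt(1-zeta^2)
    delta = np.mean(np.log(amps[:-1] / amps[1:]))
    zeta = delta / np.sqrt(np.pi**2 + delta**2)
    return 1.0 / (2.0 * zeta)

The gain would then be raised if the estimated Q comes out below ~3 and lowered if above ~5.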
I updated the following in the rtcds models and medm screens:
I further updated the LSC model today with the following changes:
The model built and installed with no issues.
Further, the slow EPICS channels for the BH55 anti-aliasing switch and whitening switch were added in /cvs/cds/caltech/target/c1iscaux/C1_ISC-AUX_LSCPDs.db
The IPC issue that we were facing earlier is resolved now. The BH55_I and BH55_Q signals after phase rotation are successfully reaching the c1hpc model, where they can be used to lock the LO phase. To resolve this issue, I had to restart all the models. I also power cycled the LSC I/O chassis during this restart, as Tega suspected that such a power cycle is required when adding new Dolphin channels; there is no way to find out whether it was actually needed. The good news is that with the new CDS upgrade, restarting rtcds models will be much easier and more modular.
Since ETMY does not use the HV coil driver anymore, the watchdog on ETMY needs to be similar to the other new optics. We made these updates today. Now the ETMY watchdog slowly ramps down the alignment offsets when it is tripped.
I updated the c1ass model today to use PR2 and PR3 instead of TT1 and TT2 for YARM ASS. This required changes in c1su2 too. I have split c1su2 into c1su2 (LO1, LO2, AS1, AS4) and c1su3 (SR2, PR2, PR3). The two models now use 31 and 21 out of 60 CPU, where the single model earlier used 55/60. All changes compiled correctly and have been uploaded. The models have been restarted and the medm screens have been updated.
I turned on WFS on IMC at:
PDT: 2022-10-01 13:09:18.378404 PDT
UTC: 2022-10-01 20:09:18.378404 UTC
The following channels are being saved in frames at 1024 Hz rate:
We can keep it running over the weekend as we will not use the interferometer. I'll keep an eye on it with occasional log-ins. We'll post the time when we switch it off again.
The IMC lost lock at:
UTC Oct 03, 2022 01:04:16 UTC
Central Oct 02, 2022 20:04:16 CDT
Pacific Oct 02, 2022 18:04:16 PDT
GPS Time = 1348794274
The WFS loops kept running and thus took IMC to a misaligned state. Between the above two times, IMC was locked continuously with very brief lock loss events, and had all WFS loops running.
[Yuta, Paco, Anchal]
BH55 RFPD installation was still not complete as of yesterday because of a peculiar issue. As soon as we increased the whitening gain on this photodiode, we saw spikes coming in at around 10 Hz. The following events took place while debugging this issue:
The installation of BH55 RFPD is complete now.
Chris and I discussed our plan for the CDS upgrade, which amounts to moving the new FEs, new chiara, and the new FB1 OS system to the martian network.
Date: Fri Oct 7, 2022
Time: 11:00 am (After coffee and donuts)
Minimum required people: Chris, Anchal, JC (the more the merrier)
sudo systemctl stop modbusIOC.service
sudo shutdown -h now
We estimated the meas diff angle for BH55 today by following this elog post. We used moku:lab Moku01 to send a 55 MHz tone to the PD input port of the BH55 demodulation board. Then we looked at the I_ERR and Q_ERR signals. We balanced the gain on the I channel to 1.16 to get the two signals to the same peak-to-peak heights. Then we changed the meas diff angle to 91.97 to make the "bounding box" zero. Our understanding is that we just want the ellipse to be along the x-axis.
We also aligned the beam input to BH55 a bit better. We used the single-bounce beam from the aligned ITMY as the reference.
We attempted to lock the LO phase using just the BH55 demodulated output.
We expected that we would be able to lock to 90 degrees LO phase just like DC locking. But now we understand that we can't beat the light against its own phase-modulated sidebands.
The confusion happened because this scheme does work with the Michelson: at the dark port of the Michelson, amplitude modulation is generated at 55 MHz. We tried to do the same thing as was done for DC locking with a single bounce and then the Michelson, but we should have seen this beforehand. Lesson: always write down the expected signal before attempting the lock.
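As a sketch of the geometry involved (this is not the code we ran; it just mimics the by-hand procedure, assuming I/Q arrays spanning several fringes have been captured):

import numpy as np

def balance_iq(I, Q):
    """Gain to equalize I/Q peak-to-peak heights, and the residual
    ellipse tilt that the meas diff angle should remove."""
    g = np.ptp(Q) / np.ptp(I)              # gain applied to the I channel
    x = g * (I - I.mean())
    y = Q - Q.mean()
    # ellipse orientation from the second moments of the (x, y) cloud
    theta = 0.5 * np.arctan2(2 * np.mean(x * y),
                             np.mean(x * x) - np.mean(y * y))
    return g, np.degrees(theta)            # adjust angle until tilt ~ 0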
[Chris, Anchal, JC, Paco, Yuta]
We finished all steps up to step 3 without any issue. We restarted all workstations to get the new nfs mount from New Chiara. Some other machines in the lab might require a restart too if they use nfs mounts. Note: c1sus was initially connected using a fiber oneStop cable that tested OK with the teststand IO chassis, but it still did not work with the c1sus chassis, and was reverted to a copper cable.
[Chris, Anchal, JC]
While doing step 4, we realized that all 8 drive bays in the existing fb1 are occupied by disks that are managed by a hardware RAID controller (MegaRAID). All 8 hard disks seem to be combined into a single logical volume, which is then partitioned and appears to the OS as a 2 TB storage device (/dev/sda for OS) and 23.5 TB storage device (/dev/sdb for frames). There was no free drive bay to install our OS drive from fb1 (clone), nor was there any already installed drive that we could identify as an "OS drive" and swap out, without affecting access to the frame data. We tried to boot fb1 with the OS drive from fb1 (clone) using multiple SATA to USB cables, but it was not detected as a bootable drive. We then tried to put the OS drive back in fb1 (clone) and use the clone machine as the 40m framebuilder temporarily, in order to work on booting up fb1 in parallel with the rest of the upgrade. We found that fb1 (clone) would no longer boot from the drive either, as it had apparently lost (or never possessed?) its grub boot loader. The boot loader was reinstalled from the debian 10 install thumbdrive, and then fb1 (clone) booted up and functioned normally, allowing the remainder of the upgrade to go forward.
Jamie investigated the situation with the existing fb1, and found that there seem to be additional drive bays inside the fb1 chassis (not accessible from the front panel), in which the new OS disk could be installed and connected to open SATA ports on the motherboard. We can try this possible route to booting up fb1 and restoring access to past frames next week.
We carried out the rest of the steps up to 7.3. We started all slow machines; some of them required reloading the daemons using:
We found that we were unable to ssh to c1psl, c1susaux, and c1iscaux. It turned out that chiara (clone) had a very outdated martian host table for the nameserver. Since Chris had introduced some new IPs for IPMI ports, the Dolphin switch, etc., we could not simply copy back from the old chiara. So Chris used the diff command to go through all changes and restored the DNS configuration.
We were able to burt restore to the Oct 7, 03:19 am point using the latest snapshot on New Chiara. All suspensions were being locally damped properly. We restarted megatron and optimus to get the nfs mounts. All docker services are running normally, the IMC autolocker is working, and the FSS slow PID is working as well. The PMC autolocker is also working fine. megatron's autoburt cron job is running properly and resumed creating snapshots from 6:19 pm onwards.
The c1mcs and c1ioo models have been updated to acquire additional data channels.
We found from https://ldvw.ligo.caltech.edu/ldvw/view that the following channels with "WFS" in them are acquired at the sites:
These are most probably the error signals of WFS1 and WFS2. At the 40m, we have the following channel names instead:
And similarly for the Q outputs. We have also chosen quadrature signals (signals after the sensing matrix) at:
We added the following testpoints and are acquiring them at 1024 Hz:
For the transmission QPD at MC2, we found the following acquired channels at the sites:
We created testpoints in c1mcs.mdl to add these channel names and acquire them. The following channels are now available at 1024 Hz:
We started acquiring the following channels for the 6 error signals at 1024 Hz:
We also started acquiring the following 6 control signals at 1024 Hz:
RXA: useful to know that you have to append "_DQ" to all of the channel names above if you want to find them with nds2-client.
In order to get C1:IOO-MC_TRANS_[DC/P/Y], we had to get rid of the same-named EPICS output channels in the model. These were previously acquired at 16 Hz this way. We then updated the medm screens and the autolocker config file. For slow outputs of these channels, we are now using C1:IOO-MC_TRANS_[PIT/YAW/SUMFILT]_OUTPUT. We had to restart the daqd services for the changes to take effect. This can be done by sshing into fb1 and running:
Now there is a convenient button in the FE overview status medm screen to restart the daqd services with a simple click.
We have added an RF amplifier to the output of BH55. See the MICH signal on BH55 outputs as compared to AS55 output on the attached screenshot.
- Amplify the BH55 RF signal before demodulation to increase the SNR. In order to power an RF amplifier, we need to use a breakout board to divert some power from the DB15 cable currently powering BH55.
Please throw away malfunctioning parts or label them malfunctioning before storing them with other parts. If we have to test each and every part before installation, it will waste too much of our time.
WFS loops had been running for the past 2 hours when I set the overall gain slider to zero at:
PDT: 2022-10-18 20:42:53.505256 PDT
UTC: 2022-10-19 03:42:53.505256 UTC
The output values are held at a good alignment. IMC transmission is about 14100 counts right now. I'll turn on the loop tomorrow morning. Data from tonight can be used for monitoring open loop noise.
Turning WFS loops back on at:
PDT: 2022-10-19 09:48:16.956979 PDT
UTC: 2022-10-19 16:48:16.956979 UTC
After the amplifier was modified with a capacitor, we continued trying to lock the LO phase in quadrature with the AS beam. Following is a short summary of the efforts:
Following the discussion in this elog thread (40/6004), I used the design of the F2A (force to angle (pitch)) decoupling filter described in DCC doc T010140. This document is very useful, as it covers the overall control loops of a suspension, including sensor signal conditioning, damping filter shapes, force-to-pitch decoupling, pitch-to-position decoupling, and coil strength balancing. If people work on suspension characterization and damping in the future, this document is a good resource.
The document addresses this problem with a first-principles calculation using the geometry of the single suspensions. As a first pass, I decided to use the design values of these geometric parameters to create one filter design for the upper coils and one for the lower coils. The parameters are listed in table 2. I used the following:
Using the above parameters, we can define the F2A filter for the upper coils as:
and for the lower coils:
I used the design values as listed in the table above and got the filters shown in attachment 1 for the Q=3 case. I think this Q is higher than that of other f2a filters I have seen; for example, the ETMY filters are shown in attachment 2.
I tried turning on the MC1 f2a filters, but the watchdog tripped in about 4 minutes. This was with the WFS turned off. Another trial also led to the same result. I tried this on MC2 and MC3 as well; all of them tripped after some time. See attachment 3 for MC1 tripping on these filters.
I'll now try to use a lower Q filter.
After a quick discussion with Yuta, we figured that where Peter Fritschel introduces a finite Q for the pole pair in DCC doc T010140, he should have done the same for the zero pair as well; otherwise there will be a notch at around 1 Hz. So I simply modified the filter design to have the same Q for both the zero pair and the pole pair, and got the following transfer functions:
For upper coils:
for lower coils:
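In generic form (my notation; the exact numbers follow from the geometric parameters above), such an equal-Q filter for either coil pair is

H(s) = (s^2 + (w_z/Q) s + w_z^2) / (s^2 + (w_0/Q) s + w_0^2),    with w_z = w_0 sqrt(g),

where w_0 is the POS resonance and the DC gain g = (w_z/w_0)^2 is greater than 1 for the upper coils and less than 1 for the lower coils. The response returns to unity above the resonance, and with the same Q on the zero pair there is no notch.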
Attachment 1 shows the new filter design. I tested this filter set on MC1, and the optic kept going as if nothing changed. That is at least a good sign. The next step would be to test whether this actually helped in reducing the POS->PIT coupling on MC1, maybe using WFS signals.
The filters were added using this createF2Afilters.py script.
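For anyone reproducing this, a minimal sketch of such a filter design (not the createF2Afilters.py script itself; f0 and the DC gain here are illustrative numbers):

import numpy as np
import scipy.signal as sig

f0 = 0.97            # POS pendulum resonance [Hz]; illustrative value
Q = 3.0              # same Q on the zero pair and the pole pair
g_dc = 1.2           # DC force-to-angle correction; illustrative value

w0 = 2 * np.pi * f0
wz = w0 * np.sqrt(g_dc)                 # DC gain = (wz/w0)^2 = g_dc
num = [1.0, wz / Q, wz**2]              # zero pair carrying the same Q
den = [1.0, w0 / Q, w0**2]              # pole pair at the POS resonance

f = np.logspace(-2, 1, 400)             # 0.01 Hz to 10 Hz
w, h = sig.freqs(num, den, worN=2 * np.pi * f)
# |h| goes from g_dc at DC to 1 above the resonance, with no notch
# (for the lower coils, use g_dc < 1 instead)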
I've cleared all old attempts at F2A filters on MC1, MC2, and MC3, and added the default F2A filter described in the last post. I added 3 such sets of filters, with Q=3, 7, and 10. I have turned on the Q=3 filter for all IMC optics right now. I'll set up a test of switching between the different Q filters in the future.
I balanced the face coil strengths of MC1 using the following steps:
By the end, I could see no actuation on POS when butterfly was driven with 30000 counts amplitude at 13 Hz, and no PIT or YAW actuation when POS was driven with 10000 counts at 13 Hz.
Final coil strengths found:
I used this notebook while doing the above work. It has a couple of functions that could be useful in the future when doing similar balancing.
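The core of that balancing is measuring the response at the drive frequency; a minimal digital lock-in sketch (assuming the sensor time series has been fetched at rate fs; the notebook's own functions may differ):

import numpy as np

def lockin(x, fs, f_drive):
    """Amplitude and phase of x at the drive frequency."""
    t = np.arange(len(x)) / fs
    z = 2 * np.mean(x * np.exp(-2j * np.pi * f_drive * t))
    return np.abs(z), np.degrees(np.angle(z))

# e.g., drive butterfly at 13 Hz and null the POS response by iterating
# the four coil gains; then drive POS and null the PIT/YAW responses.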
Balanced MC2 coil strengths using the same method.
Balanced MC3 coil strengths using the same method.
I'll setup some test of switching between different Q filters in future.
The f2A filters are set for testing on the IMC optics. The script used is testF2AFilters.py. The script is running on rossa in a tmux session named f2aTest. It will trigger at 1 am, Nov 4th 2022. First the script will turn off all F2A filters on the IMC optics and wait for an hour; then it will try out the three F2A filter sets with different Q values, one at a time, for one hour each. So the test should last roughly 4 hours. The gpstime stamps will be written to a logfile that can be used later to read back the noise performance of the IMC with each filter. The script has a try-except failsafe to revert things to the original state if something fails. To stop the script from triggering, or to stop it during runtime, do the following on rossa:
The LO phase lock that was achieved lasts only a short time, because as soon as a considerable POS offset is required on AS1, the POS to PIT coupling makes the AS-LO overlap go away. To fix this, we need to balance the coil outputs of at least AS1 and add the f2a filters too. To follow a method similar to the one used for the IMC optics, we need a sensor for true PIT and YAW motion of AS1. Today, we looked into the possibility of installing a QPD in the BHD output path to use for AS1, AS4, LO1, LO2, SR2, PR2 and PR3 coil strength tuning. We found a QPD which is mentioned in this elog. We found QPD interface boards set up for the old MCT and MC Refl QPDs (dating before 2008). We also found the old IP-POS QPD cable between 1Y2 and the BS oplev table. We took out this cable from the BS oplev end up to the ITMY oplev table, put a new DB25 connector on the ribbon cable, and connected it to the QPD on the ITMY table. The following work remains to be done:
This test was not successful, as the IMC lost lock during the f2A filter trial. However, we do have 1 hour of data with all f2A filters turned off, between the following GPS times:
start gpstime: 1351584077
stop gpstime: 1351587677
After this gpstime, the f2A filters were turned ON for all IMC optics. After about 2000 seconds with no issues, the MC3 suspension suddenly rang up with 1 Hz oscillations around gpstime 1351590720. See attachment 1 for noise spectra of the local damping error signals for MC3 before and after this event. See attachment 2 for a time series of this event.
After this point, MC3 remained rung up and the IMC remained unlocked, so no WFS signals are meaningful after gpstime 1351590720.
I saw this happen out of nowhere to MC3 again today, when the PSL shutter was closed and the only thing interacting with MC3 was the local damping loop. This suggests that some glitch event happens in MC3 which is not handled well by the f2a filter on it. The ringing goes down as soon as we turn OFF the f2a filter. The other optics show no such signs.
We'll do more tests in the future to figure out the issue. For now, the MC3 f2a filters are kept off. Maybe we need a custom filter for MC3 rather than the design-value default filter we are using right now. I'm attaching a foton bode plot of the MC3 f2a filters to verify that the correct filters are in place.
The following configurations were kept this morning:
I checked again today by sending an excitation at POS and reading back C1:IOO-MC_TRANS_P and C1:IOO-MC_TRANS_Y. I found that there was some POS->PIT and POS->YAW coupling remaining, which I was able to remove by the same method. The new coil gains are:
I tried running the F2A filter with Q=1 on MC3 for a few hours today, and with Q=0.3 for maybe 30 minutes, and both keep the suspension stable. So I'm going to put the Q=0.3 filter in FM1, Q=0.7 in FM2, and the Q=1 filter in FM3. I am setting up the test again for tonight with some modifications. Now the separate sets of filters will be tried one by one on the three different optics, so that we know the best Q filter for each optic. It is set to trigger at 1 am tonight in tmux sessions f2aMC1Test, f2aMC2Test, and f2aMC3Test on rossa. To cancel the test or interrupt it, do:
The new QPD installation is turning out to be much harder than it originally seemed. After finding the cable, QPD, and interface board, when I tried to use the cable, it seemed to not be powered or connected to the interface board at all. I tried both QPD ports on the QPD interface board (D990692) but neither worked. I measured the output pins of the IDC style connector on the interface board and they seem to have the correct voltages at the correct pins. But when I connect this to our cable and check the voltages at the other end of the cable (which is a DB25, using a breakout board), I see nothing. The even pins, which are supposed to be connected to each other and to GND, are also not connected to each other. I pulled out the DB25 end of the cable and brought it close to the IDC end to do a direct continuity test, and this test failed too.
I even found another IDC end of a spare QPD cable hanging near 1Y2, but could not find the other end of this cable either.
So moving forward, we have the following options:
The MC2 OSEM outputs were calibrated today using MC_F to get the output values in microns. This was done using this diaggui file. We drove a sine wave at 13 Hz and 5000 cts at C1:SUS-MC2_BIASPOS_EXC. This signal was read at C1:IOO-MC_F and at C1:SUS-MC2_ULSEN_OUT and the similar OSEM output channels. The MC_F calibration in Hz is assumed to be correct. In diaggui, a calibration conversion of 4.8075e-14 m/Hz is added to convert the MC_F signal into meters. This is then used to calibrate the OSEM outputs, and the necessary gain changes were made in the cts2um filter module in all of the face OSEM input filters. Following are the new gains:
Note that this measurement was done after the coil strengths for MC2 have been balanced in 40m/17223.
Following up, I tried to do this exercise with MC1 and MC3. While MC3 showed the expected minute corrections to the previous values, MC1 showed much larger corrections, which led me to investigate further. Koji suggested taking a transfer function between MC_F and the OSEM outputs for both MC1 and MC3 in the same way, to see if something is different. And Koji was absolutely right: the MC1 MC_F to OSEM output transfer function has a frequency dependent value, with a slope of ~0.6. Very weird. I'm holding off on the OSEM calibration for both MC1 and MC3 until we know better what is happening. See attached transfer functions.
Reminder: MC1 is using the new satellite amplifier box, but the OSEM outputs are read through the single-ended PDMon outputs rather than the differential PD Output port, because the rest of the MC1 electronics is still last generation and its whitening board takes single-ended input.
I tuned the MC3 local damping gains by looking at step responses in the DOF basis. The same procedure was followed as described in 40m/17133. The gains were changed as follows:
Attachment 1 shows the step responses with the old gains and attachment 2 shows the step responses with the new gains. There is considerable cross coupling between the SIDE OSEM/coil and the face DOFs (POS, PIT, YAW). I think the high SIDE gain earlier was the culprit that started ringing with the f2a filters.
I agree that the POS and SIDE step responses could look better, but this was the best I was able to achieve. Further attempts by others are most welcome.
I also verified running the f2a filter with Q=3; it has been running stably with no ringing for the past few minutes. Longer-term behavior is yet to be seen.
This time the test was successful, but I have reverted the MC3 f2a filters back to Q=3, 7, and 10. The initial part of the test is still useful though. I'm attaching amplitude spectral density curves for the WFS control points and C1:IOO-MC_F_DQ in the different configurations. The shaded region spans the 15.865 percentile to 84.135 percentile bounds of the PSD data, corresponding to +/- 1 sigma for a gaussian variable. Also note that the FFT bin width differs in each frequency decade, such that each decade has 90 points (e.g., 0.1 Hz bin width for 1 Hz to 10 Hz data, 1 Hz bin width for 10 Hz to 100 Hz, and so on).
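A sketch of how such percentile bands can be computed from many Welch segments (one fixed bin width shown; the per-decade binning repeats this with a different nperseg per decade):

import numpy as np
from scipy.signal import welch

def percentile_asd(x, fs, nperseg, pcts=(15.865, 50, 84.135)):
    """ASD percentile bands over non-overlapping Welch segments."""
    nseg = len(x) // nperseg
    psds = [welch(x[k*nperseg:(k+1)*nperseg], fs=fs, nperseg=nperseg)[1]
            for k in range(nseg)]
    f = welch(x[:nperseg], fs=fs, nperseg=nperseg)[0]
    bands = np.sqrt(np.percentile(psds, pcts, axis=0))   # PSD -> ASD
    return f, bands          # -1 sigma, median, +1 sigma curves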
The WFS control points do not show any significant difference in most of the frequency band. The differences below 10 mHz are not averaged enough, as these were only 30 min data segments.
The C1:IOO-MC_F_DQ channel also shows no significant difference from 0.1 Hz to 20 Hz. Between 20-100 Hz, we see that higher Q filters resulted in slightly less noise, but the effect of the filters in this frequency band should be nil, so this could be just coincidence, or maybe the system behaves better with higher Q filters. In the lower frequency band, we should take more data for better averaging after shortlisting some of these f2a filters. It seems like the MC1 Q=10 (red curve) filter performs very well. For MC2, there is no clear sign. I'm not sure why the MC2 Q=3 curve got a big offset in the low frequency region; such things normally happen due to a significant linear trend in the signal.
I'm not sure what other channels might be interesting to look at. Some input would be helpful.
I reran this measurement at a low frequency, 0.1016 Hz. Following were the cts2um gain changes:
Edited AG: Wed Nov 9 12:17:12 2022 : Uncertainties added by taking the coherence of each channel and MC_F with the excitation, using sqrt(1-γ²)/(γ·sqrt(2N)) to get the fractional error in the ASD values used for taking ratios (where γ is the coherence and N is the number of averages, 5 in this case), adding the MC_F ASD fractional error to each sensor's fractional error, and finally multiplying it with the ratios obtained above to get the error in the cts2um gain values.
RXA: I don't believe it. This is more accurate than the LIGO calibration of strain and also more accurate than the NIST calibration of laser power.
I did the same measurement for MC3, with one difference: the OSEMs report more motion than the IMC cavity length change because MC3 is at 45 degrees. Following are the new cts2um gain values:
f2a filters with Q=10 (FM3) were turned on all IMC optics.
I took coil to OSEM transfer functions for MC1 OSEMs (LL, UR) today, and again the slope of the transfer function was -1.4 instead of the expected -2. I compared this with the MC3 coil to OSEM transfer function (LL), which correctly had a slope of -2. See attachments 1 and 2 for the results. This measurement was taken with the PSL shutter closed and the local damping loops turned off.
As I mentioned earlier, MC1 is using the new satellite amplifier box (S2100029-v2), whose transfer function data exists and was actually measured by me in 40m/15776. Using this transfer function data and the foton 3:30 (FM1) filter, I tried to recreate the product transfer function that should result if both filters are working correctly. Attachment 3 shows these transfer function plots. I overlaid on top of this the measured OSEM-to-position-displacement transfer function from 40m/17238, matching the magnitudes at 1 Hz. It is suspicious how nicely the measured transfer function overlays with the satellite amplifier's measured transfer function, both in magnitude and phase. I'll investigate more tomorrow.
I have set a free swing test for MC2 and MC3 to trigger at 1 am tonight. The test should last about 4.5 hrs, up to 5:30 am. It will close the PSL shutter, perform the test, and open the shutter afterwards. To cancel or interrupt the test, go to rossa and do:
Late elog; original time Thursday, Nov 10 16:00 2022
MC1 is using a new satellite amplifier which has a whitening circuit on it with a 3 Hz zero and a 30 Hz pole. But to read out this signal, we use the old whitening board, as it also serves as the interface board with the ADC. This is the D000210 Whitening and Interface Board. This board has a switchable whitening filter, and our RTS models supply GND as the switch input. It was not immediately clear to me whether the GND input to this switch means the whitening is ON or not.
I disconnected the inputs and outputs of the whitening board used for the MC1 OSEM PDs, and used a moku:go to measure the transfer function of the UR channel. This confirmed that whitening is turned ON on this interface board as well, which means the MC1 OSEM signals are whitened twice while digitally we have been dewhitening only once. To fix this, there are two possible solutions:
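One can predict the expected readback shape under single vs. double analog whitening by cascading the stage responses; a sketch, assuming each stage is a simple 3 Hz zero / 30 Hz pole normalized to unity DC gain:

import numpy as np
import scipy.signal as sig

f = np.logspace(-1, 3, 400)
w = 2 * np.pi * f
# one whitening stage: zero at 3 Hz, pole at 30 Hz, unity gain at DC
z, p, k = [-2 * np.pi * 3], [-2 * np.pi * 30], 10.0
_, h1 = sig.freqs_zpk(z, p, k, worN=w)
h2 = h1**2        # satellite amp stage AND interface board stage both ON
# if the digital dewhitening undoes only one 3:30 stage, the readback
# keeps a residual h1, i.e. an extra +1 slope between 3 and 30 Hz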
I stopped the Docker PID script and started the old python script on megatron. Instructions on how to do this are here.
On optimus I ran:
On megatron I ran:
However, the daemon service keeps failing and restarting. So currently the FSSSlow is not running. I do not know how to debug this script.
On a side note, I tested the docker service by restarting it, and it was working. From the logs, it seems like it got stuck because it could not find the C1:IOO-MC_LOCK channel, which happens when the c1psl EPICS servers fail or get stuck. The blinker on this script runs while the script is running, but it does not stop if the script gets stuck somewhere. If someone decides to use this script in the future, they should correct the error catching so that no reply from caget is treated as an error and the script restarts, rather than it keeping on trying to get the channel value. Or the blinker implementation should change so that it displays a stuck state.
Whoever knows about this, please stop that Docker PID and we can just run the old python script on megatron.
I've moved the FSS Slow PID script to run on megatron through systemd daemons. The script is working as expected right now. I've updated the megatron motd and the always-running-scripts page here.
After the MC1 OSEM dewhitening was fixed, I calibrated the MC1 OSEM signals against MC_F using this notebook. A 0.1 Hz oscillation with an amplitude of 1000 cts was sent to MC1 lockin2 and kept on between 1352851381 and 1352851881. Then I read back the data from the DQ channels and performed a Welch estimate, with a standard deviation calculated from the different segments used. From this measurement, I arrived at the following cts2um gain values, which were changed in the MC1 filter file. The damping remained stable after the changes:
UL: 0.09 -> 0.105(12)
UR: 0.09 -> 0.078(9)
LR: 0.09 -> 0.065(7)
LL: 0.09 -> 0.087(10)
I followed the same method for MC3 as well to get more meaningful error bars. This measurement was done between 1352856980 and 1352857480 using this notebook. Here are the changes made:
UL: 0.39827 -> 0.509(57)
UR: 0.33716 -> 0.424(48)
LR: 0.335 -> 0.365(40)
LL: 0.34469 -> 0.376(43)
The larger error bars could be due to noisier MC3 OSEM outputs, as the satellite amplifier gain is lower there.
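A sketch of the line-height ratio with segment-based errors, in the spirit of the notebooks above (the data and rates here are synthetic stand-ins for the fetched DQ channels):

import numpy as np
from scipy.signal import welch

def line_asd(x, fs, f_line, nperseg):
    """Mean and std of the ASD at the drive line over Welch segments."""
    nseg = len(x) // nperseg
    vals = []
    for k in range(nseg):
        f, p = welch(x[k*nperseg:(k+1)*nperseg], fs=fs, nperseg=nperseg)
        vals.append(np.sqrt(p[np.argmin(np.abs(f - f_line))]))
    return np.mean(vals), np.std(vals)

fs, f_line, nperseg = 256, 0.1, 256 * 100      # stand-in numbers
rng = np.random.default_rng(0)
t = np.arange(0, 500, 1 / fs)
mcf = 3e4 * np.sin(2 * np.pi * f_line * t) + rng.normal(0, 300, t.size)
osem = 15 * np.sin(2 * np.pi * f_line * t) + rng.normal(0, 1, t.size)

m_um, s_um = line_asd(mcf * 4.8075e-14 * 1e6, fs, f_line, nperseg)  # MC_F -> um
m_ct, s_ct = line_asd(osem, fs, f_line, nperseg)                    # OSEM counts
cts2um = m_um / m_ct
err = cts2um * np.hypot(s_um / m_um, s_ct / m_ct)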
I repeated this test for the following configuration:
The test ran for 4000 seconds between following timestamps:
start time: 1352878206
stop time: 1352882207
This script was used to run this test and can be used again in future to repeat the same test.
Repeated this for MC2, using the error measurement technique mentioned in 40m/17286 and this notebook. Following are the cts2um gain changes:
UL: 0.30408 -> 0.415(47)
UR: 0.28178 -> 0.361(39)
LR: 0.80396 -> 0.782(248)
LL: 0.38489 -> 0.415(49)
I averaged 19 samples to get these values, hoping to have reached the systematic error limit. The errors did not change from a trial with 9 samples, except for the LR OSEM.
I followed this procedure to balance the coil strengths on ITMY. The position sensor was created by closing the PSL shutter so that the IR laser is free running, and locking the green laser to YARM; this makes C1:ALS-BEATY_FINE_PHASE_OUT a position sensor for ITMY. The oplev channels C1:SUS-ITMY_OL_PIT_IN1 and C1:SUS-ITMY_OL_YAW_IN1 were used as the PIT and YAW sensors. Everything else followed the procedure. The coil gains were changed as follows:
C1:SUS-ITMY_ULCOIL_GAIN : 1.036 -> 1.061
C1:SUS-ITMY_URCOIL_GAIN : -1.028 -> -0.989
C1:SUS-ITMY_LRCOIL_GAIN : 0.930 -> 0.943
C1:SUS-ITMY_LLCOIL_GAIN : -1.005 -> -1.007
I used this notebook and this diaggui to do this balancing.
c1hpc now has the option of dithering BS (sending an excitation to the BS LSC port on c1sus over IPC). This is available for demodulating the BHDC and BH55 signals. BS is also a possible feedback point; however, we will stick to using the LSC screen for any MICH locking.
c1sus underwent 2 changes. All suspension models were upgraded to the new suspension model (see 40m/16938 and 40m/17165). The channel data rates are now set in the simulink model, and the activateDQ script no longer does anything for any of the suspension models.
I've turned off the HEPA fan and all lights at:
PST: 2022-11-24 11:23:59.509949 PST
UTC: 2022-11-24 19:23:59.509949 UTC
c1ioo model has been updated to acquire C1:IOO-MC2_TRANS_PIT_OUT and C1:IOO-MC2_TRANS_YAW_OUT at 512 Hz rate.
I'll update when I turn the HEPA on again. I plan to turn it on for a few hours every day to keep the PSL enclosure clean.
Turned on HEPA again at:
PST: 2022-11-25 12:14:34.848054 PST
UTC: 2022-11-25 20:14:34.848054 UTC
However, this was probably not a low noise state, due to the vacuum disruption mentioned here.
I tried following the steps, and the method I was using converged to the same output matrix up to 2 decimal points, but there is still leftover cross coupling, as you can see in Attachment 1. With the new output matrix, the WFS loop can be turned on with a full overall gain of 1.
-2. 4.8 -7.3
3.6 3.5 -2.
2. 1. -6.8
3.44 4.22 -7.29
0.75 0.92 -1.59
3.41 4.16 -7.21
Note: The standard deviation on the averages was very high even after averaging for 30 s. This data should be averaged after low-passing the high frequencies, but I couldn't find the filter module medm screens for these signals, so I just proceeded with simple averaging of the full rate signal using the cdsutils avg command.
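For the record, the low-pass-then-average can also be done offline on the fetched full-rate data; a minimal sketch:

import numpy as np
from scipy.signal import butter, sosfiltfilt

def lowpassed_avg(x, fs, fc=1.0):
    """Mean of a readback after removing out-of-band noise, so a 30 s
    average is not dominated by high-frequency content."""
    sos = butter(4, fc, btype='low', fs=fs, output='sos')
    y = sosfiltfilt(sos, x)
    return y.mean(), y.std()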
Fri Nov 25 12:46:31 2022
The WFS loops are unstable again. This could be because the matrix balancing was done while the vacuum was disrupted. The above matrix does not work anymore.
I came in today to find the PSL shutter closed. I originally thought some shimmer observations were underway in the quiet state, but that was not the case. When I tried to open the shutter, it closed back again, indicating a hard compliance condition forcing it to close. This normally happens when the vacuum level is not sufficient, so I opened the vacuum screen and indeed all gate valves were closed. This most probably happened during this interlock trip. So the main volume was just slowly leaking, and reached the milliTorr level today.
Lesson for future: Always check vacuum status when interlock trips.
Paco came by to help. We went to asia (the Asus laptop at the vacuum workstation) but could not open the medm screens or find the nfs mounted files. The chiara change did something, and the nfs mounted directories were not available on asia or c1vac. We rebooted asia and the nfs mount was working again. We can't simply restart c1vac because it runs the acromag channels for the vacuum system; that needs to be done more carefully, a task for Monday.
After restarting asia, we opened the vacuum control medm screen and followed the vacuum pump down instructions (mainly opening the gate valves, as the pumps were already on). A point to keep in mind, as a rule of thumb: do not open a valve between a turbo pump and a volume if the pressure differential is more than 3 orders of magnitude. Saving the turbo pumps is the priority. The main volume is now pumping down.
Turned off HEPA at:
PST: 2022-11-25 15:34:55.683645 PST
UTC: 2022-11-25 23:34:55.683645 UTC
Turned on HEPA back at:
PST: 2022-11-28 11:14:31.310453 PST
UTC: 2022-11-28 19:14:31.310453 UTC