Tega has kindly made a summary page for the IMC WFS. It's in a tab on the usual summary pages.
One thing I notice is that the feedback to MC2 YAW seems to have very little noise. What's up with that?
The output matrix (attached) shows that the WFS have very little feedback to MC2 in YAW, but normal feedback in PIT. Has anyone recalculated this output matrix in the past ~1-2 years?
I'm going to read Prof. Izumi's paper (https://arxiv.org/abs/2002.02703) to get some insight.
The output matrix doesn't seem to have anything special that would make this happen. Any ideas on what this could be?
I have moved the following electronics / components to "Section Y10 beneath the tube"
During Wednesday’s lab clean up, we made a ton of progress in organization. Our main focus was to tackle CDS debris from the ongoing upgrade. We proceeded with the following tasks.
Attachment #2 shows that all the CDS equipment has been relocated behind Section X3 of the X-Arm.
Although we've usually used this empirical way to run the alignment, I'd prefer if we had an analytic / numerical model for it.
Radhika, could you look into writing down the equations for how the various dithers show up in the various error signals into an Overleaf doc? I'm thinking about this currently for the IMC, so we can Zoom about it next week once you've had a chance to think about it for a few days. It would be helpful to have this worked out for the 40m alignment for future debugging. Co-located dither and actuation is not likely the best way to do things from a noise and loop x-coupling point of view.
We proceeded with the TODO items from the previous elog.
We tried to update the YARM ASS output matrix to appropriately feed back the ETM and ITM T error signals (input beam pointing) to actuate on PR2 and PR3. Using the existing matrix (used for actuating on TT1 and TT2) led to diverging error signals and big drops in transmission. We iteratively tried flipping signs on the matrix elements, but exhausting all combinations of parity was not efficient, since angular sign conventions could be arbitrary across optics.
We decided to go ahead with Yuta's suggestion of dithering on PR2 and PR3 for input beam pointing, instead of ETMY and ITMY. This would simplify the output matrix greatly since dithering and actuation would now be applied to the same optics. Anchal made the necessary model changes. We tried a diagonal identity submatrix (for input pointing) to map each error signal to the corresponding DOF. With the length (L) control loops disengaged, this configuration decreased all T error signals and increased YARM transmission. We then re-engaged the L loops: the final result is that YARM transmission reached just below 1 [Attachment 1].
c1mcs and c1ioo models have been updated to add new acquisition of data.
We found from https://ldvw.ligo.caltech.edu/ldvw/view that the following channels with "WFS" in them are acquired at the sites:
These are most probably the error signals of WFS1 and WFS2. At the 40m, we have the following channel names instead:
And similar for the Q outputs as well. We have also chosen quadrature signals (signals after the sensing matrix) at:
We added the following testpoints and are acquiring them at 1024 Hz:
For the transmission QPD at MC2, we found the following acquired channels at the site:
We created testpoints in c1mcs.mdl to add these channel names and acquire them. The following channels are now available at 1024 Hz:
We started acquiring the following channels for the 6 error signals at 1024 Hz:
We started acquiring the following 6 control signals at 1024 Hz as well:
RXA: useful to know that you have to append "_DQ" to all of the channel names above if you want to find them with nds2-client.
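For example, a minimal nds2-client python sketch for grabbing one of these channels (the host/port, GPS times, and the channel name below are illustrative placeholders, not a record of what was actually run):

import nds2
conn = nds2.connection('fb1', 8088)   # 40m framebuilder host / NDS port (assumed)
start, stop = 1348000000, 1348000060  # any 60 s stretch, placeholder GPS times
bufs = conn.fetch(start, stop, ['C1:IOO-WFS1_PIT_OUT_DQ'])  # note the trailing _DQ
data = bufs[0].data                   # numpy array of samples
print(len(data), 'samples')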
In order to get C1:IOO-MC_TRANS_[DC/P/Y], we had to get rid of the same-named EPICS output channels in the model. These were being acquired at 16 Hz that way before. We then updated the medm screens and the autolocker config file. For slow outputs of these channels, we are using C1:IOO-MC_TRANS_[PIT/YAW/SUMFILT]_OUTPUT now. We had to restart the daqd service for the changes to take effect, by sshing into fb1 and restarting daqd.
Now there is a convenient button in the FE overview status medm screen to restart the DAQD service with a simple click.
Perhaps 1PPS around the PSL was used for the Rb standard to be locked to the GPS 1PPS.
If we need to drive multiple devices, we should use a fanout circuit to avoid distorting the 1PPS. -CW
The original fb1 now boots from its new drive, which is installed in a fixed drive bay and connected via SATA. There are no spare SATA power cables inside the chassis, so we’re temporarily powering it from an external power supply borrowed from a USB to SATA kit (see attachment).
The easiest way to eliminate the external supply would be to use a 4-pin Molex to SATA adapter, since the chassis has plenty of 4-pin Molex connectors available. Unfortunately those adapters sometimes start fires. The ones with crimped connectors are thought to be safer than those with molded connectors. However, the safest course will probably be to just migrate the OS partition onto the 2 TB device from the hardware RAID array, once we’re happy with it.
Historic data from the past frames should now be available, as should NDS2 access.
Starting to look into the timing issue:
List of remaining tasks to iron out various wrinkles with the upgrade:
The situation with the timing system is that we have not touched it at all in the upgrade, but have added a new diagnostic: Spectracom timing boards in each FE, to compare vs the ADC clock. So I expect that what we’re seeing now is not new, but may not have been noticed previously.
What we’re seeing is:
Possible issues stemming from unstable timing:
Thanks for pointing out that EPICS data collection (slow channels) was not working. I started the service that collects these channels (standalone_edc, running on c1sus), and pointed it to the channel list in /opt/rtcds/caltech/c1/chans/daq/C0EDCU.ini, so this should be working now.
controls@c1sus:~$ systemctl status rts-edc_c1sus
● rts-edc_c1sus.service - Advanced LIGO RTS stand-alone EPICS data concentrator
Loaded: loaded (/etc/systemd/system/rts-edc_c1sus.service; enabled; vendor preset: enabled)
Active: active (running) since Sun 2022-10-09 23:30:15 PDT; 10h ago
Main PID: 32154 (standalone_edc)
├─32154 /usr/bin/standalone_edc -i /etc/advligorts/edc.ini -l 0.0.0.0:9900
The IMC and the IMC WFS kept running for ~2 days. 👍
I wanted to look at the trend of the IMC REFL DC, but the dataviewer showed that the recorded values are zero. And this slow channel is missing in the channel list.
I checked the PSL PMC signals (slow) as an example, and many channels are missing in the channel list.
So something is not right with some part of the CDS.
Note that I also reported that the WFS plot in the previous elog referred to above shows values like 1e39.
[Chris, Anchal, JC, Paco, Yuta]
sudo systemctl stop modbusIOC.service
sudo shutdown -h now
We finished all steps up to step 3 without any issue. We restarted all workstations to get the new nfs mount from New Chiara. Some other machines in the lab might require a restart too if they need nfs mounts. Note, c1sus was initially connected using a fiber OneStop cable that tested OK with the teststand IO chassis, but it still did not work with the c1sus chassis, and was reverted to a copper cable.
[Chris, Anchal, JC]
While doing step 4, we realized that all 8 drive bays in the existing fb1 are occupied by disks that are managed by a hardware RAID controller (MegaRAID). All 8 hard disks seem to be combined into a single logical volume, which is then partitioned and appears to the OS as a 2 TB storage device (/dev/sda for OS) and 23.5 TB storage device (/dev/sdb for frames). There was no free drive bay to install our OS drive from fb1 (clone), nor was there any already installed drive that we could identify as an "OS drive" and swap out, without affecting access to the frame data. We tried to boot fb1 with the OS drive from fb1 (clone) using multiple SATA to USB cables, but it was not detected as a bootable drive. We then tried to put the OS drive back in fb1 (clone) and use the clone machine as the 40m framebuilder temporarily, in order to work on booting up fb1 in parallel with the rest of the upgrade. We found that fb1 (clone) would no longer boot from the drive either, as it had apparently lost (or never possessed?) its grub boot loader. The boot loader was reinstalled from the debian 10 install thumbdrive, and then fb1 (clone) booted up and functioned normally, allowing the remainder of the upgrade to go forward.
Jamie investigated the situation with the existing fb1, and found that there seem to be additional drive bays inside the fb1 chassis (not accessible from the front panel), in which the new OS disk could be installed and connected to open SATA ports on the motherboard. We can try this possible route to booting up fb1 and restoring access to past frames next week.
We carried out the rest of the steps up to 7.3. We started all slow machines; some of them required reloading the daemons using:
We found that we were unable to ssh to c1psl, c1susaux, and c1iscaux. It turned out that chiara (clone) had a very outdated martian host table for the nameserver. Since Chris had introduced some new IPs for IPMI ports, dolphin switch etc, we could not simply copy back from the old chiara. So Chris used diff command to go through all changes and restored DNS configuration.
We were able to burt restore to the Oct 7, 03:19 am point using the latest snapshot on New Chiara. All suspensions were being locally damped properly. We restarted megatron and optimus to get the nfs mounts. All docker services are running normally, the IMC autolocker is working, and the FF slow PID is working as well. The PMC autolocker is also working fine. megatron's autoburt cron job is running properly and resumed creating snapshots from 6:19 pm onwards.
After the CDS upgrade team called it a day (their work TBD), I took over the locked IMC to check how it looked.
The lock was robust but the IMC REFL spot and the WFS DC/MC2 QPD were moving too much.
I wondered if there was something wrong with the damping. I thought the MC3 damping seemed weak, but this was at a tolerable level.
During the ringdown check of MC2, I saw that the OSEM signals were glitchy. In the end I found it was the LR sensor that was glitchy and fluctuating.
I went into the lab and pushed the connectors on the euro card modules and the side connectors as well as the cables on the MC2 sat amp.
I came back to the control room and found that the MC2 LR OSEM signal had a jump, and it seems more stable now.
I leave the IMC locked & WFS running. This sus situation is not great at all and before we go too far, we'll work on the transition to the new electronics (but not today or next week).
By the way the unit of the signals on the dataviewer didn't make sense. Something seemed wrong with them.
[Yuta, Paco, Anchal]
We estimated the meas diff angle for BH55 today by following this elog post. We used moku:lab Moku01 to send a 55 MHz tone to the PD input port of the BH55 demodulation board. Then we looked at the I_ERR and Q_ERR signals. We balanced the gain on the I channel to 1.16 to get the two signals to the same peak-to-peak heights. Then we changed the meas diff angle to 91.97 to make the "bounding box" zero. Our understanding is that we just want the ellipse to be along the x-axis.
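As a cross-check of this kind of quadrature balancing, here is a minimal python sketch (not the procedure we actually ran; i_err and q_err stand for the recorded I_ERR / Q_ERR traces of the injected tone) that estimates the gain imbalance and the deviation from perfect 90-degree quadrature directly from the time series:

import numpy as np

def iq_ellipse_params(i_err, q_err):
    i = i_err - np.mean(i_err)
    q = q_err - np.mean(q_err)
    gain_ratio = np.std(i) / np.std(q)   # gain needed to balance the peak-to-peak heights
    # for i = A*cos(wt) and q = B*sin(wt + delta): <i*q> = A*B*sin(delta)/2
    sin_delta = np.mean(i * q) / np.sqrt(np.mean(i**2) * np.mean(q**2))
    return gain_ratio, np.degrees(np.arcsin(sin_delta))  # deviation from quadrature in deg

The second number is (roughly) the amount by which the meas diff angle has to differ from 90 degrees to make the ellipse lie along the x-axis.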
We also aligned beam input to BH55 bit better. We used the single bounce beam from aligned ITMY as the reference.
We attempted to lock LO phase with just using BH55 demodulated output.
We expected that we would be able to lock to 90 degrees LO phase just like DC locking. But now we understand that we can't beat the light with its own phase-modulated sidebands.
The confusion happened because it would work with the Michelson: at the dark port of the Michelson, amplitude modulation is generated at 55 MHz. We tried to do the same thing as was done for DC locking with the single bounce and then the Michelson, but we should have seen this beforehand. Lesson: Always write down the expectation before attempting the lock.
Chris and I discussed our plan for the CDS upgrade, which amounts to moving the new FEs, new chiara, and new FB1 OS system to the martian network.
Date: Fri Oct 7, 2022
Time: 11:00 am (After coffee and donuts)
Minimum required people: Chris, Anchal, JC (the more the merrier)
The BH55 RFPD installation was still not complete as of yesterday because of a peculiar issue. As soon as we would increase the whitening gain on this photodiode, we saw spikes coming in at around 10 Hz. The following events took place while debugging this issue:
The installation of BH55 RFPD is complete now.
Last night’s CDS upgrade attempt succeeded in taking over the IFO. If the IFO users are willing, let’s try to run with it today.
The new system was left in control of the IFO hardware overnight, to check its stability. All looks OK so far.
The next step will be to connect the new FEs, fb1, and chiara to the martian network, so they’re directly accessible from the control room workstations (currently the system remains quarantined on the teststand network). We’ll also need to bring over the changes to models, scripts, etc that have been made since Tega’s last sync of the filesystem on chiara.
The previous elog noted a mysterious broken state of the OneStop link between FE and IO chassis, where all green LEDs light up on the OneStop board in the IO chassis, except the four next to the fiber link connector. This was seen on c1sus and c1iscex. It was recoverable last night on c1iscex, by fully powering down both FE computer and chassis, waiting a bit, and then powering up chassis and computer again. Currently c1sus is running with a copper OneStop cable because of the fiber link troubles we had, but this procedure should be tried to see if one of the fiber links can be made to work after all.
In order to string the short copper OneStop cable for c1sus, we had to move the teststand rack closer to the IO chassis, up against the back of 1X6/1X7. This is a temporary state while we prepare to move the FEs to their new rack. It hopefully also allows sufficient clearance to the exit door to pass the upcoming fire inspection.
At first, we connected the teststand rack’s power cables to the receptacle in 1X7, but this eventually tripped 1X7’s circuit breaker in the wall panel. Now, half of the teststand rack is on the receptacle in 1X6, and the other half is on 1X7 (these are separate circuits).
After the breaker trip, daqd couldn’t start. It turned out that no data was flowing to it, because the power cycle caused the DAQ network switch to forget a setting I had applied to enable jumbo frames on the network. The configuration has now been saved so that it should apply automatically on future restarts. For future reference, the web interface of this switch is available by running firefox on fb1 and navigating to 10.0.113.254.
When the FE machines are restarted, a GPS timing offset in /sys/kernel/gpstime/offset sometimes fails to initialize. It shows up as an incorrect GPS time in /proc/gps and on the GDS_TP MEDM screens, and prevents the data from getting timestamped properly for the DAQ. This needs to be looked at and fixed soon. In the meantime, it can be worked around by setting the offset manually: look at the value on one of the FEs that got it right, and apply it using sudo sh -c "echo CORRECT_OFFSET >/sys/kernel/gpstime/offset".
sudo sh -c "echo CORRECT_OFFSET >/sys/kernel/gpstime/offset"
In the first ~30 minutes after the system came up last night, there were transient IPC errors, caused by drifting timestamps while the GPS cards in the FEs got themselves resynced to the satellites. Since then, timing has remained stable, and no further errors occurred overnight. However, the timing status is still reported as red in the IOP state vectors. This doesn’t seem to be an operational problem and perhaps can be ignored, but we should check it out later to make sure.
Also, the DAC cards in c1ioo and c1iscey reported FIFO EMPTY errors, triggering their DACKILL watchdogs. This situation may have existed in the old system and gone undetected. To bypass the watchdog, I’ve added the optimizeIO=1 flag to the IOP models on those systems, which makes them skip the empty FIFO check. This too should be further investigated when we get a chance.
[Jamie, JC, Chris]
Today we made a failed attempt to take over the 40m hardware with the new front ends on the test stand.
As an initial test, we connected the new c1iscey to its I/O chassis using the OneStop fiber link. This went smoothly, so we tried to proceed with the rest of the system, which uncovered several problems. Consequently, we’ve reverted control back to the old front ends tonight, and will regroup and make another attempt tomorrow.
There are various pathologies that we've seen with the OneStop interface cards in the I/O chassis. We don't seem to have the documentation for these cards, but our interpretive guesses are as follows:
Tomorrow, we plan to swap out the c1iscex I/O chassis for the one in the test stand, and see if that lets us get the full system up and running.
We started the day by taking a spectrum of C1:HPC-LO_PHASE_IN1, the BHD error point, and confirming the absence of 268 Hz peaks believed to be violin modes on LO1. We then locked the LO phase by actuating on LO2, and AS1. We couldn't get a stable loop with AS4 this morning. In all of these trials, we looked to see if the noise increased at 268 Hz or its harmonics but luckily it didn't. We then decided to add the necessary output filters to avoid exciting these violin modes. The added filters are in the C1:SUS-LO1_LSC bank, slots FM1-3 and comprise bandstop filters at first, second and third harmonics observed previously (268, 536, and 1072 Hz); bode plots for the foton transfer functions are shown in Attachment #1. We made sure we weren't adding too much phase lag near the UGF (~ 1 degree @ 30 Hz).
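The real filters live in foton, but as a rough sanity check of the phase-lag number, a simple scipy sketch like the one below (generic IIR notches at the three harmonics, not the actual foton design; fs = 16384 Hz and the notch Qs are assumptions) reproduces the kind of check we did at 30 Hz:

import numpy as np
from scipy import signal

fs = 16384.0                                              # assumed model rate
notches = [(268.0, 30.0), (536.0, 30.0), (1072.0, 30.0)]  # (f0 [Hz], Q), Qs are placeholders
sos = np.vstack([signal.tf2sos(*signal.iirnotch(f0, Q, fs=fs)) for f0, Q in notches])
w, h = signal.sosfreqz(sos, worN=[30.0], fs=fs)
print('added phase at 30 Hz: %.2f deg' % np.degrees(np.angle(h[0])))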
We repeated the LO phase noise measurement by actuating on LO1, LO2 and AS1, and observed no noise peaks related to 268 Hz this time. The calibrated spectra are in Attachment #2. Now the spectra look very similar to one another, which is nice. The rms is still better when actuating with AS1.
After the above work ended, I tried enabling FM1-3 on the C1:HPC_LO_PHASE control filters. These filters boost the gain to suppress noise at low frequencies. I carefully enabled them when actuating on LO1, and managed to suppress the noise by another factor of 20 below the UGF of ~30 Hz. Attachment #3 shows the screenshot of the uncalibrated noise spectra for (1) unsuppressed (black, dashed), (2) suppressed with FM4-5 (blue, solid), and (3) boosted FM1-5 suppression (red).
I pushed a notebook and a Finesse model for comparing different LO phase locking schemes. Notebook is on https://git.ligo.org/40m/bhd/-/blob/master/controls/compare_LO_phase_locking_schemes.ipynb,
Here's a description of the Finesse modeling:
I use a 40m kat model https://git.ligo.org/40m/bhd/-/blob/master/finesse/C1_w_initial_BHD_with_BHD55.kat derived from the usual 40m kat file. There I added EOMs (in the spaces between the BS and ITMs and in front of LO2) to simulate audio dithering. A PD was added at a 5% pickoff from one of the BHD ports to simulate the RFPD recently installed on the ITMY table.
First I find the nominal LO phase by shaking MICH and maximizing the BHD response as a function of the LO phase (attachment 1).
Then, I run another simulation where I shake the LO phase at some arbitrary frequency and measure the response at different demodulation schemes at the RFPD and at the BHD readout.
The optimal responses are found by using the 'max' keyword instead of specifying the demodulation phase. This uses the demodulation phase that maximizes the signal. For example to extract the signal in the 2 RF sideband scheme I use:
pd3 BHD55_2RF_SB $f1 max $f2 max $fs max nPickoffPDs
I plot these responses as a function of LO phase relative to the nominal phase divided by 90 degrees (attachment 2). The schemes are:
1. 2 RF sidebands where 11MHz and 55MHz on the LO and AS ports are used.
2. Single RF sideband (11/55 MHz) together with the LO carrier. As expected, this scheme is useful only when trying to detect the amplitude quadrature.
3. Audio dithering MICH and using it together with one of the LO RF sidebands. The actuation strength is chosen by taking the BS actuation TF 1e-11 m/cts*(50/f)**2 and using 10000 cts giving an amplitude of 3nm for the ITMs.
For LO actuation I can use 13 times more actuation strength because its coil drivers' output current is 13 times more than the old ones.
4. Double audio dithering of LO2+MICH detecting it directly at the BHD readout (attachment 3).
Without noise considerations, it seems like double audio dithering is by far the best option and audio+RF is the next best thing.
The next thing to do is to make some noise models in order to make the comparison more concrete.
This noise model will include input noises, residual MICH motion, and laser noise. Displacement noise will not be included since it is the thing we want to detect.
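For anyone who wants to reproduce this, a very rough pykat-style sketch of the LO phase sweep described above is below. The kat file is the one linked above; the detector node ('nBHD') and the component whose tuning is swept ('LO_phase_mirror') are placeholders, the real names are in the kat file.

from pykat import finesse

kat = finesse.kat()
with open('C1_w_initial_BHD_with_BHD55.kat') as f:   # local copy of the linked kat file
    kat.parse(f.read())
kat.parse("""
pd bhd_dc nBHD                               % DC power at the BHD port (placeholder node)
xaxis LO_phase_mirror phi lin -90 90 200     % sweep the LO phase tuning (placeholder component)
yaxis abs
""")
out = kat.run()
# out.x is the swept tuning, out['bhd_dc'] the detector output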
For the upcoming NN test on the IMC, we need to add some more channels to the frames. Can someone please add the MC2 TRANS SUM, PIT, YAW at 256 Hz? and then make sure they're in frames.
and even though it's not working correctly, it would be good if someone can turn the MC WFS on for a little while. I'd just like to get some data to test some code. If it's easy to roughly close the loops, that would be helpful too.
Currently, none of these channels are being written to frames. From the simulink model, it seems the channels:
are supposed to be DQed but are not present in the /opt/rtcds/caltech/c1/chans/daq/C1MCS.ini file. I tried simply adding these channels to the file and rerunning the daqd_ services, but that caused a 0x2000 error on the c1mcs model. In my attempt, I did not know what chnnum to give for these channels so I omitted that, and maybe that is the issue.
The only way I know to fix this is to make and install the c1mcs model again, which would bring these channels into the C1MCS.ini file. But we'll have to run activateDQ.py if we do that, and I am not totally sure it is in working condition right now. @Christopher Wipf do you have any suggestions?
aren't they all filtered? If so, perhaps we can choose whatever is the equivalent naming at the LIGO sites rather than roll our own again.
@Tega Edo can we run activateDQ.py or will that break everything now?
@Rana Adhikari Looking into this now.
@Anchal Gupta The only problem I see with activateDQ.py is the use of the Python 2 print statement, i.e. print var instead of print(var). After fixing that, it runs OK and does not change the input INI files as they already have the required channel names. I have created a temporary folder, /opt/rtcds/caltech/c1/chans/daq/activateDQtests/, which is populated with copies of the original INI files, a modified version of activateDQ.py that does not overwrite the original input files, and a script file difftest.sh that compares the input and output files so we can test the functionality of activateDQ.py in isolation. Furthermore, looking through the code suggests that all is well. Can you look at what I have done to check that this is indeed the case? If so, your suggestion of rebuilding and installing the updated c1mcs model and running activateDQ.py afterward should do the trick.
I tested the code by running the modified activateDQ.py in that folder, which creates output files with an _ prefix (for example _C1MCS.ini is the output file for C1MCS.ini), and then ran difftest.sh to compare all the input and corresponding output files.
Note that the channel names you are proposing would change after running activateDQ.py, i.e.
C1:IOO-MC_TRANS_SUMFILT_OUT_DQ -> C1:IOO-MC_TRANS_SUM_ERR
C1:IOO-MC_TRANS_PIT_OUT_DQ -> C1:IOO-MC_TRANS_PIT_ERR
C1:IOO-MC_TRANS_YAW_OUT_DQ -> C1:IOO-MC_TRANS_YAW_ERR
My question is this: why aren't we using the correct channel names in the first place so that we have less work to do later on when we finally decide to stop using this postbuild script?
Yeah, I found that these ERR channels are acquired and stored. I don't think we should do this either. Not sure what the original motivation for this change was. I tried commenting out this part of activateDQ.py and remaking and reinstalling c1mcs, but it seems that activateDQ.py is called as a postbuild script automatically on install and it uses some other copy of this file, as my changes did not take effect and the DQ name change still happened.
Ah, we encountered the same puzzle as well. Chris found out that our models have `SCRIPT=activateDQ.py` embedded in the cds parameter block description, see attached image. We believe this is what triggers the postbuild script call to `activateDQ.py`. As for the file location, modern rtcds would have it in /opt/rtcds/caltech/c1/post_build, but I am not sure where ours is located. I did a quick search for this but could not find it in the usual place so I looked around for a bit and found this:
controls@rossa> find /opt/rtcds/userapps/ -name "activateDQ.py"
My guess is the last one /opt/rtcds/userapps/trunk/cds/c1/scripts/activateDQ.py.
Maybe we can ask @Yuta Michimura since he wrote this script?
Anyway, we could also try removing SCRIPT=activateDQ.py from the cds parameter block description to see if that stops the postbuild script call, but keep in mind that doing so would also stop the update of the OSEM and oplev channel names. This way we know what script is being used since we will have to run it after every install (this is a bad idea).
controls@c1sus:~ 0$ env | grep script
It looks like the guess was correct. Note that in the newer version of rtcds, we can use `rtcds env` instead of `env` to see what is going on.
I turned on WFS on IMC at:
PDT: 2022-10-01 13:09:18.378404 PDT
UTC: 2022-10-01 20:09:18.378404 UTC
The following channels are being saved in frames at 1024 Hz rate:
We can keep it running over the weekend as we will not use the interferometer. I'll keep an eye on it with occasional log in. We'll post the time when we switch it off again.
The IMC lost lock at:
UTC Oct 03, 2022 01:04:16 UTC
Central Oct 02, 2022 20:04:16 CDT
Pacific Oct 02, 2022 18:04:16 PDT
GPS Time = 1348794274
The WFS loops kept running and thus took IMC to a misaligned state. Between the above two times, IMC was locked continuously with very brief lock loss events, and had all WFS loops running.
We took LO phase noise spectra actuating on the four different optics: LO1, LO2, AS1, and AS4. The servo (gain of 0.2) was not changed during this time, and we also took a noise spectrum without any light on the DCPDs. The plot is shown in Attachment #1, calibrated in rad/rtHz, and shown along with the rms values for the different suspension actuation points. The best one appears to be AS1 from this measurement, and all the optics seem to show the same 270 Hz (actually 268 Hz) resonant peak.
Koji suspected the observed noise peak belongs to some servo oscillation, perhaps of mechanical origin so we first monitored the amplitude in an exponentially averaging spectrum. The noise didn't really seem to change too much, so we decided to try adding a bandstop filter around 268 Hz. After the filter was added in FM6, we turned it on and monitored the peak height as it began to fall slowly. We measured the half-decay time to be 264 seconds, which implies an oscillation with Q = 4.53 * f0 * tau ~ 3.2e5. This may or may not be mechanical, further investigation might be needed, but if it is mechanical it might explain why the peak persisted in Attachment #1 even when we change the actuation point; anyways we saw the peak drop ~ 20 dB after more than half an hour... After a while, we noticed the 536 Hz peak, its second harmonic, was persisting, even the third harmonic was visible.
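For the record, that Q formula comes from the amplitude ringdown A(t) = A0 * exp(-pi * f0 * t / Q): setting A(t_half)/A0 = 1/2 gives Q = pi * f0 * t_half / ln(2) = 4.53 * f0 * t_half, and with f0 = 268 Hz and t_half = 264 s this is ~3.2e5.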
So this may be LO1 violin mode & friends -
We should try and repeat this measurement after the oscillation has stopped, maybe looking at the spectra before we close the LO_PHASE control loop, then closing it carefully with our violin output filter on, and move on to other optics to see if they also show this noise.
I updated the c1ass model today to use PR2 and PR3 instead of TT1 and TT2 for YARM ASS. This required changes in c1su2 too. I have split c1su2 into c1su2 (LO1, LO2, AS1, AS4) and c1su3 (SR2, PR2, PR3). Now the two models are using 31 and 21 CPU out of 60, which was earlier 55/60. All changes compiled correctly and have been uploaded. The models have been restarted and the medm screens have been updated.
[Jamie, Christopher, JC]
This morning we decided to label the fiber optic cables. While doing this, we noticed that the ends had different labels, 'Host' and 'Target'. Come to find out, the fiber optic cables are directional. Four out of six of the cables were reversed. Luckily, 1 cable for the 1Y3 IO chassis has a spare already laid (the cable we are currently using). Chris, Jamie, and I have begun reversing these cables to their correct positions.
We laid 4 out of 6 fiber cables today. The remaining 2 cables are for the I/O chassis on the vertex, so we will test those cables and lay them tomorrow. We were also able to identify the problems with the 2 supposedly faulty cables, which turn out not to be faulty. One of them had a small bend in the connector that I was able to straighten out with a small plier, and the other was a loose connection in the switch part. So there was no faulty cable, which is great! Chris wrote a matlab script that does the migration of all the model files. I am going through them, i.e. looking at the CDS parameter block to check that all is well. The next task is to build and install the updated models. We also need to update the '/opt/rtcds' and '/opt/rtapps' directories to the latest in the 40m chiara system.
Repeated the LO phase noise measurement, this time with the LO - ITMY single bounce, and a couple of fixes Koji hinted at including:
This time, after alignment the fringe amplitude was 500 counts. Attachment #1 shows the updated plot with the calibrated noise spectra for the individual DCPD signals A and B as well as their rms values. Attachment #2 shows the error point, the in-loop, and the estimated out-of-loop spectra with their rms as well. The peak at ~240 Hz is quite noticeable in the error point time series, and dominates the high-frequency rms noise. The estimated rms out-of-loop noise is ~9.2 rad, down to 100 mHz.
I don't know what was wrong with the past setup but the 950nm laser (QPHOTONICS QFLD-950-3S) just worked fine up to ~300MHz with basically the same setup.
A 20dB coupler picks up a small amount of the driving signal from the source signal of the network analyzer. This was fed to CHR. The fiber-coupled NewFocus PD RF output was connected to CHA.
The calibration of the response was done with the thru response (connect the source signal to the CHA via all the long cables).
Attachment 1 shows the response CHA/CHR. The output is somewhat flat up to 20MHz and goes down towards 100MHz, but still active up to 500MHz as long as the normalization with the New Focus PD works.
The structure around 200MHz~300MHz changes with how the wires of the clips are arranged. I have twisted and coiled them as shown and the notch disappeared. For the permanent setup we should keep the lines as short as possible and take care of the stray capacitance and the inductance.
Attachment 2 shows the setup at the network analyzer side. Nothing special.
Attachment 3 shows the setup at the laser side. The DB9 connector on Jenne's laser has the negative output of the LD driver connected to the coax core and the positive output connected to the shield of the coax. Therefore the coax core (red clip) has to be connected to Pin 9 and the coax shield (black clip) to Pin 5.
Update: the high-frequency (> 100 Hz) drop is of course not real and comes from a 4th order LP filter in the HPC demod I filter which I hadn't accounted for. Furthermore, we have gone through the calibration factors and corrected a factor of 2 in the optical gain. Then, I also added the CLTF to show the in-loop and out-of-loop error respectively. The updated plot, though not final, is in Attachment #1.
Locked the LO phase to the ITMX single bounce beam at the AS port, using the DCPD (A-B) error point and actuating on LO1 POS. For this the gain was tuned from 0.6 to 4.0. A rough Michelson fringe calibration gives a counts-to-meters conversion of ~0.212 nm/count, and the OLTF looks qualitatively like the one in a previous measurement (~20 dB at 1 Hz, UGF = 30 Hz). The displacement was then converted to phase using lambda=1e-6; I'm not sure what the requirement is on the LO phase (G1802014 says 1e-4 rad/rtHz at 1 Hz, but our requirement doc says 1 to 20 nrad/rtHz (rms?))... anyway, with this rough calibration we are still off in either case.
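(For reference, the displacement-to-phase conversion is just phi = 2*pi*x / lambda per meter of path-length change, so 0.212 nm/count with lambda = 1e-6 m corresponds to roughly 1.3e-3 rad/count; an extra factor of 2 applies if the counts are referred to the motion of an optic used in reflection.)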
The balancing gain is obvious at DC in the individual DCPD spectra, and the common mode rejection in the (A-B) signal is also appreciable. I'll keep working on refining this, and implementing a different control scheme.
We followed rana's suggestion for stress relief on the SMA joint in the BH55 RFPD that Radhika resoldered. We used a single core, pigtailed wire segment after cleaning up the solder joint on J7 (RF Out) and also soldered the SMA shield to the RF cage (see Attachment #1). This had a really good effect on the rigidity of the connection, so we moved back to the ITMY table.
We measured the TEST in to RF Out transfer function using the Agilent network analyzer, just to see the qualitative features (resonant gain at around 55 MHz and second harmonic suppression at around 110 MHz) shown in Attachment #2. We used 10kOhm series resistance in test input path to calibrate the measured transimpedance in V/A. The RFPD has been installed in the ITMY table and connected to the PD interface box and IQ demod boards in the LSC rack as before.
I further updated the LSC model today with the following changes:
The model built and installed with no issues.
Further, the slow epics channels for BH55 anti-aliasing switch and whitening switch were added in /cvs/cds/caltech/target/c1iscaux/C1_ISC-AUX_LSCPDs.db
The IPC issue that we were facing earlier is resolved now. The BH55_I and BH55_Q signal after phase rotation is successfully reaching c1hpc model where it can be used to lock LO phase. To resolve this issue, I had to restart all the models. I also power cycled the LSC I/O chassis during this restart as Tega suspected that such a power cycle is required while adding new dolphin channels. But there is no way to find out if that was required or not. Good news is that with the new cds upgrade, restarting rtcds models will be much easier and modular.
Since ETMY does not use the HV coil driver anymore, the watchdog on ETMY needs to be similar to the other new optics. We made these updates today. Now the ETMY watchdog slowly ramps down the alignment offsets when it is tripped.
A design flaw in these initial LIGO RFPDs is that the SMA connector is not strain relieved by mounting to the case. Since it is only mounted to the tin can, when we attach/remove cables, it bends the connector, causing stress on the joint.
To get around this, for this gold box RFPD, connect the SMA connector to the PCB using an S-shaped squiggly wire. Don't use multi-strand: this is usually good, since it's more flexible, but in this case it affects the TF too much. Really, it would be best to use a coax cable, but a few-turns cork-screw, or pig-tail of single-core wire should be fine to reduce the stress on the solder joint.
Later Anchal noticed that there was no RFPD output (C1:LSC-BH55_I_ERR, C1:LSC-BH55_Q_ERR). I took out the RFPD and opened it up, and the RF OUT SMA to PCB connection wire was broken [Attachment 2]. I re-soldered the wire and closed up the box [Attachment 3]. After placing the RFPD back, we noticed spikes in C1:LSC-BH55_I_ERR and C1:LSC-BH55_Q_ERR channels on ndscope. We suspect there is still a loose connection, so I will revisit the RFPD circuit on Monday.
[Radhika, Paco, Anchal]
I placed a lens in the B-beam path to focus the beam spot onto the RFPD [Attachment 1]. To align the beam spot onto the RFPD, Anchal misaligned both ETMs and ITMY so that the AS and LO beams would not interfere, and the PD output would remain at some DC level (not fringing). The RFPD response was then maximized by scanning over pitch and yaw of the final mirror in the beam path (attached to the RFPD).
[Chub, Anchal, Tega, JC]
After replacing an empty tank this morning, I heard a hissing sound coming from the nozzle area. It turns out that this was from the copper tubing. The tubing was slightly broken, and this was confirmed with soapy water bubbles. This caused the N2 pressure to drop and the Vac interlocks to be triggered. Chub and I went ahead, replaced the fitting, and connected this back to normal. Anchal and Tega have used the Vacuum StartUp procedures to restore the vacuum to normal operation.
Adding screenshot as the pressure is decreasing now.
We built, installed and started all the 40m models on the teststand today. The configuration we employ is to connect c1sus to the teststand I/O chassis and use dolphin to send the timing to the other frontends. To get this far, we encountered a few problems that were solved by doing the following:
0. Fixed frontend timing sync to FB1 via ntp
1. Set the rtcds environment variable `CDS_SRC=/opt/rtcds/userapps/trunk/cds/common/src` in the file '/etc/advligorts/env'
2. Resolved chgrp error during models installation using sticky bits on chiara, i.e. `sudo chmod g+s -R /opt/rtcds/caltech/c1/target`
3. Replaced `sqrt` with `lsqrt` in `RMSeval.c` to eliminate compilation error for c1ioo
4. Created a symlink for 'activateDQ.py' and 'generate_KisselButton.py' in '/opt/rtcds/caltech/c1/post_build'
5. Installed and configured dolphin for new frontend 'c1shimmer'
6. Replaced 'RFM0' with 'PCIE' in the ipc file, '/opt/rtcds/caltech/c1/chans/ipc/C1.ipc'
We still have a few issues namely:
1. The user models are not running smoothly. The cpu usage jumps to its maximum value every second or so.
2. c1omc seems to be unable to get its timing from its IOP model (issue resolved by changing the CDS block parameter 'specific_cpu' from 6 to 4 because the new FEs only have 6 cores, 0-5)
3. The need to load the `dolphin-proxy-km` library and start the `rts-dolphin_daemon` service whenever we reboot the front-end
I updated the following in the rtcds models and medm screens:
We realized the DCPD - B beam path was already using a 95:5 beamsplitter to steer the beam, so we are repurposing the 5% pickoff for a 55 MHz RFPD. For the RFPD we are using a gold RFPD labeled "POP55 (POY55)" which was on the large optical table near the vertex. We have decided to test this in-situ because the PD test setup is currently offline.
Radhika used a Y1-1025-45S mirror to steer the B-beam path into the RFPD, but a lens should be added next in the path to focus the beam spot into the PD sensitive area. The current path is illustrated by Attachment #1.
We removed some unused OPLEV optics to make room for the RFPD box, and these were moved to the optics cabinet along Y-arm [Attachment #2].
In parallel to setting up the optical path configuration in the ITMY table, we repurposed a DB15 cable from a PD interface board in the LSC rack to the RFPD in question. Then, an SMA cable was routed from the RFPD RF output to an "UNUSED" I&Q demod board on the LSC rack. Lucky us, we also found a terminated REFL55 LO port, so we can draw our demod LO from there. There are a couple (14,15,20,21) ADC free inputs after the WF2 and WF3 whitening filter interfaces.
Came in this morning to find that all the BHD optics watchdogs had tripped. The watchdogs were restored but all the ADCs were stuck.
We realized that the C1SU2 model crashed.
We ran /scripts/cds/restartAllModels.sh to reboot all machines. All the machines rebooted and burt restored to yesterday around 5 PM.
The optics seem to have returned to good alignment and are damped.
[Tega, Radhika, JC]
We wired the front-ends for power, DAQ and martian network connections. Then we moved the I/O chassis from the bottom of the rack to the middle, just above the KVM switch, so we can leave the top of the I/O chassis open for access to the ports of the OSS target adapter card for testing the extension fiber cables.
Attachment 1 (top -> bottom)
When I turned on the test stand with the new front-ends, after a few minutes, the power to 1x7 was cut off due to overloading I assume. This brought down nodus, chiara and FB1. After Paco reset the tripped switch, everything came back without us actually doing anything, which is an interesting observation.
After this event, I moved the test stand power plug to the side wall rail socket. This seems fine so far. I then brought chiara (clone) and FB1 (clone) online. Here are some changes I made to get things going:
sudo service isc-dhcp-server restart
Getting the new front-ends booting off FB1 clone:
1. I found that the KVM screen was flooded with setup info about the dolphin cards on the LLO machines. This actually prevented login using the KVM switch for two of these machines. Strangely, one of them, 'c1sus', seemed to be fine, see attachment 2, so I guessed this was because the dolphin network was already configured earlier when we were testing the dolphin communications. So I decided to configure the remaining dolphin cards. To do so, we do the following
1a. Ideally running
sudo /opt/DIS/sbin/dis_mkconf -fabrics 1 -sclw 8 -stt 1 -nodes c1lsc c1sus c1ioo c1iscex c1iscey c1sus2 -nosessions
should set up all the nodes, but this did not happen. In fact, I could no longer use the '/opt/DIS/sbin/dis_admin' GUI after running this operation and restarting the 'dis_networkmgr.service' via
sudo systemctl restart dis_networkmgr.service
so I logged into each front-end and configured the dolphin adapter there using
After which I shut down FB1 (clone) because restarting it earlier didn't work, waited a few minutes and then started it. Everything was fine afterward, although I am not quite sure what solved the issue as I tried a few things and I was glad to see the problem go!
1b. I later found after configuring all the dolphin nodes that 2 of them failed the '/opt/DIS/sbin/dis_diag' test with an error message suggesting three possible issues of which one was 'faulty cable'. I looked at the units in question and found that swapping both cables with the remaining spares solved the problem. So it seems like these cables are faulty (need to double-check this). Attachment 3 shows the current state of the dolphin nodes on the front-ends and the dolphin switch.
2. I noticed that the NFS mount service for the mount points '/opt/rtcds' and '/opt/rtapps' in /etc/fstab exited with an error, so I ran
sudo mount -a
3. edit '/etc/hosts' to include c1iscex and c1iscey as these were missing
To test the PCIe extension fiber cables that connect the front-ends to their respective I/O chassis, we run the following command (after booting the machine with the cable connected):
controls@c1lsc:~$ lspci -vn | grep 10b5:3
If we see the output above, then both the cable and OSS card are fine (we know from previous tests that the OSS card on the I/O chassis is good). Since we only have one I/O chassis, we repeat the step above 8 times, also cycling through the six new front-ends as we go so that we are also testing the installed OSS host adapter cards. I was able to test 4 cables and 4 OSS host cards (c1lsc, c1sus, c1ioo, c1sus2), but the remaining results were inconclusive: they seem to suggest that 3 out of the remaining 5 fiber cables are faulty, which in itself would be unfortunate, but I found the reliability of the test to be in question when I went back to check the functionality of the 2 remaining OSS host cards using a cable that had passed the test earlier and it didn't pass. After a few retries, I decided to call it a day before I lost my mind. These tests need to be redone tomorrow.
Note: We were unable to lay the cables today because these tests were not complete, so we are a bit behind the plan. We'll see if we can catch up tomorrow.
We performed local damping loop tuning for the ETMs today to ensure all damping loops' OLTF has a Q of 3-5. To do so, we applied a step on C1:SUS-ETMX/Y_POS/PIT/YAW_OFFSET, looked for oscillations in the error point of the damping loops (C1:SUS-ETMX/Y_SUSPOS/PIT/YAW_IN1), and increased or decreased the gains until we saw 3-5 oscillations before the error signal stabilized at a new value (a ringdown-fit sketch is given after the gain list below). For the side loop, we applied an offset at C1:SUS-ETMX/Y_SDCOIL_OFFSET and looked at C1:SUS-ETMX/Y_SUSSIDE_IN1. Following are the changes in damping gains:
C1:SUS-ETMX_SUSPOS_GAIN : 61.939 -> 61.939
C1:SUS-ETMX_SUSPIT_GAIN : 4.129 -> 4.129
C1:SUS-ETMX_SUSYAW_GAIN : 2.065 -> 7.0
C1:SUS-ETMX_SUSSIDE_GAIN : 37.953 -> 50
C1:SUS-ETMY_SUSPOS_GAIN : 150.0 -> 41.0
C1:SUS-ETMY_SUSPIT_GAIN : 30.0 -> 6.0
C1:SUS-ETMY_SUSYAW_GAIN : 30.0 -> 6.0
C1:SUS-ETMY_SUSSIDE_GAIN : 50.0 -> 300.0
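As mentioned above, an alternative to counting oscillations by eye is to fit the ringdown of the error point directly; a minimal python sketch (the channel name, data fetching, and initial guesses are placeholders, not what we actually did):

import numpy as np
from scipy.optimize import curve_fit

def ringdown(t, a, f0, Q, phi, c):
    # decaying oscillation about the new equilibrium value c
    return a * np.exp(-np.pi * f0 * t / Q) * np.cos(2 * np.pi * f0 * t + phi) + c

# t, x = time vector and e.g. the C1:SUS-ETMY_SUSPIT_IN1 step response (fetched separately)
# p0 = [np.ptp(x) / 2, 0.8, 4.0, 0.0, x[-1]]   # guesses: ~0.8 Hz mode, Q ~ 4
# popt, _ = curve_fit(ringdown, t, x, p0=p0)
# print('closed-loop Q ~ %.1f at %.2f Hz' % (popt[2], popt[1]))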
We resume the LO phase locking work. MICH was locked with an offset of 80 cts. LO and AS beams were aligned to maximize the BHD readout visibility on ndscope.
We lock the LO phase on a fringe (DC locking) actuating on LO1.
Attachment 1 shows BHD readout (DCPD_A_ERR = DCPD_A - DCPD_B) spectrum with and without fringe locking while LO2 line at 318 Hz is on. It can be seen that without the fringe locking the dithering line is buried in the A-B noise floor. This is probably due to multiple fringing upconversion. We figured that trying to directly dither-lock the LO phase might be too tricky since we cannot resolve the dither line when the LO phase is unlocked.
We try to handoff the lock from the fringe lock to the AC lock in the following way: Since the AC error signal reads the derivative of the BHD readout it is the least sensitive to the LO phase when the LO phase is locked on a dark fringe, therefore we offset the LO to realize an AC error signal. LO phase offset is set to ~ 40 cts (peak-to-peak counts when LO phase is uncontrolled is ~ 400 cts).
We look at the "demodulated" signal of LO1 from which the fringe locking error signal is derived (0 Hz frequency modulation 0 amplitude) and the demodulated signal of LO2 where a ~ 700 Hz line is applied. We dither the LO phase at ~ 50Hz to create a clear signal in order to compare the two error signals. Although the 50 Hz signal was clearly seen on the fringe lock error signal it was completely unresolved in the LO2 demodulated signal no matter how hard we drove the 700Hz line and no matter what demodulation phase we chose. Interestingly, changing the demodulation phase shifted the noisy LO2 demodulated signal by some constant. Will post a picture later.
Could there be some problem with the modulation-demodulation model? We should check again but I'm almost certain we saw the 700Hz line with an SNR of ~ 100 in diaggui, even with the small LO offset changes in the 700Hz signal phase should have been clearly seen in the demodulated signal. Maybe we should also check that we see the 50Hz side-bands around the 700Hz line on diaggui to be sure.
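One offline check we could do is to demodulate the recorded BHD readout around the ~700 Hz LO2 line ourselves and look for the ~50 Hz content the LO phase dither should put on it; a sketch (channel, sampling rate, and frequencies are placeholders):

import numpy as np
from scipy.signal import butter, filtfilt, welch

fs = 1024.0        # assumed DQ rate of the channel
f_line = 700.0     # approximate LO2 dither line frequency

def demod(x, t, f, fs, f_lp=100.0):
    # digital lock-in: mix down at f and low-pass to keep the audio sidebands
    b, a = butter(4, f_lp / (fs / 2))
    i = filtfilt(b, a, x * np.cos(2 * np.pi * f * t))
    q = filtfilt(b, a, x * np.sin(2 * np.pi * f * t))
    return i + 1j * q

# x = BHD readout time series (e.g. DCPD_A_ERR), t = np.arange(len(x)) / fs
# z = demod(x, t, f_line, fs)
# for quad in (z.real, z.imag):
#     f_psd, p = welch(quad, fs=fs, nperseg=4096)  # expect a ~50 Hz peak in at least one quadrature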
[JC, Tega, Paco ]
I would like to mention that during the Vacuum startup, after the AUX pump was turned on, Tega and I walked away while the pressure was decreasing. While we were away, valves opened on their own. Nobody was near the VAC desktop during this. I asked Koji if this may be an automatic startup, but he said the valves shouldn't open unless they are explicitly told to do so. Has anyone encountered this before?
[Paco, Tega, JC, Yehonathan]
We followed the instructions here. There were no major issues, apart from the fb1 ntp server sync taking a long time after rebooting once.
We noticed that ETMY had too much RMS motion when the OpLevs were off. We played with it a bit and noticed two things: the Cheby4 filter was on for SUS_POS, and the limiter on ULCOIL was on with a limit of 0. We turned both off.
We did some damping tests and observed that the PIT and YAW motion were overdamped. We tuned the gains of the filters in the following way:
These actions seem to make things better.
[Tega, Paco, JC]
We moved the GPS network time server and the Frequency distribution amplifier from 1X7 to 1X6 and the PEM AA, ADC adapter and Martian network switch from 1X6 to 1X7. Also mounted the dolphin IX switch at the rear of 1X7 together with the DAQ and martian switches. This cleared up enough space to mount all the front-ends, however, we found that the mounting brackets for the frontends do not fit in the 1X7 rack, so I have decided to mount them on the upper part of the test stand for now while we come up with a fix for this problem. Attachments 1 to 3 show the current state of racks 1X6, 1X7 and the teststand.
Attachment 1: Front of racks 1X6 and 1X7
Attachment 2: Rear of rack 1X7
Attachment 3: Front of teststand rack
Thursday [CAN I GET THE IFO ON THIS DAY PLEASE?]
Locked the LO phase with a MICH offset=+91. The LO is midfringe (locked using the A-B zero crossing), so it's far from being "useful" for any readout but we can at least look at residual noise spectra.
I spent some time playing with the loop gains, filters, and overall lock acquisition, and established a quick TF template at Git/40m/measurements/BHD/HPC_LO_PHASE_TF.xml
So far, it seems that actuating on the LO phase through LO2 POS requires 1.9 times more strength (with the same "A-B" dc sensing). After closing the loop with FM4 and FM5, actuating on LO2 with a filter gain of 0.4 closes the loop robustly. Then, FM3 and FM6 can be enabled and the gain stepped up to 0.5 without problem. The measured UGF (Attachment #1) here was ~20 Hz. It can be increased to 55 Hz, but then it quickly becomes unstable. I added FM1 (boost) to the HPC_LO_PHASE bank but didn't get to try it.
The noise spectra (Attachment #2) is still uncalibrated... but has been saved under Git/40m/measurements/BHD/HPC_residual_noise_spectra.xml
Doing POX-POY noise measurement as a poor man's FPMI for diagnostic purposes. (Notebook in /opt/rtcds/caltech/c1/Git/40m/measurements/LSC/POX-POY/Noise_Budget.ipynb)
The arms were locked individually using POX11 and POY11. The optical gain was estimated by looking at the PDH signal of each arm: the slope was computed by taking the negative-peak to positive-peak counts and assuming that the arm length change between those peaks is lambda/(2*Finesse), where lambda = 1um and the arm finesse is taken to be 450.
Xarm peak-to-peak counts is ~ 850 while Yarm's is ~ 1100. This gives optical gains of 3.8e11 cts/m and 4.95e11 cts/m respectively.
Next, ETMX actuation TF is measured (attachments 1,2) by exciting C1:LSC-ETMX/Y_EXC and measuring at C1:LSC-X/YARM_IN1_DQ and calibrating with the optical gain.
Using these calibrations I plot the POX-POY (attachment 3) and POX+POY (attachment 4) total noise measurements using two methods:
1. Plotting the calibrated IN and OUT channels of XARM-YARM (blue and orange). Those two curves should cross at the UGF (200Hz in this case).
2. Plotting the calibrated XARM-YARM IN channels times 1-OLTF (black).
The UGF bump can be clearly seen above the true noise in those plots.
However, the POX+POY OUT channel looks too high for some reason, putting the crossing frequency between the IN and OUT channels at ~300 Hz. Not sure what was going on with this.
Next, I will budget this noise with the individual noise contributions.
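For completeness, method 2 above is just multiplying the calibrated in-loop spectrum by |1 - OLTF|; a sketch of that step (loading of the diaggui exports is not shown, array names are placeholders):

import numpy as np

def free_running(f_err, asd_err, f_tf, oltf):
    # interpolate the measured OLTF (magnitude and unwrapped phase) onto the ASD frequencies
    mag = np.interp(f_err, f_tf, np.abs(oltf))
    ph = np.interp(f_err, f_tf, np.unwrap(np.angle(oltf)))
    return asd_err * np.abs(1.0 - mag * np.exp(1j * ph))

# asd_free = free_running(f_err, asd_err, f_tf, oltf)   # this corresponds to the black curves above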
Having seen it work only a few times so far (but robustly each time), I'm happy to report that ASS on the YARM and XARM is working now.
The issue is that PR3 is not placed in the correct position in the chamber. It is offset enough that, to send a beam through the center of ITMY to ETMY, the beam has to reflect off the edge of PR3, leading to some clipping. Hence our usual ASS takes us to this point and results in loss of transmission due to clipping.
Solution: We cannot solve this issue without moving PR3 inside the chamber. But meanwhile, we can find new spot positions on ITMY and ETMY, off the center in the YAW direction only, which would allow us to mode match properly without clipping. This would mean that there will be YAW suspension-noise-to-length coupling in this cavity, but thankfully, the YAW degree of freedom stays relatively calm in comparison to PIT or POS for our suspensions. Similarly, we need to allow for an offset in the ETMX beam spot position in YAW. We do not control the beam spot position on ITMX due to the lack of enough actuators to control all 8 DOFs involved in mode matching the input beam with a cavity. So instead I found the right offset for the ITMX transmission error signal in YAW that works well.
I found these offsets (found empirically) to be:
These offsets have been saved in the burt snap file used for running ASS.
I'll reiterate here the procedure to run ASS.