ID | Date | Author | Type | Category | Subject |
15926
|
Tue Mar 16 19:13:09 2021 |
Paco, Anchal | Update | SUS | First success in Input Matrix Diagonalization | After jumping through a few hoops, we have one successful result in diagonalizing the input matrix for MC1, MC2 and MC3.
Code:
- Attachment 2 contains the code files. For now, we can only guarantee it to work on Donatella in the conda base environment. Our code is present in scripts/SUS/InMatCalc.
- We mostly follow the steps mentioned in 4886 and the matlab codes in scripts/SUS/peakFit.
- Data is first multiplied by the currently used input matrix to get time series data in the DOF (POS, PIT, YAW, SIDE) basis.
- Then, the peak frequencies of each resonance are identified.
- For these results, we did not attempt to fit the peaks with Lorentzians; we simply took the maximum of the PSD to get the peak positions. This is only possible if the current input matrix is good enough. We still have to adjust some parameters so that our fitting code always works.
- A TF estimate of each sensor's data with respect to the UL sensor is taken, and the values around the peak frequencies of the oscillations are averaged to get the sensing matrix (see the sketch after this list).
- This matrix is normalized along the DOF axis (columns in our case) and then inverted.
- After inversion, another normalization is done along the DOF axis (now rows).
- Finally, we plot a comparison of the ASDs in the DOF basis when using the current input matrix and when using our calculated (diagonalizing) input matrix.
- You can notice in Attachment 1 that after the diagonalization, each DOF shows a resonance only at its own resonance frequency, while earlier there was some mixing.
- The absolute values of the calculated DOFs might have changed, and we need to calibrate them or apply appropriate gain factors in the DOF-basis filter chains.
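For reference, here is a minimal sketch of the matrix calculation described above, assuming the sensor time series are already loaded into a dict `data` keyed by OSEM name and the peak frequencies are known. All names and numbers below are illustrative only, not taken from the actual code in Attachment 2:
import numpy as np
from scipy.signal import csd, welch

fs = 16                                # the 16 Hz sensor channels were used here
sensors = ['UL', 'UR', 'LR', 'LL', 'SD']
dofs = ['POS', 'PIT', 'YAW', 'SIDE']
peak_freqs = {'POS': 0.97, 'PIT': 0.68, 'YAW': 0.80, 'SIDE': 0.99}   # illustrative numbers

def tf_estimate(x, y, fs, nperseg=4096):
    # transfer function estimate y/x = Pxy / Pxx
    f, Pxy = csd(x, y, fs=fs, nperseg=nperseg)
    _, Pxx = welch(x, fs=fs, nperseg=nperseg)
    return f, Pxy / Pxx

# data[s] holds the time series of sensor s; UL is the reference sensor
sensing = np.zeros((len(sensors), len(dofs)))
for j, dof in enumerate(dofs):
    f0 = peak_freqs[dof]
    for i, s in enumerate(sensors):
        f, tf = tf_estimate(data['UL'], data[s], fs)
        band = (f > f0 - 0.02) & (f < f0 + 0.02)
        sensing[i, j] = np.mean(np.real(tf[band]))    # average the TF around the peak

# normalize along the DOF axis (columns), invert, then normalize the rows
sensing /= np.max(np.abs(sensing), axis=0)
inmat = np.linalg.pinv(sensing)
inmat /= np.max(np.abs(inmat), axis=1, keepdims=True)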
Next steps:
- We'll complete our scripts and make them more general to be used for any optic.
- We'll combine all of them into one single script which can be called by medm.
- In parallel, we'll start onwards from step 2 in 15881.
- Anything else that folks can suggest on our first result. Did we actually do it or are we fooling ourselves?
|
15925
|
Tue Mar 16 19:04:20 2021 |
gautam | Update | CDS | Front-end testing | Now that I think about it, I may only have backed up the root file system of chiara, and not /home/cds/ (symlinked to /opt/ over NFS). I think we never revived the rsync backup to LDAS after the FB fiasco of 2017, else that'd have been the most convenient way to get files. So you may have to resort to some other technique (e.g. configure the second network interface of the chiara clone to be on the martian network and copy over files to the local disk, and then disconnect the chiara clone from the martian network (if we really want to keep this test stand completely isolated from the existing CDS network) - the /home/cds/ directory is rather large IIRC, but with 2TB on the FB clone, you may be able to get everything needed to get the rtcds system working). It may then be necessary to hook up a separate disk to write frames to if you want to test that part of the system out.
Good to hear the backup disk was able to boot though!
Quote: |
And to be able to compile and run models later in the testing, we will need the contents of the /opt/rtcds directory also hosted on chiara.
For these reasons, I think it will be easiest to create another clone of chiara to run on the subnet. There is a backup disk of chiara and I attempted to boot it on one of the LLO front-ends, but without success.
|
|
15924
|
Tue Mar 16 16:27:22 2021 |
Jon | Update | CDS | Front-end testing | Some progress today towards setting up an isolated subnet for testing the new front-ends. I was able to recover the fb1 backup disk using the Rescatux disk-rescue utility and successfully booted an fb1 clone on the subnet. This machine will function as the boot server and DAQ server for the front-ends under test. (None of these machines are connected to the Martian network or, currently, even the outside Internet.)
Despite the success with the framebuilder, front-ends cannot yet be booted locally because we are still missing the DHCP and FTP servers required for network booting. On the Martian net, these processes are running not on fb1 but on chiara. And to be able to compile and run models later in the testing, we will need the contents of the /opt/rtcds directory also hosted on chiara.
For these reasons, I think it will be easiest to create another clone of chiara to run on the subnet. There is a backup disk of chiara and I attempted to boot it on one of the LLO front-ends, but without success. The repair tool I used to recover the fb1 disk does not find a problem with the chiara disk. However, the chiara disk is an external USB drive, so I suspect there could be a compatibility problem with these old (~2010) machines. Some of them don't even recognize USB keyboards pre-boot-up. I may try booting the USB drive from a newer computer.
Edit: I removed one of the new, unused Supermicros from the 1Y2 rack and set it up in the test stand. This newer machine is able to boot the chiara USB disk without issue. Next time I'll continue with the networking setup. |
15923
|
Tue Mar 16 16:02:33 2021 |
Koji | Update | LSC | REFL11 demod retrofitting | I'm going to remove REFL11 demod for the noise check/circuit improvement.
----
- The module was removed (~4pm). Upon removal, I had to loosen AS110 LO/I out/Q out. Check the connection and tighten their SMAs upon restoration of REFL11.
- REFL11 configuration / LO: see below, PD: a short thick SMA cable, I OUT: Whitening CH3, Q OUT: Whitening CH4, I MON daughterboard: CM board IN1 (BNC cable)
- The LO cable for REFL11 was made of soft coax cable (Belden 9239 Low Noise Coax). The vendor specifies that this cable is for audio signals and NOT recommended for RF purposes [Link to Technical Datasheet (PDF)].
I'm going to measure the delay of the cable and make a replacement.
- There is a bunch of PD RF Mon cables connected to many of the demod modules. I suppose that they are connected to the PD calibration system which hasn't been used for 8 years. And the controllers are going to be removed from the rack soon.
I'm going to remove these cables.
----
First, the noise levels and the transfer functions of the daughterboard preamp were checked. CH-1 of the SR785 seemed funky (I can't yet tell exactly what was wrong with it), so the measurement may be unreliable.
For the replacement of AD797, I tested OP27 and TLE2027. TLE2027 is similar to OP27, but slightly faster, less noisy, and better in various aspects.
The replacement of the AD797 and the whatever-film resistors with the TLE2027 and thin-film Rs was straightforward for the I phase channel, while the stabilization of the Q phase channel was a struggle (no matter whether I used the OP27 or TLE2027). It seems that the 1st stage has some kind of instability, and I suffered from a 3Hz comb up to ~1kHz. But the scope didn't show obvious 3Hz noise.
After quite a bit of struggle, I could tame this strange noise by adjusting the feedback capacitor of the 1st stage. The final transfer functions and the noise levels were measured. (To be analyzed later)
----
Now the REFL11 LO cable was replaced from the soft low noise audio coax (Belden 9239) to jacketed solder-soaked coax cable (Belden 1671J - RG405 compatible). The original cable indicated the delay of -34.3deg (@11MHz, 8.64ns) and the loss of 0.189dB.
I took 80-inch 1671J cable and measured the delay to be ~40deg. The length was adjusted using this number and the resulting cable indicated the delay of -34.0deg (@11MHz, 8.57ns) and the loss of 0.117dB.
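(For a quick sanity check of those numbers: the measured phase at 11 MHz converts to group delay as delay = phase / (360 deg x f), e.g. in python:)
f = 11e6                              # measurement frequency (Hz)
for deg in (34.3, 34.0):              # measured phase delays of the old / new cable
    print(deg, 'deg ->', deg / 360.0 / f * 1e9, 'ns')
# gives ~8.66 ns and ~8.59 ns, close to the 8.64 ns / 8.57 ns quoted above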
The REFL11 demod module was restored, and the cabling around REFL11 and AS110 was restored, tightened, and checked.
----
I've removed the PD mon cables from the NI RF switch. The open ports were plugged with 50Ohm terminators.
----
I ask commissioners to make the final check of the REFL11 performance using CDS. |
15922
|
Tue Mar 16 14:37:36 2021 |
Yehonathan | Update | BHD | SOS SmCo magnets Inspection | In the cleanroom, I opened the nickel-plated SmCo magnet box to take a closer look. I handled the magnets with tweezers. I wrapped the tips of the tweezers with some Kapton tape to prevent scratching and magnetization.
I put some magnets on a razor blade and took some close-up pictures of the face of the magnets on both sides. Most of them look like attachment 1.
Some have worn-off plating on the edges. The most serious case is shown in attachment 2. Maybe it doesn't matter if we are going to sand them?
I measured the magnetic flux of the magnets by attaching the gaussmeter's flat head to the face of the magnet and moving it around until the maximum value was reached.
For envelope #1 out of 3, the values are (the magnet ordering is in attachment 3):
Magnet # | Max Magnetic Field (kG)
1 | 2.57
2 | 2.54
3 | 2.57
4 | 2.57
5 | 2.55
6 | 2.61
7 | 2.55
8 | 2.52
9 | 2.64
10 | 2.58
Going to continue tomorrow with the rest of the magnets. I left the magnet box and the gaussmeter under the flow bench in the cleanroom. |
15921
|
Mon Mar 15 20:40:01 2021 |
rana | Configuration | Computers | installed QTgrace on donatella for dataviewer | I installed QTgrace using yum on donatella. Both Grace and XMgrace are broken due to some boring fight between the Fedora package maintainers and the (non-existent) Grace support team. So I have symlinked it:
controls@donatella|bin> sudo mv xmgrace xmgrace_bak
controls@donatella|bin> sudo ln -s qtgrace xmgrace
controls@donatella|bin> pwd
/usr/bin
I checked that dataviewer works now for realtime and playback, although the middle-click paste on the mouse doesn't work yet. |
15920
|
Mon Mar 15 20:22:01 2021 |
gautam | Update | ASC | c1rfm model restarted | On Friday, I felt that the ASC performance when the PRFPMI was locked was not as good as it used to be, so I looked into the situation a bit more. As part of my ASC model revamp in December, I made a bunch of changes to the signal routing, and my suspicion was that the control signals weren't even reaching the ETMs. My log says that I recompiled and reinstalled the c1rfm model (used to pipe the ASC control signals to the ETMs), and indeed, the file was modified on Dec 21. But for whatever reason, the C1RFM.ini (=Dolphin receiver since the ASC control signals are sent to this model over the Dolphin network from the c1ioo machine which hosts the C1:ASC- namespace, and RFM sender to the ETMs, but this path already existed) file never picked up the new channels. Today, I recompiled, re-installed, and restarted the models, and confirmed that the control signals actually make it to the ETMs. So now we can have the QPD-based ASC loops engaged once again for the PRFPMI lock. The CDS system did not crash 🎉 . See Attachments #1-3.
I checked the loop performance in the POX/POY locked config by first deliberately misaligning the ETMs, and then engaging the loops - seems to work (Attachment #4). The loop shapes have to be tweaked a bit and I didn't engage the integrators, hence the DC pointing wasn't recovered. Also, I added a line to the script that turns the ASC loops on to set limits for all the loops - in the testing process, one of the loops ran away and I tripped the ETMY watchdog. It has since been recovered. I SDFed a limit of 100cts just to be on the conservative side for model reboot situations - the value in the script can be raised/lowered as deemed necessary (sorry, I don't know the cts-->urad number off the top of my head).
But the hope is this improves the power buildup, and provides stability so that I can begin to commission the AS WFS system a bit. |
15919
|
Mon Mar 15 08:55:45 2021 |
Paco, Anchal | Summary | training | | [Paco, Anchal]
- Found IMC locked upon arrival.
- Since "allegra" was set up as an additional workstation, we tried using it but discovered the monitor ist kaput. For the sake of debugging, we tested VGA and DVI inputs and even the monitor lying around (also labeled "allegra") with no luck. So <ssh> it is for now.
IMC Input sensing matrix
- Rana joined us and asked us to use Rossa for now so that we can sit socially distantly.
- Attaching some intermediate results on our analysis as pdf and zip file containing all the codes we used.
- We used channels like C1:SUS-MC1_ULSEN_OUTPUT (16 Hz channels) and so on, which might not be the correct way to do it; as Rana pointed out today, we should have used channels like C1:SUS-MC1_SENSOR_UL etc.
- During the input matrix calculation, we used the TF estimate method (as mentioned in 4886) to calculate the sensing matrix, inverted it, and normalized all rows with the maximum absolute value element (we tried a few other ways of normalization, with no better results).
- We found the peak frequencies by fitting Lorentzians to the sensor data rotated by the current input matrix in the system (a sketch of such a fit is at the end of this list). We also tried doing this directly on the sensor data (UL for POS, UR for PIT, LR for YAW and SD for SIDE, as this seemed to be the case in the old matlab codes) but with no different results.
- The fitted peak frequencies, Q and amplitude values are in fittedPeakFreqs.yml in the attached zip.
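A minimal sketch of such a Lorentzian peak fit, assuming f and asd hold the frequency vector and the ASD of one DOF around its resonance (the band limits and initial guesses below are illustrative, not the values used in the attached code):
import numpy as np
from scipy.optimize import curve_fit

def lorentzian(f, f0, Q, A, C):
    # magnitude of a simple damped-oscillator resonance plus a flat offset
    return A / np.sqrt((1 - (f / f0)**2)**2 + (f / (f0 * Q))**2) + C

mask = (f > 0.4) & (f < 1.2)                                 # band around the expected peak
p0 = [f[mask][np.argmax(asd[mask])], 100.0, np.max(asd[mask]), 0.0]
popt, pcov = curve_fit(lorentzian, f[mask], asd[mask], p0=p0)
f0_fit, Q_fit, A_fit, _ = popt                               # peak frequency, Q, amplitude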
|
15918
|
Fri Mar 12 21:15:19 2021 |
gautam | Update | LSC | coronaversary PRFPMi | Attachment #1 - proof that the lock is RF only (A paths are ALS, B paths are RF).
Attachment #2 - CARM OLTF.
Some tuning can be done, the circulating power can be made ~twice as high with some ASC. The vertex is still on 3f control. I didn't get any major characterization done tonight but it's nice to be back here, a year on, I guess. |
15917
|
Fri Mar 12 19:44:31 2021 |
gautam | Update | LSC | Delay line | I may want to use the delay line phase shifter in 1Y2 to allow remote actuation of the REFL11 demod phase (for the AO path, not the low bandwidth one). I had this working last Feb, but today, I am unable to remotely change the delay. @Koji, it would be great if you could fix this the next time you are in the lab - I bet it's a busted latch IC or some such thing. I did the non-invasive tests - cable is connected, control bits are changing (at least according to the CDS BIO indicators) and the switch controlling remote/local switching is set correctly. The local switching works just fine.
In the meantime, I will keep trying - I am unconvinced we really need the delay line. |
15916
|
Fri Mar 12 18:10:01 2021 |
Anchal | Summary | Computer Scripts / Programs | Installed cds-workstation on allegra | allegra had fresh Debian 10 installed on it already. I installed cds-workstation packages (with the help of Erik von Reis). I checked that command line caget, caput etc were working. I'll see if medm and other things are working next time we visit the lab. |
15915
|
Fri Mar 12 13:48:53 2021 |
gautam | Summary | SUS | Coil Rs & Ls for PRM/BS/SRM | I didn't repeat Koji's measurement, but he reports the expected ~3.2mH per coil on all the BS and PRM coils.
Quote: |
ugh. sounds bad - maybe a short. I suggest measuring the inductance; thats usually a clearer measurement of coil health
|
|
15914
|
Fri Mar 12 13:01:43 2021 |
rana | Summary | SUS | Coil Rs & Ls for PRM/BS/SRM | ugh. sounds bad - maybe a short. I suggest measuring the inductance; thats usually a clearer measurement of coil health |
15913
|
Fri Mar 12 12:32:54 2021 |
gautam | Summary | SUS | Coil Rs & Ls for PRM/BS/SRM | For consistency, today, I measured both the BS and PRM actuator balancing using the same technique and don't find as serious an imbalance for the BS as in the PRM case. The Oplev laser source is common for both BS and PRM, but the QPDs are of course distinct.
BTW, I thought the expected resistance of the coil windings of the OSEM is ~13 ohms, while the BS/PRM OSEMs report ~1-2 ohms. Is this okay?
Quote: |
- All the PRM coils look well-matched in terms of the inductance. Also, I didn't find a significant difference from BS coils.
|
|
15912
|
Fri Mar 12 11:44:53 2021 |
Paco, Anchal | Update | training | IMC SUS diagonalization in progress | [Paco, Anchal]
- Today we spent the morning shift debugging SUS input matrix diagonalization. MC stayed locked for most of the 4 hours we were here, and we didn't really touch any controls. |
15911
|
Fri Mar 12 11:02:38 2021 |
gautam | Update | CDS | cds reboot | I looked into this a bit today morning. I forgot exactly what time we restarted the machines, but looking at the timesyncd logs, it appears that the NTP synchronization is in fact working (log below is for c1sus, similar on other FEs):
-- Logs begin at Fri 2021-03-12 02:01:34 UTC, end at Fri 2021-03-12 19:01:55 UTC. --
Mar 12 02:01:36 c1sus systemd[1]: Starting Network Time Synchronization...
Mar 12 02:01:37 c1sus systemd-timesyncd[243]: Using NTP server 192.168.113.201:123 (ntpserver).
Mar 12 02:01:37 c1sus systemd[1]: Started Network Time Synchronization.
Mar 12 02:02:09 c1sus systemd-timesyncd[243]: Using NTP server 192.168.113.201:123 (ntpserver).
So, the service is doing what it is supposed to (using the FB, 192.168.113.201, as the ntpserver). You can see that the timesync was done a couple of seconds after the machine booted up (validated against "uptime"). Then, the service is periodically correcting drifts. idk what it means that the time wasn't in sync when we check the time using timedatectl or similar. Anyway, like I said, I have successfully rebooted all the FEs without having to do this weird time adjustment >10 times.
I guess what I am saying is, I don't know what action is necessary for "implementing NTP synchronization properly", since the diagnostic logfile seems to indicate that it is doing what it is supposed to.
More worryingly, the time has already drifted in < 24 hours.
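(If it helps, the offset of a FE's clock from the ntpserver can also be queried directly - this sketch assumes the ntplib python package is available on the FE, which it may not be:)
import ntplib

c = ntplib.NTPClient()
resp = c.request('192.168.113.201', version=3)   # fb, i.e. the ntpserver
print('local clock offset from ntpserver: %.3f s' % resp.offset)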
Quote: |
I want to emphasize the following:
- FB's RTC (real-time clock/motherboard clock) is synchronized to NTP time.
- The other RT machines are not synchronized to NTP time.
- My speculation is that the RT machine timestamp is produced from the local RT machine time because there is no IRIG-B signal provided. -> If the RTC is more than 1 sec off from the FB time, the timestamp inconsistency occurs. Then it induces "DC" error.
- For now, we have to use "date" command to match the local RTC time to the FB's time.
- So: If NTP synchronization is properly implemented for the RT machines, we will be free from this silly manual time adjustment.
|
|
15910
|
Fri Mar 12 03:28:51 2021 |
Koji | Update | CDS | cds reboot | I want to emphasize the following:
- FB's RTC (real-time clock/motherboard clock) is synchronized to NTP time.
- The other RT machines are not synchronized to NTP time.
- My speculation is that the RT machine timestamp is produced from the local RT machine time because there is no IRIG-B signal provided. -> If the RTC is more than 1 sec off from the FB time, the timestamp inconsistency occurs. Then it induces "DC" error.
- For now, we have to use "date" command to match the local RTC time to the FB's time.
- So: If NTP synchronization is properly implemented for the RT machines, we will be free from this silly manual time adjustment.
|
15909
|
Fri Mar 12 03:23:37 2021 |
Koji | Update | BHD | DO card (DO-32L-PE) brought from WB | I've brought 4 DO-32L-PE cards from WB for the BHD upgrade, for Jon. |
15908
|
Fri Mar 12 03:22:45 2021 |
Koji | Update | General | Gaussmeter in the electronics drawer | For magnet strength measurement: There is a gaussmeter in the flukes' drawer (2nd from the top). It turns on and reacts to a whiteboard magnet. |
15907
|
Fri Mar 12 03:08:23 2021 |
Koji | Summary | SUS | Coil Rs & Ls for PRM/BS/SRM | Summary
Per Gautam's request, I've checked the coil resistances and inductances.
- PRM/BS/SRM coils were tested.
- All the PRM coils look well-matched in terms of the inductance. Also, I didn't find a significant difference from BS coils.
- Pin 1 of the feedthru connectors is shorted to the vacuum chamber.
- A discovery was that: The SRM DSUB pinouts are mirrored compared to the other suspensions.
Method
A DSUB25 breakout was directly connected to the flange (Attachment 1).
The impedance meter was nulled every time the measurement range and type (R or L) were changed.
Result
Feedthru connector: PRM1
Pin1 - flange: R = 0.8Ω
Pin11-23 / R = 1.79Ω / L=3.21mH
Pin 7-19 / R = 1.82Ω / L=3.22mH
Pin 3-15 / R = 1.71Ω / L=3.20mH
|
Feedthru connector: BS1
Pin1 - flange: R = 0.5Ω
Pin11-23 / R = 1.78Ω / L=3.26mH
Pin 7-19 / R = 1.63Ω / L=3.30mH
Pin 3-15 / R = 1.61Ω / L=3.29mH
|
Feedthru connector: SRM1
Pin1 - flange: R = 0.5Ω
Pin11-24 / R = 18.1Ω / L=3.22mH
Pin 7-20 / R = 18.8Ω / L=3.25mH
Pin 3-16 / R = 20.3Ω / L=3.25mH
|
Feedthru connector: PRM2
Pin1 - flange: R = 0.6Ω
Pin11-23 / R = 1.82Ω / L=3.20mH
Pin 7-19 / R = 1.53Ω / L=3.20mH
Pin 3-15 / R = N/A
|
Feedthru connector: BS2
Pin1 - flange: R = 0.6Ω
Pin11-23 / R = 1.46Ω / L=3.27mH
Pin 7-19 / R = 1.54Ω / L=3.24mH
Pin 3-15 / R = N/A
|
Feedthru connector: SRM2
Pin1 - flange: R = 0.7Ω
Pin11-24 / R = N/A
Pin 7-20 / R = 18.5Ω / L=3.21mH
Pin 3-16 / R = 19.1Ω / L=3.25mH
|
Observation
The SRM pinouts seem mirrored compared to the others. In fact, these two connectors are equipped with mirror cables (although they are unshielded ribbons) (Attachment 2).
The SRM sus is located on the ITMY table. There is a long in-vacuum DSUB25 cable between the ITMY and BS tables. I suspect that the cable mirrors the pinout, and this needs to be corrected by the in-air mirror cables.
I went around the lab and did not find any other suspensions which have the mirror cable.
With the BHD configuration, we will move the feedthru for the SRM to the one on the ITMY chamber. So I believe the situation is going to be improved.
|
15906
|
Thu Mar 11 20:18:00 2021 |
gautam | Update | LSC | High bandwidth POY | I repeated the high bandwidth POY locking experiment.
- The "Q" demod output (SMA) was routed to the common mode board (it appears in the past I used the LEMO "MON" output instead but that shouldn't be a meaningful change).
- As usual, slow actuation --> ETMY, fast actuation --> IMC error point.
- Loop UGF measurement suggests a bandwidth of ~25kHz, with ~25 degrees of phase margin. Anyway, the lock was pretty stable.
One thing I am not sure about is - when looking at the in-loop error point spectra, the Y-arm error point did not get suppressed to the CM board's sensing noise floor - I would have thought that with the huge amount of gain at ~16 Hz, the usual structure we see in the spectra between 10-30Hz would be completely squished. Need to think about whether this is signalling something wrong, because the loop TF measurements seemed as expected to me.
1020pm: plots uploaded. As I made the plot of the spectrum, I realized that I don't have the calibration for the Y-arm error point into displacement noise units, so it's in unphysical units for now. But I think the comment about the hump around 16 Hz not being crushed to some sort of flat electronics noise floor still stands. For the TF plots, when the loop gain is high, this IN1/IN2 technique isn't the best (due to saturation issues), but I don't think there's anything controversial about getting the UGF this way, and the fact that the phase evolves as expected when the various gains are cranked up / boosts enabled makes me think that the CM board is itself just fine.
10am 12 March: i realized that the "Y-arm error point" plotted below is not the true error point - that would be the input to the CM board (before boosts etc), which we don't monitor digitally. The spectra are plotted for the CM_SLOW input which already has some transfer function applied to it. In the past, I routed the LEMO "MON" connector on the demod board to the CM board input, and hence, had the usual SMA outputs from the demod board going to the digital system. I hypothesize that plotting the spectra for that signal would have showed this expected suppression to the electronics noise floor.
In summary, on the basis of this test, I don't see any red flags with the CM board. |
15905
|
Thu Mar 11 18:46:06 2021 |
gautam | Update | CDS | cds reboot | Since Koji was in the lab I decided to bite the bullet and do the reboot. I've modified the reboot script - now, it prompts the user to confirm that the time recognized by the FEs are the same (use the IOP model's status screen, the GPSTIME is updated live on the upper right hand corner). So you would do sudo date --set="Thu 11 Mar 2021 06:48:30 PM UTC" for example, and then restart the IOP model. Why is this necessary? Who knows. It seems to be a deterministic way of getting things back up and running for now so we have to live with it. I will note that this was not a problem between 2017 and 2020 Oct, in which time I've run the reboot script >10 times without needing to take this step. But things change (for an as of yet unknown reason) and we must adapt. Once the IOPs all report a green "DC" status light on the CDS overview screen, you can let the script take you the rest of the way again.
The main point of this work was to relax the data rate on the c1lsc model, and this worked. It now registers ~3.2 MB/s, down from the ~3.8 MB/s earlier today. I can now measure 2 loop TFs simultaneously. This means that we should avoid adding any more DQ channels to the c1lsc model (without some adjustment/downsampling of others).
Quote: |
Holding off on a restart until I decide I have the energy to recover the CDS system from the inevitable crash.
|
|
15904
|
Thu Mar 11 14:27:56 2021 |
gautam | Update | CDS | timesync issue? | I have recently been running into hitting the 4MB/s data rate limit on testpoints - basically, I can't run DTT TF and spectrum measurements that I was able to while locking the interferometer, which I certainly was able to this time last year. AFAIK, the major modification made was the addition of 4 DQ channels for the in-air BHD experiment - assuming the data is transmitted as double precision numbers, I estimate the additional load due to this change was ~500KB/s (see the quick count below). Probably there is some compression so it is a bit more efficient (as this naive calc would suggest we can only record 32 channels, and I counted 41 full rate channels in the model), but still, I can't think of anything else that has changed. Anyway, I removed the unused parts and recompiled/re-installed the models (c1lsc and c1omc). Holding off on a restart until I decide I have the energy to recover the CDS system from the inevitable crash. For documentation, I'm also attaching a screenshot of the schematic of the changes made.
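(The ~500KB/s number is just the usual back-of-the-envelope count, assuming the 4 DQ channels are full rate and stored as 8-byte doubles, before any compression:)
n_channels = 4            # DQ channels added for the in-air BHD experiment
fs = 16384                # full model rate (Hz)
bytes_per_sample = 8      # double precision
print(n_channels * fs * bytes_per_sample / 1024.0, 'KiB/s')   # 512 KiB/s, i.e. ~0.5 MB/s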
Anyway, the main point of this elog is that at the compilation stage, I got a warning I've never seen before:
Building front-end Linux kernel module c1lsc...
make[1]: Warning: File 'GNUmakefile' has modification time 13 s in the future
make[1]: warning: Clock skew detected. Your build may be incomplete.
This prompted me to check the system time on c1lsc and FB - you can see there is a 1 minute offset (it is not a delay in me issuing the command to the two machines)! I am suspecting this NTP action is the reason. So maybe a model reboot is in order. Sigh |
15903
|
Thu Mar 11 14:03:02 2021 |
gautam | Update | LSC | AO path | There is some evidence of weird saturation, but the gain balancing (0.8dB) and orthogonality (~89 deg) for the daughter board on the REFL11 demod board that generates the AO path error signal seem reasonable. This board would probably benefit from the AD797-->Op27 and thick-film-->thin-film swap, but I don't think this is to blame for being unable to execute the RF transition. |
15902
|
Thu Mar 11 08:13:24 2021 |
Paco, Anchal | Update | SUS | IMC First Free Swing Test failed due to typo, restarting now | [Paco, Anchal]
The triggered code went on at 5:00 am today, but a last-minute change I made yesterday to increase the number of repetitions had an error and caused the script to exit, putting everything back to normal. So when we came in this morning, we found the mode cleaner locked continuously after one free swing attempt at 5:00 am. I've fixed the script and ran it for 2 hours starting at 8:10 am. Our plan is to get at least some data to play with while we are here. If the duration is not long enough, we'll try to run this again tomorrow morning. The new script is running in the same tmux session 'MCFreeSwingTest' on Rossa.
10:13 the script finished and IMC recovered lock.
Thu Mar 11 10:58:27 2021
The test ran successfully, with the mode cleaner optics coming back to normal at the end of it. We wrote some scripts to read the data and analyze it. More will come in future posts. No other changes were made today to the systems. |
15901
|
Thu Mar 11 02:10:06 2021 |
Koji | Summary | BHD | BHD Platform vertical dimensions | Stephen and I discussed the nominal heights of the BHD platform components.
- The beam height from the stack is 5.5"
- The platform height is 1.5" and its thickness is 0.4", according to the VOPO suspension, which we want to be compatible with.
- Thus the beam height on the BHD platform is 4".
- The VOPO platform has a minimum 0.1" gap from the installation surface when it is suspended.
- When the BHD platform is fixed on the table, we'll use positioners that are fixed on the stack table. Then the BHD platform is fixed on the positioner rather than fixing the entire platform on the stack. This leaves us the option to suspend the platform in the future. The number of the positioners is TBD.
- Looking at the head size for 1/4-20 socket head screws, it'd be nice to have a thickness of 0.5" for the positioners. This makes the thin part of the stiffener 0.6" thick.
- These numbers are nominal for the initial design and subject to change, pending the FEA simulations to determine the resonant frequencies of the body modes.
|
15900
|
Thu Mar 11 01:45:42 2021 |
gautam | Update | LSC | PRFPMi |
- PRM satellite box indeed seems to have been the culprit - shortly after I swapped it to the SRM, its shadow sensors went dark. I leave the watchdog tripped.
- I still was unable to realize the RF only IFO
- Clearly my old settings don't work, so I tried to go about it systematically. First, try and transition CARM to RF, leave DARM on ALS.
- As usual, I can realize the state where the arm powers are ~100, and the two paths are blended.
- But I'm not able to completely turn off the CARM_A path without blowing the lock.
Pity really, I was hoping to make it much further tonight. I think I'll have to go back to the high BW POX/POY lock, and also check out the conversion efficiency / noise of the daughter board on the REFL11 demod board. Compared to before my work on the RF source, the demod phase for the PRMI lock using REFL11 as an error signal has basically necessitated a change of the digital demod phase by 180 degrees - so I made the appropriate polarity changes in the CM_SLOW and AO paths (the assumption is that CARM in REFL11 would require the same change in digital demod phase, and I think this is a reasonable assumption - indeed, with the arm powers somewhat stable ~100, if I look at the PDH signal in REFL11 I and Q, it does seem to show up largely in the I quadrature, pre digital phase rotation). Anyway, with so many weird effects (wonky PRM suspension, strange PRMI sensing, etc.), who knows what's going on. This will take a systematic effort.
I defer the electronics characterization to the daytime (if I feel like I need it tomorrow I'll do it; otherwise, Koji has said he can do it on Friday).
Quote: |
I was unable to fully hand off control from ALS-->RF, I suspect I may be using the wrong sign on the AO path (or some such other sub-optimal CM board settings). I'll hook up the SR785 and take some TFs tomorrow, that should give more insight into what's what.
|
|
15899
|
Wed Mar 10 19:58:27 2021 |
gautam | Update | LSC | SR785 hooked up to CM board | In preparation for later today evening. The TT alignment wasn't visibly disturbed. |
15898
|
Wed Mar 10 17:35:47 2021 |
gautam | Update | SUS | Spooky action at a distance | As I am sitting in the control room, the PRM suspension watchdog tripped again. This time, there is clearly no seismic activity. Yet, the BS suspension also shows a slight disturbance at the same time as the PRM. ITMY shows no perturbation though. My best hypothesis here is that the problem is electrical. In Attachment #1, you can see that all of the Sensors go to -6000 cts (whut?) for ~30 seconds. Zooming in to that segment in Attachment #2, it would appear that the light detected by the PD changed dramatically (went dark?) on all 5 coils. The 4 face coils have the same time constant but the side has a different one, but in any case, this level of light change in half a second is clearly not physical. Then the watchdog trips because this huge apparent motion elicits a kick from the damping loops.
The plots I attach are for the DQed sensor channels, so there is some digital filtering involved. But I confirmed that the signal doesn't go negative if I disable the input to the filter module. So it would seem that the voltage input to the ADC really changed polarity, which seems unphysical. It could be the Satellite Box or whitening electronics, I suppose - I think we can exclude bad cabling, as that would just lead to the signals going to 0, whereas it would appear here that they really did change sign (confirmed by looking at the ULPDmon channel, which is digitized by Acromag, which reports -10 V at the time of the glitch). But why should the BS care about the PRM electronics going wonky?
In addition to an exorcist, we need functioning electronics!
This optic has been hampering my locking attempts all evening. I switched the PRM and SRM satellite boxes, but then I remembered the PRM has the Al foil "hats" to attenuate scattered light. Of course, the Al foil is conducting and can short the OSEM leads. I put some kapton pieces in between the OSEMs and the foil to try and mitigate this issue, but I suppose over time one could have slipped and is making some intermittent contact, shorting the PD anode and cathode (that would explain the PD reporting -10 V instead of some physical value).
If this is the problem we would need a vent to address it. In the daytime I'll measure L and R of the coils to see if the actuator imbalance I reported is also due to the same problem... |
15897
|
Wed Mar 10 15:35:25 2021 |
Paco, Anchal | Summary | IMC | IMC free swinging experiment set to trigger at 5:00 am | A tmux session named "MCFreeSwingTest" will run on Rossa. This session is running the script scripts/SUS/freeSwingMC.py (also attached), which will trigger at 5:00 am to impart a 30000-count kick to MC1, MC2, and MC3 after shutting the PSL shutter and disabling the MC autolocker. It will let them swing freely for 1050 sec and will repeat 15 times to allow some averaging. In the end, it will undo all the changes it made and switch the IMC autolocker back on. The script is set to restore any changes in case it fails at any point or a Ctrl-C is detected. A rough sketch of the logic is below.
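(This is a simplified illustration, not the actual freeSwingMC.py - in particular, the channel used to apply the kick is only a placeholder; the shutter and autolocker channels are the ones discussed in 15893.)
import time
import epics

optics = ['MC1', 'MC2', 'MC3']
epics.caput('C1:PSL-PSL_ShutterRqst', 0)            # close the PSL shutter
epics.caput('C1:IOO-MC_LOCK_ENABLE', 0)             # disable the MC autolocker

for rep in range(15):                               # repeat for averaging
    for opt in optics:
        # kick the optic; placeholder channel name, the real script applies the 30000-count offset its own way
        epics.caput('C1:SUS-%s_LSC_OFFSET' % opt, 30000)
    time.sleep(1)
    for opt in optics:
        epics.caput('C1:SUS-%s_LSC_OFFSET' % opt, 0)
    time.sleep(1050)                                # let the optics swing freely

epics.caput('C1:PSL-PSL_ShutterRqst', 1)            # restore: open the shutter, re-enable the autolocker
epics.caput('C1:IOO-MC_LOCK_ENABLE', 1)
|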
15896
|
Wed Mar 10 15:29:58 2021 |
Anchal | Summary | IMC | IMC free swinging prep | No, we didn't fix the issue. We'll post some screenshots tomorrow. By "sitemap>Shutter>PSL" we meant that in the Shutter medm window, we clicked on the PSL close button. As pointed out later, it switches C1:AUX-PSL_ShutterRqst, while the PSL shutter switch on the Lock MC medm screen switches C1:PSL-PSL_ShutterRqst. We were not sure if this was intentional, so we didn't change anything. |
15895
|
Wed Mar 10 15:00:16 2021 |
gautam | Summary | IMC | IMC free swinging prep | Did you fix this issue? It is helpful to post a screenshot of the offending MEDM screen in addition to witticisms. The elog says "sitemap>Shutter>PSL" but I can't find PSL under the dropdown for shutters from Sitemap.
# Moving on to IMC suspensions characterization:
- Closed the PSL shutter; to our surprise, the MC was still locked. We thought this would take away any light from the IMC but it doesn't. Maybe the IFO Overview needs to show the schematic in a way where this doesn't happen: "No light from any laser entering the MC but it still is locked with a resonating field inside."
|
|
15894
|
Wed Mar 10 11:55:22 2021 |
gautam | Update | SUS | PRM suspension suspect | The procedure is that the optic is kicked to excite it, and allowed to ring down for ~1ksec, with damping turned off. The procedure is repeated 15 times for some averaging.
Attachment #1 - sensor spectra from yesterday.
Attachment #2 - peaks using the naive diagonalization matrix from yesterday.
Attachment #3 - Data from ~1 year ago.
The y-axis in all plots is labelled as "cts/rtHz", but these are the DQed channels, which come after a "cts2um" CDS filter - so if that filter is accurate, then the y-axes may be read as um/rtHz.
I wonder if the September 2020 earthquake somehow damaged the PRM suspension, as this experiment would suggest that the problem is not only with the actuation. The data was gathered with the neutral position of the PRM (between kicks) well aligned for PRMI, and the DC values of all the shadow sensors in this position are close to half-light (~1V, except for the side, which was more like 4V). Hard to say what exactly is happening since only the PIT DoF has the weird asymmetric peak shape instead of the expected Lorentzian - I would have thought that a damaged wire or broken magnet would affect all 4 DoFs, but the F.C. spring experience on ETMY showed that anything is possible. |
15893
|
Wed Mar 10 11:46:22 2021 |
Paco, Anchal | Summary | IMC | IMC free swinging prep | [Paco, Anchal]
# Initial State
- MC is locked. The PRM monitor shows some oscillations.
- POP monitor shows light flashing once in a while.
- AS monitor shows one beam along with some other flashing beam around it.
- PRM Watchdog is tripped and shutdown. Everything else is normal except for overload on SRM OpLevs.
- Donatella got a mouse promotion
# Reenabling PRM watchdog:
- The custom reEnablePRMWatchdog.py has been deleted.
- Tried enabling the coil outputs manually and switching watchdog to Normal.
- Again saw large fluctuations like yesterday.
- Probably still the same issue of how the currently calculated actuations to the coils are in the range -600 to -900 and give an impulse to the optics when suddenly turned on.
- Waiting for PRM to damp down a little.
- Today we plan to change the position bias on PRM C1:SUS-PRM_POS_OFFSET instead of changing biases in pitch and yaw.
- Changing C1:SUS-PRM_POS_OFFSET from 0 to +/- 100 without enabling the coils, it seems upper and lower coils are anticorrelated with just changing the position. So going back to changing pitch.
- Changing C1:SUS-PRM_PIT_OFFSET from 0 -> 780. Switched on watchdog to normal.
- PRM damped down. OpLev errors are also within range.
- Enabled both OpLevs.
# Try locking Y-Arm
- IFO>CONFIGURE>YARM>Restore YARM (POY) using Donatella. We see a bunch of python error messages in the call, complaining about being unable to find some python 2 files. Closed it with Ctrl-C after it got stuck.
- Tried running it on Pianosa, the script ran without error but Y-Arm didn't lock.
# Try locking X-Arm
- IFO>CONFIGURE>XARM>Restore XARM (POX) on Donatella. Again a bunch of OSError messages. Donatella is not configured properly to run scripts.
- Tried running it on Pianosa, the script ran without error but X-Arm didn't lock.
- This might mean that both arms are misaligned or the BS/PRM is misaligned.
- Moving around C1:SUS-PRM_PIT_OFFSET and C1:SUS-PRM_YAW_OFFSET in order to see if the transmitted light is misaligned. Both arms are set to acquire lock if possible. No luck.
# Hypothesis: The Arm cavity is not aligned within itself (ITM-ETM)
- Will try to lock X-Arm with green light while tuning the ETMX. Hopefully the BS and ITM are aligned so that once we align ETMX to get a green lock, the IR will also lock from the other side.
- Running IFO>CONFIGURE>XARM>Restore XARM (ALS) on Pianosa. No lock, moving forward with tuning ETMX pitch and yaw offsets. Nothing changed. Brought back to same values.
[Rana joined, Anchal moved to Rossa from Pianosa]
# Moving on to IMC suspensions characterization:
- Closed the PSL shutter; to our surprise, the MC was still locked. We thought this would take away any light from the IMC but it doesn't. Maybe the IFO Overview needs to show the schematic in a way where this doesn't happen: "No light from any laser entering the MC but it still is locked with a resonating field inside."
- Shutting the IMCR shutter (hoping that would unlock the IMC), still nothing happened.
- Tried shutting PSL shutter from Rossa, nothing happened to MC lock still.
- Closed shutter IOO>Lock MC> Close PSL and this unlocked the IMC. Found out that this shutter channel is C1:PSL-PSL_ShutterRqst while the one from the sitemap>Shutter>PSL changes C1:AUX-PSL_ShutterRqst. Some clarification on these medm screens would be nice.
- Disabled the MC autolocked from IOO>Lock MC screen (C1:IOO-MC_LOCK_ENABLE).
- Checked the scripts/SUS/freeswing.py to understand how kick is delivered and optic is left to swing freely.
- Next, we are looking at the C1SUS_MC1 screen to understand what channels to read during data acquisition.
- In the sensor matrix, we see INMON for each sensor, which is probably the raw counts data from the OSEMs. Rana mentioned that OSEM data comes out in units of microns. These are C1:SUS-MC1_ULSEN_OUTPUT (and so on for UR, LL, LR, SD); a sketch of fetching them offline is at the end of this list.
- In prep for finishing, recovered Autolocker by first opening the PSL mechanical shutter, then re-enabling the Autolocker. The IMC lock didn't immediately recover, and we saw some fuzz on the PSL-FSS_FAST trace, so we closed the shutter again, waited a minute, then re-opened it and MC caught its lock.
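- One way to pull those channels offline afterwards is through the NDS client (a sketch; it assumes the python nds2 bindings are installed and the DAQ server is reachable - the host/port and GPS times below are placeholders only):
import nds2

chans = ['C1:SUS-MC1_%sSEN_OUTPUT' % s for s in ('UL', 'UR', 'LL', 'LR', 'SD')]
conn = nds2.connection('fb1', 8088)               # DAQ server host/port (assumed)
gps_start, gps_stop = 1299682818, 1299683818      # example GPS times only
buffers = conn.fetch(gps_start, gps_stop, chans)
data = {b.channel.name: b.data for b in buffers}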
|
15892
|
Wed Mar 10 00:32:03 2021 |
gautam | Update | LSC | PRFPMi | The interferometer can nearly be locked again. I was unable to fully hand off control from ALS-->RF, I suspect I may be using the wrong sign on the AO path (or some such other sub-optimal CM board settings). I'll hook up the SR785 and take some TFs tomorrow, that should give more insight into what's what. With the arms held off resonance, the PRMI acquires lock nearly instantly (REFL165 I for PRCL, REFL165 Q for MICH), and can stay locked nearly indefinitely, which is what I need so I can get the RF lock going. However the sensing matrix (for vertex DoFs, arms held off resonance) still makes no sense to me. The MICH loop has ~50 Hz UGF and the PRCL loop ~150 Hz. I think the MICH loop shape can be optimized a little for better low frequency suppression, but this isn't the show-stopper at the moment. For record-keeping, the ALS performance was excellent and other subsystems were nominal tonight. |
15891
|
Tue Mar 9 18:49:28 2021 |
Yehonathan | Update | SUS | OSEM testing for SOSs | 29 Good OSEMs, of which 1 is questionable (089, with a PD voltage of 1.5V) and 5 need some work (pigtailing, replace/remove/add screws). We have 4 pigtails. Schematics.
20 OK OSEMs (Slightly off-centered LED spot), of which 3 need some work (pigtailing, replace/remove/add screws).
13 Bad OSEMS (Way off-centered LED spot)
2 Defunct OSEMs
-------
Ed: KA
Good: 23 complete OSEMs + 5 good ones, which need soldering work (there are 4 pigtails and take one from a defunct OSEM).
OK: Use good 7 OSEMs for the sides. And keep some functional OSEMs as spares.
|
15890
|
Tue Mar 9 16:52:47 2021 |
Jon | Update | CDS | Front-end testing | Today I continued with assembly and testing of the new front-ends. The main progress is that the IO chassis is now communicating with the host, resolving the previously reported issue.
Hardware Issues to be Resolved
Unfortunately, though, it turns out one of the two (host-side) One Stop Systems PCIe cards sent from Hanford is bad. After some investigation, I ultimately resolved the problem by swapping in the second card, with no other changes. I'll try to procure another from Keith Thorne, along with some spares.
Also, two of the three switching power supplies sent from Livingston (250W Channel Well PSG400P-89) appear to be incompatible with the Trenton BPX6806 PCIe backplanes in these chassis. The power supply cable has 20 conductors and the connector on the board has 24. The third supply, a 650W Antec EA-650, does have the correct cable and is currently powering one of the IO chassis. I'll confirm this situation with Keith and see whether they have any more Antecs. If not, I think these supplies can still be bought (not obsolete).
I've gone through all the hardware we've received, checked against the procurement spreadsheet. There are still some missing items:
- 18-bit DACs (Qty 14; but 7 are spares)
- ADC adapter boards (Qty 5)
- DAC adapter boards (Qty 9)
- 32-channel DO modules (Qty 2/10 in hand)
Testing Progress
Once the PCIe communications link between host and IO chassis was working, I carried out the testing procedure outlined in T1900700. This performs a series of checks to confirm basic operation/compatibility of the hardware and PCIe drivers. All of the cards installed in both the host and the expansion chassis are detected and appear correctly configured, according to T1900700. In the below tree, there is one ADC, one 16-ch DIO, one 32-ch DO, and one DolphinDX card:
+-05.0-[05-20]----00.0-[06-20]--+-00.0-[07-08]----00.0-[08]----00.0 Contec Co., Ltd Device 86e2
| +-01.0-[09]--
| +-03.0-[0a]--
| +-08.0-[0b-15]----00.0-[0c-15]--+-02.0-[0d]--
| | +-03.0-[0e]--
| | +-04.0-[0f]--
| | +-06.0-[10-11]----00.0-[11]----04.0 PLX Technology, Inc. PCI9056 32-bit 66MHz PCI <-> IOBus Bridge
| | +-07.0-[12]--
| | +-08.0-[13]--
| | +-0a.0-[14]--
| | \-0b.0-[15]--
| \-09.0-[16-20]----00.0-[17-20]--+-02.0-[18]--
| +-03.0-[19]--
| +-04.0-[1a]--
| +-06.0-[1b]--
| +-07.0-[1c]--
| +-08.0-[1d]--
| +-0a.0-[1e-1f]----00.0-[1f]----00.0 Contec Co., Ltd Device 8632
| \-0b.0-[20]--
\-08.0-[21-2a]--+-00.0 Stargen Inc. Device 0101
\-00.1-[22-2a]--+-00.0-[23]--
+-01.0-[24]--
+-02.0-[25]--
+-03.0-[26]--
+-04.0-[27]--
+-05.0-[28]--
+-06.0-[29]--
\-07.0-[2a]--
Standalone Subnet
Before I start building/testing RTCDS models, I'd like to move the new front ends to an isolated subnet. This is guaranteed to prevent any contention with the current system, or inadvertent changes to it.
Today I set up another of the Supermicro servers sent by Livingston in the 1X6 test stand area. The intention is for this machine to run a cloned, bootable image of the current fb1 system, allowing it to function as a bootserver and DAQ server for the FEs on the subnet.
However, this hard disk containing the fb1 image appears to be corrupted and will not boot. It seems to have been sitting disconnected in a box since ~2018, which is not a stable way to store data long term. I wasn't immediately able to recover the disk using fsck. I could spend some more time trying, but it might be most time-effective to just make a new clone of the fb1 system as it is now. |
15889
|
Tue Mar 9 15:22:56 2021 |
Koji | Summary | SUS | PRM suspension | I just saw the PRM watchdog tripped at ~15:20 local (23:20UTC). I restored the PRM but I saw only the side watchdog tripped.
Again at 15:27
17:55 I found the PRM was oscillating while the watchdogs were not tripped. I turned off the OPLEV servos and this made the PRM calm down. But I didn't turn on the OPLEVs for the past two trips. How were the OPLEVs turned on???
Ah, I'm sorry, I missed the line that Gautam was running the free-swinging test on the PRM.
The two kicks starting from 23:08:50 and from 23:26:31 were spoiled. Did it make the measurement completely wasted?
|
15888
|
Tue Mar 9 15:19:03 2021 |
Koji | Update | SUS | OSEM testing for SOSs | How were the statistics of them? i.e. # of Good OSEMs, # of OK OSEMs, etc... |
15887
|
Tue Mar 9 14:37:26 2021 |
gautam | Summary | SUS | PRM suspension | The PRM got tripped ~5AM this morning. The cause is unclear - the seismometer reports elevated activity ~10 minutes before the ringdown starts (as judged using the OSEMs). But the other optics didn't seem to receive as much of an impulse (I only show the BS sensors here as it sits on the same stack as the PRM). Anyway it certainly wasn't me trying to make life difficult for the morning team.
I was able to restore the damping with reEnableWatchdogs.py. I am now running some suspension tests on the PRM by letting it swing freely so please let that finish. I plan to attempt some locking this evening.
Quote: |
[Paco, Anchal]
- Upon arrival, MC is locked, and we can see light in MON5 (PRM) (usually dark).
|
|
15886
|
Tue Mar 9 14:30:22 2021 |
Yehonathan | Update | SUS | OSEM testing for SOSs | I finished ranking the OSEMS on the OSEM wiki page.
I also moved the OSEM data folder from /home/export/home to /users/public_html and created a soft link instead. I have done the same for the 40m_TIS folder that I uploaded there a while ago. |
15885
|
Tue Mar 9 12:41:29 2021 |
Koji | Summary | Electronics | Investigation on the invacuum Dsub cables | I believe the aLIGO style invac dsub cables and the conventional 40m ones are incompatible.
While the aLIGO spec is that Pin1 (in-vac) is connected to the shield, Pin13 (in-vac) is the one for the conventional cable. I still have to check if Pin13 is really connected to the shield, but we had trouble before for the IO TTs https://nodus.ligo.caltech.edu:8081/40m/7864.
(At least one of the existing end cables did not show this Pin13-chamber connection. However, the cables at the OMC/IMC chambers indicated this feature. So the cables are already inhomogeneous.)
- Which way do we want to go? Our electronics are updated with aLIGO spec (New Sat amp, OMC electronics, etc), so I think we should start making the shift to the aLIGO spec.
- Attachment Top: The new coil drivers can be used together with the old cables using a custom DB25 cable (in-air).
- Attachment Mid: The combination of the conventional OSEM wiring and the aLIGO in-vac cable causes the conflict. Pin 1, which is connected to the shield, is used for the PD bias.
- Attachment Bottom: This can be solved by shifting the OSEMs by one pin.
Notes:
o The aLIGO cables have 12 twisted pair wires, but paired signals do not share a twisted pair.
--- No. This can't be solved by rotating the connectors.
o This modification should be done only for the new suspension.
--- In principle, we can apply this change to any SOSs. However, this action involves the vent. We probably want to install the new electronics for the existing suspensions before the vent.
o ^- This means that we have to have two types of custom DB25 in-air cables.
--- Each cable should handle "Shield wire" from the sat amp correctly.
Related Links:
Active TT Pin Issue
https://nodus.ligo.caltech.edu:8081/40m/7863
and the thread
Hacky solution
https://nodus.ligo.caltech.edu:8081/40m/7869
Photo
https://photos.google.com/u/1/album/AF1QipOEDi7iBdS4EHcpM7GBbv9l6FiJx-Tkt1I2eSFA
Active TT Pin Swapping (December 21, 2012)
TT Wiring Diagram (Wiki)
https://wiki-40m.ligo.caltech.edu/Suspensions/Tip_Tilts_IO |
15884
|
Tue Mar 9 10:57:06 2021 |
Paco, Anchal | Summary | IMC | XARM lock and POX spectra | [Paco, Anchal]
- Upon arrival, MC is locked, and we can see light in MON5 (PRM) (usually dark).
# XARM locking
- Read through "XARM POX" script (path='/cvs/cds/rtcds/caltech/c1/burt/c1configure/c1configureXarm')
- Before running the script, we noticed the PRM watchdog is down, so we manually repeat the procedure from last time, but see more swinging even though the watchdog is active.
- Run a reEnablePRMWatchdogs.py script (a copy of reEnableWatchdogs.py with optics=['PRM']), which had the same effect.
- We manually disable the watchdog to recover the state we first encountered, and wait for the beam in MON5 to come to rest.
- The question is; is it fine to lock Xarm with PRM watchdog down?
- To investigate this, we look at the effect of the offset on the unwatchdog-PRM.
- Manually change 'PRM_POS_OFFSET' to 200, and -800 (which is the value used in the script) with no effect on the PRM swinging.
- Moving on, run IFO > CONFIGURE > ! (X Arm) > RESTORE XARM (XARM POX), and ... success.
# MC-POX noise spectra
- With XARM locked, open diaggui and take spectra for C1:LSC-POX11_I_ERR_DQ, C1:LSC-POX11_Q_ERR_DQ, C1:IOO-MC_F_DQ
- Lost XARM lock while we were figuring out unit conversions...
- Assuming 2.631e-13 m/count (6941) and using 37.79 m (arm length) and 1064.1 nm wavelength, we get a calibration factor of 2.631e-13 * c / (2*L*lambda) ~ 0.9809 Hz/count (a quick numerical check is at the end of this subsection).
- (FAQ?, how to find/compute/measure the correct calibration factors?)
- Relock XARM, retake spectra. Attachment 1 has plots for POX11_I/Q_ERR_DQ spectrum (cts/rtHz, we couldn't find relevant calibration) and MC_F_DQ in (Hz/rtHz from referring to 15576, we couldn't get the units to show on y scale.)
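A quick numerical check of that calibration factor (pure arithmetic with the numbers listed above):
c = 299792458.0            # m/s
m_per_count = 2.631e-13    # POX11 counts-to-meters factor (from 6941)
L = 37.79                  # arm length (m)
lam = 1064.1e-9            # wavelength (m)
print(m_per_count * c / (2 * L * lam), 'Hz/count')   # ~0.981 Hz/count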
# MC-POY noise spectra (attempt)
- Now, run IFO > CONFIGURE > ! (Y Arm) > RESTORE YARM (YARM POY), and XARM locks (why?)
- Could PRM watchdog being down be the cause?
- Try C1ASS > (YARM) ! More Scripts > ON, and looked at YARM PIT/YAW striptool.
- C1ASS > (YARM) ! Freeze Outputs, then OFF
- Go back to IFO > CONFIGURE > ! (Y Arm) > Align YARM (ASS ON: Unfreeze), try running this then Freeze, then OFF Zero Outputs.
- Try RESTORE YARM (POY) again, still not working.
- Try RESTORE YARM ALS, then try again after opening the shutter, but also fail to lock AUX.
- Is the PRM WD behind some evil misalignment? Will move forward with XARM bc it is happy.
# ARM locking
- Attempted the IFO > CONFIGURE > ! (X Arm) > RESTORE Xarm (XARM ALS) but green failed to lock and we lost XARM lock.
- Try to recover XARM lock... success. It's nice to have a (repeatable) checkpoint.
- Attempt YARM lock. Not successful. It just seems like the lock Triggers are not raised (misalignment?)
- From C1SUS_ETMY, try changing the bias "C1:SUS-ETMY_YAW_OFFSET" manually to reduce the OPLEV_YERROR. Changed from -47 to -57.
- Retry YARM lock script... no luck
- From C1SUS_PRM, try changing the bias "C1:SUS-PRM_PIT_OFFSET" manually to reduce OPLEV errors. Changed from 34 to 22 with no effect, then realized the coil outputs are disabled because the WD is down...
- So we do the following BIAS changes "C1:SUS-PRM_PIT_OFFSET" = 34 > 770 and "C1:SUS-PRM_YAW_OFFSET" = 134 > -6
- Enable all Coil Outputs, turn WD to Normal, turn OPLEVs ON, (this time the beam does not swing like crazy).
- Fine tune BIASes "C1:SUS-PRM_PIT_OFFSET" = 770 > 805 and "C1:SUS-PRM_YAW_OFFSET" = -6 > 65
- Saw YARM locking briefly, then unlocking, but we stopped once the OPLEV_ERRs no longer overloaded (from magnitudes > 50 to ~ 40).
- Retry YARM lock... no luck
- From C1SUS_ETMY, try changing the bias "C1:SUS-ETMY_PIT_OFFSET" from -1 to 6.
Stop for the day. Leave XARM locked, MC locked. |
15883
|
Mon Mar 8 22:01:26 2021 |
gautam | Update | LSC | More PRMI | There are still many mysteries remaining - e.g. the MICH-->PRCL contribution still can't be nulled. But for now, I have the settings that keep the PRMI locked fairly robustly with REFL55I/Q or REFL165I/Q (I quadrature for PRCL, Q for MICH in both cases), see Attachment #1 and Attachment #2 respectively. For the 1f locking, the REFL55 digital demod phase was fine-tuned to minimize the frequency noise (generated by driving MC2) coupling to the Michelson readout (as the Michelson is supposed to be immune) - the coupling was measured to be ~60dB larger at the PRCL error point than MICH. There was still nearly unity coherence between my MC2 drive and the MICH error point demodulated at the drive frequency, but I was not able to null it any better than this. With the PRMI (ETMs misaligned) locked on the 1f signals, I measured Attachment #1 and used it to determine the demod phase that would best enable REFL165_I to be a PRCL sensor. Rana thinks that if there is some subtle effect in the marginally stable PRC, we would not see it unless we do a mode scan (time consuming to set up and execute). So I'm just going to push on with the PRFPMI locking - let's see if the clean arm mode forces a clean TEM00 mode to be resonant in the PRC, and if that can sort out the lack of orthogonality between MICH/PRCL in the 1f sensors (after all, we only care about the 3f signals in as much as they allow us to lock the interferometer). I'll try the PRMI with arms on ALS tomorrow eve.
I have no idea what to make of how the single frequency lines I am driving in MICH and PRCL show up in REFL11 and REFL33 - the signals are apparently completely degenerate (in optical quadrature). How this is possible even though the PRMI remains stably locked, POP22/POP110/AS110 report stable sideband buildup is not clear to me. |
15882
|
Mon Mar 8 20:11:51 2021 |
rana | Frogs | Computer Scripts / Programs | activate_matlab out of control on Megatron | there were a zillion processes trying to activate (this is the initial activation after the initial installation) matlab 2015b on megatron, so I killed them all. Was someone logged in to megatron and trying to run matlab sometime in 2020? If so, speak now, or I will send the out-of-control process brute squad after you! |
15881
|
Mon Mar 8 19:22:56 2021 |
rana | Summary | SUS | IMC suspension characterization | Herewith, I describe an adventure
- Balance the OSEM input matrix using the free swinging data (see prev elogs).
- Balance the coil actuation by changing the digital coil gains. This should be done above 10 Hz using optical levers, or some IMC readout (like the WFS). At the end of this process, you should put a pringle vector into the column of the SUS output matrix that corresponds to one of the SUS OSC/LOCKIN screens. Verily, the pringle excitation should produce no signal in MC_F or da WFS (a small check of this vector is sketched after this list).
- use the Malik doc on the single suspension to design feed-forward filters for the SUS COIL filter banks. You can get the physical parameters using the design documents on DCC / 40m wiki and then modify them a bit based on the eigenfrequencies in the free swinging data.
- Model the 2x2 system which includes longitudinal and pitch motion. Consider how accurate the filters must be to maintain a cross-coupling of < 3% in the 0.5-2 Hz band.
- Is this decoupling forsooth still maintained when you close the SUS damping loops in the model? If not, why so?
- Make step response measurements of the damping loops and record/plot data. Use physical units of um/urad for the y-axes. How much is the step response cross-coupling?
- Consider the IMC noise budget: are the low pass filters in the damping loops low-passing enough? How much damping is demasiado (considering the CMRR of the concrete slab for seismic waves)?
- Can we use Radhika's AAA representation to auto-tune the FF and damping filters? It would be very slick to be able to do this with one button click.
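(For reference, a minimal check of the "pringle" vector mentioned above, assuming the usual UL/UR/LR/LL coil ordering and +/-1 sign conventions - the actual conventions should be read off the SUS output matrix screen:)
import numpy as np

pos = np.array([1,  1,  1,  1])     # UL, UR, LR, LL
pit = np.array([1,  1, -1, -1])
yaw = np.array([1, -1, -1,  1])
pringle = np.array([1, -1,  1, -1])

# the pringle vector is orthogonal to POS/PIT/YAW, so driving it on a well-balanced
# suspension should produce no length or angle signal (no MC_F, no WFS)
for name, v in (('POS', pos), ('PIT', pit), ('YAW', yaw)):
    print(name, np.dot(pringle, v))  # all zero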
gautam: For those like me who don't know what the AAA representation is: the original algorithm is here, and Lee claims his implementation of it in IIRrational is better, see his slides. |
15880
|
Mon Mar 8 17:09:29 2021 |
gautam | Update | SUS | PRM coil actuators heavily imbalanced | I realized I hadn't checked the PRM actuator as thoroughly as I had the others. I used the Oplev as a sensor to check the coil balancing, and I noticed that while all 4 coils show up with the expected 1/f^2 profile at the Oplev error point, the actuator gains seem imbalanced by a factor of ~5. The phase isn't flat because of some filters in the Oplev electronics I guess. The Oplev loops were disabled for the measurement, and the excitations were small enough that the beam stayed reasonably well centered on the QPD throughout. This seems very large to me - the values in the coil output filter gains lead me to expect more like a ~10% mismatch in the actuation strengths, and similar tests on other optics in the past, e.g. ETMY, have yielded much more balanced results. I'm collecting some free-swinging PRM data now as an additional check. I verified that all the coils seem actuatable at least, by applying a 500 ct step at the offset of the coil output FM, and saw that the optic moved (it was such a test that revealed that MC1 had a busted actuator some time ago). If the eigenmode spectra look as expected, I think we can rule out broken magnets, but I suppose the magnets could still be not well matched in strength?
15879
|
Mon Mar 8 12:54:54 2021 |
gautam | Update | Equipment loan | 40m-->Cryo |
- Busby box
- SR554 transformer preamplifier
|
15878
|
Mon Mar 8 12:40:35 2021 |
gautam | Summary | training | Investigate how-to XARM locking | For the arm locking, the "Restore Xarm (XARM POX)" script from the "IFO_CONFIGURE" MEDM screen should get you there (I just checked it and it works fine). It is worth getting a hang of the PDH signal chain (read what the script is doing and map it to the signal chain) so you get a feel for where there may be offsets, saturations, what the trigger logic is etc. The LSC overview screen is supposed to be pretty intuitive (if you think it can be improved, I'd love to hear it but please don't change it without documenting) and there are also the webviews of the simulink models (these are RO so feel free to click around, for the LSC the c1lsc model is the relevant one). |
15877
|
Mon Mar 8 12:01:02 2021 |
Paco, Anchal | Summary | training | Investigate how-to XARM locking | [Paco, Anchal]
- Started zoom stream; thanks to whoever installed it!
- Spent some time trying to understand how anything we did last Thursday led to the sensing matrix change, but still cannot figure it out.
- Tracking back on our actions, at ~10:30 we ran burt restore with the 08:19/.*snap files and, for lack of a better suspect, we blame that action for now.
# ARM locking??
- Reading (not running) the scripts/XARM/lockXarm.py script and try to understand the workflow. It is pretty confusing that the result was to lock Yarm last time.
- It looks like this script was a copy of lockYarm.py, and was never updated (there's a chance we ran it for the first time last thursday)
- *Is there a script to lock the Arms?* Or should we write one? To write one, we first attempt a manual procedure:
1. No need to change RFPD InMTRX
2. All filters inputs / outputs are enabled
3. Outputs from XARM and YARM in the Output matrix are already going to ETMX and ETMY
- Maybe we can have the ARM lock engage by changing the MC directly?
4. Change C1:SUS-MC2_POS_OFFSET from -38 to -0, and enable C1:SUS-MC2_POS_OFFSET_ON
5. Manually scan MC2_POS_OFFSET to 250 (nothing happens), then -250, then back to -38 (WFS1 PIT and YAW changed a little, but then returned to their nominal values)
- Or maybe we need to provide the right gain...
6. Disabled C1:SUS-MC2_POS_OFFSET_ON (back to nominal state)
7. Look into manually changing C1:LSC-XARM_GAIN;
From the command line using python:
>> import epics
>> ch_name = 'C1:LSC-XARM_GAIN'
>> epics.caput(ch_name, 0.155) # nominal = 0.150
- Could be unrelated, but we noted a slow spike on C1:PSL-FSS_PCDRIVE (definitely from before we changed anything)
- Still nothing is happening
8. Changed the gain to 0.175, then back to 0.150, no effect... then 0.2, 0.3 ...
- Stop and check SUS_Watchdogs (should not have changed?) and everything remains nominal
- Revert all changes symmetrically.
- Could we have missed enabling FM1?
- Briefly lost MC lock, but it came back on its own (probably unrelated)
- Wrap it up for the day. In summary; no harm done to our knowledge. |
|