  ID   Date   Author   Type   Category   Subject
  15278   Tue Mar 17 01:22:03 2020   gautam   Update   LSC   Locking updates

Summary:

No real progress tonight - I made it a bunch of times to the point where CARM was RF only, but I never got to run a measurement to determine what the DARM_B loop gain should be to make the control fully RF.

Details:

  • Touched up PMC alignment.
  • There were very few BNC cables available at the rack near the SW corner of the PSL table - the short BNC cables are NOT meant to be daisy-chained into long cables to run along the arm, so I removed all of those.
  • Restored SR785 at LSC rack for CARM TF measurements.
  • I was able to get the CARM UGF ~5 kHz, but every time I tried to run a DTT swept sine to measure the ratio of DARM_B_IN1 / DARM_A_IN1, the lock was lost - not sure if this is because of the excitation injected or something else.
  • I'll probably give this another shot Wednesday eve.
  15279   Wed Mar 18 21:43:26 2020   gautam   Update   VAC   Main vol pressure jump

There was a jump in the main volume pressure at ~6pm PDT yesterday. The cause is unknown, but the pressure doesn't seem to be coming back down (but also isn't increasing alarmingly).

I wanted to look at the RGA scans to see if there were any clues as to what changed, but it looks like the daily RGA scans stopped updating on Dec 24 2019. The c0rga machine responsible for running these scans doesn't respond to ssh. Not much to be done until the lockdown is over, I guess...

  15280   Wed Mar 18 22:10:41 2020   Koji   Update   VAC   Main vol pressure jump

I was in the lab at the time, but did not notice anything (like a turbo sound etc.). I was around the ETMX/Y racks (1X9, 1Y4) and the SUS rack (1X4/5), but did not go into the vac region.

  15281   Thu Mar 19 03:33:28 2020   gautam   Update   LSC   More locking updates

Some short notes, more details tomorrow.

  1. I was able to make it to CARM on RF only ~10 times tonight.
  2. Highest stable circulating power was ~200 (recycling gain ~10) but the control scheme is still not finalized in terms of offsets etc.
  3. DARM to RF transition was never fully engaged - I got to a point where the ALS gain was reduced to <half its nominal value, but IMC always lost lock.
  4. CARM loop UGF of ~5 kHz was realized. I was also able to turn on a regular boost. But couldn't push the gain up much more than this. Should probably modify the boosts on this board, their corner frequencies are pretty high.
  5. The increased FSS flakiness post c1psl upgrade is definitely hurting this effort - there are periods of ~20-30 mins when the IMC just won't lock.

Attachment #1 shows time series of some signals, from the time I ramp off the ALS CARM control to a lockloss. With this limited set of signals, I don't see any clear indication of the cause of the lockloss, but I was never able to keep the lock going for more than a couple of minutes.

Attachment #2 shows the CARM OLTF. Compared to last week, I was able to get the UGF a little higher. This particular measurement doesn't show it, but I was also able to engage the regular boost. I did a zeroth order test looking at the CM_SLOW input to make sure that I wasn't increasing the gain so much that the ADC was getting saturated. However, I did notice that the pk-to-pk error signal in this locked, 5kHz UGF state was still ~1000 cts, which seems large?

Attachment #3 shows the DTT measurement of the relative gains of DARM A and B paths. This measurement was taken when the DARM_A gain was 1, and DARM_B gain was 0.015. On the basis of this measurement, DARM_B (=AS55) sees the excitation injected 16dB above the ALS signal, and so the gain of the DARM_B path should be ~0.16 for the same UGF. But I was never able to get the DARM_B gain above 0.02 without breaking the lock (admittedly the lockloss may have been due to something else).

Attachment #4 shows a zoomed in version of Attachment #1 around the time when the lock was lost. Maybe POP_YAW experienced too large an excursion?

Some other misc points:

  • It was much quicker to acquire the PRMI lock with CARM held off resonance using the 1f signals rather than 3f - so I did that, and once the lock was acquired, transferred control to the 3f signals (using the CDS ramp time) before zeroing the CARM offset.
  • The whole process is pretty speedy - it takes <5mins to get to the CARM on RF only stage provided the PRMI lock doesn't take too long (the transition from POX/POY to ALS sequence takes <1min).
  • I am wondering what the correct way to set the offsets for the 3f error signals is? 
  • The arm buildup is strongly dependent on the DC alignment of the PRMI - the best buildups I got were when I tweaked the BS alignment after the CARM offset was zeroed.
  15282   Tue Mar 24 19:41:57 2020   gautam   Update   Wiener   Seismic feedforward for MCL

Summary:

I think the feedforward filters used for stabilizing MCL with vertex seismometers would benefit from a retraining (last trained in Sep 2015). 

Details:

I wanted to re-familiarize myself with the seismic feedforward methodology. Getting good stabilization of the PRC angular motion as we have been able to in the past will be a big help for lock acquisition. But remotely, it is easier to work with the IMC length feedforward (IMC is locked more often than the PRC). So I collected 2 hours of data from early Sunday morning and went through the set of steps (partially).

Attachment #1 shows the performance of a first attempt.

  • 1 hour of data was used as a training set, and another hour to validate the trained filter.
  • All the data was downsampled to 64 Hz.
  • The number of FIR filter taps was 32 seconds * 64 Hz (= 2048 taps); a minimal sketch of the corresponding Wiener calculation is given just after this list.
  • Going through some old elogs, there were a number of suggestions from various people about how the training should be done:
    • There was a suggestion that pre-filtering the target signal by the (inverse) actuator TF (i.e. TF from MC2 drive to MCL) is beneficial, presumably because it gives the Wiener filter fewer parameters to fit.
    • There were also suggestions that some frequency-dependent weighting of the target signal should be done (e.g. by bandpassing MCL between 0.1 Hz - 10 Hz) to emphasize subtraction in this band.
    • For this particular example, in my limited parameter space exploration, I found that neither of these measures had a particularly significant impact.
  • In any case, the time-domain FIR filtering seems to approach the theoretical best possible performance (based on coherence information). 
  • I have not yet checked what the theoretical limit on subtraction will be based on the seismometer noise ASD.
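
For reference, here is a minimal sketch of the single-witness FIR Wiener calculation described in this list (illustrative only; the actual training also handles multiple witness channels, pre-filtering and weighting):

import numpy as np
from scipy.linalg import solve_toeplitz

def fir_wiener(witness, target, n_taps):
    """Causal FIR Wiener filter for one witness channel, via the Wiener-Hopf equations."""
    N = min(len(witness), len(target))
    lags = np.arange(n_taps)
    # biased auto- and cross-correlation estimates up to lag n_taps-1
    r_ww = np.array([np.dot(witness[:N - k], witness[k:N]) for k in lags]) / N
    r_wt = np.array([np.dot(witness[:N - k], target[k:N]) for k in lags]) / N
    # symmetric Toeplitz system R w = p, solved with the Levinson recursion
    return solve_toeplitz(r_ww, r_wt)

# usage (both channels downsampled to 64 Hz, as above):
#   w = fir_wiener(seis, mcl, n_taps=32 * 64)
#   prediction = np.convolve(seis, w)[:len(mcl)]
#   residual = mcl - prediction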

Attachment #2 shows a comparison between the filter used in Attachment #1 and the filters currently loaded into the OAF system. 

  • In the band where significant subtraction is possible, there is some difference in the shape of the filter.
  • Why should this have changed? I guess there are multiple possibilities - seismometer recentering, signal chain changes, ...

Attachment #3 is the ASD after applying the time-domain Wiener filter offline, while Attachment #4 is an actual measurement from earlier today - it's not quite as good as Attachment #3 would lead me to expect, but that might also be due to the time of day. 

Conclusions and next steps:

On the basis of Attachments #3 and #4, I'd say it's worth completing the remaining steps for online implementation: FIR-to-IIR fitting and conversion to SOS coefficients that Foton likes (preferably all in python). Once I've verified that this works, I'll see if I can get some data for the motion on the POP QPD with the PRMI locked on carrier. That'll be the target signal for the PRC angular FF training. Probably can't hurt to have this implemented for the arms as well.
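
For the FIR-to-IIR step, the conversion part at least is straightforward in scipy. A sketch (with a made-up fitted model, and assuming the downstream script knows how to turn scipy's SOS layout into something Foton-readable):

import numpy as np
from scipy import signal

fs = 64.0                                   # rate the online filter will run at
# hypothetical fitted model: zeros/poles in rad/s plus an overall gain
zeros = 2 * np.pi * np.array([-0.4, -3.0])
poles = 2 * np.pi * np.array([-0.15 + 1.0j, -0.15 - 1.0j, -8.0])
gain = 2.3

# discretize with the bilinear transform, then split into second-order sections
zd, pd, kd = signal.bilinear_zpk(zeros, poles, gain, fs)
sos = signal.zpk2sos(zd, pd, kd)

# sanity check the digital response against the analog model
f = np.logspace(-2, 1, 300)
_, h_analog = signal.freqs_zpk(zeros, poles, gain, worN=2 * np.pi * f)
_, h_digital = signal.sosfreqz(sos, worN=f, fs=fs)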

While this set of steps follows the traditional approach, it'd be interesting if someone wants to try Gabriele's code which I think directly gives a z-domain representation and has been very successful at the sites.

* The y-axes on the spectra are labelled in um/rtHz but I don't actually know if the calibration has been updated anytime recently. As I type this, I'm also reminded that I have to check what the whitening situation is on the Pentek board that digitizes MCL.

  15283   Wed Mar 25 15:15:55 2020   gautam   Update   VAC   Vacuum interlock code, N2 warning update

The email address in the N2 checking script wasn't right - I have now updated it to email the 40m list if the sum of the reserve tank pressures falls below 800 PSI. The checker itself is only run every 3 hours (via cron on c1vac).
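
For context, a minimal sketch of what such a cron-driven checker can look like (the channel names, addresses and mail transport below are placeholders, not what actually runs on c1vac):

#!/usr/bin/env python
# hypothetical N2 reserve-pressure checker, run every 3 hours from cron
import smtplib
from email.message import EmailMessage
from epics import caget                                           # pyepics

TANK_CHANNELS = ['C1:Vac-N2T1_pressure', 'C1:Vac-N2T2_pressure']  # placeholders
THRESHOLD_PSI = 800

total = sum(caget(ch) for ch in TANK_CHANNELS)
if total < THRESHOLD_PSI:
    msg = EmailMessage()
    msg['Subject'] = 'N2 reserve pressure low: %.0f PSI' % total
    msg['From'] = 'c1vac@example.org'                             # placeholder
    msg['To'] = '40m-list@example.org'                            # placeholder
    msg.set_content('Sum of the reserve N2 tank pressures is below %d PSI.' % THRESHOLD_PSI)
    with smtplib.SMTP('localhost') as s:
        s.send_message(msg)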

Quote:

I reset the remote of this git repo to the 40m version instead of Jon's personal one, to ensure consistency between what's on the vacuum machine and in the git repo. There is now a N2 checker python mailer that will email the 40m list if all the tank pressures are below 600 PSI (>12 hours left for someone to react before the main N2 line pressure drops and the interlocks kick in). For now, the script just runs as a cron job every 3 hours, but perhaps we should integrate it with the interlock process

  15284   Thu Mar 26 17:41:18 2020   Jon   Omnistructure   BHD   BHD docs compilation

Since there has been a proliferation of BHD Google docs recently, I've linked them all from the BHD wiki page. Let's continue adding any new docs to this central list.

  15285   Thu Mar 26 22:31:34 2020   Yehonathan   Update   CDS   C1AUXEY wiring + channel list

I have made a wiring + channel list that needs to be included in the new C1AUXEY Acromag.

It was mostly copied from C1AUXEX.

I ignored the IPANG channels since IPANG is going to be removed from the table.

  15286   Mon Mar 30 19:02:49 2020   rana   Update   General   donated cleanroom supplies to Hospitals

Yesterday evening I took nearly all of the masks, gloves, gowns, alcohol wipes, hats, and shoe covers. These were the ones in the cleanroom cabinets at the east end of the Y-arm, as well as the many boxes under the yarm near those cabinets.

This photo album shows the stuff, plus some other random photos I took around the same time (6-7 PM) of the state of parts of the lab.

  15287   Tue Mar 31 09:39:41 2020   gautam   Update   CDS   Foton for shaped noise injections

I'd like to re-measure the transfer function from driving MC2 position to the MC_L_DQ channel (for feedforward purposes). Swept sine would be one option, but I can't get the "Envelope" feature of DTT to work: the excitation amplitude isn't getting scaled as specified in the envelope, so I'm unable to make the measurement near 1 Hz (which is where the FF is effective). I see some scattered mentions of such an issue in past elogs, but no mention of a fix (I also feel like I have gotten the envelope function to work for some other loop measurement templates). So I thought I'd try a broadband noise injection, since that seems to have been the approach followed in the past. Again, the noise injection needs to be shaped around ~1 Hz to avoid knocking the IMC out of lock, but I can't get Foton to do shaped noise injections because it doesn't inherit the sample rate when launched from inside DTT/awggui - this is not a new issue, does anyone know the fix?

Note that we are using the gds2.15 install of foton, but the pre-packaged foton that comes with the SL7 installation doesn't work either.

Update:

The envelope feature for swept-sine wasn't working because I specified the frequency grid in the wrong order, apparently. Eric von Reis has been notified to include a sorting algorithm in a future DTT so that this can be in arbitrary order. Fixing that allows me to run a swept sine with an enveloped excitation amplitude and hence get the TF I want, but still no shaped noise injections via Foton 😢 

  15288   Tue Mar 31 23:35:50 2020   rana   Update   CDS   Foton for shaped noise injections

do you really mean awggui cannot make shaped noise injections via its foton text box? That has always worked for me in the past.

If this is broken I'm suspicious there have been some package installs to the shared dirs by someone.

  15289   Tue Mar 31 23:54:57 2020   gautam   Update   CDS   Foton for shaped noise injections

The problem is that foton does not inherit the model sample rate when launched from DTT/awggui. This is likely some shared/linked/dynamic library issue, the binaries we are running are precompiled presumably for some other OS. I've never gotten this to work since we changed to SL7 (but I did use it successfully in 2017 with the Ubuntu12 install).

Quote:

do you really mean awggui cannot make shaped noise injections via its foton text box ? That has always worked for me in the past.

If this is broken I'm suspicious there's been some package installs to the shared dirs by someone.

  15290   Wed Apr 1 00:51:41 2020   gautam   Update   Wiener   Slightly improved MCL FF

Summary:

Retraining the MCL filters resulted in a slight improvement in the performance. Compared to no FF, the RMS in the 0.5-5 Hz range is reduced by approximately a factor of 3.

Details:

Attachment #1 shows my re-measurement of the MC2 position drive to MCL transfer function.

  • The measurement was made using DTT swept sine, with the amplitude enveloped appropriately to avoid knocking the IMC out of lock.
  • Coherence was >0.97 for all datapoints.
  • Fitting was done using Lee's IIRrational, with the weighting being the coherence. I think there are some features of the fitting I don't fully understand, but I wanted to try and do everything in python and for this simple fit, it came out nicely I think. 

Attachment #2 shows the IIR fits to the FIR filters calculated here

  • Again, IIRrational was used. 
  • In the frequency band where subtraction is possible, the fit is good.
  • But there is definitely room for improvement in the way this is done, for now, I did quite a bit "by eye" and tweaked the order of the filter and the minimum number of excess poles relative to zeros to get the AC coupling, but it'd be nice to make all of this iterative and quantitative (e.g. by minimizing a cost function).
  • One nice feature of IIRrational is that it directly gives me a formatted string I can paste into Foton. The order of these fits was 22, so I split them into two filters of order 19+3 to be compatible with the realtime system before loading the coefficients (the overall gain was allocated to one filter arbitrarily, with the other filter in the pair set to have unity gain in the zpk representation); see the splitting sketch after this list.
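
The splitting in the last bullet amounts to something like the sketch below (illustrative only; in practice the partition was done by hand, and complex-conjugate pole/zero pairs have to be kept together so that each half still has real coefficients):

import numpy as np
from scipy import signal

def split_zpk(z, p, k, n_first):
    """Split a fitted digital zpk filter into two cascaded zpk filters.

    The first filter takes n_first poles (and up to n_first zeros) plus the
    overall gain; the second takes the remainder with unity gain, mirroring
    the 19+3 split described above."""
    nz = min(n_first, len(z))
    f1 = (z[:nz], p[:n_first], k)
    f2 = (z[nz:], p[n_first:], 1.0)
    return f1, f2

# sanity check: the cascade should reproduce the original response
# f1, f2 = split_zpk(z, p, k, 19)
# w, h1 = signal.freqz_zpk(*f1, worN=512)
# _, h2 = signal.freqz_zpk(*f2, worN=512)
# _, h0 = signal.freqz_zpk(z, p, k, worN=512)
# assert np.allclose(h1 * h2, h0)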

Attachment #3 shows several MCL spectra.

  • Blue trace is the unsubtracted test dataset.
  • Red is the performance of the calculated FIR filter, but the filtering is done offline.
  • Gold is the performance of the IIR fit to the FIR filter, as shown in Attachment #2, applied offline to the test dataset.
  • Green is the calculated ASD of MCL from a ~1 hour stretch from earlier tonight, when I left the feedforward loop on. So this is an actual measurement of the online performance of the filter.
  • Grey is the performance of the old filter loaded in the CDS system - the filtering is done offline using scipy, with the SOS coefficients from the C1OAF.txt file (a minimal sketch of this follows the list).
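
A minimal sketch of how a trace like the grey one is produced offline (assuming the SOS coefficients have already been parsed out of the Foton file; a dummy filter and dummy data stand in for them here):

import numpy as np
from scipy import signal

fs = 64.0
t = np.arange(0, 1800, 1 / fs)
seis = np.random.randn(t.size)                                 # stand-in witness
mcl = signal.lfilter([0.6, 0.2], [1.0], seis) + 0.05 * np.random.randn(t.size)

# stand-in for the SOS coefficients parsed from the C1OAF Foton file
sos = signal.butter(4, [0.5, 5.0], btype='bandpass', fs=fs, output='sos')

prediction = signal.sosfilt(sos, seis)                         # offline FF path
residual = mcl - prediction

# Welch-averaged ASDs of the raw and subtracted target
f, P_mcl = signal.welch(mcl, fs=fs, nperseg=4096)
_, P_res = signal.welch(residual, fs=fs, nperseg=4096)
asd_mcl, asd_res = np.sqrt(P_mcl), np.sqrt(P_res)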

Conclusions + next steps

  1. Retraining the filters has resulted in a slight improvement, especially at ~3 Hz.
  2. More tests need to be done to confirm that noise isn't being reinjected in the frequency bands where subtraction isn't possible (e.g. using arm cavities as OOL sensors).
  3. The online filter isn't quite as good as what we would expect from calculations (green trace is noisier than gold). Need to think about why this is.
  4. Why can't we get more subtraction at 1 Hz?
  5. Now that I have the infrastructure ready, I will attempt to revive the PRC angular FF loops, which was the whole point of this exercise. 
  15291   Thu Apr 2 15:53:01 2020   gautam   Update   ASC   PRMI 1f locked for collecting feedforward data

This afternoon, I kept the PRM locked for ~1 hour and then measured transfer functions from the PRM angular actuators to the POP QPD spot motion (pitch and yaw) between ~1pm and 4pm. After this work, the PRM was misaligned again. I will now work on the feedforward filter design.

  15292   Thu Apr 2 16:31:33 2020   Jon   Update   CDS   C1AUXEY wiring + channel list
Quote:

I have made a wiring + channel list that need to be included in the new C!AUXEY Acromag.

I used Yehonathan's wiring assignments to lay the rest of groundwork for the final slow controls machine upgrade, c1auxey. Actions completed:

  • Created an internal wiring diagram for assembling the Acromag chassis (log in with LIGO.ORG credentials to view/edit)
  • Created a new target directory on the network drive:
/cvs/cds/caltech/target/c1auxey1

The "1" will be dropped after the new system is permanently installed.

  • Populated the target directory with files:
    • modbusIOC.service - wraps the EPICS IOC as a systemd service
    • ETMYaux.env - defines the EPICS environment variables
    • ETMYaux.cmd - command file to set up the EPICS IOC
    • ETMYaux.sh - enables DAC outputs to the suspension (executed last)
  • Created the EPICS channel databases:
    • ETMYaux.db - migration of the existing database
    • c1auxey_state.db - contains logic for loopback monitoring of the IOC "alive" state (visible from Sitemap > CDS > Slow Controls Status)

Hardware-wise, this system will require:

  • 2 Acromag XT-1221 units (ADC)
  • 1 Acromag XT-1541 unit (DAC)
  • 1 Acromag XT-1111 unit (sinking BIO)

I know that we do have these quantities left on hand. The next steps are to set up the Supermicro host and begin assembling the Acromag chassis. Both of these activities require an in-person presence, so I think this is as far as we can advance this project for now.

  15293   Thu Apr 2 22:19:18 2020   Koji   Update   CDS   C1AUXEY wiring + channel list

We want to migrate the end shutter controls from c1aux to the end acromags. Could you include them in the list if they are not there yet?

This will let us remove c1aux from the rack, I believe.

 

  15294   Fri Apr 3 12:09:53 2020   Jon   Update   CDS   C1AUXEY wiring + channel list
Quote:

We want to migrate the end shutter controls from c1aux to the end acromags. Could you include them to the list if not yet?

This will let us remove c1aux from the rack, I believe.

Yehonathan's list does include C1:AUX-GREEN_Y_Shutter and I copied its definition from /cvs/cds/caltech/target/c1aux/ShutterInterlock.db into the new ETMYaux.db file.

I noticed ShutterInterlock.db still contains about a dozen channels. Some of them appear to be ghosts (like the C1:AUX-PSL_Shutter[...] set, which has since become C1:PSL-PSL_Shutter[...] hosted on c1psl) but others like C1:AUX-GREEN_X_Shutter appear to still be in active use.

  15295   Fri Apr 3 13:40:07 2020   Jon   Update   BHD   BHD front-end complication

I wanted to pass along a complication pointed out by K. Thorne re: our plan to use Gen1 (old) Dolphin IPC cards in the new real-time machines: c1bhd, c1sus2. The implication is that we may be forced to install a very old OS (e.g., Debian 8) for compatibility with the IPC card driver, which could lead to other complications like an incompatibility with the modern network interface.

Hardware is easy - you will also need a DX switch and the cables

As for the driver - the last update (version 4.4.5) was in 2016.  The notes on it say valid for Linux kernel 2.6 to 3.x.  This implies that it will not work with Linux kernel 4.x and greater

So - Gentoo with 3.0 kernel OK, SL7 (kernel 3.10)  - OK,   Debian 8 (kernel 3.16) - OK   

But Debian 9 (kernel 4.9),Debian 10 (kernel 4.19) - NOT OK

We have Gentoo with kernel 3.0  boot server, etc. [used in L1,H1 production right now, but not much longer] The hard part here will be making sure we have network drivers for the SuperMicro 5018-MR.

CDS was never able to get real-time builds to work well on Linux kernels from 3.2 on up until we got to Debian 9. This is not to say that the tricks and stripped-down RCG we found worked for real-time on Debian 9 and 10 won’t work on, say, Debian 8.  But we have not tried.

I have a query out to Dolphin asking:

  1. Have they done any testing of these old drivers on Linux kernel 4.x (e.g., Debian 9/10)?
  2. Is there any way to buy modern IPC cards for the two new machines and interface them with our existing Gen1 network?

I'll add more info if I hear back from them.

  15296   Fri Apr 3 17:15:53 2020   gautam   Update   ASC   POP angular FF filters trained and tested

Summary:
Using the data I collected yesterday, the POP angular FF filters have been trained. The offline time-domain performance looks (unbelievably) good; online performance will be verified at the next available opportunity (see update).

Details:

The sequence of steps followed is the same as that done for the MCL FF filters. The trace that is missing from Attachment #1 is the measured online subtraction. Some rough notes:

  • The "target" channels for the subtraction are the POP QPD PIT/YAW signals, normalized by the QPD sum. For the time that the PRMI was locked yesterday, the QPD readouts suggested that the beam was well centered on the QPD, but the POP QPD (OT-301) doesn't give me access to individual quadrant signals so I couldn't actually verify this.
  • I used 64s impulse time on the FIR filter for training. Maybe this is too long, but anyways, the calculation only takes a few seconds even with 64^2 taps.
  • I found that the Levinson algorithm sometimes failed for this particular dataset. I didn't bother looking too much into why this is happening; the brute-force matrix inversion took ~4 times longer, but it was still only ~5 seconds to calculate the optimal filter for 20 mins of training data sampled at 64 Hz (see the sketch after this list).
  • The actuator TF was measured with >0.9 coherence between 0.3 Hz - 10 Hz and fitted, and the fit was used for subsequent analysis. Fit is shown in Attachment #2.
  • FIR to IIR fitting took considerable tweaking, but I think I got good enough fits, see Attachments #3, #4. In fact, there may be some benefit to making the shape smoother outside the subtraction band, but I couldn't get IIRrational to cooperate. Need to confirm that this isn't re-injecting noise.
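
Regarding the Levinson failure in the third bullet, the fallback amounts to something like this (a sketch; regularization and the multi-witness bookkeeping are omitted):

import numpy as np
from scipy.linalg import solve_toeplitz, toeplitz

def wiener_taps(r_ww, r_wt):
    """Solve the Wiener-Hopf equations R w = p for the FIR taps.

    r_ww : witness autocorrelation at lags 0 .. n_taps-1
    r_wt : witness->target cross-correlation at the same lags"""
    try:
        # fast path: Levinson recursion on the symmetric Toeplitz system
        return solve_toeplitz(r_ww, r_wt)
    except np.linalg.LinAlgError:
        # brute force: build the dense Toeplitz matrix and solve directly
        # (a few times slower, but still only seconds for ~4096 taps)
        return np.linalg.solve(toeplitz(r_ww), r_wt)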

Update Apr 5 1145pm:

  • Attachment #1 has now been updated to show the online performance. The comparison between the "test" and "validation" datasets isn't really apples-to-apples because they were collected at different times, but I think there's enough evidence here to say that the feedforward is helping.
  • Attachment #5 shows that the POP DC (= PRC intracavity buildup) RMS has been stabilized by more than x2. This signal wasn't part of the training process, and I guess it's good that the intracavity power is more stable with the feedforward on. Median averaging was used for the spectral densities, there were still some abrupt glitches during the time this dataset was collected.
  • The next step is to do the PRFPMI locking with all of these recently retuned feedforward loops engaged and see if that helps things.
Quote:

This afternoon, I kept the PRM locked for ~1hour and then measured transfer functions from the PRM angular actuators to the POP QPD spot motion for pitch and yaw between ~1pm and 4pm. After this work, the PRM was misaligned again. I will now work on the feedforward filter design.

  15297   Mon Apr 6 12:26:07 2020   rana   Update   ASC   POP angular FF filters trained and tested

that's pretty great performance. maybe you can also upload some code so that we can do it later too - or maybe in the 40m GIT

I wonder how much noise is getting injected into PRC length at 10-100 Hz due to this. Any change in the PRC ERR?

  15298   Mon Apr 6 16:46:40 2020   gautam   Update   ASC   POP angular FF filters trained and tested

I don't have a recent measurement of the optical gain of this config so I can't undo the loop, but in-loop performance doesn't suggest any excess in the 10-100 Hz band. Interestingly, there is considerable improvement below 10 Hz. Maybe some of this is reduced A2L noise because of the better angular stability, but there is also improvement at frequencies where the FF isn't doing anything, so could be some bilinear coupling. The two datasets were collected at approximately the same time in the evening, ~5pm, but on two different days.

Quote:

I wonder how much noise is getting injected into PRC length at 10-100 Hz due to this. Any change the PRC ERR?

  15299   Tue Apr 7 10:56:39 2020   Jon   Update   BHD   BHD front-end complication
Quote:

I have a query out to Dolphin asking:

  1. Have they done any testing of these old drivers on Linux kernel 4.x (e.g., Debian 9/10)?
  2. Is there any way to buy modern IPC cards for the two new machines and interface them with our existing Gen1 network?

Answers from Dolphin:

  1. No, and kernel 4.x (modern Linux) definitely will not work with the Gen1 cards.
  2. No, cards using different PCIe chipsets cannot be mixed.

Since upgrading every front end is out of the question, our only option is to install an old OS (Linux kernel 3.x or older) on the two new machines. Based on Keith's advice, I think we should go with Debian 8. (Link to Keith's Debian 8 instructions.)

  15300   Tue Apr 7 15:30:40 2020   Jon   Summary   NoiseBudget   40m noise budget migrated to pygwinc

In the past year, pygwinc has expanded to support not just fundamental noise calculations (e.g., quantum, thermal) but also any number of user-defined noises. These custom noise definitions can do anything, from evaluating an empirical model (e.g., electronics, suspension) to loading real noise measurements (e.g., laser AM/PM noise). Here is an example of the framework applied to H1.

Starting with the BHD review-era noises, I have set up the 40m pygwinc fork with a working noise budget which we can easily expand. Specific actions:

  • Updated the 40m fork to the latest pygwinc version (while preserving the commit history).
  • Added a directory ./CIT40m containing the 40m-specific noise budget files (created by GV).
  • Added an ipython notebook CIT40m.ipynb at the root level showing how to generate a noise budget.
  • Integrated our DAC and seismic noise estimators into pygwinc.
  • Marked the old 40m NB repo as obsolete (last commit > 2 yrs ago). Many of these noise estimates are probably stale, but I will work with GV to identify which ones can be migrated.

I set up our fork in this way to keep the 40m separate from the main pygwinc code (i.e., not added to as a built-in IFO type). With the 40m code all contained within one root-level directory (with a 40m-specific name), we should now always be able to upgrade to the latest pygwinc without creating intractable merge conflicts.
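
As a usage sketch of what the notebook does (from memory of the pygwinc interface, so the exact call signatures may differ from what is in CIT40m.ipynb):

import numpy as np
import gwinc

freq = np.logspace(0, 4, 1000)                      # 1 Hz - 10 kHz
budget = gwinc.load_budget('./CIT40m', freq=freq)   # the 40m-specific directory
traces = budget.run()                               # evaluate all the noise terms
fig = traces.plot()                                 # stacked noise-budget plot
fig.savefig('CIT40m_noisebudget.pdf')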

  15301   Mon Apr 13 15:28:07 2020   Koji   Update   General   Power Event and recovery

[Larry (on site), Koji & Gautam (remote)]

Network recovery (Larry/KA)

  • Asked Larry to get into the lab. 

  • 14:30 Larry went to the lab office area. He restarted (power cycled) the edge-switch (on the rack next to the printer). This recovered the ssh-access to nodus. 

  • Also Larry turned on the CAD WS. Koji confirmed the remote access to the CAD WS.

Nodus recovery (KA)

  • Apr 12, 22:43 nodus was restarted.

  • Apache (dokuwiki, svn, etc) recovered along with the systemctl command on wiki

  • ELOG recovered by running the script

Control Machines / RT FE / Acromag server Status

  • Judging by uptime, basically only the machines that are on UPS (all control room workstations + chiara) survived the power outage. All RT FEs are down. Apart from c1susaux, the acromag servers are back up (but the modbus processes have NOT been restarted yet). Vacuum machine is not visible on the network (could just be a networking issue and the local subnet to valves/pumps is connected, but no way to tell remotely).

  • KA imagines that FB took some finite time to come up. However, the RT machines require FB to download their OS, so they stayed down. If so, what we need to do is power cycle them.

  • Acromag: unknown state

The power was lost at Apr 12 22:39:42, according to the vacuum pressure log. The power loss was for a few min.

  15302   Mon Apr 13 16:51:49 2020   rana   Summary   DAQ   NODUS: rsyncd daemon / service set up

I just now modified the /etc/rsyncd.conf file as per Dan Kozak's instructions. The old conf file is still there with the file name appended with today's date.

I then enabled the rsync daemon to run on boot using 'enable'. I'll ask Dan to start the file transfers again and see if this works.

controls@nodus|etc> sudo systemctl start rsyncd.service
controls@nodus|etc> sudo systemctl enable rsyncd.service
Created symlink from /etc/systemd/system/multi-user.target.wants/rsyncd.service to /usr/lib/systemd/system/rsyncd.service.
controls@nodus|etc> sudo systemctl status rsyncd.service
● rsyncd.service - fast remote file copy program daemon
   Loaded: loaded (/usr/lib/systemd/system/rsyncd.service; enabled; vendor preset: disabled)
   Active: active (running) since Mon 2020-04-13 16:49:12 PDT; 1min 28s ago
 Main PID: 4950 (rsync)
   CGroup: /system.slice/rsyncd.service
           └─4950 /usr/bin/rsync --daemon --no-detach

Apr 13 16:49:12 nodus.martian.113.168.192.in-addr.arpa systemd[1]: Started fast remote file copy program daemon.
Apr 13 16:49:12 nodus.martian.113.168.192.in-addr.arpa systemd[1]: Starting fast remote file copy program daemon...

  15303   Tue Apr 14 23:50:06 2020   Koji   Update   General   40m power glitch recovery

[Koji / Gautam (Remote)]

Lab status

  • Gray Panel: The lab AC was off. Turned on all three (N/S, CTRL RM, E/W)
  • The control room AC was running.

Work stations

  • Control Room: All the control machines were running. We knew that nodus/chiara/fb were running
  • 1X6/7:
    • JETSTOR was making beeping sound. “Power #1 failed””power #2 failed”
    • Optimus & megatron were off -> turned on -> up and running now
  • 1X1/2:
    • Power cycled the netgear at the top of the IOO rack (maybe not necessary)
    • Turned on c1ioo -> up and running now
  • 1X4/5: Rebooted c1sus / c1lsc -> up and running now
  • 1X9: Rebooted c1iscex -> up and running now
  • 1Y4: Rebooted c1iscey -> up and running now

Vacuum status

  • Looked like everything was running as if it did not see the power glitch
  • TP1 normal: Set speed 33.6k rpm / Actual speed 33.6k rpm 
  • TP2 normal: 66k rpm / PTP2 16.0 mtorr
  • TP3 normal: 31k rpm / PTP3 45.4mtorr
  • P1 LOW / P2 1.7mtorr / CC2 1.1e-6 / P3 7.6e-2 / P4 LO
  • Annuli: 2.7~3torr
  • CC1 9.6e-6 / SUPER BEE 0.9mtorr

C1VAC recovery

  • c1vac was alive, but was isolated from the martian network
  • Checked the network I/F status with /sbin/ifconfig -a
    • eth0 had no IP
    • eth1 had the vac subnet IP (192.168.114.9)
  • Ran sudo /sbin/ifdown eth0 then  sudo /sbin/ifup eth0
  • The I/F eth0 started running and c1vac became visible from martian
  • Later checked the vacuum screen: The pressure values and valve statuses looked normal.
    The interlock state was “running”. The system state was “unrecognized”.

End RTS recovery 

  • The end slow machines (auxex and auxey) were already running
  • Restarting end RT models:
    • c1iscey -> rtcds start --all
    • c1iscex -> rtcds start --all
  • Confirmed that the models can damp the SUSs

Vertex RTS recovery

  • We wanted to use the reboot script. (/opt/rtcds/caltech/c1/scripts/cds/rebootC1LSC.sh)
  • c1susaux​​
    • To be safe, we wanted to bring up c1susaux first.
    • c1susaux does not bring the network I/Fs up automatically upon reboot.
      -> Connect an LCD display / keyboard / mouse to c1susaux
      -> Ran sudo /sbin/ifup eth0 and sudo /sbin/ifup eth1
    • Now c1susaux is visible from martian.
    • Login c1susaux and ran:  
      sudo systemctl start modbusIOC.service 
      -> c1susaux epics is up and running now
    • ...Meanwhile c1susaux lost its eth1 somehow. This made the slow values of 8 vertex sus all zero
      -> Ran sudo /sbin/ifdown eth1 and sudo /sbin/ifup eth1 again on c1susaux ->  this resolved the issue
  • c1psl
    • Login c1psl and ran:  
      sudo systemctl start modbusIOC.service 
      -> c1psl epics is up and running now
  • Prepared for the rebooting script
    • Ran /opt/rtcds/caltech/c1/scripts/cds/rebootC1LSC.sh
    • Rebooting was done successfully. All the suspensions looked free and healthy.
    • Burtrestored c1susaux (used Apr 12 21:19 snapshot)

Hardware

  • PSL laser / Xend AUX laser / Yend AUX laser were off -> turned on
  • The PMC was immediately automatically locked.
  • The main marconi was off -> forgot to turn on
  • The end temp controllers for the SHG crystals were on but not enabled -> now enabled

RTS recovery ~ part 2

  • FB: FB status of all the RTS models were still red
  • Timing: c1x01/2/3/5 were 1 sec behind FB and c1x04 was 2 sec behind
  • -> Remedy:  https://nodus.ligo.caltech.edu:8081/40m/14349
    • Software rebooting of FB
    • Manually start the open-mx and mx services using
    • sudo systemctl start open-mx.service 
    • sudo systemctl start mx.service
    • Check that the system time returned by gpstime matches the gpstime reported by internet sources. e.g. http://leapsecond.com/java/gpsclock.htm
    • Manually start the daqd processes using
      sudo systemctl start daqd_*
  • This made all the FB(FE) indicators green!
  • Ran the reboot script again -> All green!

IMC recovery

  • The IMC status was checked
  • No autolocker, but it could be manually locked, i.e. MC1/2/3 were not very misaligned
  • Autolocker/Slow FSS recovery along with https://nodus.ligo.caltech.edu:8081/40m/15121
    • sudo systemctl start MCautolocker.service
    • sudo systemctl start FSSSlow.service
  • Both of them failed to run
  • Note by Gautam: The problem with the systemctl commands failing was that the NFS mount points weren’t mounted, which in turn was because of the familiar /etc/resolv.conf problem. I added chiara to the nameserver list in this file, and then manually mounted the NFS mount points. This fixed the problem.
    Now the IMC is locked and the autolocker is left running.

Burt restore

  • Used Apr 12 21:19 snapshot
  • c1psl
  • c1alsepics/c1assepics/c1asxepics/c1asyepics
  • c1aux/c1auxex/c1auxey/
  • c1iscaux/c1susaux
  • This brought the REFL and AS beams back to the CCDs. AS has small fringes.
  • Y arm has small IR flashes as well as green flashes.

JETSTOR recovery

  • JETSTOR was beeping. 
  • Shutdown megatron
  • Followed the instruction https://nodus.ligo.caltech.edu:8081/40m/13107
  • This stopped beeping. Waiting for JETSTOR to come up -> In a minute, JETSTOR display became normal and all disks showed green.
  • Bring megatron back up again

N2 bottle

  • The left N2 bottle was empty. The right one had 1500PSI.
  • Replaced the left bottle with the spare one in the room.
  • Now the left one 2680PSI and the right one 1400PSI.

Closing

  • Closed PSL/AUX laser shutters
  • Turned off the lights in the lab, CTRL room, and the office.

Remaining Issues

  • [done] MCAutoLocker / FSSSlow scripts are not running
  • The PRM alignment slider has no effect (although the PRM is aligned…) -> SLOW DAQ frozen???
  • JETSTOR is not mounted on megatron [gautam mounted Jetstor on megatron on 4/18 at 2pm]
  15304   Wed Apr 15 15:15:17 2020   Chub   Update   VAC   nitrogen cylinders delivered

Four nitrogen cylinders replaced the empties in the rack at the west entrance.  Additionally, Airgas will now deliver only once a week.  Let me know via email or text when there are four empties in the rack and I'll order the next round.

  15305   Thu Apr 16 21:13:20 2020   Jon   Update   BHD   BHD optics specifications

Summary

I've generated specifications for the new BHD optics. This includes the suspended relay mirrors as well as the breadboard optics (but not the OMCs).

To design the mode-matching telescopes, I updated the BHD mode-matching scripts to reflect Koji's draft layout (Dec. 2019) and used A La Mode to optimize ROCs and positions. Of the relay optics, only a few have an AOI small enough to allow curvature (without excessive astigmatism), and most of those do not have much room to move. This considerably constrained the optimization.

These ROCs should be viewed as a first approximation. Many of the distances I had to eyeball from Koji's drawings. I also used the Gaussian PRC/SRC modes from the current IFO, even though the recycling cavities will both slightly change. I set up a running list of items like these that we still need to resolve in the BHD README.

Optics Specifications

At a glance, all the specifications can be seen in the optics summary spreadsheet.

LO Telescope Design

The LO beam originates from the PR2 transmission (POP), near ITMX. It is relayed to the BHD beamsplitter (and mode-matched to the OMCs) via the following optical sequence:

  • LM1 (ROC = +10 m, AOI 3°)
  • LM2 (Flat, AOI  45°)
  • MMT1 (Flat, AOI  5°)
  • MMT2 (ROC = +3.5 m, AOI  5°)

The resulting beam profile is shown in Attachment 1.

AS Telescope Design

The AS beam is relayed from the SRM to the BHD beamsplitter (and mode-matched to the OMCs) via the following sequence:

  • AS1 (ROC = +1.5 m, AOI  3°)
  • AS2 (Flat, AOI  45°)
  • Lens (FL = -125 mm)

A lens is used because there is not enough room on the BHD breadboard for a pair of (low-AOI) telescope mirrors, like there is in the LO path. The resulting beam profile is shown in Attachment 2.

  15306   Sat Apr 18 13:32:31 2020   rana   Update   Cameras   GigE w better NIR sensitivity

There's this elog from Stephen about better 1064 sensitivity from Basler. We should consider getting one if he finds that its actual SNR is as good as we would expect from the QE improvement.

Might allow for better scatter measurements - not that we need more signal, but it could allow us to use shorter exposure times and reduce blurring due to the wobbly beams.

  15307   Sat Apr 18 14:57:44 2020   Yehonathan   Update   Loss Measurement   Arm transfer function measurement

Ok, now I understand my foolishness. It should definitely be 1/sqrt(f^2+fp^2).

Quote:
However, the data seem to fit well to 1/sqrt(f^2+fp^2) - electric field response - but not to 1/(f^2+fp^2) - intensity response. (Attachment 3).
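
For reference, the two functional forms being compared, with f_p the cavity pole frequency:

|H_{\mathrm{field}}(f)| \propto \frac{1}{\sqrt{f^2 + f_p^2}}, \qquad
|H_{\mathrm{intensity}}(f)| = |H_{\mathrm{field}}(f)|^2 \propto \frac{1}{f^2 + f_p^2}
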
  15308   Mon Apr 20 17:49:58 2020   gautam   Update   General   Some housekeeping
  • Empty N2 replaced. 
  • Logged back into zita and started the StripTool traces (even though we keep the TV off nowadays).
  • c1susaux acro-crate power cycled to re-enable PRM suspension control (all other vertex optics also now respond to slow bias voltage sliders being moved).
  • c1iscaux needed a hard reboot as it wasn’t seen on martian. I power cycled the crate for good measure.
  • Marconi turned back on with correct frequency/amplitude.
  • c0rga is now seen again on martian network. I re-enabled the RGA scanning so that it takes a scan every morning at 4am. 
  • The forepumps for TP2/TP3 are noisier than I remember. The former has ~10,000 hrs on the clock. How often does the tip seal replacement need to happen?
  • HV supplies for ASX/ASY PZTs re-energized.
  • IFO re-aligned for locking.
  • c1oaf and c1daf models restarted. c1oaf required the usual start/stop/start sequence to make the DAQ errors go away, and luckily the FE didn’t crash when the model was unloaded.
  • POX/POY/PRMI 1f carrier/green locking all was smooth.
  • For some reason, the PRC angular FF filters I trained no longer do anything good (but MCL is still good). Collected 20 mins of PRMI 1f locked data for investigations.
Update 21 Apr 2020 1200: Looking at Attachments #1 and #2, the spectra for motion sensed by the POP QPD does indeed look very different on Apr 6 vs Apr 20. Could be some interference from Oplev loop or maybe some EPICS values didn't get reset correctly, needs more investigation. It doesn't seem reasonable to me that the plant changes by so much (spectra were taken at similar times of the day, ~5pm).
Update 22 Apr 2020 1500: As suspected, the PRM oplev was disabled for whatever reason. Re-enabling it, I recovered the good performance from two weeks ago. ✅ 
  15309   Wed Apr 22 13:52:05 2020   gautam   Update   DetChar   Summary page revival

Covid 19 motivated me to revive the summary pages. With Alex Urban's help, the infrastructure was modernized, the wiki is now up to date. I ran a test job for 2020 March 17th, just for the IOO tab, and it works, see here. The LDAS rsync of our frames is still catching up, so once that is up, we can start the old condor jobs and have these updated on a more regular basis.

  15310   Wed Apr 22 17:29:14 2020   gautam   Update   PSL   FSS debugging attempts

Summary:

On Monday, I hooked up an AG4395 to the PMC error point (using the active probe). The idea was to take a spectrum of the PMC error point every time the FSS PC drive RMS channel indicated an excursion from the nominal value. An initial look at the results doesn't suggest that this technique is particularly informative. I'll have to think more about a workaround, but please share your ideas/thoughts if you have some.

Also, the feature in the spectrum at ~110 kHz makes me suspect some kind of loop instability. I'll measure the IMC loop OLG at the next opportunity.

Details:

  • The PMC servo bandwidth is ~2 kHz, so above this, the PMC error point should be a faithful monitor of the PSL frequency noise, provided the sensing noise is low enough.
  • The PMC error point sensing noise is ~100nV/rtHz (I'm monitoring this straight after the Minicircuits mixer+bandpass filter that we are using as a demodulator). This corresponds to ~2 Hz/rtHz, using the ~10 MHz/V PDH discriminant calibration from January. Seems consistent with this elog.
  • I was hoping to see if there was a particular frequency band in which the noise gets elevated, and if the crossover frequency is a few kHz and the IMC servo BW is ~110 kHz, I would have expected this to be in the 10-100 kHz region. Possibly my frequency resolution isn't good enough? But with the Agilent, doing a finer grid would mean a longer measurement time, in which case the IMC might lose lock before the measurement is done.
  • But, as shown in Attachment #1, there isn't any clear evidence, from the ~20 excursions that were recorded last night. The color of the line is meant to be indicative of the average value of the PC drive RMS channel in the measurement time.
  • A significant bottleneck in this whole process is that it takes ~1 minute to initiate the GPIB measurement and download the data. The pseudo-code I used is as follows (a rough Python rendering is sketched after this list):
    • While the IMC is locked, watch PCdrive RMS EPICS channel's "ALARM" state, which becomes non-zero when the PCdrive RMS exceeds 1 V (this is how it is defined in the EPICS db record right now).
    • Make sure this isn't a transient feature - I do this by waiting 5 seconds and checking that the ALARM flag is still flagged.
    • Initiate a AG4395 measurement over GPIB - I use the measurement span of 1 kHz - 1 MHz with a BW/span ratio of 0.1%, 5 averages.
    • Check that the IMC is still locked (if it got unlocked while the measurement was made, presumably the measurement is garbage).
  • Is there a better monitor of the laser frequency noise? I can imagine using POX/POY which I think have a lower electronics noise floor but I'm not sure if that's true at 100 kHz and having the arms locked in addition to the IMC seems more complicated...
  • Since we are planning a laser upgrade, is this worth spending more time on? I may leave the measurement running on pianosa in a tmux session while I'm not in the lab...
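
A rough Python rendering of the monitor loop above (the channel names, thresholds and the AG4395 wrapper are placeholders; the real script calls the existing GPIB utilities):

import time
from epics import caget                          # pyepics

ALARM_CH = 'C1:PSL-FSS_PCDRIVE_ALARM'            # placeholder, not the real record name
MC_TRANS_CH = 'C1:IOO-MC_TRANS_SUM'              # placeholder IMC lock monitor

def imc_locked():
    return caget(MC_TRANS_CH) > 1e4              # placeholder lock criterion

def take_ag4395_spectrum():
    # placeholder wrapping the existing AG4395 GPIB script:
    # 1 kHz - 1 MHz span, BW/span = 0.1%, 5 averages, then download the data
    pass

while True:
    if imc_locked() and caget(ALARM_CH) != 0:    # PCdrive RMS alarm raised
        time.sleep(5)                            # reject transients
        if imc_locked() and caget(ALARM_CH) != 0:
            take_ag4395_spectrum()
            if not imc_locked():
                pass                             # lock lost mid-measurement: discard
    time.sleep(1)
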
  15311   Thu Apr 23 09:52:02 2020   Jon   Update   Cameras   GigE w better NIR sensitivity

Nice, and we should also permanently install the camera server (c1cam) which is still sitting on the electronics bench. It is running an adapted version of the Python 2/Debian 8 site code. Maybe if COVID continues long enough I'll get around to making the Python 3 version we've long discussed.

Quote:

There's this elog from Stephen about better 1064 sensitivity from Basler. We should consider getting one if he finds that its actual SNR is as good as we would expect from the QE improvement.

  15312   Thu Apr 23 10:42:02 2020   rana   Update   PSL   FSS debugging attempts

I had set up the 4395 to do this automatically a few years ago, but it looked at the FSS/IMC instead. When the PCDRIVE goes high there is this excess around ~500 kHz in a broad hump.

But the IMC loop gain changes sometimes with alignment, so I don't know if its a loop instability or if its laser noise. However, I think we have observed PCDRIVE to go up without IMC power dropping so my guess is that it was true laser noise.

This works since the IMC is much more sensitive than PMC. Perhaps one way to diagnose would be to lock IMC at a low UGF without any boosts. Then the UGF would be far away from that noise making frequency. However, the PCDRIVE also wouldn't have much activity.

  15313   Fri Apr 24 00:26:59 2020   rana   Summary   PEM   L.A. EQ from Tuesday night
  15314   Thu Apr 30 07:29:01 2020   Chub   Update   VAC   N2 delivered.

Hi All,

The new nitrogen cylinders were delivered to the rack at the west entrance.  We only get one Airgas delivery per week during the stay-at-home order, but so far they've not let us down.

  15315   Fri May 1 01:49:55 2020   gautam   Update   ALS   ASY commissioning

Summary:

It appears that the EY green steering PZTs have somehow lost their bipolar actuation range. I will check on them the next time I go to the lab for an N2 switch.

Details:

  • Yuki installed the EY green PZTs and did some initial setup of the RTCDS model. 
  • But we don't have a functional dither alignment servo yet, which is mildly annoying. So I thought I'd finally finish my SURF project.
  • There were several problems with the signal flow, MEDM screens etc.
  • I rectified these, and set up some operational scripts, burt snapshots etc in $SCRIPTS/ASY. The c1asy and c1als models were also modified, recompiled and restarted, everything appears to have come back online smoothly.
  • The LO frequencies/amplitudes, demod filter gains and demod phases were chosen to have a signal mostly in the _I quadrature of the demodulated signal when the alignment is slightly disturbed from optimal (monitored after the post-demod LPF).
  • While trying to close the integrator loops, I found that I appear to only have monopolar actuation ability (positive DAC output changes the alignment, negative DAC output does nothing).

Could be that the power outage busted something in the drive electronics. 

  15316   Fri May 1 22:44:17 2020   gautam   Update   ALS   ASY M2 PZT damaged

I went to EY and saw that the HV power supply was only putting out 50 V and had hit the current limit of 10 mA (nominally, it should be 100 V, drawing ~7mA). This is definitely a problem that has come up after the power shutdown event, as when I re-energized the HV power supply at EY, I had confirmed that it was putting out the nominal values (the supply was not labelled with these nominal numbers so I had to label it). Or maybe I broke it while running the dither alignment tests yesterday, even though I never drove the PZTs above 50 Hz with more than 1000cts (= 300 mV * gain 5 in the HV amplifier = 1.5 V ) amplitude.

The problem was confirmed to be with the M2 PZT (YAW channel) and not the electronics, by driving the M2 PZT with the M1 channels. Separately, the M1 PZT could be driven by the M2 channels. I also measured the capacitance of the YAW channels and found it to be nearly twice (~7 uF) the expected 3 uF - this particular PZT is different from the three others in use by the ASX and ASY systems; it is an older vintage, so maybe it just failed? 😔 

I don't want to leave 100 V on in this state, so the HV supply at EY was turned off. Good GTRY was recovered by manual alignment of the mirror mounts. If someone has a spare PZT, we can replace it, but for now, we just have to live with manually aligning the green beam often.

Quote:

Could be that the power outage busted something in the drive electronics. 

  15317   Sat May 2 02:35:18 2020   Koji   Update   ALS   ASY M2 PZT damaged

Yes, we are supposed to have a few spare PI PZTs.

  15318   Tue May 5 23:44:14 2020   gautam   Update   ASC   IMC WFS

Summary:

I've been thinking about the IMC WFS. I want to repeat the sort of analysis done at LLO where a Finesse model was built and some inferences could be made about, for example, the Gouy phase separation b/w the sensors by comparing the Finesse sensing matrix to a measured sensing matrix. Taking the currently implemented output matrix as a "measurement" (since the IMC WFS stabilize the IMC transmission), I don't get any agreement between it and my Finesse model. Could be that the model needs tweaking, but there are several known issues with the WFS themselves (e.g. imbalanced segment gains).

Building the finesse model:

  • I pulled the WFS telescopes from Andres' elogs/SURF report, which I think was the last time the WFS telescopes were modified.
  • The in-vacuum propagation distances were estimated from CAD diagrams.
  • According to my model, the Gouy phase separation between the two WFS heads is ~70 degrees, whereas Andres' a la mode simulations suggest more like 90 degrees. Presumably, some lengths/lenses are different between what I assume and what he used, but I continue the analysis anyway...
  • The appropriate power attenuations were placed in each path - one thing I noticed is that the BS that splits light between WFS1 and WFS2 is a 30/70 BS and not a 50/50. I don't see any reason why this should be (presumably it was to do with component availability); see below for Rana's comments.

Simulations:

  • The way the WFS servos are set up currently, the input matrix is diagonal while the output matrix encodes the sensing information.
  • In finesse, I measured the input matrix (i.e. response sensed in each sensor when an optic is dithered in angle). The length is kept resonant for the carrier (but not using a locking signal), which should be valid for small angular disturbances, which is the regime in which the error signals will be linear anyways.
  • Then I inverted the simulated sensing matrix so as to be able to compare with the CDS output matrix. Note that there is a relative gain scaling of 100 between the WFS paths and the MC2T QPD paths, which I added to the simulation. I also normalized the columns of the matrix by the largest element in the column, in an attempt to account for the various other gains between the optical sensing and the digitization (e.g. WFS demod boards, QPD transimpedance etc.); see the sketch after this list.
  • Attachment #1 shows the comparison between simulation and measurement. The two aren't even qualitatively similar, needs more thought...
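
The matrix manipulation in the last two bullets amounts to something like this (made-up numbers, pitch only):

import numpy as np

# simulated sensing matrix: rows = sensors (WFS1, WFS2, MC2 trans QPD),
# columns = dithered optics (MC1, MC2, MC3) -- illustrative numbers only
S = np.array([[1.0, -0.3, 0.8],
              [0.4, 1.2, -0.5],
              [0.02, 0.05, 0.03]])

# account for the relative digital gain of 100 between the WFS and QPD paths
S_scaled = S * np.array([[1.0], [1.0], [100.0]])

# invert to get an output-matrix analogue, then normalize each column by its
# largest-magnitude element to absorb the unknown overall gains
M_out = np.linalg.inv(S_scaled)
M_out = M_out / np.abs(M_out).max(axis=0)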

Some notes about the WFS heads:

  • The transimpedance resistor is 1.5 kohms. With the gain stages, the transimpedance gain is nominally 37.5 kohms, and 3.75 kohms when the attenuation setting is engaged (as it is for 2/4 quadrants on each head).
  • Assuming a modulation depth of 0.1, the Johnson noise of the transimpedance resistor dominates (with the MAX4106 current noise a close second; the estimate is worked out just after this list), and these heads cannot be shot noise limited when operating at 1 W input power (though of course the situation will change if we have 25 W input).
  • The heads are mounted at a ~45 deg angle, mixing PIT/YAW, but I assume we can just use the input matrix to rotate back to the natural PIT/YAW basis.
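
The Johnson-noise figure referred to above, for the nominal 1.5 kohm at room temperature:

i_n = \sqrt{\frac{4 k_B T}{R}} = \sqrt{\frac{4 \times 1.38\times10^{-23}\ \mathrm{J/K} \times 300\ \mathrm{K}}{1.5\ \mathrm{k\Omega}}} \approx 3.3\ \mathrm{pA}/\sqrt{\mathrm{Hz}}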

Update 2:15 pm 5/6: adding in some comments from Rana raised during the meeting:

  1. The transimpedance is actually done by the RLC network (L6 and C38 for CH 3), and not 1.5 kohms. It just coincidentally happens that the reactance is ~1.5 kohms at 29.5 MHz. Note that my LTspice simulation using ideal inductors and capacitors still predicts ~4pA/rtHz noise at 29.5 MHz, so the conclusion about shot noise remains valid I think... One option is to change the attenuation in this path and send more light onto the WFS heads.
    The transimpedance gain and noise are now in Attachment #2. I just tweaked the L values to get a peak at 29.5 MHz and a notch at twice that frequency. For this I assumed a photodiode capacitance of 225pF and the shown transimpedance gain has the voltage gain of the MAX4106 stages divided out. The current noise is input referred.
  2. The imbalanced power on the WFS heads may have some motivation - it may be that the W/rad TFs for the two modes we are trying to sense (beam plane tilt vs beam plane translation) are not equal, so we want more light on the head with the weaker response.
  3. The 45 degree mounting of the heads is actually meant to decouple PIT and YAW.
  15319   Wed May 6 00:31:09 2020   gautam   Update   ALS   Optomechanics during CARM offset reduction

Summary:

The apparent increase in the ALS noise (witnessed in-loop, e.g. Attachment #2 here) during the CARM offset reduction may have an optomechanical origin. 

Details:

  • A simplified CARM plant was setup in Finesse - 3 mirror coupled cavity with PRM, ITM and ETM, 40m params for R/T/L used. 
  • For a sanity check, DC power buildup and coincident resonance of the PRC and arm cavity were checked. PRG and CARM linewidth also checks out, and scales as expected with arm losses.
  • To investigate possible optomechanical issues - I cut the input power to 300 mW (I estimate 600 mW incident on the PRM), set a PRG of ~20, to mimic what we have right now.
  • I drive the ITM at various CARM offsets, and measure the m/m transfer function to itself and the ETM.
  • Attachment #1 shows the results. 

Interpretation:

  • ericq had similar plots in his thesis, but I don't think the full implications of this effect were investigated, the context there was different.
  • The optomechanical resonance builds up at ~10 Hz and sweeps up to ~100 Hz as the CARM offset approaches zero, with amplification close to x100 at the resonance.
  • What this means is that the arm cavity is moving by up to 100x the ambient seismically driven displacements. 
  • The EX/EY uPDH servos have considerable gain at these frequencies, and so the AUX laser frequency can keep up with this increased motion (to be quantified exactly what the increase in residual is).
  • However, the ALS loop that maintains the frequency offset b/w the PSL and the AUX lasers is digital, and only has ~20 dB gain at 30 Hz - so the error signal for CARM control becomes noisier, as we see.
  • I speculate that the multiple peaky features in the in-loop error signal are a result of some dynamical effects which Finesse presumably does not simulate.
  • The other puzzler is: this simulation would suggest that approaching the zero CARM offset from the other side (anti-spring) wouldn't have such instabilities building up. However, I am reasonably sure I've seen this effect approaching zero from both sides, though I haven't checked in the last month.
  • Anyways, if this hypothesis is correct, we can't really take advantage of the ~8pm RMS residual noise performance of the IR ALS system sadly, because of our 250g mirrors and 800mW input power
  • Possible workarounds:
    • High BW ALS - this would give us more gain at ~30 Hz and this wouldn't be a problem anymore really. But in my trials, I think I found that the IN2 gain on the CM board has to be inverted for this to work (the IN1 path and the IN2 path share a common AO path polarity, and we need the two paths to have the opposite polarity).
    • Cut the input power - this would reduce the optomechanical action, but presumably the vertex locking becomes noisier. In any case, this isn't really practical without some kind of motorized/remote-controlled waveplate for power adjustment. 

Update 415pm 5/6: Per the discussion at the meeting, I have now uploaded as Attachment #2 the force-->displacement (i.e. m/N) transfer functions. I now think these are appropriate units. For the ALS case, we could convert the m/N to Hz/N of extra frequency noise imprinted on the AUX laser due to the increased cavity motion. Is W/N really better here, since the mechanism is extra frequency noise on a beatnote, and there isn't really a PDH or DC error signal?

  15320   Thu May 7 09:43:21 2020   rana   Update   ASC   IMC WFS

This is the doc from Keita Kawabe on why the WFS heads should be rotated.

  15321   Thu May 7 10:58:06 2020   gautam   Update   ASC   IMC WFS

OK, so the QPD segments are in the "+" orientation when the 40m IMC WFS heads are mounted at 45 deg. I thought "+" was the natural PIT/YAW basis, but I guess in the LIGO parlance the "X" orientation was considered more natural.

Quote:

This is the doc from Keita Kawabe on why the WFS heads should be rotated.

  15322   Fri May 8 14:27:25 2020   Hang   Update   BHD   New SRC gouy phase

[Jon, Hang]

After updating the 40 m Finesse file to incorporate the new SRC length (and the removal of SR2), we find that the current SRM radius of curvature is fine. Thus a replacement of the SRM is NOT required.

Basically, the new one-way SRC Gouy phase is 11.1 deg according to Finesse, which is very close to the previous value of 10.8 deg. Thus the transverse mode spacing should be essentially the same. 

The first attached plot shows the mode content calculated with Finesse. Here we have first offset DARM by 1 mdeg and misaligned the SRM by 10 urad. From top to bottom we show the amplitude of the carrier fields, the f1 sidebands, and the f2 sidebands, respectively. The red vertical line is the nominal operating point (thanks Koji for pointing out that we do signal recycling instead of extraction now). There is no direct co-resonance for the low-order TEM modes. (Note that the HOMs also appear to have peaks at \phi_srm = 0. This is just because the 00 mode is resonant and thus the seed for the HOMs is greater.)

We can also consider a clean case without mode interactions in the second plot. Indeed we don't see co-resonances of high order modes. 

  15323   Sat May 9 17:01:08 2020   Yehonathan   Update   Loss Measurement   40m Phase maps loss estimation
I took the phase maps of the 40m X arm mirrors and calculated the loss of a Gaussian beam due to a single bounce. I did this by simply calculating 1 - (overlap integral)^2, where the overlap is between the input Gaussian mode (calculated from the parameters of the cavity; waist ~ 3.1 mm) and the reflected beam (the Gaussian with the phase map imprinted on it). The phase maps were prepared using the PyKat surfacemap class to remove a flat surface and a spherical surface, to center them, etc. (Attachments 3, 4)
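
A sketch of the single-bounce loss calculation described above (grid construction and the PyKat phase-map handling are not shown; the beam size is the one quoted in the text):

import numpy as np

def single_bounce_loss(h, x, y, w=3.1e-3, lam=1064e-9, x0=0.0, y0=0.0):
    """Loss of a Gaussian beam on one normal-incidence reflection off a mirror
    with residual height map h(x, y) (flat + sphere already removed).

    h     : 2-D height map [m];  x, y : 1-D coordinate vectors [m]
    w     : beam radius on the optic (~3.1 mm at the ITM, per the text)
    x0,y0 : beam centering offsets, scanned to build the loss map"""
    X, Y = np.meshgrid(x, y)
    E_in = np.exp(-((X - x0)**2 + (Y - y0)**2) / w**2)         # field amplitude
    E_in /= np.sqrt(np.sum(np.abs(E_in)**2))                   # normalize on the grid
    E_refl = E_in * np.exp(1j * 2 * (2 * np.pi / lam) * h)     # 2kh phase on reflection
    return 1.0 - np.abs(np.sum(np.conj(E_in) * E_refl))**2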
 
I calculated the loss map (Attachments 1, 2: ~4x4 mm for the ITM, 3x3 mm for the ETM) by shifting the beam around the phase map. It can be seen that there is a great variation in the loss: some areas are < 10 ppm, some are > 80 ppm.
 
For the ITM (where the beam waist is) the average loss is ~23 ppm, and for the ETM it is ~61 ppm due to the enlarged beam. The ETM case is less physical because it takes a pure Gaussian as the input, whereas in reality the beam first interacts with the ITM.
 
I plan to do some first-order perturbation theory to include the cavity effects. I expect that the losses will be slightly lower due to HOMs not being completely lost, but who knows.
 
  15324   Mon May 11 00:12:34 2020   gautam   Update   LSC   RF only PRFPMI

Finally - Attachment #1. This plot uses 16 Hz EPICS data. All y-axes are uncalibrated for now, but TRX/TRY are normalized such that the POX/POY lock yields a transmission of 1. CARM UGF is only ~3 kHz, no boosts were turned on yet. 

Attachment #2 and Attachment #3 are phone photos of the camera images of the various ports. After some alignment work, the transmitted arm powers were ~200, i.e. PRG ~10. FWIW, this is the darkest I've ever seen the 40m dark port. c.f. 2016. Of course, the exposure time / ND filter / light levels could all have changed.

This work was possible during the daytime (~6pm PDT), but probably only because it was Sunday. The other rate-limiting factor here is the frankly terrible IMC duty cycle. TBH, I didn't honestly expect to get so far and ran out of time, but I think the next steps are:

  1. Turn on some sensing lines and calibrate CARM/DARM.
  2. Transition vertex control to 1f signals.
  3. Whiten DARM.
  4. Turn on some ASC for better power stabilization.
  5. Scan the CARM offset and check that we are truly on resonance
  6. Noise budget.

As usual, I would like to request that we don't change the IFO as far as possible until the BHD vent; I found it pretty difficult to get here.


Attachment #4 now shows the measured DARM OLTF when DARM is entirely on AS55_Q control. UGF is ~120 Hz and the phase margin is ~30 deg, seems okay for a first attempt. I'll now need to infer the OLTF over a wider range of frequencies by lining this measurement up with some model, so that I can undo the loop in plotting the DARM ASD.

  15325   Tue May 12 17:51:25 2020   rana   Summary   Computer Scripts / Programs   updated LESS syntax highlight on nodus

apt install source-highlight

then modified bashrc to point to /usr/share instead of /usr/bin

  15326   Tue May 12 18:16:17 2020   gautam   Update   LSC   Relative importance of losses in the arm and PRC

Attachment #1 is meant to show that having a T=500ppm PR2 optic will not be the dominant contributor to the achievable recycling gain. Nevertheless, I think we should change this optic to start with. Here, I assume:

  • \eta_A denotes the (average) round trip loss per arm cavity (i.e. ITM + ETM). Currently, I guess this is ~100ppm.
  • Fixed 0.5% loss from mode mismatch between the CARM mode and the PRC mode (the x-axis does NOT include this number).
  • No substrates/AR coatings inside the cavity.
  • For the nominal case, let's say the intracavity loss sums to 100 ppm.
  • For the T=500ppm PR2, I assumed a total of 550 ppm loss in the PRC.

In reality, I don't know how good the MM is between the PRC and the arms. All the scans of the arm cavity under ALS control, looking at the IR resonances, suggest that the mode-matching into the arm is ~92%, which I think is pretty lousy. Kiwamu and co. claim 99.3% matching into the interferometer, but in all the locks the REFL mode looks completely crazy, so idk.
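
To make the scaling concrete, a sketch of the PRG estimate (the transmissivities are nominal values assumed here and should be checked against the wiki):

import numpy as np

T_PRM, T_ITM, T_ETM = 0.056, 0.014, 14e-6     # assumed nominal 40m values

def prg(arm_rt_loss, prc_rt_loss):
    """Power recycling gain vs. arm round-trip loss and PRC round-trip loss
    (excluding the arm), for a resonant carrier and perfect mode matching."""
    r_prm, r_itm, r_etm = np.sqrt(1 - T_PRM), np.sqrt(1 - T_ITM), np.sqrt(1 - T_ETM)
    a = np.sqrt(1 - arm_rt_loss)
    # arm amplitude reflectivity on resonance (lossless ITM assumed)
    r_arm = (r_etm * a - r_itm) / (1 - r_itm * r_etm * a)
    g = np.sqrt(T_PRM) / (1 - r_prm * abs(r_arm) * np.sqrt(1 - prc_rt_loss))
    return g**2

# e.g. 100 ppm arm round-trip loss, 1000 ppm PRC round-trip loss
print(prg(100e-6, 1000e-6))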

  15327   Tue May 12 20:16:31 2020   Koji   Update   LSC   Relative importance of losses in the arm and PRC

Is \eta_A the roundtrip loss for an arm?

Thinking about the PRG=10 you saw:
- What's the current PR2/3 AR? 100ppm? 300ppm? The beam double-passes them. So (AR loss)x4 is added.
- Average arm loss is ~150ppm?

Does this explain PRG=10?

 
