Highlight of the night: the DRFPMI was held at arm powers > 110 for 20 seconds. ALS feedback was still running, but so was some nonzero REFL11 AO path action.
In short, time was spent finding the right FM trigger settings to keep the DRMI locked while CARM fluctuates through resonance, the right CARM offset at which to acquire DRMI lock, the order of operations for turning on the AO path and turning up the overall CARM gain, etc.
Sadly, for the past hour or so, the DRMI has refused to stay locked for more than ~20 seconds, so I haven't been able to push things much further. This is a shame, since I'm very nearly at the equivalent point in the PRFPMI locking script where the ALS control is turned off completely.
Progress was made. CARM was stably locked on RF only. DARM was RF only for a few moments before I typed in a wrong number...
A change was made to the LSC model's triggering section to make the DRMI hold more reliably at zero CARM offset. Namely, the POPDC signal now has its absolute value taken before the trigger matrix. Even unwhitened, it would occasionally go negative enough to break the DRMI trigger.
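To illustrate the failure mode, here's a toy version of the trigger logic (threshold values made up for illustration; the real logic lives in the front-end model):

```python
# Toy Schmitt trigger with hysteresis, like the LSC DoF triggers
# (on/off levels here are hypothetical).
def schmitt(signal, on_level=50.0, off_level=10.0):
    triggered = False
    for x in signal:
        if not triggered and x > on_level:
            triggered = True
        elif triggered and x < off_level:
            triggered = False
        yield triggered

# An unwhitened POPDC-like stream with a brief negative glitch:
popdc = [60, 70, 65, -30, 68, 72]
print(list(schmitt(popdc)))            # drops out at the glitch
print(list(schmitt(map(abs, popdc))))  # abs() keeps it latched throughout
```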
AUX X laser was acting up again. As before, tweaking laser current is the temporary fix.
Please clarify: were you at the zero offset for CARM and DARM, or not?
Yes, this was at the full DRFPMI resonance.
Look upon this three second lock, ye Mighty, and rejoice!
At 10:02AM, the N2 pressure fell below 60 PSI. The watch script saw this happen, but I did not receive the email it is supposed to send.
C1:Vac-P1_pressure reads 7e-4, which is the same as it has for the past ~2 days, so the V1 interlock worked fine.
I've put some fresh N2 into the system, and Bob will pop in over the weekend to check it. I'll stay on top of it until Steve gets back.
After consulting ELOGs and the 40m wiki, I reasoned it was ok to open V1 to reconnect the turbo pump to the main IFO volume, and VM1 to reconnect the RGA; I have now done so.
To get a better look at how to do fast ALS, I took some "Plant TF" measurements of the X arm.
Specifically, in single-arm POX lock with both Y test masses misaligned, I used the SR785 to inject into EXC B of the common mode board, with the CM fast output gain and IMC IN2 gain both at 0 dB, and looked at the transfer function of that excitation into the analog ALSX I and AS55 Q out-of-loop signals. (ALSX I tuned to a zero crossing via the delay line box, as usual.)
My expectation was to see them differ only by the IR single-arm cavity pole, which should be around 4.3 kHz (the half-linewidth: FSR/(2 x 450) = 3.9 MHz/900 ~ 4.3 kHz). The green cavity pole at ~18k shouldn't show up since we're not touching the green light, and the IMC pole at ~3.8kHz shouldn't show up since this is well within the IMC loop bandwidth and we're actuating on its error point.
Instead, I see them differ by a double pole at 4.3 kHz (or a double zero, looked at the reciprocal way). Vectfit actually fits them as a slightly complex pair, with a Q of 0.53. I imagine that the wiggles are due to the digital control loop.
My question is: why is there a double zero here? Where has my reasoning led me astray?
Ah, I understand it now! Since the additive offset path keeps the post-IMC-cavity frequency TF flat, the pre-cavity frequency must grow above the IMC pole to compensate, which is why ALS (which senses the pre-IMC light) sees a zero there.
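As a numerical sanity check: the arm pole (~4.3 kHz) together with the IMC zero appearing as a pole in the AS55/ALS ratio (~3.8 kHz) should fit as a second-order pair with Q just under 0.5:

```python
# Express (1 + s/w1)(1 + s/w2) as 1 + s/(Q*wn) + (s/wn)**2
import numpy as np

f1, f2 = 4.3e3, 3.8e3              # Hz: arm cavity pole, IMC pole
w1, w2 = 2*np.pi*f1, 2*np.pi*f2
wn = np.sqrt(w1*w2)                # from the s**2 coefficient
Q = wn / (w1 + w2)                 # from the s coefficient
print(f"fn = {wn/(2*np.pi):.0f} Hz, Q = {Q:.3f}")  # -> fn ~ 4042 Hz, Q ~ 0.499
```

Real pole pairs can only give Q <= 0.5 (equal poles saturate the bound), so vectfit calling it slightly complex at Q = 0.53 is within fitting slop.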
Ok, so this means we want to apply two lowpasses to the ALS signal for use as fast CARM control, if we want it to be capable of scalar blending with REFL11: one at ~120Hz to imitate the CARM coupled cavity pole present in REFL11, and one at ~3.8kHz to undo the "IMC cavity zero" present in ALS.
At this point, I'm starting to prefer an active circuit to do this lowpassing; using LISO to check designs for two cascaded passive LPFs, it looks like the ALS signal would have to be attenuated by a factor of ~20 at DC if we don't use resistors smaller than 1k, given the low input impedance of the CM board.
A few minutes ago, Gautam and I were poking around the IOO rack, looking at where he should power his frequency divider box, and what ADC inputs to use.
Looking at the mode cleaner signals, it looks like we may have jostled something in a good way. Weird.
Despite our best efforts, the grappa remains out of reach: the DRFPMI was not locked tonight.
We spent a fair amount of time with the AUX X laser, as it was glitching madly again.
DRMI was finicky until I found some more reliable triggering settings: acquiring with AS110Q, then transitioning the trigger to the same POP22+POPDC combo used for PRCL and MICH. With this in place, the DRMI lock holds seemingly indefinitely no matter what CARM does; or at least, every lock loss after this change was due to CARM shenanigans.
The most frustrating part was the fact that I just couldn't cross over the AO path stably. It never "clicked" into high circulating power as it normally does (either in PRFPMI, or how it was last week). Various crossover filters and tweaks were attempted to no avail. Morning traffic starts soon, so we're calling it a night.
I've made a cascaded passive 2-pole pomona box for fast ALS use, using LISO to check that it'll give the right shape when hooked up to the CM board's input stage.
First stage is a 133Ohm + 10uF cap for ~120Hz LP, second is 1.15kOhm + 47nF cap for ~3.8kHz LP. The DC gain is ~0.75, which is much better than what I was doing before. The second stage would normally make a 2.9kHz LPF on its own, but the loading of the input stage moves the corner up.
It seems the 133 Ohm resistor is a reasonable load on the output AD829 of the ALS demod board (short-circuit output current of 32 mA, and a series output resistor of 499 Ohm). To be able to use the digitized ALSX I signal and the lowpassed analog version simultaneously, I had to buffer the signal with an SR560 before the pomona box; otherwise the signals looked distorted. This isn't a good long-term solution. Maybe I can use the further-buffered differential output to drive the LPF+CM board.
The LISO files used to model the filter and CM board input stage, and fit the pole frequencies are attached.
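As a back-of-envelope cross-check of the LISO model, here's the divider chain worked out by hand, assuming (my assumption, not a measured value) that the CM board input looks like a ~4 kOhm resistive load:

```python
# Two-stage passive LPF loaded by the CM board input.
# ASSUMED: Rload ~ 4 kOhm resistive (the attached LISO files model the real stage).
import numpy as np

R1, C1 = 133.0, 10e-6       # stage 1: ~120 Hz corner
R2, C2 = 1.15e3, 47e-9      # stage 2: ~2.9 kHz corner unloaded
Rload = 4e3

def H(f):
    """R1 -> C1 -> R2 -> (C2 || Rload) voltage divider chain."""
    s = 2j * np.pi * f
    Z2 = 1.0 / (s*C2 + 1.0/Rload)    # C2 in parallel with the load
    Zin2 = R2 + Z2                   # what stage 2 presents to stage 1
    Z1 = 1.0 / (s*C1 + 1.0/Zin2)     # C1 loaded by stage 2's input
    return (Z1 / (R1 + Z1)) * (Z2 / Zin2)

print(abs(H(1)))                     # DC gain ~0.76
print(abs(H(120)), abs(H(3.8e3)))    # response at the two corners
```

With that load, the second corner comes out near 3.8 kHz and the DC gain near 0.75, consistent with the numbers above.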
I made some attempts to get the AO path going today, but I suspect the daytime noise is just too much; the PC drive seems too irritable.
After some discussion at last week's 40m meeting, I changed the interval at which daqd tries to write out minute trends from one hour to two hours.
This has eliminated the hourly crashes. daqd still crashes sometimes, but only a few times per day.
However, looking at the oplev summary pages that actually use the minute trends, it looks like they're only sporadically getting successfully written out.
Also, I was having a lot of problems with the frontends' EPICS processes dying when I would try to update the SDF table. I rebuilt all of the frontends with RCG 2.9.6, which differs from the 2.9.4 that we had been running by SDF bugfixes and an RMS calculation bugfix. The SDF procedures are much more stable now.
I have not yet discovered anything broken by this change, and the tests I made for the last upgrade all came back fine; last week's tiny DRFPMI lock was achieved after this change.
For real this time.
Fast ALS was still a problem tonight. I don't think high frequency ALS noise saturating the PC drive is the issue; I put two 10k poles before the CM board (shooting for just 2-3kHz bandwidth), and the PC drive levels would be stable and low up until the lockloss, which was always coincident with a step in the AO gain.
After working with that for a few hours, we turned back to our more standard locking attempts. First, we dither aligned the PRMI, and then centered the REFL beam on REFL11. It's hard to say for certain, but we may have been a little close to the edge of the PD. The only other thing that differed from Monday's attempts was using 6 dB less AO gain when trying to up the overall gain.
The script now reliably breaks through to stable high powers; we had a handful of pure-RF locks tonight. The digital DARM gain needs tuning, and the CARM bandwidth isn't yet at its final state, but these are very tractable issues. Off the top of my head, the way forward now includes:
Unrelated: I feel that the PRC angular FF may have deteriorated a bit. I'm leaving the PRC locked on carrier to collect data for Wiener filter recalculation.
None of the links here seem to work. I forget what the story is with our special apache redirect.
The story is: we currently don't expose the whole /users/public_html folder. Instead, we symlink folders from public_html into /export/home/ on nodus, which is where apache looks for things.
So, I fixed the links on the Core Optics page by running:
controls@nodus|~ > ln -sfn /users/public_html/40m_phasemap /export/home/
Here is a longer lock, about 100 seconds RF-only, from later that same night. The in-loop CARM and DARM error signals are of order 1 nm per count.
From ~-150 to -103 seconds, we were fine-tuning the ALS offsets to get close to the real CARM/DARM zero points, then blending in the RF CARM signal.
At -100, the CARM bandwidth increases to a few kHz and stabilizes the arm powers. By -81, the error signals are all RF. At -70, I turned on the transmon QPD servos, which brought the power up a bit.
If I recall correctly, lock was lost because I put waaaay too big of an excitation on DARM with the goal of running its UGF servo for a bit. The number I entered was appropriate for ALS, but most certainly too huge for AS55...
Gautam was working on his digital frequency counter stuff when the c1als model crashed. I had trouble bringing it back until I realized that, for reasons unknown to me, the safe.snap file that the model looks for at boot had been deleted. (This file lives in /target/c1als/c1alsepics/burt). I copied over this morning's version from Chiara's local backup.
At the sites, these files are under version control in the userapps svn repository, presumably symlinked into the target directory. We should definitely do something along these lines.
Last Friday, I installed some RF couplers on the green BBPDs' outputs, and sent them over to Gautam's frequency divider module. At first I tried 20dB couplers, but it seemed like not enough power was reaching the dividers to produce a good output. I could only find one 10dB coupler, and I stuck that on the X BBPD. With that, I could see some real signals come into the digital system.
I don't think it should be a problem to leave the couplers there during other activities.
For CARM and DARM, the A channels are used for the ALS signals, whereas the B channels are used for blending the RF signals.
For the DRMI, the A channels are used for the 1F signals, whereas the B channels are used for the 3F signals. The settings for transitioning to 1F after locking the DRFPMI have not yet been determined.
These settings are currently saved in the DRMI configurator, but the demod angles are set for DRFPMI lock, so the settings don't reliably work for misaligned arms.
The REFL33 element in SRCL_B is there to reduce the PRCL coupling; it was found empirically by tuning the relative gains with the arms misaligned and looking at excitation line heights. The offsets were found by locking the DRMI on 1F signals with the arms misaligned, and taking the average value of these 3F error signals.
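Measuring those offsets is easy to script; a sketch of the idea (channel names here are illustrative guesses, not necessarily the ones actually used):

```python
# With the DRMI locked on 1F and the arms misaligned, average each 3F
# error signal; the negated mean is the offset to apply.
import cdsutils

for chan in ['C1:LSC-REFL33_I_ERR', 'C1:LSC-REFL165_I_ERR',
             'C1:LSC-REFL165_Q_ERR']:
    mean, std = cdsutils.avg(30, chan, stddev=True)   # 30 s of 16k data
    print(f"{chan}: offset -> {-mean:.3f} (std {std:.3f})")
```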
The CARM and DARM ALS settings are largely scripted by scripts/ALS/Transition_IR_ALS.py, which takes you from arms POX/POY locked to CARM and DARM ALS locked. The DRMI settings are usually restored from the IFO_CONFIGURE screen.
When arms are POX/POY locked, and the green beatnotes are appropriately configured, calling scripts/DRFPMI/carm_cm_up.sh initiates the following sequence of events:
When CARM and DARM are buzzing around true zero, powers maximized:
This is as far as we've taken the DRFPMI so far, but the CARM bandwidth is still only at a few kHz. Based on PRFPMI locking, the next steps will be:
Using a modified version of Hang's deMod_deCoup scripts, I tuned the MC2 coil output matrix to minimize the appearance of POS drive in the SUSPIT signal at 28 Hz. Up until now, there was no F2P compensation; this change reduced the force-to-pitch coupling at 28 Hz by 8 dB.
Old: POS -> 1 x UL, 1 x UR, 1 x LL, 1x LR
New: POS -> 1.1054 x UL, 1.1054 x UR, 0.8946 x LL, 0.8946 LR
I checked the MCL spectrum before and after this change with the OAF on; it did not spoil the feedforward length subtraction in any noticeable way.
The script lives in userapps/release/isc/c1/scripts/decoup, but I've symlinked it to /opt/rtcds/caltech/c1/scripts/decoup.
The script modification I made had to do with how the I and Q data are collected. Before, it sporadically probed the I and Q FM output monitor EPICS values; I changed it to use the avg function of cdsutils, which calculates the mean and std from the 16 kHz data, and have seen it improve results by 1 dB or so. I've been in touch with Jenne about propagating this to the sites.
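Schematically, the change looks like this (channel names hypothetical):

```python
import time
import epics      # pyepics
import cdsutils

CHAN = 'C1:SUS-MC2_LOCKIN_DEMOD_I'   # illustrative channel stem

# Old approach: sporadically poll the 16 Hz EPICS output monitor;
# the sample timing is at EPICS's mercy and the slow samples alias the data.
samples = []
for _ in range(16):
    samples.append(epics.caget(CHAN + '_OUTMON'))
    time.sleep(0.25)

# New approach: mean and std computed from the full 16 kHz timeseries.
mean, std = cdsutils.avg(10, CHAN + '_OUT', stddev=True)
```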
[ericq, Gautam, Steve]
Following roughly the same procedure as ELOG 11354, c1vac1 and c1vac2 were rebooted. The symptoms were identical to the situation in that ELOG; c1vac1 could be pinged and telneted to, but c1vac2 was totally unresponsive.
The only change in the linked procedure was that we did not shut down the maglev. Since I unwittingly had it running for days without V4 open while Steve was away, we now know that it can handle shorter periods of time than that...
Upon reboot, many channels were readable again; unfortunately, the channels for TP2 and TP3 are still blank. We were able to return to the "Vacuum normal" state, but because of unknown communication problems with VM1's interlock, we can't open VM1 for the RGA. Instead, we opened VM2 to expose the RGA to the main IFO volume, but this isn't part of the "Normal" state definition, so things currently read "Undefined state".
D'oh. Good point.
Reverted for now; I'm thinking about doing laser pointer->MC2 QPD...
A handful of DRFPMI locks tonight, longest one was ~7 minutes.
EPICS/network latency has been a huge pain tonight. The locking script may hang between commands at an unstable place, or fail to execute commands altogether because it can't find the EPICS channel. This prevented or broke a number of locks.
I made some CARM OLG and crossover measurements, and found the AO gain for the right crossover freq (~100Hz) to be ~8dB different than what's in the PRFPMI script, which is weird. Right now, the CARM bandwidth / ability to turn on boosts is limited by the gain peaking in the IMC CLG due to the high-ish PC/PZT crossover frequency we're using.
Gautam turned on some sensing excitations during the last couple of locks, but they weren't on for very long before the lock loss. Hopefully I can pull out at least some angles from the data.
I'm also more convinced that the PRC angular FF needs retuning; there is more residual motion on the cameras than I'm used to seeing. I've taken more data that I'll use to recalculate a Wiener filter tomorrow.
The PMC, ALSX beat and ITMX oplev all needed a reasonable pitch realignment tonight.
I've installed new filters for the T240 -> PRM static online angular feedforward that were trained after some of the recent changes to the signal chain of the relevant signals (i.e. the counts->velocity calibration that Rana did for the seismometers, and fixing the improper dewhitening of the POP QPD channels used as the Wiener target.)
Quickly trying them out now shows about the same level of performance as the previous ones, but the real performance I care about is during after-hours locking-time, so I'll take more measurements tonight to be posted here.
The length of DRFPMI lock did not increase much tonight, but we got a ~80 second sensing matrix measurement, and got the CARM bandwidth up to 10k with two boosts on.
NB: I did not measure the CARM loop gain at its excitation frequency, so the plotted sensing element is suppressed by the CARM loop. However, this is still useful for gauging the size of the PRCL signal vs. the residual CARM fluctuations. The excitations are fairly closely spaced between 309 and 316 Hz.
For comparison, I'm also re-plotting the DRMI sensing measurement from a few weeks back taken at CARM offset of -4. We can see some change in the PRCL sensing, likely due to the CARM-coupled path. MICH/PRCL sadly looks pretty degenerate, but REFL55 looks more reasonable.
I think the main limitation tonight was SRC stability. Even before bringing CARM to zero offset, we would see occasional sharp dives in AS110 power. One lockloss happened soon after such an occurrence, but I checked the values, and the dip was not deep enough to trip the Schmitt trigger down; instead, it may have been a real optical loss of signal. The SRCL OLTF looks sensible.
Tonight was kind of a wash.
We spent some time retaking single arm scans with Gautam's frequency counting code to confirm the linewidths he measured before his most recent round of code improvements. During this, ETMX was being its old fussy self, costing us gentle realignment time. For the time being, we started actuating on ITMX for single arm locks. Also, out of superstition, I changed the static position offset that had been at +1k for the last N months to -1k.
ETMX broke us out of a few DRFPMI lock trials as well, as did poor SRM alignment. I finally set up dither alignment settings for the SRM in DRMI though, which helped (even in the arms-held-off-resonance situation). I still prefer doing the PRM/BS dither alignment in a carrier PRMI lock, since I think the SNR should be better than in DRMI.
We know that the ETMX excursions can happen without length drive exciting them, but also that length drives certainly can excite them. For future locks, I'm going to try out avoiding ETMX drive altogether; the sites use a single ETM for their DARM actuation and let the CARM loop take care of the resultant cross coupling, so hopefully we can do the same without angering the mode cleaner.
Anyways, we never really made it far enough to do anything interesting with the DRFPMI tonight.
I've turned the ETMX oplev servos off for the time being (at the input side, so that no scripts will accidentally turn them back on).
Thus, only the local damping is being applied, let's see if we see any kicks...
There is a new machine on the martian network: 32 cores and 128GB of RAM. Probably this is more useful for intensive number crunching in MATLAB or whatever as opposed to IFO control. I've set up some of the LIGO analysis tools on it as well.
A successor to Megatron, I dub it: OPTIMUS
During this time period, it looks like there was maybe one excursion. Here are the individual OSEM signals, which I believe are calibrated in microns.
When I'm doing other things, I'm going to intentionally leave the watchdog tripped, and see if anything happens.
- modulation depth = 0.390 +/- 0.062
There are two modulation frequencies that make it to the arm cavities, at ~11 MHz and ~55 MHz. Each has its own modulation depth, independent of the other. Bundling them together into one number doesn't tell us what's really going on.
During an ETMX kick that just occurred, with only local damping on, the slow VMon channels didn't show any noticeable change.
By looking at the oplev and Vmons before and after a step of 90 counts in SUS-ETMX_PIT_OFFSET, I observe:
UL: +1.53mV / urad
UR: -1.94mV / urad
LR: -1.54mV / urad
LL: +1.92mv / urad
The random error associated with these measurements is ~0.02mV
So, the ~7 urad shift seen in my earlier post would mean a change of around 10 mV in the Vmon signals, which isn't evident in the traces. This is possibly a piece of evidence in favor of a real mechanical shift, rather than an electronic glitch.
We found the PSL laser switched off. Looking at the wall StripTool, it looks like this happened about 4 hours ago. Gautam was working at and around the PSL table, and I suspect he accidentally ran into the Big Red Button.
We turned the laser back on.
After running dither alignment for all mirrors, all oplevs were recentered. (Except ETMY, since we did that earlier today.)
Looking at Koji's template for OSEM signals, the ITMX UL sensor noise floor seems more in line with the LL sensor, though there continues to be more noise than in other mirrors.
Trending the sensor signals over the past 7 days, Koji's measurement looks to have been taken during a time when the UL sensor voltage had jumped down. Did someone squish the satellite box cable? I have not done so.
I think that the step right at the end is due to a new POS offset of -2k counts, which I think Koji put into place earlier today.
According to the wiki, the Vmax/2 values and the current values are:
To test the effect on EPICS latency, I've restarted daqd with modified ini files which disable all frame writing of 16Hz channels.
This happened at GPS:1131835955 aka Nov 17 2015 22:52:18 UTC
Last night, I started running a script written by Dave Barker that monitors a specified EPICS channel (in this case C1:IOO-MC_TRANS_SUM), to look for seconds in which it does not update the expected number of times. This is still running, so I will be able to compare the rate of EPICS slowdowns before and after this change.
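For flavor, a minimal pyepics version of that kind of monitor (the real script is Dave Barker's; the 16 updates/s is my assumption for this channel's expected EPICS rate):

```python
import time
from collections import Counter
import epics

EXPECTED = 16                      # assumed updates per second for this channel
counts = Counter()

def on_update(value=None, timestamp=None, **kw):
    counts[int(timestamp)] += 1    # bin updates by integer second

epics.camonitor('C1:IOO-MC_TRANS_SUM', callback=on_update)

checked = int(time.time())
while True:
    time.sleep(1)
    sec = int(time.time()) - 2     # only look at fully elapsed seconds
    if sec > checked and counts[sec] < EXPECTED:
        print(f"{time.ctime(sec)}: only {counts[sec]} updates")
    checked = max(checked, sec)
```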
I will revert back to the nominal state of things in a few hours, or sooner if someone asks me to.
Back to nominal FB configuration at 1131857782, aka Nov 18 2015 04:56:05 UTC.
Weirdly, during this time, the script watching MC_TRANS_SUM from pianosa saw tons of freezes, but another instance watching LSC-TRY_OUT16 on optimus saw no freezes.
Steve and I inadvertently discovered that the c1iscey IO chassis doesn't have brackets to secure the cards where the ADC/DAC cables are connected, making them very easy to knock loose. All other IO chassis have these brackets. Pictures of c1iscey and c1lsc IO chassis to compare:
The /boot partition was filling up with old kernels. Nodus has automatic security updates turned on, so new kernels roll in and the old ones don't get removed.
I ran apt-get autoremove, which removed several old kernels. (apt is configured by default to keep two previous kernels around when autoremoving, so this isn't so risky)
Now: /dev/sda1 236M 94M 130M 42% /boot
In principle, one should be able to change a setting in /etc/apt/apt.conf.d/50unattended-upgrades that would do this cleanup automatically, but this mechanism has a bug whose fix hasn't propagated out yet (link). So, I've added a line to nodus' root crontab to autoremove once a week, on Sunday morning.
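For reference, the cron line is of this form (the exact schedule and any logging on nodus may differ):

```
0 8 * * 0 /usr/bin/apt-get -y autoremove
```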
Brackets for the c1iscey IO chassis cards have been installed. Now, I can't unseat the cards by wiggling the ADC or DAC cable.
COMSOL 5.1 has been installed at: /cvs/cds/caltech/apps/linux64/comsol51/bin/comsol
MATLAB 2015b has been installed at: /cvs/cds/caltech/apps/linux64/matlab15b/bin/matlab
This has not replaced the default matlab on the workstations, which remains at 2013a. If some testing reveals that the upgrade is ok, we can rename the folders to switch.
Gautam couldn't observe a Y green beatnote earlier, so we checked things out, fixed things up, and performance is back to nominal based on past references.
I checked the RF levels at the LSC LO distribution box, with the Agilent scope and a handful of couplers. This was all done with the Marconi at +13 dBm.
I only checked the channels that are currently in use, since the analyzer only measures 3 channels at a time, and rewiring involves walking back and forth to the IOO rack to make sure unpowered amps aren't driven, and I was getting hungry.
For the most part, the LO levels coming into the LSC demod boards are all around +1.5dBm (i.e. I measured around -18.0dBm out of the ZFDC-20-5 coupler, which has a nominal 19.5dB coupling factor)
The inputs piped over from the IOO rack, labeled as "+6dBm", were found to be 4.7 dBm and 2.9 dBm for 11 MHz and 55 MHz, respectively.
The 2F signals were generally about 40dB lower, with two exceptions:
Here are the raw numbers I measured out of the couplers, all in dBm:
11MHz in: -14.8
55MHz in: -16.6
POP55: -18.8 (this port is used as the REFL55 LO)
One possible explanation of this behavior is simply poor centering of the AS beam on AS55 (whose DC level provides ASDC, if memory serves me correctly).
I misaligned ETMY, and moved ITMY through its current nominal alignment while looking at the POYDC and ASDC levels.
In both pitch and yaw, the nominal alignment is fairly close to the "plateau" in which the AS beam is fully within the PD active surface. I.e. it doesn't take much angular motion to start to lose part of the beam, and thus introduce a first order coupling of angle to power. (Look at the plateaus at around -2min and -0.5min, and where the rapidly changing oplev trace crosses zero)
Furthermore, POYDC seems to be in some weird condition where it is actually possible to increase the reported power when misaligning in pitch, but somehow there is more angular coupling in this state.
In any case, I would advise that the POY11 and AS55 RFPDs have their spots recentered with the optics in their nominal aligned states. In fact, given how we found the REFL11 alignment to be less-than-ideal not so long ago, all of the RFPDs could probably use a checkup.
Since ETMX seems to have been on good behavior lately, we tried to fire the IFO back up.
We had a fair amount of trouble locking the DRMI with the arms held off resonance. For reasons yet to be understood, we discovered that the SRCL OLG looks totally bananas. It isn't possible to hold the DRMI for very long with this shape, obviously.
With the arms misaligned and the DRMI locked on 1F, the loop shape is totally normal. I haven't yet tried 3F locking with the arms misaligned, but this is a logical next step; I just need to look up the old demod angles used for this, since it wasn't quickly possible with the 3F demod angles that are currently set for the DRFPMI.
Somehow, the controls user account on donatella lost its membership to the sudoers group, which meant doing anything that needs root authentication was impossible.
I fixed this by booting up from a Linux install USB drive, mounting the HD, and adding the user back to the sudo group:
adduser controls sudo
I had noticed for a while that the c1ioo frontend model had much higher timing variability than any of the other 16k models, and would run longer than 60 us multiple times an hour. This struck me as odd, since all it does is control the WFS loops. (You can see this on the Nov 17 summary page; somehow, though, the CDS tab seems broken since then, and I'm not sure why...)
This has happily now been solved! While poking around the model, I noticed that the MC2 transmission QPD data streams being sent over from c1mcs were using RFM blocks. This seemed weird to me, since I wasn't even aware that c1ioo had an RFM card. Since the c1sus and c1ioo frontends are both on the Dolphin network, I changed them to Dolphin blocks and voila! The cycle time now holds steady at 21 usec.
Update: I think I figured out the problem with the CDS summary pages. Looking at the .err files in /home/40m/public_html/summary/logs on the 40m LDAS account showed that C1:FEC-33_CPU_METER wasn't found in the frame files. Indeed, this channel was commented out in c1/chans/daq/C0EDCU.ini. I enabled it and restarted daqd. Hopefully the CDS tab will come back soon...
I glanced at the summary pages and noticed that, since Friday, around when we first loaded the new BLRMS parts, daqd has been crashing very frequently (a few times per hour).
I'm going to comment out the c1pem lines from the daqd master file for tonight, and see if that helps.
Since removing c1pem from the daqd master file, daqd has not crashed. I suppose we're running into the stability issue that motivated us to disable some of the other models (IOPs, RFM, etc.) during the RCG upgrade.
A question to Jamie: although the new framebuilder prototype still had the same problem with trend writing, can it handle this higher testpoint/DQ channel load?
I've done a couple things to try and make nodus a little more secure. Some have worried that nodus may be susceptible to being drafted into a botnet, slowing down our operations.
1. I configured the ssh server settings to disallow logins as root. Ubuntu doesn't enable the root account by default anyways, but it doesn't hurt.
2. I installed fail2ban. Function: If some IP address fails to authenticate an ssh connection 3 times, it is banned from trying to connect for 10 minutes. This is mostly for thwarting mass brute force attacks. Looking at /var/log/auth.log doesn't indicate any of this kind of thing going on in the past week, at least.
3. I set up and enabled ufw (uncomplicated firewall) to only allow incoming traffic for:
I don't think there are any other ports we need open, but I could be wrong. Let me know if I broke something you need!
Here's something to ponder.
Our online MCL feedforward uses perpendicular vertex T240 seismometer signals as input. When designing a feedforward filter, whether FIR Wiener or otherwise, we posit that the PSD of the best linear subtraction one can theoretically achieve is given by the coherence, via Psub = P(1-C).
If we have more than one witness input, but they are completely uncorrelated, then this extends to Psub = P(1-C1)(1-C2). However, in reality, there are correlations between the witnesses, which would make this an overestimate of how much noise power can be subtracted.
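The quantity that accounts for witness correlations properly is the multiple coherence, gamma^2 = (S_xw* . Sww^-1 . S_xw) / Sxx, giving Psub = P(1 - gamma^2). A sketch of how one might compute it with scipy (not the code used for the plots here):

```python
import numpy as np
from scipy.signal import csd

def multiple_coherence(x, witnesses, fs, nperseg=2**12):
    """gamma^2(f) of target x against a list of witness timeseries."""
    f, Sxx = csd(x, x, fs=fs, nperseg=nperseg)
    # target-witness cross spectra, and the witness CSD matrix
    Sxw = np.array([csd(x, w, fs=fs, nperseg=nperseg)[1] for w in witnesses])
    Sww = np.array([[csd(wj, wi, fs=fs, nperseg=nperseg)[1]
                     for wj in witnesses] for wi in witnesses])
    gamma2 = np.empty(len(f))
    for k in range(len(f)):
        num = np.conj(Sxw[:, k]) @ np.linalg.solve(Sww[:, :, k], Sxw[:, k])
        gamma2[k] = np.real(num) / np.real(Sxx[k])
    return f, gamma2
```

With one witness this collapses to the ordinary coherence; with correlated witnesses it discounts the shared information that the product formula double-counts. (scipy's csd also accepts average='median', to match the median spectra used here.)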
Now, I present the actual MCL situation. [According to Ignacio's ELOG (11584), the online performance is not far from this offline prediction]
Somehow, we are able to subtract much more noise at ~1 Hz than the coherence would lead you to believe. One suspicion of mine is that the noise at 1 Hz is quite nonstationary. Using median CSDs/PSDs should help with this in principle, but the above was all done with medians, and using means is not much different.
Thinking back to one of the metrics that Eve and Koji were talking about this summer (std(S)/mean(S), where S is the spectrogram of the signal) gives an answer of ~2.3 at the 1.4 Hz peak, which is definitely in the nonstationary regime, but I don't have much intuition into how severe that value is.
So, what's the point of all this? We generally use coherence as a heuristic to judge whether we should bother attempting any noise subtraction in the first place, so I'm troubled by a circumstance in which there is much more subtraction to be had than coherence leads us to believe. I would like to come up with a way of predicting MISO subtraction results of nonstationary couplings more reliably.
Based on the calibration measurements I have done (ELOGs 11785, 11831), I updated the oplev calibration factors on the MEDM screens as follows. To keep the oplev servo loop gains unchanged, I also rescaled the oplev servo gains by the inverse factor.
After making sure that the upper UGFs were properly in place, I saved these settings to the SDF files. Thanks Yutaro!