Wiener filtering was applied to data collected from the X-arm from GPS time 996380715 (Aug 02, 2011, 21:25:00 PDT) to GPS time 996382215 (Aug 02, 2011, 21:50:00 PDT), a duration of 1500 seconds. The X-arm was locked during this time; we verified this by acquiring data from channel C1:LSC-TRX_OUT_DQ.
The seismometers were near the beam splitter (guralp2) and near MC2 (guralp1).
Target data was obtained from channel C1:LSC-XARM_IN1_DQ.
The following graphs were obtained after applying the Wiener filter:
1. Seismic data acquired from Guralp1 (X and Y) and Guralp2 (X and Y)
2. Seismic data acquired from Guralp2 X
3. Seismic data acquired from Guralp2 Y
These graphs were obtained with srate = 2048 (sample rate) and N = 20000 (order of the filter).
Graph 1 is the best: the black (residual) line lies below the red (target) line at low frequencies because we used seismic data from all 4 channels. Graph 3 is the worst because it uses seismic data from only one Y channel (the Y axis of Guralp2), which is less correlated with the X-arm mirrors' motion since it is oriented orthogonally to the arm.
Last Friday (Jul 29) we reinstalled the blue breakout box, and changed the names of the C1:PEM channels. Elog Reference
We continued work on the simulation and applied a Wiener filter to the simulated ground motion, but the result is still unsatisfactory. We will post reasonable results soon.
We did Wiener filtering for the first time on real data from the X-arm while it was locked. Elog Reference
We did the simulation of the stacks by defining a transfer function for one stack (green plot) and another similar transfer function for the other stack.
We simulated the ground motion by filtering white noise with a low-pass filter with a cutoff frequency of 10 Hz (blue plot). The ground motions for the two stacks are completely uncorrelated.
We simulated the electronic white noise for the seismic measurements. (black plot)
We filtered the ground motions (without the measurement electronic noise) with the stacks' transfer functions and subtracted them to find the mirror response (red plot), which is the target signal for the Wiener filter.
We computed the static wiener filter with the target signal (distance between the mirrors) and the input data (seismic measurements = ground motion + electronic noise).
We filtered the input and plotted the output (light blue plot).
We subtracted the target and the output to find the residual (magenta plot).
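The simulation steps above can be sketched as follows. This is a minimal sketch: the sample rate, filter orders, and the 3 / 3.3 Hz "stack" resonances are illustrative stand-ins for the real values, and only one witness is used in the Wiener step for brevity.

```python
import numpy as np
from scipy import signal
from scipy.linalg import solve_toeplitz

fs, n = 256, 2**14
rng = np.random.default_rng(0)

# Ground motion: white noise through a 10 Hz low-pass (uncorrelated per stack)
b, a = signal.butter(4, 10, fs=fs, btype="low")
g1 = signal.lfilter(b, a, rng.standard_normal(n))
g2 = signal.lfilter(b, a, rng.standard_normal(n))

# Seismic measurement = ground motion + electronic noise
w1 = g1 + 1e-3 * rng.standard_normal(n)

# "Stack" transfer functions (illustrative resonances) and the target signal
def stack(x, f0):
    bs, as_ = signal.iirpeak(f0, Q=10, fs=fs)
    return signal.lfilter(bs, as_, x)

target = stack(g1, 3.0) - stack(g2, 3.3)  # distance between the mirrors

# Static FIR Wiener filter from the Wiener-Hopf (Toeplitz) equations
def wiener_fir(wit, tgt, order):
    m = len(wit)
    r = signal.correlate(wit, wit, "full", method="fft")[m - 1:m - 1 + order]
    p = signal.correlate(tgt, wit, "full", method="fft")[m - 1:m - 1 + order]
    return solve_toeplitz(r, p)

h = wiener_fir(w1, target, 256)
residual = target - signal.lfilter(h, 1.0, w1)  # analogue of the magenta plot
```

With two witnesses the same idea extends to a block-Toeplitz system; here only the stack(g1) part of the target is predictable from the single witness, so the residual retains the other stack's contribution.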
We haven't figured out why the residual lies above the electronic noise only below ~6 Hz. We tried increasing and decreasing the electronic noise, and the residual still follows the noise only below ~6 Hz.
It also shows that the residual is above the target at frequencies over 20 Hz, which means we are injecting noise there.
We tried whitening the target and the input (using a high-pass filter) to make the Wiener filter pay attention to higher frequencies as well.
The residual follows the target more homogeneously.
We also plotted the Wiener filter transfer function with and without whitening. With whitening we inject no noise at high frequency, but we lose efficiency at low frequencies.
We shouldn't care about high frequencies, because the seismometer response is not good above 50 Hz. So, instead of whitening, we should simply apply a low-pass filter to the filter output, so as not to inject noise while keeping good reduction at low frequencies.
Tried the Wiener filter with the TFs from p.5900:
Adding a filter element that compensates the actuator TF makes the MC lose lock.
There is still an open question as to why the GUR and STS signals are so poorly coherent with MC_L, but at least we can see coherence at 2-5 Hz. It might be useful to try adaptive filtering, since a static filter does not keep working for long. We start with Wiener filtering, though I still doubt that static filtering alone is useful. The adaptive filter output is linear in its coefficients, so why not give the adaptive filter a zeroth approximation equal to the calculated Wiener filter coefficients? Then you automatically get the Wiener filter output plus adaptively controlled coefficients. And since a Wiener filter is already present in the model, I tried to make it work. Then we can compare the performance of the OAF with and without the static filter.
I started with GUR1_X and MC_F signals recorded 1 month ago to figure out how stable the TF is. Will the same coefficients work online now? The plot below shows offline Wiener filtering.
This offline filtering was done with 7500 coefficients. This FIR filter was converted to an IIR filter with the following procedure:
1. Calculate the frequency response of the filter; it is presented below.
2. Multiply this frequency response by a window function. We need this because we are interested in frequencies 0.1-20 Hz at the moment. We want this function to be small at ~0 Hz, so that the DC component of the seismometer signal is filtered out. On the other hand, we also do not want a huge signal at high frequencies. We know this signal will be filtered with an aggressive low-pass filter before going to the actuator, but we still want to make sure it is not so big that it survives the low-pass filter.
The window function is designed to be differentiable so that it is more easily fitted by vectfit3. It is equal to 1 for 0.5-20 Hz and 1e-5 at other frequencies, except in the neighborhoods of 0.5 and 20 Hz where it follows a cosine taper.
3. I used the vectfit3 software to approximate the product of the filter's frequency response and the window function with a rational function, using 10 complex-conjugate poles. The function was weighted to make the deviation as small as possible at the interesting frequencies, 0.5-10 Hz. The approximation error is large where the window function is 1e-5, but at least the rational function does not grow at high frequencies the way the real function does.
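Steps 1-2 can be sketched like this. The FIR taps here are random placeholders for the real Wiener taps, and the cosine-edge details are one possible reading of the window described above; the product target_resp is what would then be handed to vectfit3.

```python
import numpy as np
from scipy import signal

fs = 64.0                     # assumed downsampled rate (placeholder)
fir = np.random.default_rng(1).standard_normal(1000)  # placeholder taps

# Step 1: frequency response of the FIR filter
f, H = signal.freqz(fir, worN=4096, fs=fs)

# Step 2: window = 1 in 0.5-20 Hz, 1e-5 elsewhere, with cosine edges
def window(f, f1=0.5, f2=20.0, width=0.2, floor=1e-5):
    w = np.full_like(f, floor)
    inside = (f >= f1) & (f <= f2)
    w[inside] = 1.0
    lo = (f > f1 * (1 - width)) & (f < f1)      # cosine rise below f1
    w[lo] = floor + (1 - floor) * 0.5 * (1 - np.cos(
        np.pi * (f[lo] - f1 * (1 - width)) / (f1 * width)))
    hi = (f > f2) & (f < f2 * (1 + width))      # cosine fall above f2
    w[hi] = floor + (1 - floor) * 0.5 * (1 + np.cos(
        np.pi * (f[hi] - f2) / (f2 * width)))
    return w

target_resp = H * window(f)   # input to the rational (vectfit3) fit
```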
I tried to make a foton filter out of this approximation, but it turned out the filter was too large for foton. There may be some other problem with the approximation as well, but once I split the filter into 2 separate filters, foton saved it. The Wiener21 and Wiener22 filters are in the C1OAF.txt STATIC_STATMTX_8_8 model.
I tested how well the function was approximated. For this purpose I downloaded the GUR and MC_F signals and filtered the GUR signal with the rational approximation of the Wiener filter frequency response. From the power spectral density and coherence plots presented below, we can say that the approximation is reasonable.
Next, I approximated the actuator TF and inverted it. If the TF measured in p.5900 is correct, then its rational approximation is presented below. We can see deviation at high frequencies; that's because I used small weights there during the approximation. In any case, this part will not pass through the 28 Hz low-pass filter before the actuator.
I inverted this TF: p->z, z->p, k->1/k. I also added a "-" sign in front of 1/k because we subtract the signal, not add it. I placed this filter, 0.5Actuator20, in the C1OAF.txt SUS-MC2_OUT filter bank.
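The inversion itself is mechanical; here is a sketch in scipy terms, with made-up actuator zeros/poles/gain rather than the fitted p.5900 TF.

```python
import numpy as np
from scipy import signal

# Placeholder actuator model: zeros z, poles p, gain k (NOT the real fit)
z = np.array([-2 * np.pi * 50.0])
p = np.array([-2 * np.pi * (1.0 + 5j), -2 * np.pi * (1.0 - 5j)])
k = 3.0

# p->z, z->p, k->-1/k (minus sign because the correction is subtracted)
z_inv, p_inv, k_inv = p, z, -1.0 / k

# Check: actuator times inverse should equal -1 at every frequency
w = 2 * np.pi * np.logspace(-1, 2, 200)
_, h_act = signal.freqs_zpk(z, p, k, worN=w)
_, h_inv = signal.freqs_zpk(z_inv, p_inv, k_inv, worN=w)
prod = h_act * h_inv
```

Note that swapping poles and zeros leaves the inverse improper (more zeros than poles), so on its own it grows with frequency; in the actual path it sits in series with the 28 Hz low-pass, which restores the roll-off.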
The next plot compares online-measured MC_L with and without static filtering. Blue line: with online Wiener filtering; red line: without.
We can see some subtraction in MC_L due to the static Wiener filtering in the 2-5 Hz band where we see coherence. It is not as good as offline, but the effect is still present. We probably should measure the actuator TF more precisely; it seems there are some phase problems during the subtraction. Or maybe digital noise is corrupting the signal.
After reducing the digital noise, I did offline Wiener filtering to see how good the online filter should be. I looked at the MC_F, GUR1_X and GUR1_Y signals. Here are the results of the filtering. The coherence is plotted for the MC_F and GUR1_X signals.
We can see the psd reduction of the MC_F below 1 Hz and at resonances. Below 0.3 Hz some other noises are present in the MC_F. Probably tilt becomes important here.
The OAF is ready to be tested. I added AA and AI filters and also a high-pass filter at 0.1 Hz. The OAF works, and the MC stays in lock. I looked at the PSDs of MC_F and the filter output: they are comparable, and the filter output adapts to MC_F in ~10 minutes, but MC_F does not go down by much. While determining the right gain I unlocked the MC while Kiwamu was measuring something; sorry about that. I'll continue tomorrow during the daytime.
A Matlab script to calculate Wiener filter coefficients and convert FIR to IIR is ready. The input is a file with zero-mean witness and desired signals; the output is a foton zpk command specifying the IIR filter.
The plot compares offline FIR, offline IIR, and online IIR filtering. The spectrum below 4 Hz is still fluctuating due to acoustic coupling; this is not a filtering effect. At 1 Hz the actuator is badly compensated, so more work is needed. Other than that, online and offline filtering agree.
We've subtracted acoustic noise from the PMC using 1 EM 172 microphone. We applied a 10 Hz high-pass filter to the PMC length signal and a 100,200,300:30,30 filter to whiten the signal. We used ~10 minutes of data at 2048 Hz, as we did not see much coherence at higher frequencies.
We were able to subtract acoustic noise from the PMC length in the frequency range 10-700 Hz. In the 30-50 Hz range the error signal is smaller than the target signal by a factor of 10.
I am trying to find what limits the reduction rate with wiener filtering.
I did some calculations below:
Reduction rate estimation by microphone noise
When instrumental noise (noise in the microphone) and noise injected into the signal after the acoustic signal both exist, the noise cancellation rate is limited. (I will write a short document about this later.) I assumed that only instrumental noise is present and that the other noise in the PMC is low enough, and calculated the cancellation rate. The instrumental noise is modeled according to an earlier measurement (ELOG).
The green line is the original PMC signal, the red one is PMC residual error, and the blue one is PMC residual error estimated by the cancelling rate.
Around 30-80 Hz, the Wiener filtering already seems good enough. However, I do not know what limits the cancellation rate in other bands (such as 100-200 Hz).
I hypothesized that the Wiener filter is degraded by some peaks or other noise, so I filtered the PMC signal and the mic signal to see the difference.
The red line is the Wiener filter with no pre-filters; the blue one is with filters (low-pass, high-pass, and notch).
The Wiener filter gets smoother, but the PMC residual error did not change at all.
I will attach a document which describes how the noise affects the Wiener filter and the noise cancellation ratio.
I also re-estimated the S/N ratio in the microphone (still rough):
The yellow line is modeled signal level, and cyan line is modeled noise level.
Then, the estimated filtered residual noise is:
The noise is already subtracted sufficiently below 80 Hz, even though there is still coherence.
Above 300 Hz, the residual error is limited by noise other than acoustic noise, since there is no coherence.
I am not sure about the 100-300 Hz region, but I guess we cannot subtract the acoustic noise there because the primary noise (see the document), such as the peak at 180 Hz, is so high.
In order to perform acoustic noise cancellation with MCL signal, I am trying to find sweet spots for microphones.
I set microphones at various places around the MC chambers and looked at how coherent the microphone and MC signals are.
I have checked half of the MC so far.
The acoustic noise around the MC2 chamber is the most critical so far: I could subtract the signal, and the sensitivity improved by a factor of 2.
I will see the acoustic coupling from the other side of MC.
Last Thursday, I put the speaker and my laptop on the PSL table and played a triangular-wave sound with a fundamental frequency of 40 Hz, as well as Gaussian-distributed noise.
(I created the sounds from my laptop using the software 'NHC Tone Generator' because I could not find a BNC-to-speaker-plug connector.)
I then measured the acoustic coupling into the MCF signal. All 6 microphones were set on the PSL table around the PMC and the PSL output optics.
The performance of the offline noise cancellation with wiener filter is below.
(The target signal is MCF and the witness signals are 6 microphones.)
I can see some effect on MCF due to the sound on the PSL table. Although I can subtract some of the acoustic signal until there is no coherence left between the MCF and mic signals, some acoustic noise still remains.
This may be due to non-linear effects, or because there are other, more effective places to measure the acoustic coupling. More investigation is needed.
Also, I compared the Wiener filter and the transfer function from the microphone signals to the MCF signal; ideally they should be the same.
(Left: Wiener filter. Right: transfer function estimated from the spectra. Both were measured while the Gaussian sound was on.)
They differ, especially at frequencies below 50 Hz, where the Wiener filter is bigger. I guess this adds extra noise to the MCF signal (see the 1st figure).
The Wiener filter can be improved by pre-filtering, but if so, I want to know how to determine the filters. It would be interesting to have an algorithm to determine the filters, the number of taps, and so on.
More investigation is needed here as well.
I have been searching for a way to subtract the signal better, since I can see that acoustic coupling remains in the target signal even when there is no coherence left between them.
I changed the training time used to compute the Wiener filter.
I have 10 minutes of data in total, and previously the Wiener filter was computed using all of it.
(Right: performance with the data taken while the triangular sound was playing. Left: performance with the data taken while the Gaussian sound was playing.)
I found that the acoustic signal can be fully subtracted above 40 Hz when the training time is short. This means the transfer functions between the acoustic signals and the MCF signal change over time.
However, if the Wiener filter is computed with short training, the performance at lower frequencies gets worse, because the filter does not have enough low-frequency information.
So I would like to find a way to combine the merits of short-time and long-time training. It should be useful for subtracting broadband coupling noise.
In order to see the acoustic coupling in the arm signals, I set 6 microphones and the speaker on the AP table. The microphones are not seismically isolated for now.
I have a signal generator under the AP table.
When I played the 43 Hz triangular-wave sound, I could see some coherence between the POY error signal and the microphones, even though there is no peak in POY.
Don't try to re-invent the mic mount: just copy the LIGO mic mount for the first version.
Yesterday, I made new mounts for microphones.
I glued a microphone to a pedestal. The cables are attached loosely so that their tension does not make any noise.
At the bottom of the mount, I attached surgical tubing, formed into a ring with double-sided tape, so that it damps seismic vibration.
I made 6 mounts and these are all on the AS table now.
I took some data of the XARM signal controlled by AS.
My plan is to find/set an upper limit on the acoustic coupling noise in the AS signal.
The acoustic noise can be estimated with a Wiener filter, but the estimate is not accurate because the filter may pick up residual correlation between the AS and microphone signals that would average to 0 with long enough data.
I will find/set an upper limit with an analysis based on the Neyman-Pearson criterion, which is analogous to a stochastic GW background search.
If I find that the acoustic coupling noise is below the shot noise, I am happy. If not, some improvements may be needed someday.
I have done a quick offline Wiener filter to check how much PRM yaw motion we can subtract using a seismometer in the corner station. This work may be redundant since Koji got the POP beam shadow sensor feedback loop working on Friday night.
Anyhow, for now, I used the GUR2 channels, since GUR2 was underneath the ITMX chamber (at the north edge of the POX table). Note that Zach is currently borrowing this seismometer for the week.
I used GUR2_X, GUR2_Y and GUR2_Z to subtract from the PRM_SUSYAW_IN1 channel (the filename of the figure says "GUR1", but that's not true - GUR1 is at the Y end). All 4 of these channels had been saved at 2 kHz, but I downsampled to 256 Hz (I should probably downsample to something lower, like 64 Hz, but haven't yet). There is no pre-filtering or pre-weighting of the data, and no low-pass filters applied at the end, so I haven't done anything to remove the injected noise at higher frequencies, which we obviously need to do if we are going to implement this online.
If I compare this to Koji's work (elog 8562): at 3.2 Hz he gets a reduction of 2.5x, while this gets 10x. At all other frequencies Koji's work beats this; his method gets reduction from ~0.03 Hz to 10 Hz, while this only gets reduction between 0.4 Hz and 5 Hz. Also, this does not include actuator noise, so the actual online subtraction may not be quite as good as this figure.
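The downsampling step mentioned above can be done with an anti-aliasing decimation; the 2048 -> 256 -> 64 Hz chain here mirrors the rates quoted, with fake data standing in for the real channels.

```python
import numpy as np
from scipy import signal

fs_in = 2048
x = np.random.default_rng(2).standard_normal(fs_in * 60)  # 60 s of fake data

# decimate applies an anti-aliasing filter before downsampling; cascading
# modest factors is safer than one large factor
y = signal.decimate(x, 8, ftype="fir", zero_phase=True)   # 2048 -> 256 Hz
y = signal.decimate(y, 4, ftype="fir", zero_phase=True)   # 256  -> 64 Hz
```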
In order to do online static IIR Wiener filtering, one needs to do the following:
1) Get data for FIR Wiener filter witnesses and target.
2) Measure the transfer function (it needs to be highly coherent, ~0.95 at all points) from the actuator to the control signal of interest (i.e. from MC2_OUT to MC_L).
3) Invert the actuator transfer function.
4) Use Vectfit or (LISO) to find a ZPK model for the actuator transfer and inverse transfer functions.
5) Prefilter your witness data with the actuator transfer function, to account for the actuator-to-control transfer function when designing the offline Wiener FIR filter.
6) Calculate the frequency response for each witness from the FIR coefficients.
7) Vectfit the frequency responses to a ZPK model; this is the FIR-to-IIR Wiener conversion step.
8) Now either divide the IIR transfer function by the actuator transfer function or, preferably, multiply by the inverse transfer function.
9) Use Quack to make an SOS model of the product of the IIR Wiener filter and the inverse transfer function; this goes into foton for the online implementation.
10) Load it into the system.
The block diagram below summarizes the steps above.
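As a sketch of the offline part of this recipe (steps 1 and 6), here is a least-squares version of the multi-witness FIR Wiener calculation. The data, sizes, and the butter(2, 0.2) coupling are synthetic placeholders; for long filters one would use a Levinson/block-Toeplitz solver instead of a dense least squares.

```python
import numpy as np
from scipy import signal

def miso_wiener_fir(witnesses, target, order):
    """witnesses: list of 1-d arrays; returns one FIR tap vector per witness."""
    n = len(target)
    cols = []
    for w in witnesses:
        for k in range(order):            # lagged copies of each witness
            cols.append(np.concatenate([np.zeros(k), w[:n - k]]))
    X = np.column_stack(cols)
    h, *_ = np.linalg.lstsq(X, target, rcond=None)
    return h.reshape(len(witnesses), order)

# Fake data: target is a filtered mix of two witnesses plus sensor noise
rng = np.random.default_rng(3)
w1, w2 = rng.standard_normal(8192), rng.standard_normal(8192)
b, a = signal.butter(2, 0.2)
target = signal.lfilter(b, a, w1) + 0.5 * w2 + 0.01 * rng.standard_normal(8192)

taps = miso_wiener_fir([w1, w2], target, order=32)
prediction = sum(signal.lfilter(h, 1.0, w) for h, w in zip(taps, [w1, w2]))
residual = target - prediction

# Step 6: frequency response of each witness's FIR, ready for the vectfit step
f, H1 = signal.freqz(taps[0], worN=1024, fs=1.0)
```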
I took some training data during Sunday night/Monday morning while the MCL MISO FF was turned on. We wanted to see how much residual noise was left in the WFS1/WFS2 YAW and PITCH signals.
The offline subtractions that can be achieved are:
I need to download data for these signals while the MCL FF is off in order to measure how much subtraction was achieved indirectly with the MCL FF. In a previous elog (11472), I showed some offline subtractions for WFS1 YAW and PITCH before any online FF was implemented by me or Jessica. From the plots in that elog, one can clearly see that the YAW1 signal is essentially unchanged in terms of how much seismic noise was mitigated indirectly through MCL.
Koji has implemented the FF paths (thank you based Koji) necessary for these subtractions. The thing to figure out now is where we actually want to actuate, and then to measure the corresponding transfer functions. I will try to have either Koji or Eric help me measure some of them.
Finally, I looked at the arms to see what residual seismic noise can be subtracted.
I'm not too concerned about noise in the arms: if the WFS subtractions turn out to be promising, I expect some of the arms' seismic noise to go down a bit further. We also don't need to measure an actuator transfer function for arm subtractions, given that it's essentially flat at low frequencies (below 50 Hz).
Here's something to ponder.
Our online MCL feedforward uses perpendicular vertex T240 seismometer signals as input. When designing a feedforward filter, whether FIR Wiener or otherwise, we posit that the PSD of the best linear subtraction one can theoretically achieve is given by the coherence, via Psub = P(1-C).
If we have more than one witness input, and they are completely uncorrelated, then this extends to Psub = P(1-C1)(1-C2). In reality, however, there are correlations between the witnesses, which makes this an overestimate of how much noise power can be subtracted.
Now, I present the actual MCL situation. [According to Ignacio's ELOG (11584), the online performance is not far from this offline prediction]
Somehow, we are able to subtract much more noise at ~1Hz than the coherence would lead you to believe. One suspicion of mine is that the noise at 1Hz is quite nonstationary. Using median [C/P]SDs should help with this in principle, but the above was all done with medians, and using the mean is not much different.
Thinking back to one of the metrics that Eve and Koji were talking about this summer, std(S)/mean(S) (where S is the spectrogram of the signal) gives an answer of ~2.3 at the 1.4 Hz peak, which is definitely in the nonstationary regime, but I don't have much intuition for just how severe that value is.
So, what's the point of all this? We generally use coherence as a heuristic to judge whether we should bother attempting any noise subtraction in the first place, so I'm troubled by a circumstance in which there is much more subtraction to be had than the coherence leads us to believe. I would like to come up with a way of predicting MISO subtraction results for nonstationary couplings more reliably.
The puzzle continues...
I found some references for computing "multicoherence," which should properly estimate the potential MISO subtraction in situations where the witness channels themselves have nontrivial coherence. Specifically, I followed the derivations in LIGO-P990002. The underlying math is related to principal component analysis (PCA) and Gram-Schmidt orthogonalization.
This produced the following results, wherein the Wiener subtraction is still below what the coherences predict.
I've attached the data and code that produced this plot.
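For concreteness, here is a sketch of the multicoherence computation along these lines; it follows the standard multiple-coherence formula (project the target onto the span of all witnesses using the witness CSD matrix at each frequency), and the two-witness test signals are synthetic.

```python
import numpy as np
from scipy import signal

def multicoherence(witnesses, target, fs, nperseg=1024):
    m = len(witnesses)
    f, Pyy = signal.welch(target, fs, nperseg=nperseg)
    Sxx = np.empty((len(f), m, m), complex)   # witness CSD matrix
    Sxy = np.empty((len(f), m), complex)      # witness-target CSDs
    for i in range(m):
        _, Sxy[:, i] = signal.csd(witnesses[i], target, fs, nperseg=nperseg)
        for j in range(m):
            _, Sxx[:, i, j] = signal.csd(witnesses[i], witnesses[j], fs,
                                         nperseg=nperseg)
    coh = np.empty(len(f))
    for k in range(len(f)):   # gamma^2 = Sxy^H Sxx^-1 Sxy / Pyy per bin
        coh[k] = np.real(Sxy[k].conj() @ np.linalg.solve(Sxx[k], Sxy[k])) / Pyy[k]
    return f, np.clip(coh, 0.0, 1.0)

# Two correlated witnesses; target = their sum plus independent noise
rng = np.random.default_rng(4)
common = rng.standard_normal(65536)
w1 = common + 0.3 * rng.standard_normal(65536)
w2 = common + 0.3 * rng.standard_normal(65536)
y = w1 + w2 + 0.1 * rng.standard_normal(65536)
f, C = multicoherence([w1, w2], y, fs=256)
# Predicted residual PSD after ideal MISO subtraction: Pyy * (1 - C)
```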
The anticlimactic resolution to my subtraction confusion: spectral leakage around 1 Hz. Increasing the FFT length to 256 s now shows that the FIR Wiener filter pretty much achieves the ideal subtraction.
If nothing else, it's good to have worked out how MISO coherence works.
Not just pedagogical! Frequency-domain MISO coherence-based subtraction estimation is much faster than calculating the MISO Wiener filter. And since each bin is independent of the others, it gives us an estimate of how low the noise can go, whereas the Wiener filter is limited by Kramers-Kronig. We should be able to use this on the L1 DARM channel for noise hunting, as well as for estimating the subtraction efficacy of the pseudo-channels that you and Rory come up with.
If you can code up a noise hunter example using DARM + a bunch of aux channels, we could implement it in the summary pages code.
I've been banging my head against bilinear noise subtraction, and figured I needed to test things on some real hardware to see if what I'm doing makes sense.
I ran the ASS dither alignment on the Y arm, which ensures that the beam spots are centered on both mirrors.
I then drove ITMY in yaw with noise bandpassed to 30-40 Hz. This showed the expected bilinear upconversion from angular noise on a centered beam, which you can see at 60-80 Hz below.
I looked at the length signal, as the noise subtraction target, and the ITMY oplev yaw signal plus the transmon QPD yaw signal as witnesses.
There is some linear coupling to length, which means the centering isn't perfect, and the drive may be large enough to push the spot off center. The important part, however, is the upconverted noise, which is present only in the length signal. The QPD and oplev signals show no increased noise from 60-80 Hz above the reference traces where no drive is applied.
I then compared the multicoherence of those two angular witnesses vs. the multicoherence of the two (linear) witnesses plus their (bilinear) product. Including the bilinear term clearly shows coherence, and thereby subtraction potential, at the upconverted noise hump.
So, it looks like the way I'm generating the bilinear signals and calculating coherence in my code isn't totally crazy.
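A toy version of this bilinear check, with synthetic signals (in the real measurement the witnesses are the oplev and QPD channels, and the coupling is far less clean): a target containing the product of a band-limited "angle" signal and a slow "spot" signal shows coherence with the bilinear witness (their product) but little with either linear witness alone.

```python
import numpy as np
from scipy import signal

fs, n = 512, 2**17
rng = np.random.default_rng(5)

# "Angle" drive: 30-40 Hz bandpassed noise, normalized
b, a_ = signal.butter(4, [30, 40], fs=fs, btype="band")
angle = signal.lfilter(b, a_, rng.standard_normal(n))
angle /= angle.std()

# "Spot position": slow (< 0.5 Hz) drift, normalized
b2, a2 = signal.butter(4, 0.5, fs=fs, btype="low")
spot = signal.lfilter(b2, a2, rng.standard_normal(n))
spot /= spot.std()

target = angle * spot + 0.1 * rng.standard_normal(n)  # bilinear coupling + noise
bilinear = angle * spot                               # product witness

f, C_lin = signal.coherence(angle, target, fs, nperseg=4096)
f, C_bil = signal.coherence(bilinear, target, fs, nperseg=4096)
```

In the drive band the bilinear witness is strongly coherent with the target while the linear witness is not, which is the signature described above.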
Not sure if someone is already on this, but the nodus DokuWikis are still down (AIC, ATF, Cryo, etc.)
DokuWiki Setup Error
The datadir ('pages') does not exist, isn't accessible or writable. You should check your config and permission settings. Or maybe you want to run the installer
As of this writing, the DokuWiki appears to be working.
As you and I suspected, it looks like this was a clusterwhoops with the permissions for the NFS mount. Let's recap what happened in the past 24 hours:
Despite the aforementioned mystery, Zach and I pressed on and tried to diagnose the permissions issue. We found that even if you are logged into nodus or pianosa or rossa or whatever as the controls user, the NFS mount saw us as the user nobody (in the group nogroup). If we created a file on the NFS mount, it was owned by nobody/nogroup. If we tried to modify a file on the NFS mount that was owned by controls/controls or controls/staff, we got a "permission denied" error, even if we tried with superuser privileges.
It turns out this has to do with the vagaries of NFS (scroll down to gotcha #4). We have all_squash enabled in /etc/exports , which means that no matter your username or group on nodus, rossa, pianosa, or harpischordsa, NFS coerces your UID/GID to chiara's nobody/nogroup. Anyway, the fix was to go into chiara's /etc/exports and change this
It looks like auth is broken on the AIC wiki (though working fine on ATF and Cryo). I did some poking around but can't see how anything we did could have broken it.
I went into local.php and changed $conf['useacl'] = 1; to $conf['useacl'] = 0; and it looks like the auth issue goes away (I've changed it back). This isn't a fix (we want to use access control), but it gives us a clue as to where the problem is.
I have been looking into the DokuWiki situation, with no great progress so far.
The problem is with authentication. When you try to access the wiki, you get: "User authentication is temporarily unavailable. If this situation persists, please inform your Wiki Admin."
As Evan points out, this goes away if you turn off ACL (Access Control Lists), but---as he also mentions---that just means that authentication is disabled, so the wiki becomes open. All signs point to this being a problem with the authentication mechanism. We use the 'authplain' plaintext method, where the usernames and hashed passwords are stored in a plaintext file.
Evan and I have both done plenty of testing with the config settings to see if the problem goes away. Even if you restore everything to default (but enable ACL using authplain and the existing user file), you still get an error.
I went as far as installing a brand new wiki on the web server, and, surprisingly, this also exhibits the problem. Interestingly, if I install an old enough version (2012-10-13), it works fine. After this revision, they transitioned their authentication methods from "backends" to "plugins", so this is a clue. As a side note, the other wikis on nodus (ATF and Cryo) are running earlier versions of DokuWiki, so they have no problems.
As it stood, our options were:
So, I created a temporary read-only version of the wiki using the aforementioned earlier DokuWiki release. It has a soft link to the real wiki data, but I deleted the user database so no one can log in and edit (I also disabled registration). It can be found at https://nodus.ligo.caltech.edu:30889/wiki_ro/.
I also created a backup of the real wiki as wiki_bak to avoid any potential disasters.
The simplest thing to do would be to just roll the wiki back to this working version, but that doesn't sound so smart: clearly it was working with the plugin structure before our snafu, and somehow our fixing of everything else has broken it.
The PDFR system has been documented in the 40m wiki, and all the relevant information about making changes and keeping it updated has been recorded there.
This pretty much wraps up my SURF 2014 project at the 40m lab.
There was still some residual permissions issue. This is now bypassed, so the ACL is ON and all seems to be back the way it was. I've tested that I can log in and edit the wiki.
Some useless knowledge follows here. Please ignore.
After some hours of reading unhelpful DokuWiki blogs, I just put the backup wiki on NODUS's local disk and made a soft link pointing to it from /users/public_html/wiki/. This implies that the new NFS setup on chiara is different enough that it doesn't allow read/write access to the apache user on the NODUS/Solaris machine.
Mech Resonance Wiki
I've updated the wiki by trawling the elog for violin entries. Please keep it up to date so that we can make violin notches.
AIC Wiki updated to latest stable version of DokuWiki: 2017-02-19b "Frusterick Manners" + CAPTCHA + Upgrade + Gallery PlugIns
The most up-to-date pictures of the AP table and ETMX table that Steve took have been uploaded to the relevant page on the wiki. It seems the wiki doesn't display previews of jpg images; by using png, I was able to get the thumbnail of the attachment to show up. It would be nice to add beam paths to these two images. The older versions of these photos were moved to the archive section on the same page.
ETMX table layout uploaded with beam paths to the wiki.
The pdf file is uploaded into the wiki.
LiYuan has kindly done some Total Integrating Sphere (TIS) measurements on ITMU01 and ITMU02. A summary of the measurement is attached. I uploaded the measurements and some analysis script to nodus at /home/export/home/40m_TIS. I created a Wiki page for the measurements and linked to it from the core optics page.
These TIS measurements look very similar to the TIS of the LIGO optics. Further analysis shows that the scatter loss is 10+/-1.7 ppm for ITMU01 and 8.6+/-0.4 ppm for ITMU02.
In this calculation, a Gaussian beam of the same size as the one bouncing off the 40m ITMs is assumed to scatter from the mirrors. The error is calculated by moving the beam around randomly with an STD of 1 mm.
In LiYuan's setup, TIS is measured for scattering angles between 1 and 75 degrees. If we further assume that the scatter is Lambertian, we can extrapolate that the total loss is 10.9 +/- 1.9 ppm for ITMU01 and 9.2 +/- 0.5 ppm for ITMU02.
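Spelled out, the Lambertian extrapolation divides the measured 1-75 degree TIS by the fraction of a Lambertian lobe in that angular range. This simple estimate ignores the beam-averaging that the quoted errors fold in, but it lands close to the quoted totals.

```python
import numpy as np

def lambertian_total(tis_meas, th1_deg=1.0, th2_deg=75.0):
    """For a Lambertian scatterer, power into (th1, th2) goes as
    sin^2(th2) - sin^2(th1) out of the full 0-90 degree lobe."""
    th1, th2 = np.radians([th1_deg, th2_deg])
    fraction = np.sin(th2)**2 - np.sin(th1)**2
    return tis_meas / fraction

total_itmu01 = lambertian_total(10.0)   # from the 10 ppm TIS, in ppm
total_itmu02 = lambertian_total(8.6)    # from the 8.6 ppm TIS, in ppm
```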
These measurements complete the loss budget nicely since with the 6ppm loss predicted from the phase maps, the total loss in the arm cavities would be 6+10+10=26ppm which is very close to the 28ppm loss that was measured after the arm cavity optics were cleaned.
I updated the mime.local.conf file for the AIC Wiki to allow attachments in the .txz format. This should persist over upgrades, since it's a local file.
Came in, found all front-ends down.
Keyed a bunch of crates, no luck:
Requesting coeff update at 0x40f220 w/size of 0x1e44
No response from EPICS
Powered off/restarted c1dcuepics. Still no luck.
Powered off megatron. Success! Ok, maybe it wasn't megatron. I also did c1susvme1 and c1susvme2 at this time.
BURT restored to Nov 26, 8:00am
But everything is still red on the C0_DAQ_RFMNETWORK.adl screen, even though the front-ends are running and synced with the LSC. I think this means the framebuilder or the DAQ controller is in trouble. I keyed the crates with DAQCTRL and DAQAWG a couple of times with no luck, so it's probably fb40m. I'm leaving it this way; we can deal with it tomorrow.
I found the red sea when I came in this morning.
I tried several things.
I'm now going to restart the individual front-ends and burtgooey them if necessary.
Everything is back on.
Restarted all the front ends. As usual c1susvme2 was stubborn but eventually it came up.
I burt-restored all the front-ends to Nov 26 at 8am.
The mode cleaner is locked.
Taking a cue from entry 2346, I immediately went for the nuclear option and powered off fb40m. Someone will probably need to restart the backup script.
Backup script restarted.
Looks like there was a power outage. The control room workstations were all off (except for op440m). Rosalba and the projector's computer came back, but rossa and allegra are not lighting up their monitors.
linux1 and nodus and fb all appear to be on and answering their pings.
I'm going to leave it like this for the morning crew. If it
The monitors for allegra and rossa seemed to be in a weird state after the power outage. I turned allegra and rossa on but didn't see anything; however, after a while I was able to ssh in. Power cycling the monitors apparently got them talking with the computers again and displaying.
I had to power cycle the c1sus and c1iscex machines (they probably booted faster than linux1 and the fb machines, and thus didn't see their root and /cvs/cds directories). All the front ends seem to be working normally and we have damped optics.
The slow crates look to be working, such as c1psl, c1iool0, c1auxex and so forth.
Kiwamu turned the main laser back on.
Looks like there was a power outage.
I checked the vacuum system and judged there is no apparent issue.
The chambers and annulus had been vented before the power failure.
So the only concerns are the TMPs.
TP1 showed a "Low Input Voltage" failure. I reset the error; the turbine was lifted up but left not rotating.
TP2 and TP3 seem to be rotating at 50 kRPM, and each line shows low pressure (~1e-7),
although I did not locate the actual TP2/TP3 units themselves.