[Ian, Raj, Tega]
Here is the comparison between the results of Raj's python model and the transfer function measurement done on the plant model by Tega and me.
As you can see in the graphs, there are a few small regions of disagreement, but nothing looks too serious. Next we will measure the signals flowing through the entire plant and controller.
For a nicer (and printable) version of these plots look in the zipped folder under Plots/Plant_TF_Individuals.pdf
I added mpmath to the quantization noise code; mpmath lets me specify the precision used in the calculations. I added it to both the IIR filters and the state-space models, although I am only looking at the IIR filters here. I hope to look at the state-space model soon.
I also added a new notebook which you can find HERE. This notebook creates a signal by summing two sine waves and windowing them.
Then that signal is passed through our filter that has been limited to a specific precision. In our case, we pass the same signal through a number of filters at different precisions.
Next, we take the output from the filter with the highest precision, since it should have the lowest quantization noise by a significant margin, and subtract the outputs of the lower-precision filters from it. In summary, we are subtracting a clean signal from a noisy signal; because the underlying signal is the same, the only thing left after the subtraction should be noise. And since this system is purely digital and theoretical, the limiting noise should be quantization noise.
Now we have a time series of the noise for each precision level (except for our highest precision level, but that is because we define it as noiseless). From here we take a power spectrum of the result and plot it.
After this, we can calculate a frequency-dependent SNR and plot it. I also calculated values for the SNR at the frequencies of our two inputs.
This is the procedure taken in the notebook and the results are shown below.
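The procedure can be sketched in a few lines. This is a stand-in illustration, not the notebook itself: a scipy low-pass filter plays the role of our IIR filter, and float32 vs. float64 stands in for the mpmath precision levels.

```python
import numpy as np
from scipy import signal

fs = 16384                                 # assumed sample rate (Hz)
t = np.arange(4 * fs) / fs
# two summed sine waves, windowed
x = (np.sin(2*np.pi*3*t) + np.sin(2*np.pi*40*t)) * np.hanning(t.size)

sos = signal.butter(4, 100, fs=fs, output='sos')     # stand-in IIR filter

y_hi = signal.sosfilt(sos, x)                        # high-precision "clean" output
y_lo = signal.sosfilt(sos.astype(np.float32),
                      x.astype(np.float32))          # low-precision output

noise = y_hi - y_lo                 # same signal cancels; only noise remains
f, p_noise = signal.welch(noise, fs=fs, nperseg=4096)
_, p_sig = signal.welch(y_hi, fs=fs, nperseg=4096)
snr = p_sig / np.maximum(p_noise, 1e-300)   # frequency-dependent SNR
```

In the actual notebook the subtraction is done between mpmath outputs at many precision levels rather than two numpy dtypes, but the pipeline (filter, subtract, power spectrum, SNR) is the same.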
Analysis of Results:
The first thing to note is that the 256-bit and 128-bit precision levels are not shown on the graph. The 256-bit signal was our clean reference, so it is defined to have no noise and cannot be plotted. The 128-bit signal should have some quantization noise, but I checked the output array and it contained all zeros. After further investigation, I found that the quantization noise was so small that when the result was handed over from mpmath to the general Python code, those numbers were rounded to zero. To overcome this I would have to keep the array as an mpmath object the entire time. I don't think this is worthwhile: matplotlib probably couldn't handle it, and it would be easier to just rewrite the code in C.
The next thing to notice is a sanity check: in general, low-precision filters yield higher noise than high-precision ones. However, this does not hold at the low end; we can see that 16-bit actually has the highest noise over most of the range. Chris pointed out that at low precisions the quantization noise can become so large that it is no longer a linearly coupled noise source. He also noted that this is prone to happen for low-precision coefficients with features far below the Nyquist frequency, as I have here. This explanation fits the data, especially because the ambiguity appears at 16 bits and below, as he points out.
Another thing I must mention, even if just as a note to future readers, is that quantization noise is input dependent: by changing the input signal I see different degrees of quantization noise.
Analysis of SNR:
One of the things we hoped to accomplish in the original plan was to play around with the input and see how the results changed. I mainly looked at how the amplitude of the input signal scaled the SNR of the output. Below I include a table of the results. These results were taken from the SNR calculated at the first peak (see the last code block in the notebook), with the amplitude of the given sine wave listed at the top of each column. This amplitude was applied to both sine waves, even though only the first one is reported. As an example, the notebook is currently set up for an input amplitude of 10.
As we can see from the table above, the SNR does not seem to scale with the amplitude of the input; in multiple instances, the SNR dips or peaks in the middle of our amplitude range.
Tega and I have gone through the IIR filter code and optimized it to make sure there aren't any places where high precision is forced down to low precision.
For the new biquad filter we have run into an issue where the gain of the filter is much higher than it should be. Looking at Attachments 1 and 2, which are time-series comparisons of the inputs and outputs of the different filters, we see that the output of the Direct Form II filter shown in Attachment 1 on the right is on the order of 10^-5, while the magnitude of the biquad filter's response is on the order of 10^2. Other than this gain, the responses look to be the same.
I am not entirely sure how this gain entered the system, because we copied the C code that actually runs on the CDS system into Python. There is a gain that affects the input of the biquad filter, as shown on this slide of Matt Evans' slides. This gain, shown below as g, could decrease the input signal and thus fix the discrepancy. However, I have not found any way to calculate this g.
With this gain problem we are left with the quantization noise shown in Attachment 4.
I have constrained the state-space filter to act at a given precision level. However, my code is not optimized: it works by putting the input state through the first state-space equation, then integrating the result, which finally gets fed through the second state-space equation.
This unoptimized implementation gives the resulting quantization noise shown in Attachment 5.
However, the state-space filter also has a gain problem: its output is about 85 times the amplitude of the DF2 filter's. Also, since the state-space code is not operating in the most efficient way possible, I decided to port the code Chris made to run the state-space model to Python. That code has a problem where it seems to be unstable; I will see if I can fix it.
I am trying to replicate the simulation done by Matt Evans in his presentation (see Attachment 1 for the slide in particular).
He defines his input as the sum of two sine waves: one of amplitude 1 at 1 Hz, and one of amplitude 10^-9 at one quarter of the sampling frequency, in this case 4096 Hz.
For his filter, he uses a fourth-order notch filter. To achieve this I cascaded two second-order notch filters (signal.iirnotch), both centered at 1 Hz, with quality factors of 1 and 1e6, as specified in slide 13 of his presentation.
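The cascade can be built like this (a sketch: the sampling rate fs = 16384 Hz is inferred from the 4096 Hz = fs/4 line, and the Q values follow the slide):

```python
import numpy as np
from scipy import signal

fs = 16384.0                                  # inferred: 4096 Hz = fs/4
b1, a1 = signal.iirnotch(1.0, Q=1.0, fs=fs)   # second-order notch, Q = 1
b2, a2 = signal.iirnotch(1.0, Q=1e6, fs=fs)   # second-order notch, Q = 1e6

# cascade the two biquads as second-order sections
sos = np.vstack([np.hstack([b1, a1]), np.hstack([b2, a2])])

# the two-tone input from the slide: amplitude 1 at 1 Hz, 1e-9 at fs/4
t = np.arange(int(fs)) / fs
x = 1.0 * np.sin(2*np.pi*1.0*t) + 1e-9 * np.sin(2*np.pi*(fs/4)*t)
y = signal.sosfilt(sos, x)
```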
I used the same procedure outlined here. My results are posted below in attachment 2.
Analysis of results:
As we can see from the results posted below, they don't match. There are a few problems I noticed that may give us some idea of what went wrong.
First, there is a peak in the noise around 35 Hz. This peak is not present at all in Matt's results and may indicate that something is inconsistent.
Second, there is no peak at 4096 Hz. This peak is clearly shown in Matt's slides, and it appears in the input spectrum, so it is strange that it does not appear in the output.
My first thought was that the 4 kHz signal was somehow showing up at about 35 Hz, but even when the 4 kHz signal is removed from the input, the 35 Hz peak remains. The spectrum of the input, shown in Attachment 3, shows no features at ~35 Hz.
The input filter, shown in Attachment 4, also has no features at ~35 Hz. Seeing that neither the input nor the filter has features at ~35 Hz, there must be either some quantization-noise feature there or, more likely, some sampling or calculation effect.
To figure out what is causing this I will continue to change things in the model until I find what is controlling it.
I have included a Zip file that includes all the necessary files to recreate these plots and results.
In preparation for the replacement of the suspension electronics that control the ETMY, I took measurements of the system excluding the CDS system. I took transfer functions from the input of the coil drivers to the output of the OSEMs for each sensor: UL, UR, LL, LR, and SIDE. These graphs are shown below, and all the data is in the compressed file.
We also had to replace the oplev laser power supply down the Y arm. The previous one was not turning on; the leading theory is that its failure was caused by the power outage. We replaced it with one Koji brought from the fiber display setup.
I am also noting the values for the OSEM DC output
In addition the oplev position was:
(KA ed) We only care about PERROR and YERROR (because P/YOUT are servo output)
Edit: corrected DC Output values
We looked at a few power supplies and found one that was marked "CHECK IF THIS WORKS" in yellow. We found that the power supply worked but its indicator light didn't. I tried two other lights from other power supplies, but they did not work either. Koji ordered a new one.
I moved the network-enabled power strip from above the power supplies on rack 1y4 to below. Nothing was powered through the strip when I unplugged everything and I connected everything to the same port after.
paco helped me wire the ETMY 1Y4 rack. We wired the following (copied from Koji's email):
the cables we used:
I attached pictures below.
1) We attached the 30 coil-driving cables from the vacuum feedthrough to the sat amp [40m wiki]; they run along the cable tray, then up and down into the rack.
2) We checked all DB and power cables. We found that the anti-imaging filter had a short and got very hot when plugged in; the back power indicator lights turned on fine, but the front panel stayed off. We removed it and replaced it with the one that was on the test stand marked for the BHD. This means we need to fix the broken one, and Koji mentioned getting another one.
3) We reassigned the ADC and DAC channels in the iscey and asy models. We committed a version before we made any changes.
4) Finally we tested the setup to make sure the ETM was being damped.
1) Measure the change of the PD gains and the actuator gains. See previous elog.
Now that the ETMSUS is back up and running I reran my measurements from the beginning of the process. The results below show a change in gain between the before and after measurements. I have given values of the low-frequency section below.
Average gain difference from the TFs: 18.867 (excluding the side change)
I am also noting the new values for the OSEM DC output; average gain increase: 9.004
In addition, the oplev position was:
All data and settings have been included in the zip file
From the average gain increase of the TFs, which reflects the increase of the whole system, and the gain increase from the OSEMs, we can calculate the gain change of the actuators.
18.867/9.004 = 2.09
Thus the increase in gain on the actuator is about 2.09.
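A quick check of the arithmetic (values copied from above; the reasoning is that the TF gain change measures the whole chain, actuation plus sensing, while the OSEM DC change measures sensing alone, so their ratio isolates the actuator):

```python
tf_gain = 18.867       # average TF gain change (actuation x sensing)
osem_gain = 9.004      # average OSEM DC gain change (sensing only)
actuator_gain = tf_gain / osem_gain   # about 2.09, as quoted above
```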
EDIT: I updated the side TF with one with better SNR. I increased the excitation amplitude.
I updated the cts2um filter on the OSEMs for ETMY, decreasing its gain by a factor of 9 from 0.36 to 0.04.
I added a filter called "gain_offset" to all the coils except for side and added a gain of 0.48.
Together these should negate the added gain from the electronics replacement of the ETMY.
I have received the new screwdriver bits that will work with the two electric screwdrivers we have. I have distributed them in the 40m: some are in the electronics stations and some are in the toolbox in the lab. The new electric screwdriver (which looks like a drill but takes typical screwdriver bits) is in the room with the workshop, in the blue Makita box. For some reason, many of the old bits were rounded from incorrect use. I have thrown the unusable ones out.
I also requested some screw extractors in case we need them. The one we have now is really big and may not work on smaller screws.
Continuing with the previous alignment that we stopped on Friday, we re-set up my heavily cleaned iPhone on FaceTime. The phone allowed us to see the laser on the ITMX and center it on that optic.
The point of changing the gains was to return the system to its original state, i.e. I wanted the overall gain of the physical components to be the same as when we started. From the CDS side of things nothing else should change: the damping filters should remain at their original values. The cts2um filter was changed to counteract the change in the electronics (replacing them), so these changes should cancel each other out. As for the side control, on 3/4/22 Koji reduced the output resistors for the 4 face OSEMs but did not change the SD one; therefore the SD did not need the same adjustment as the others.
After Ian updated the cts2um filters for the OSEMs, shouldn't the damping gains be increased back by a factor of 10 to their previous values? Was the damping gain for SIDE ever changed? We found it at 250.
Can you explain why the gain_offset filter was required, and why this wasn't done for the side coil?
We continued trying to lock the IR laser to the X arm. Last time we adjusted TT2, PR2, and PR3 to make the beam hit the center of the ITMX. Today we continued adjusting the same optics to make the beam hit the center of the beam splitter and make sure the beam is going in the direction of the ITMY. Then we removed a mirror that wasn't even bolted down; I put it on the front right corner of the hood with all the other optics (right in front of a suspended mirror). With this unknown mirror removed, the path was clear and hit the center of ITMX. From there we opened the ETMX chamber and, using the IR scope, adjusted ITMX to put the beam on the center of the ETMX. We continued adjusting ETMX and ITMX until we had three reflections. We stopped here because we realized we should set up POX first, since any adjustment made to set up POX would undo the work we had done on the X-arm alignment.
I mounted the particle counter over the BS chamber, attached to the cable tray, as seen in Attachment 1. The signal cable runs through an active 30 ft cable to the 1x2 rack; the wire is labeled and runs properly through the cable tray. The particle counter is plugged in at the power strip attached near the cable tray. The power cord is also labeled.
I restarted the particle counter service on the c1psl computer (in the /etc/systemd/system/ folder) using the commands
sudo systemctl restart particleCounter
sudo systemctl status particleCounter
I changed the USB device assigned in the service file to ttyUSB0, which is what we saw the computer had named it.
Checking the channels from this elog shows the same particle count as when testing with the buttons and checking the screen. It seems that the channels had been down but are now restarted.
[Ian, Paco, Tega]
Paco and I opened the ETMY and ITMY chambers to continue yesterday's effort to lock the Y arm. We temporarily installed a camera behind the ETMY to look at the transmission as we adjusted the ETMY and ITMY. We then moved on to setting up the POY. The beam was too large for the aperture of the PD, so we installed a lens in the beam path to shrink it.
Once that was installed we saw some flashing on the C1:LSC-POYCD_OUT channel. We also could see the flashing on the monitors in the control room. The flashing beam seemed to be in the middle of ITMY but was slightly to the right on the ETMY. From here we tried to walk the beam using PR3 and ETMY to move the beam to the center of the ETMY.
Tested the Nikon batteries for the camera. They are supposed to be 7 V batteries, but they don't hold a charge. I confirmed this with a multimeter after charging them for days. Ordered new ones (Nikon EN-EL9).
[Ian, Paco, JC]
There is a strange smell in the 40m. It smells like a chemical burning smell, maybe like a shorted component. I went around with the IR camera to see if anything was unusually hot, but I didn't see anything. The smell seems to be concentrated at the vertex and down the Y arm.
I want to take measurements of seismic noise at different places on Caltech's campus and in different buildings. I will try to use the accelerometer in my phone for this but first I must calibrate it (Against the 40m accelerometers).
I placed my iPhone 11 Pro next to the seismometers at the 40m MC, as seen in Attachment 1.
The calibration for the instruments was done using cts/rtHz * 1 V/16384 cts * 1/ampgain * 1 g/10 V * 10 m/s^2/g. The amp gain for all channels was 100.
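Folding the calibration chain into a single counts-to-m/s² factor (numbers from the text; the 10 m/s² per g is the rounded value used above):

```python
# counts -> volts -> undo amp gain -> g -> m/s^2
adc_v_per_ct = 1.0 / 16384    # ADC: 16384 counts per volt
ampgain = 100.0               # amplifier gain (same for all channels)
g_per_v = 1.0 / 10            # accelerometer scale: 10 V per g
ms2_per_g = 10.0              # g in m/s^2, rounded as in the text

cts_to_ms2 = adc_v_per_ct * (1.0 / ampgain) * g_per_v * ms2_per_g
# an ASD in cts/rtHz times cts_to_ms2 gives m/s^2/rtHz (~6.1e-7 per count)
```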
Next, I took 100 seconds of data on both the iPhone and the three orthogonal Wilcoxon accelerometers.
The ASD for both of the total acceleration is shown in Attachment 2
The ASD for the individual directions acceleration is shown in Attachment 3
The coherence between the individual directions of acceleration and the 40m's individual directions is shown in Attachment 4. For this calculation, the 40m data were downsampled to roughly match the phone's sample rate. This coherence is not very good; it should be higher, because the phone and the 40m sensors were placed together and should be picking up the same motion. Because of this I also looked at the coherence between the individual 40m sensors.
In Attachment 5 I look at the coherence between the individual 40m sensors. This should give me a good idea of whether some other issue is giving me low coherence. The plot shows that the coherence between the individual 40m sensors is much better than between the phone and the 40m sensors.
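The coherence calculation can be sketched as follows; the sample rates and the fake data here are assumptions for illustration only (the real analysis used the recorded phone and 40m channels):

```python
import numpy as np
from scipy import signal

fs_40m, fs_phone = 256, 128        # assumed rates with an integer ratio
rng = np.random.default_rng(0)
common = rng.standard_normal(100 * fs_40m)   # shared (fake) ground motion

x_40m = common + 0.1 * rng.standard_normal(common.size)
x_40m_ds = signal.decimate(x_40m, fs_40m // fs_phone)   # down to phone rate
x_phone = common[::fs_40m // fs_phone] \
          + 0.5 * rng.standard_normal(x_40m_ds.size)    # noisier phone channel

f, coh = signal.coherence(x_40m_ds, x_phone, fs=fs_phone, nperseg=1024)
```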
Now I wanted to see what kind of data the iPhone could get from real-world tests. I placed it in a number of locations described below and plotted their ASDs in Attachment 6. The locations are thus:
Notice how at the low end the amplitudes follow the relative levels I would expect: QIL and WBSH are the quietest, then WB1H is noisier, and the 40m desk is the noisiest. However, this only holds up to about 0.5 Hz; above that they all overlap. Since I would expect the 40m desk to be much noisier at all frequencies, I suspect the phone accelerometer is not suitable for measurements above 0.5 Hz.
One possible problem with my measurement is that my phone was in a leather case, which may have damped out higher frequencies. Also, my phone was not weighed down or bolted to the floor; a stronger connection would make it better at detecting higher frequencies. I could repeat the experiment with no case and a weight on top of the phone.
Since I don't think the phone can give me accurate data above 0.5 Hz in quiet environments, it may not be suitable for this task. The right instrument seems to be the Wilcoxon 731A, but it requires an amplifier that I can't track down.
I included all the data and code in the zip file in attachment 7
We built a power supply for the accelerometer shown in Attachment 1 based on the diagram shown in the Wilcoxon manual and shown in attachment 2. We used a 9V power supply and a capacitor value of 680uF. We did not use a constant current diode.
When hooked up to an oscilloscope, we saw vibrations from hitting our hands on the table, but we did not see the same amplitude in the negative and positive directions. For example, when I held the accelerometer and moved it down, you would see a dip then a peak, as the accelerometer accelerated down then accelerated up when I stopped the downward movement. But, weirdly, when I did the opposite (moved the accelerometer up), the same dip-then-peak appeared. This is concerning because it should be the opposite: a peak then a dip. This, in addition to the apparently decreased sensitivity in one direction, makes me think the accelerometer is broken.
I labeled the box with "might be broken" before I returned it to the cryo lab.
I have designed new cable supports for the new ribbon cables running up the side of the tables in the vacuum chambers.
The clamps that I have designed (shown in basic sketch attachment 1) will secure the cable at each of the currently used cable supports.
The support consists of a backplate and a frontplate. The backplate is secured to the leg of the table using a threaded screw. The frontplate clamps the cable to the backplate using two screws, one on either side. Between two fastening points, the cable should have some slack. This should keep the cable from being stiff and help reduce the transfer of seismic noise to the table.
It is possible to stack multiple cables in one of these fasteners: either put two cables together and clamp them down with one faceplate, or stack multiple faceplates with one cable between each. In that case the stack would go backplate, cable, faceplate, cable, second faceplate. This configuration would require longer screws.
I have not yet measured the exact screw and plate sizes to use, but I will.
I made a first pass at a tool to measure the quantization noise of specific filters in the 40m system; the code can be found here. It takes the input to the filter bank and the filter coefficients for all of the filters in the filter bank, then runs the input through all the filters and measures the quantization noise in each case. It does this by subtracting the 64-bit output from the 32-bit output. Note: the actual system is 64-bit, so I need to update it to subtract the 64-bit output from a 128-bit output using the long double format. This means it must be run on a computer that supports the long double format, which I checked, and Rossa does. The code outputs a number of plots that look like the one in Attachment 1. Koji suggested automatically generating a page for each of the filters showing the filter and the results, as well as an SNR for the noise source. The code is formatted as a class so that it can be easily added to the IFOtest repo when it is ready.
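A minimal sketch of the measurement idea: run the same biquad entirely at two precisions and subtract. The biquad here is a hand-rolled Direct Form II with a stand-in scipy filter, not an actual 40m filter bank; np.longdouble gives 80-bit extended precision on machines (like Rossa) that support it.

```python
import numpy as np
from scipy import signal

def df2_biquad(b, a, x, dtype):
    """Direct Form II biquad, run entirely at the requested float dtype."""
    b = np.asarray(b, dtype); a = np.asarray(a, dtype)
    x = np.asarray(x, dtype)
    y = np.empty_like(x)
    w1 = w2 = dtype(0.0)
    for n in range(x.size):
        w0 = x[n] - a[1]*w1 - a[2]*w2   # update the internal state
        y[n] = b[0]*w0 + b[1]*w1 + b[2]*w2
        w1, w2 = w0, w1
    return y

fs = 2048
b, a = signal.iirpeak(60.0, Q=30, fs=fs)        # stand-in filter
x = np.sin(2*np.pi*17.0*np.arange(4 * fs)/fs)

y_lo = df2_biquad(b, a, x, np.float64)          # "system" precision
y_hi = df2_biquad(b, a, x, np.longdouble)       # higher-precision reference
qnoise = np.asarray(y_hi - y_lo, dtype=np.float64)   # 64-bit quantization noise
```

On platforms where long double is the same as double, qnoise comes out identically zero, which is exactly why the tool needs a machine like Rossa.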
I tracked down a filter realization that I thought might have lower quantization noise than the one currently used. The specifics will be in version 2 of the DCC document that I am updating, but a diagram is in Attachment 2. Preliminary calculations seemed to show that it had lower quantization noise than the current filter realization. I added this realization to the C code and ran a simple comparison between all of them. The results in Attachment 3 are not as good as I had hoped. The input was a two-tone sine wave; the low-level broadband signal between 10 Hz and 4 kHz is the quantization noise. The blue trace shows the current filter realization, and another trace shows the generic, most basic Direct Form II. The orange one is the new filter, which I personally call the Aircraft Biquad because I found it in this paper by the Hughes Aircraft Company (see Fig. 2 in the paper). They call it the "modified canonic form realization," but about 20 filters in the paper share that name; in the DCC doc I have just given them numbers because it is easier.
1) I need to revise the qnoisetool code to make it compute the correct 64-bit noise.
a) I also want to add the new filter to the simulation to see how it does
2) Make the output into a summary page the way Koji suggested.
3) Complete the updated DCC document. I need to reconcile the differences between the calculation I made and the actual result of the simulation.
P. P. Vaidyanathan wrote a chapter in the book "Handbook of Digital Signal Processing: Engineering Applications" called "Low-Noise and Low-Sensitivity Digital Filters" (Chapter 5, pg. 359). I took a quick look at it and wanted to give some thoughts in case they are useful. The experts in the field would be Leland B. Jackson, P. P. Vaidyanathan, Bernard Widrow, and István Kollár. Widrow and Kollár wrote the book "Quantization Noise: Roundoff Error in Digital Computation, Signal Processing, Control, and Communications" (a copy of which is at the 40m). It is good that P. P. Vaidyanathan is at Caltech.
Vaidyanathan's chapter serves as a good introduction to the topic of quantization noise. He starts with the basic theory, similar to my own document on the topic. From there, there are two main topics relevant to our goals.
The first interesting thing is the use of Error-Spectrum Shaping (pg. 387). I have never investigated this idea, but the general gist is that as poles and zeros move closer to the unit circle, the SNR deteriorates, and this is a way of implementing error feedback that should alleviate the problem. See Fig. 5.20 for a full realization of a second-order section with error feedback.
The second starts on page 402 and is an overview of state-space filters, giving an example of a state-space realization (Fig. 5.26). I tested this exact realization a while ago and found that it was better than the Direct Form II filter but not as good as the current low-noise implementation that LIGO uses. This realization is very close to the current one, except it uses one less addition block.
Overall I think it is a useful chapter. I like the idea of using some sort of error correction and I'm sure his other work will talk more about this stuff. It would be useful to look into.
One thought that I had recently is that if the quantization noise is uncorrelated between two different realizations, then running them in parallel and averaging their results (as shown in Attachment 1) may actually yield lower quantization noise. It would require double the computation power for filtering, but it may work. For example, using the current LIGO realization and the realization given in this book might yield lower quantization noise. This would only work with two similarly low-noise realizations: since we would effectively be averaging two samples from uniform distributions, the variance would be cut in half, and the ASD would show a 1/√2 reduction if the two realizations have the same level of quantization noise. The averaging is only beneficial if the noisier realization has less than about 1.7 times the noise of the quieter one. I included a simple simulation of this in the zip file in Attachment 2 for my own reference.
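A quick toy check of that threshold, with uniform noises standing in for the quantization noise of the two realizations: averaging helps whenever (s1² + s2²)/4 < s1², i.e. s2 < √3·s1 ≈ 1.7·s1.

```python
import numpy as np

rng = np.random.default_rng(1)
s1 = 1.0
results = []
for s2 in (1.0, 1.7, 2.0):
    # uniform noise scaled to standard deviations s1 and s2
    n1 = rng.uniform(-1, 1, 200_000) * s1 * np.sqrt(3)
    n2 = rng.uniform(-1, 1, 200_000) * s2 * np.sqrt(3)
    helped = np.var((n1 + n2) / 2) < np.var(n1)   # did averaging reduce noise?
    results.append(bool(helped))
```

The s2 = 1.7 case sits just inside the threshold, and s2 = 2.0 just outside, matching the 1.7x rule of thumb above.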
Another thought I had is that the transpose of this low-noise state-space filter (Fig. 5.26), or even of LIGO's current filter realization, would yield even lower quantization noise, because both of their transposes require one less calculation.
Today, I installed the Wilcoxon accelerometers on the table near the end of the mode cleaner. I only set up three of them instead of all six. They were set up just as Rana suggested they should be, i.e. with the cables tightened down and a box on top to prevent airflow from introducing disturbances. We are planning to run the huddle test on these once the upgrade to the interferometer is done.
The cables were tightly clamped to the table as shown below; I used a thick piece of shock-absorbing rubber to do this.
A small piece of thin rubber was used to hold each of the accelerometers tightly to the table in order not to damage them.
We had to borrow Megan's and Kate's piece of black foam in order to seal one of the sides properly, as the cable had to come out through somewhere. We didn't want to mess with drilling any holes into the box!
There was a small crack even after using the foam. I sealed it up with duct tape.
The box isn't perfect, so there were multiple cracks along its bottom and top that could potentially allow air to flow inside. Eric suggested that we should be super careful this time and do it right, so every crack was sealed up with duct tape.
Finally, we needed something heavy to be placed on top of the box to hold everything well. We used Rana's baby to accomplish this goal.
Just kidding! Rana's baby is too delicate for our purposes. A layman box of heavy cables was used instead.
Update: On Thursday/Friday (sorry for the late elog) I was working with Eric's Wilcoxon 731A accelerometer huddle-test data, which was taken without the box and without the cables being adjusted properly. I performed the three-cornered-hat analysis as he had done, but I also performed a six-cornered-hat analysis, instead of permuting around in groups of three accelerometers. The following plots of the ASDs show the results,
It is interesting to note the improvement at low frequencies when six accelerometers are used instead of three, while at higher frequencies we can clearly see that the results are worse than the three-hat results.
I decided to take a mean of all six accelerometers' measured ground signals, as well as of their computed self-noises; this is plotted below,
Notice the obvious improvement along the entire frequency band of the measurements when all accelerometers are used in the data analysis.
I also performed some Wiener filtering on this data. There was an obvious improvement in the results,
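For reference, a minimal frequency-domain Wiener subtraction sketch on fake data (the real analysis used the accelerometer signals; the filter W(f) = P_tw/P_ww is the standard optimal estimate from witness w to target t):

```python
import numpy as np
from scipy import signal

fs = 256
rng = np.random.default_rng(2)
ground = rng.standard_normal(60 * fs)                       # fake ground motion
witness = ground + 0.1 * rng.standard_normal(ground.size)   # witness sensor
target = ground + 0.5 * rng.standard_normal(ground.size)    # channel to clean

f, Ptw = signal.csd(target, witness, fs=fs, nperseg=1024)
_, Pww = signal.welch(witness, fs=fs, nperseg=1024)
W = (Ptw / Pww).real          # Wiener filter; real here since there is no delay

# apply it in the frequency domain (crude: ignores windowing subtleties)
Wt = np.interp(np.fft.rfftfreq(target.size, 1/fs), f, W)
resid = target - np.fft.irfft(Wt * np.fft.rfft(witness), n=target.size)
```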
The mean of the signals is also plotted below, just as I did with the cornered hat methods,
I also compared the mean self-noise of the accelerometers against the manufacturer's stated self-noise that Rana put up on GitHub. Both methods are compared against what the manufacturer claims,
As expected, the measured noise curves of the Wilcoxon are not as good as what the manufacturer states. This should improve once we redo the huddle test with a better experimental setup. We have some pretty interesting results with the six-cornered-hat method at around 5 Hz; it is surprisingly accurate given how rough the calculations seemed to be.
I have attached my code for reference: code_accel.zip
See the attachments for better plots of the six accelerometers.
Over the past few days, I've been thinking about how to work out the details concerning Rana's request for a 'map' of the vicinity of the 40m interferometer. This map will take the positions of N randomly placed seismic sensors, the signals measured by each of them, and the calculated cross-correlations between the sensors and between the sensors and the test mass of interest, and will give out a displacement vector with new sensor positions that are closer to optimal for seismic (and Newtonian) noise cancellation.
Now, I believe that much of the mathematical detail has already been worked out by Jenne in her thesis. She explains that the quantity we wish to minimize in order to find an optimal array is the following,
where c is the cross-correlation vector between the seismic sensors and the seismic (or Newtonian) noise, C is the cross-correlation matrix between the sensors, and σ² is the seismic (or Newtonian) noise variance.
I looked at the paper that Jenne cited, from which she obtained the above quantity, and noted that it is a bit different, as it contains an extra term inside the square root, given by
where the new term, N, is a matrix describing the self-noise of the sensors. I think Jenne set this term to zero, since we can always perform a huddle test on our detectors and learn the self-noise, thus effectively subtracting it from the signals of interest that we use to calculate the other cross-correlation quantities.
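My reading of the two quantities, written out with my own symbol labels (the exact normalization should be checked against Jenne's thesis and the cited paper), is roughly:

```latex
% Residual after optimal subtraction, sensors at positions \vec{x}:
%   \vec{c}    cross-correlation vector, sensors <-> seismic/Newtonian noise
%   C          cross-correlation matrix between the sensors
%   \sigma^2   seismic (or Newtonian) noise variance
%   N          sensor self-noise matrix (the extra term in the cited paper)
R(\vec{x}) = \sqrt{1 - \frac{\vec{c}^{\,T}\, C^{-1}\, \vec{c}}{\sigma^{2}}}
\qquad\longrightarrow\qquad
R(\vec{x}) = \sqrt{1 - \frac{\vec{c}^{\,T}\, (C + N)^{-1}\, \vec{c}}{\sigma^{2}}}
```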
Anyways, the quantity above is a function of the positions of the sensors. In order to apply it to our situation, I'm planning on:
1) Performing the huddle tests on our sensors, redoing it for the accelerometers and then the seismometers (once the data acquisition system is working...)
2) Randomly (well not randomly, there are some assumptions we can make as to what might work best in terms of sensor placement) place the sensors around the interferometer. I'm planning on using all six Wilcoxon 731A accelerometers, the two Guralps and the STS seismometer (any more?).
3) Measure the ground signals and use Wiener filtering in order to cancel out their self noises.
4) From the measured signals and their present positions we should be able to figure out where to move the sensors in order to optimize subtraction.
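As a toy version of step 4, one could hand the residual to a generic optimizer over sensor positions. The exponential correlation model, correlation length, and sensor count below are pure assumptions for illustration; the real field model would come from the measured cross-correlations:

```python
import numpy as np
from scipy.optimize import minimize

# Toy sensor-placement optimization. The exponential correlation model and
# the correlation length L are assumptions -- not the actual 40m field.
L = 10.0                       # hypothetical correlation length [m]
test_mass = np.zeros(2)        # test mass at the origin
rng = np.random.default_rng(0)

def corr(d):
    return np.exp(-d / L)      # toy isotropic correlation vs. distance

def residual(flat_pos, n_sensors=4, self_noise=0.01):
    """Fraction of noise amplitude left after optimal subtraction."""
    pos = flat_pos.reshape(n_sensors, 2)
    C_sn = corr(np.linalg.norm(pos - test_mass, axis=1))
    d_ij = np.linalg.norm(pos[:, None] - pos[None, :], axis=2)
    C_ss = corr(d_ij) + self_noise * np.eye(n_sensors)
    x = np.linalg.solve(C_ss, C_sn)
    return np.sqrt(max(1.0 - C_sn @ x, 0.0))

p0 = rng.uniform(-20, 20, size=8)              # 4 random 2-D positions
res = minimize(residual, p0, method='Nelder-Mead',
               options={'maxiter': 2000, 'xatol': 1e-3})
print('initial residual:  ', residual(p0))
print('optimized residual:', res.fun)
```

The self-noise term doubles as a regularizer here: without it, sensors piling onto the same spot would make C_ss singular.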
I have also been messing around with Jenne's code on seismic field simulations, with the hope of simulating a version of the seismic field around the 40m in order to understand the NN of the site a little better... maybe. While the data acquisition gets back to a working state, I'm planning on using my simulated NN curve as a way to play around with sensor optimization before it's done experimentally.
I have also been thinking and learning a bit about source characterization through machine learning methods, especially neural networks, as Masha did back in her 2012 SURF project. I have also been looking at support vector machines. The reason I have been looking at machine learning algorithms is the ever-changing nature of the seismic field around the interferometer. Suppose we find a pretty good sensor array that we like. How do we make sure that this array is still any good at some time t after it has been found? If the array mostly deals with the usual (quiet) seismic background of the site of interest, we could incorporate machine learning techniques in order to mitigate the more random disturbances that happen around the sites, like delivery trucks, earthquakes, etc.
On Thursday, new huddle test data for the Wilcoxon 731A was acquired by Eric.
The differences between this new data and the previous data are:
1) We used three accelerometers instead of six this time around.
2) We used a foam box, and clamped cables on the experimental set up as shown in the previous elog, http://nodus.ligo.caltech.edu:8080/40m/11389
I have analyzed the new data. Here I present my results.
The following plot shows the ASDs of the three accelerometers' raw outputs, as well as their error signals computed using the three-cornered-hat method,
As before, I computed the mean of the output signals of the accelerometers above, as well as their mean self noise, to get the following plot,
Now, below I compare the new results with the results that I got from the old data,
Did the enclosure and cable clamping do much? Not really, according to the computed three-hat results. Also, notice the improvement, small as it is, that we get from using six accelerometers and calculating their self noise by the six-cornered-hat method.
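For reference, a Python sketch of the three-cornered-hat estimate (the actual analysis is in Matlab; the signals here are synthetic, with invented noise levels):

```python
import numpy as np
from scipy.signal import welch

# Three-cornered-hat sketch: each instrument's self-noise PSD follows from
# the PSDs of the pairwise difference signals, since the common (ground)
# signal cancels in each difference:  N_a = (P_ab + P_ac - P_bc) / 2.
# a, b, c are synthetic co-located measurements; all numbers are made up.
fs = 256.0
rng = np.random.default_rng(1)
n = 2**16
common = rng.standard_normal(n)                # shared ground motion
a = common + 0.10 * rng.standard_normal(n)
b = common + 0.20 * rng.standard_normal(n)
c = common + 0.15 * rng.standard_normal(n)

def psd(x):
    return welch(x, fs=fs, nperseg=4096)[1]

P_ab, P_ac, P_bc = psd(a - b), psd(a - c), psd(b - c)
N_a = 0.5 * (P_ab + P_ac - P_bc)               # self-noise PSD of a
print('estimated noise PSD of a:', N_a.mean())
print('expected white level:   ', 2 * 0.10**2 / fs)
```

The estimate recovers the injected white noise level of instrument a without ever seeing the ground signal on its own, which is the whole point of the method.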
Now, I moved on to analyzing the same data with Wiener Filtering.
Here, again, are the raw outputs and the self noises of each individual accelerometer, calculated using Wiener filtering,
The accelerometer in the Y direction is showing a kind of funky signal at low frequencies. Why? Anyways, I calculated the mean of the above signals, as I did for the three-cornered-hat method, to get the following; I also show the means of the signals computed from the old data using Wiener filtering,
Is the enclosure really doing much? The Wiener filter that I applied to the old huddle test data gave me a self noise curve that is better by an order of magnitude. Keep in mind that this was using SIX accelerometers, not THREE as we did this time. I want to REDO the huddle test for the Wilcoxon accelerometers using SIX accelerometers with the improved experimental setup to see what I get.
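A minimal Python sketch of the single-witness FIR Wiener filter calculation (the real analysis is MISO and done in Matlab; the 2-tap coupling and noise level below are invented):

```python
import numpy as np
from scipy.linalg import solve_toeplitz

# Single-witness FIR Wiener filter via the Wiener-Hopf normal equations
# R w = p, with R the witness autocorrelation (Toeplitz) and p the
# target/witness cross-correlation. Signals are synthetic.
rng = np.random.default_rng(2)
n, ntaps = 2**16, 64
witness = rng.standard_normal(n)
coupling = np.array([0.5, -0.3])                  # toy ground-to-target path
target = np.convolve(witness, coupling)[:n]
target += 0.01 * rng.standard_normal(n)           # target sensor noise

# Correlations at lags 0..ntaps-1
r = np.array([witness[m:] @ witness[:n - m] for m in range(ntaps)]) / n
p = np.array([target[m:] @ witness[:n - m] for m in range(ntaps)]) / n
w = solve_toeplitz(r, p)                          # FIR Wiener coefficients

prediction = np.convolve(witness, w)[:n]
residual = target - prediction
print('residual/target RMS:', residual.std() / target.std())
```

With the toy coupling above, the filter recovers the two coupling taps and the residual drops to the injected sensor-noise level.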
Finally, I compare the computed self noises above with what the manufacturer gives,
As I expected, the self noise using six accelerometers and Wiener filtering is the best I could work out. The three-cornered-hat method works out pretty well from 1 to 10 Hz, but the noise is just too much anywhere above 10 Hz. The enclosed, clamped, three-accelerometer Wiener filter result is an order of magnitude worse than the six-accelerometer Wiener-filtered result, and two orders of magnitude worse than the three-cornered-hat method in the 1 to 10 Hz band.
As I stated, I think we must perform the huddle test with SIX accelerometers and see what kind of results we get.
We took data for the mode cleaner a while ago, June 30th I believe. This data contained signals from the six accelerometers and the three seismometers. Here I have focused only on the seismometer signals as witnesses, in order to construct Wiener filters for each of the three seismometer signals (x, y, z) and for the combined seismometer signal. The following ASD plot shows the results,
Wiener filtering works beautifully for the seismometers. Note that subtraction is best when we use all three seismometers as the witnesses in the Wiener filter calculation, as can be clearly seen in the first plot above.
Now, I used vectfit to convert the Wiener FIR filters for each seismometer to their IIR versions. The following are the bode plots for the IIR filters,
For the x-direction seismometer,
For the y-direction seismometer
And for the z-direction seismometer,
The IIR filters were computed using 5 zeros and 5 poles in vectfit. That was the maximum number of poles that I could use without running into trouble with matrices being almost singular in Matlab. I still need to figure out how to deal with this issue in more detail, as fitting the y-seismometer was a bit problematic. I think having a greater number of poles will make the fitting a bit easier.
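For reference, here is a much-simplified stand-in for vectfit in Python: a single linearized least-squares (Levy-style) fit of a rational transfer function to frequency-response samples. Real vectfit iterates with pole relocation, which is exactly what helps with the near-singular matrices; the system below is made up:

```python
import numpy as np
from scipy.signal import freqz

# One linearized least-squares pass fitting b(z^-1)/a(z^-1) to samples of H.
# The "true" system here is a made-up stable 2-pole filter.
b_true = [0.2, 0.1]
a_true = [1.0, -1.4, 0.65]
w, H = freqz(b_true, a_true, worN=512)

nb, na = 1, 2                                # numerator / denominator orders
zi = np.exp(-1j * w)                         # z^-1 on the unit circle
# One linear equation per frequency:
#   sum_k b_k z^-k  -  H(w) * sum_{k>=1} a_k z^-k  =  H(w)
cols = [zi**k for k in range(nb + 1)] + [-H * zi**k for k in range(1, na + 1)]
A = np.stack(cols, axis=1)
M = np.vstack([A.real, A.imag])              # solve in real arithmetic
y = np.concatenate([H.real, H.imag])
coef = np.linalg.lstsq(M, y, rcond=None)[0]
b_fit = coef[:nb + 1]
a_fit = np.concatenate([[1.0], coef[nb + 1:]])
print('b:', b_fit)
print('a:', a_fit)
```

On noiseless data of the correct order the fit is exact; on real FIR Wiener responses, overly high orders make the least-squares matrix ill-conditioned, which is the same symptom seen in Matlab.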
(Update) After Eric gave me feedback on my previous elog post, I went back and fixed some of the silly stuff I stated.
First of all, I have come to realize that it makes zero sense to plot the ASDs of the mode cleaner against the seismometer noise. These measurements are not only quite different but, more elementary, they possess different units. I have focused my attention on the MCL being Wiener filtered with the three seismometer signals.
The major improvements that I make in the following analysis are:
1) Prefiltering: a band-pass filter from 1 to 5 Hz, in order to emphasize subtraction of the bump shown in the figure below.
2) I have used vectfit exclusively in the 1 to ~5 Hz range, in order to model the FIR filter properly in the band where we care about subtraction. Limiting myself to the 1 - 5 Hz range has allowed me to play freely with the number of poles, hence being able to fit the FIR filter properly with an IIR rational transfer function,
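A sketch of the 1 - 5 Hz prefiltering step in Python (fs matches the analysis at 256 Hz; the filter order and the white test signal are my own choices, not the actual analysis):

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt, welch

# Band-pass prefilter: restrict the data to ~1-5 Hz before computing the
# Wiener filter so the fit concentrates on the band we want to subtract.
fs = 256.0
sos = butter(4, [1.0, 5.0], btype='bandpass', fs=fs, output='sos')

x = np.random.default_rng(3).standard_normal(int(60 * fs))
y = sosfiltfilt(sos, x)          # zero phase: no extra delay in training data

f, P = welch(y, fs=fs, nperseg=4096)
in_band = P[(f > 1.5) & (f < 4.0)].mean()
out_band = P[(f > 20) & (f < 100)].mean()
print('in-band / out-of-band power ratio:', in_band / out_band)
```

Zero-phase filtering (filtfilt-style) matters here: a causal prefilter would add delay to the training data and bias the Wiener fit.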
The resulting ASDs are shown below: in blue the raw MCL output, in black the Wiener filter (FIR) result, and finally the data filtered with the calculated IIR Wiener filter.
Now, in the following plots I show the IIR Wiener filters for each of the three seismometers,
For the Y seismometer,
and for the Z seismometer,
The matlab code for this work is attached: code.zip
I generated the following plots from the two sets of huddle test data we have for the accelerometers.
Old data: 6 accelerometers, no cables clamped, no box, 55 mins worth of data.
New data: 3 accelerometers, cables clamped, foam box put in place and completely sealed, 20 mins worth of data.
I made sure to use the same impulse response time (6 sec) and sampling frequency (256 Hz), as well as every other parameter, for the calculations.
Top left: the resultant self noise curve using the new data; there is definitely an improvement in the 0.5-2 Hz band.
Top right: Resultant self noise using the old data, for the first set of three accelerometers.
Bottom left: Old data result for the remaining three accelerometers.
Bottom right: Old data result, using all six accelerometers as witnesses instead.
In the last post concerning the self noise of the accelerometers, I mentioned the differences between the two data sets I was playing with. In order to give a more concrete analysis of the accelerometers' self noise, we came to the conclusion that the data-taking time should be the same.
The plots below show the analysis for the following two datasets:
Old Data: 6 accelerometers, no cables clamped, no box, 55 mins worth of data.
Newer data: 3 accelerometers, cables clamped, foam box put in place and completely sealed, 57.66 mins worth of data (we had 20 mins of data in the previous data set).
Filter parameters were kept the same in all calculations; the only change added to the analysis was detrending the signals with the detrend function in Matlab, which improved the results, as the plots show. I also plotted the error bars for the Wiener-filtered result for reference, as well as the manufacturer's claimed self noise.
After detrending the data and taking a longer dataset, we can see the improvement brought by the foam box in the low-frequency 0.5 - 10 Hz band, as shown in the bottom left plot. There is a lot of noise that needs to be cancelled out from 10 Hz on, which brings our attention to the plot in the bottom right corner. That plot uses the old data but with all six accelerometers as witnesses; it also improved overall after detrending, but what is peculiar about it is that it manages to mitigate the noise in the higher-frequency 10 - ~100 Hz band.
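A quick synthetic illustration of why detrending helps (the drift rate and noise level below are made up; the real analysis uses Matlab's detrend):

```python
import numpy as np
from scipy.signal import detrend, welch

# A slow linear drift in the time series leaks broadband power into the
# lowest-frequency PSD bins; removing the trend first cleans them up.
fs = 256.0
t = np.arange(int(600 * fs)) / fs
x = 0.05 * t + np.random.default_rng(4).standard_normal(t.size)

f, P_raw = welch(x, fs=fs, nperseg=8192)
_, P_det = welch(detrend(x), fs=fs, nperseg=8192)
print('lowest nonzero bin, raw vs detrended:', P_raw[1], P_det[1])
```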
I have moved the MC1 accelerometers and cable to the huddle test setup, in order to see how a six-witness huddle test with the improved setup will do.
Here is a picture of the accelerometer set up,
Our motivation for doing this is to see if more witness signals in the Wiener filter really do improve subtraction, as was seen in previous huddle results, especially in the region above 10 Hz.
I've been talking a little bit with Steve about the seismometer enclosures.
We want to improve on the current stainless steel cans that cover the two Guralps at the ends of the arms. In order to do this, we want to line the interior of the cans with copper foil to improve the thermal conductivity of the enclosure and better control the temperature inside it. Ideally, we would copper plate the cans, but cost and difficulty have been issues.
I have done some rough calculations and it seems that we need a copper layer of thickness being about a third that of the stainless steel can. This happens to be around 0.5-0.6 mm since we have 16 gauge (~1.6 mm) stainless steel cans.
After wrapping the cans' interior with copper, we will insulate them with foam in order to improve their thermal inertia. We probably want to use the same foam that Megan has been using for her seismometer enclosure. I have yet to think about a heater, but something similar to Megan's resistor setup would work, only smaller. It would be placed inside the can, right at the center of its bottom, in order to distribute heat evenly.
I downloaded new accelerometer huddle test data from last night and analyzed it. This new data set is ~55 mins and uses the same Wiener filter parameters as previous huddle test analysis, the main difference being six accelerometers used in the Wiener filter and the improved experimental set up.
After computing the ASD of the self noise for each of the six accelerometers (each witnessed by the remaining five), we get,
Now computing the mean of the above signals and the corresponding error bars gives the following result,
Comparing to previous huddle tests, I note two trends in the Wiener subtraction:
1) When using six accelerometers, the subtraction above ~8 Hz drastically improves.
2) When using three accelerometers, there is better cancellation in the 1-5 Hz region, see http://nodus.ligo.caltech.edu:8080/40m/11442. This might have to do with how much closer the accelerometers are to each other?
Jessica and I took 45 mins (GPS times from 1122099200 to 1122101950) worth of data from the following channels:
C1:IOO-MC_L_DQ (mode cleaner)
C1:LSC-XARM_IN1_DQ (X arm length)
C1:LSC-YARM_IN1_DQ (Y arm length)
and for the STS, GUR1, and GUR2 seismometer signals.
The PSD for MCL and the arm length signals is shown below,
I looked at the coherence between the arm length and each of the three seismometers, plot overload incoming below,
For the coherence between STS and XARM and YARM,
Finally for GUR2,
A few remarks:
1) From the coherence plots, we can see that the arm length signals are most coherent with the seismometer signals from 0.5 - 50 Hz. This is most evident in the coherence with the STS, and I think subtraction will be most useful in this range. It agrees with what we see in the PSD of the arm length signals: the magnitude starts increasing from 1 Hz and reaches a maximum at about 30 Hz, which indicates where most of the noise is present.
2) Eric did not remember which of GUR1 and GUR2 corresponded to the ends of XARM and YARM. So, I went to the end of XARM and jumped for a couple of seconds to disturb whatever Guralp was in there; using dataviewer I determined it was GUR1. Anyways, my point is: why is GUR1 less coherent with both arms, and not just XARM? Since it is at the end of XARM, I was expecting GUR1 to be more coherent with XARM. Is it because, though they are different arms, the PSDs of both arms are roughly the same?
3) Similarly, GUR2 shows about the same levels of coherence for both arms, but it is more coherent. Is GUR2 noisier because of its location?
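The coherence computation itself can be sketched like this in Python (the coupling between the synthetic 'arm' and 'seismometer' channels is invented, purely to show the mechanics):

```python
import numpy as np
from scipy.signal import coherence

# A synthetic 'arm length' channel built from a synthetic 'seismometer'
# channel plus independent noise is coherent only where the seismic
# coupling dominates. Coupling and noise levels are made up.
fs = 256.0
rng = np.random.default_rng(5)
seis = rng.standard_normal(int(300 * fs))
arm = 0.8 * seis + 0.2 * rng.standard_normal(seis.size)

f, C = coherence(arm, seis, fs=fs, nperseg=4096)
print('median coherence:', np.median(C))
```

For the flat coupling above the expected coherence is 0.8^2 / (0.8^2 + 0.2^2) ≈ 0.94 at all frequencies; in the real data the coupling (and hence the coherence) is strongly frequency dependent.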
We are done taking accelerometer huddle test data, so I moved all six accelerometers and cables back to MC1 and MC2. I also relabeled each of the accelerometers properly, since the labels on them were confusing.
Eric downloaded MC2 to MCL transfer function data (H) as well as its inverse, MCL to MC2 (Hinv). He also downloaded new MCL and MC2 data.
I used vectfit to fit the MC2 to MCL transfer function,
The ZPK parameters for this fit were,
Zeros:
    1278.36719876674
    -100.753249679343
    -18.6014192997845 + 13.0294910760217i
    -18.6014192997845 - 13.0294910760217i
Poles:
    -1.11035771175328 + 7.03549674098987i
    -1.11035771175328 - 7.03549674098987i
    -18.8655320274072
    -690.294337433234
Using the above vectfit model, I filtered the raw MC2 signal to get 'MCL'. The PSD's of the raw MCL data and the filtered MC2 result is shown below,
The lack of accuracy of the transfer function at replicating MCL at frequencies lower than 0.7 Hz is expected; the vectfit model I generated fails to follow the raw transfer function data accurately there. My question: does it matter? My guess: probably not. In order to mitigate seismic noise in the mode cleaner, we are mainly concerned with the 1-3 Hz region.
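For reference, applying a ZPK model like the one above to a time series might look like this in Python. I'm assuming the zeros/poles are in rad/s; the gain is a placeholder (it isn't listed above) and the MC2 data here is synthetic:

```python
import numpy as np
from scipy.signal import zpk2tf, bilinear, lfilter

fs = 256.0
zeros = [1278.36719876674, -100.753249679343,
         -18.6014192997845 + 13.0294910760217j,
         -18.6014192997845 - 13.0294910760217j]
poles = [-1.11035771175328 + 7.03549674098987j,
         -1.11035771175328 - 7.03549674098987j,
         -18.8655320274072,
         -690.294337433234]
k = 1.0                          # placeholder gain (not listed in the elog)

# Continuous-time ZPK -> polynomial TF -> discrete TF via bilinear transform
b_s, a_s = zpk2tf(zeros, poles, k)
b_z, a_z = bilinear(b_s, a_s, fs=fs)

# Filter (synthetic) MC2 data to predict 'MCL'
mc2 = np.random.default_rng(6).standard_normal(int(60 * fs))
mcl_pred = lfilter(b_z, a_z, mc2)
print('predicted MCL rms:', mcl_pred.std())
```

Since all the poles are in the left half plane, the bilinear transform yields a stable discrete filter.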
I also used vectfit to fit the transfer function for MCL to MC2,
This one was harder to fit accurately for some reason; I could do it with four pairs of zeros and poles, but it took some preweighting.
The ZPK parameters for the above fit were,
Zeros:
    0.173068278283995
    0.995140531040529 + 0.0268079821980457i
    0.995140531040529 - 0.0268079821980457i
    0.894476816129099
Poles:
    -19.9566906920707 + 18.0649464375308i
    -19.9566906920707 - 18.0649464375308i
    -109.275971483008
    -1791.88947801703
Similarly, using this ZPK model, I filtered the MCL signal to get 'MC2'. I plotted the PSD for the MC2 signal and the filtered MCL to get,
Again, the lack of accuracy of the filtered MCL at replicating MC2 below 0.7 Hz and above 12 Hz is due to the inverse transfer function fit failing to converge in these ranges.
Rana pointed out that another way to mitigate seismic motion in the mode cleaner would be to look at the YAW and PITCH output channels of the WFS sensors that control the angular alignment of the mode cleaner.
I downloaded 45 mins of data from the following two channels:
And did some quick offline Wiener filtering with no preweighting, the results are shown in the PSD's below,
I'm quite surprised at the Wiener subtraction obtained for the YAW signal: it required no preweighting, and there is about an order of magnitude improvement in our region of interest, 1-3 Hz. The PIT channel didn't do so badly either.
In order to do online static IIR Wiener filtering, one needs to do the following:
1) Get data for FIR Wiener filter witnesses and target.
2) Measure the transfer function (it needs to be highly coherent, ~0.95 at all points) from the actuator to the control signal of interest (i.e. from MC2_OUT to MC_L).
3) Invert the actuator transfer function.
4) Use vectfit (or LISO) to find ZPK models for the actuator transfer function and its inverse.
5) Prefilter your witness data with the actuator transfer function, to take the actuator-to-control transfer function into account when designing the offline Wiener FIR filter.
6) Calculate the frequency response for each witness from the FIR coefficients.
7) Vectfit the frequency responses to a ZPK model; this is the FIR-to-IIR Wiener conversion step.
8) Now either divide the IIR transfer function by the actuator transfer function or, preferably, multiply by the inverse transfer function.
9) Use Quack to make an SOS model of the product of the IIR Wiener filter and the inverse transfer function, which goes into foton for online implementation.
10) Load it into the system.
The block diagram below summarizes the steps above.
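As a concrete sketch of step 8: multiplying by the inverse transfer function is just a ZPK cascade. The filter and actuator models below are made up for illustration:

```python
import numpy as np
from scipy.signal import zpk2tf, freqs

# Cascade "wiener * actuator^-1": concatenate the Wiener filter's zeros with
# the actuator's poles, its poles with the actuator's zeros, and divide the
# gains. (Inverting the actuator like this is only safe if it is minimum
# phase.) All numbers are made up.
wiener = dict(z=[-5.0], p=[-1.0 + 2.0j, -1.0 - 2.0j], k=2.0)
actuator = dict(z=[-30.0], p=[-3.0], k=4.0)

z_tot = wiener['z'] + actuator['p']
p_tot = wiener['p'] + actuator['z']
k_tot = wiener['k'] / actuator['k']

b, a = zpk2tf(z_tot, p_tot, k_tot)
w = np.logspace(-1, 3, 200)                # rad/s
_, H = freqs(b, a, worN=w)
print('|H| at w = 0.1 rad/s:', abs(H[0]))
```

The combined ZPK list is what would then go through an SOS factorization for foton.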
I'm attaching a SISO IIR Wiener filter here for reference purposes; it will go online either tonight or tomorrow evening. This is a first test to convince myself that I can get this to work. MISO IIR filters are close to being ready and will soon be deployed.
This Wiener filter uses the STS-X channel as a witness and MCL as target. The bode plot for the filter is shown below,
The performance of the FIR and IIR Wiener filters, and the amount of subtraction achieved for MCL, is shown below,
Output from quack to be loaded with foton: filter.zip
Last night we finally got some online subtraction going. The filter used is described in eLOG 11488.
The results were as follows:
The filter worked as expected when subtracting noise out of MCL,
There is about a factor of 6 subtraction at the ~3Hz resonant peak. The static IIR filter predicted a factor of 6-7 subtraction of this peak as well.
The 1.2 Hz resonant feature improved by a factor of 3. This should improve quite drastically when I implement the y-channel of the T240 seismometer.
There is some high frequency noise being injected, not very noticeable, but present.
We then took a look at the power in the MC when the filter was on,
The power being transmitted in the cavity was not as stable as with the feedforward off. We believe that the filter is not at fault for this, as Eric mentioned to me that the MC2 actuator lacked some sort of compensation that I need to understand a bit better.
YARM was then locked with the filter on, and we took a look at how it was doing. There was a stationary sound arising from the locking of the YARM, leading us to believe that the filter might have injected some noise into the signal. IT DID.
The filter injected nasty high frequency noise at YARM from 11 Hz and on. This is to be expected since the filter did not roll off to zero at high frequencies. Implementing a 1/f rolloff should mitigate some of the injected noise.
Also, as one can see above, subtraction by around a factor of 2 or so, was induced by the mode cleaner feedforward subtraction.
In my previous elog:11492, I stated that in order to improve the subtraction and reduce the injection of high-frequency noise, we want the filter's magnitude to have a 1/f rolloff.
I implemented this scheme on the SISO filter previously analyzed. The results are shown below.
The filter's bode plot:
The nice 1/f rolloff is the main change here. Everything else remained pretty much the same.
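The idea behind the rolloff can be sketched as appending one extra real pole above the band of interest (the zero/pole locations below are illustrative, not the actual filter):

```python
import numpy as np
from scipy.signal import zpk2tf, freqs

# A filter with equal numbers of zeros and poles goes flat at high frequency
# and keeps injecting sensor noise there; one extra real pole above the band
# gives a 1/f magnitude rolloff instead. Locations are made up.
f_corner = 10.0                                  # Hz, above the 1-3 Hz band
z = [-2 * np.pi * 1.0]
p = [-2 * np.pi * 0.5]
k = 1.0

b0, a0 = zpk2tf(z, p, k)                         # flat at high frequency
b1, a1 = zpk2tf(z, p + [-2 * np.pi * f_corner],
                k * 2 * np.pi * f_corner)        # renormalized: in-band ~same

f = np.array([50.0, 100.0])
H0 = freqs(b0, a0, worN=2 * np.pi * f)[1]
H1 = freqs(b1, a1, worN=2 * np.pi * f)[1]
print('without extra pole, |H| at 50/100 Hz:', abs(H0[0]), abs(H0[1]))
print('with extra pole,    |H| at 50/100 Hz:', abs(H1[0]), abs(H1[1]))
```

Rescaling the gain by the new pole's corner frequency keeps the in-band response essentially unchanged while the high-frequency magnitude now halves per octave.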
The predicted FIR and IIR subtractions:
Everything looks right except for that hump at 8 Hz. I used 8 pairs of poles/zeros to get this subtraction.
The online MCL subtraction:
This looks better than I expected. One has to keep in mind that I ran this at 1 AM; I wonder how well this filter will do during the noisier hours of the day. The RMS at high frequencies doesn't look great, so there will definitely be noise injected into the YARM signal at high frequencies.
Measuring the YARM signal:
There is still noise being injected into YARM, but it is definitely much better than with the previous filter. I'm thinking about doing some IIR subtraction on the arms now, to see if I can get rid of the noise being injected that way, but before I embark on that quest I will rethink my prefiltering.
The plot below shows the ratio of the unfiltered versus filtered ASDs for the FIR and IIR subtraction predictions as well as for the measured online IIR subtraction. Positive dB means better subtraction.
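For clarity, the dB ratio in that plot is just the following (the ASD values here are placeholders, not measured data):

```python
import numpy as np

# Subtraction ratio in dB: positive means the filter removed noise at that
# frequency; 0 dB means no change; negative means injected noise.
asd_unfiltered = np.array([10.0, 8.0, 5.0, 1.0])
asd_filtered = np.array([1.0, 2.0, 5.0, 2.0])

subtraction_db = 20 * np.log10(asd_unfiltered / asd_filtered)
print(subtraction_db)
```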
Last night I performed some MISO FF on MCL using the T240-X and T240-Y as witnesses. Here are the results:
Training data + Predicted FIR and IIR subtraction:
Online subtraction results:
Last night, I also worked on a "better" version (an improvement, I thought) of the MISO (T240-X and T240-Y witnesses) IIR Wiener filter. The FF results are shown below:
Although the predicted FIR and IIR results are "better" than the previous MISO filter, the subtraction performance for this filter is marginally better if not worse (both peak at 15 dB, in slightly different regions).
Since I will need to do transfer function measurements in order to implement FF for the arms and MC2's yaw and pitch channels, I decided to practice by replicating the transfer function measurement Eric did for MC2 to MCL. I followed his procedure, and the data that I acquired for the TF looked as shown below,
About five minutes of data were taken (0.05 Hz resolution, 25 averages) by injecting noise from 1 to 100 Hz. The TF coherence looked as below,
The mode cleaner FF static filtering is by no means done. More work has to be done in order to successfully implement it, by fine-tuning the IIR fit and finding better MISO Wiener filters.
I have begun to look at implementing FF to the YARM cavity for several reasons.
1) Even if the mode cleaner FF is set up as best as we can, there will still be seismic noise coupling into the arm cavities.
2) YARM is in the way of the beam path. When locking the IFO, one locks YARM first, then XARM. This means that it makes sense to look at YARM FF first rather than XARM.
3) XARM FF can't be done now since GUR2 is sketchy.
I'm planning on using this eLOG entry to document my Journey and Adventures (Chapter 2: YARM) to the OPTIMAL land of zero-seismic-noise (ZSN) at the 40m telescope.
I took data from 1123495750 to 1123498750 GPS time (Aug 13 at 3AM, 50 mins of data) for C1:LSC-YARM_OUT_DQ, and all T240 and GUR1 channels.
Here is the PSD of the YARM_OUT, showing the data that I will use to train the FIR filter:
Coherence plots for YARM and all channels of the T240 and GUR1 seismometers are shown below. These will help determine which regions to preweight before computing the FIR filter. They also show how GUR1 is back to work, compared to elog:11457.
Plotted below are the resultant subtractions for YARM using different witness configurations,
The best subtraction happens with all the channels of both the GUR1 and T240 seismometers, but one gets just as good subtraction without using the z channels as witnesses.
Also, why is the T240 seismometer better at subtracting noise from YARM than GUR1 alone? Using only the X and Y channels of the T240 gave the third-best subtraction (purple trace).
My plan for now is as follows:
1) Measure the transfer function from the ETMY actuator to the YARM control signal
2) Collect data for YARM when FF for MCL is on in order to see what kind of subtractions can be done.
In my last post I calculated the different subtractions (offline) that could be done to YARM alone just to get a sense of what seismometers were better witnesses for the Wiener filter calculation.
In this eLOG I show what subtractions can be done when the MCL has FF on (as well as Eric's PRC FF), with the SISO filter described on elog:11496.
The plot below shows what can be done offline,
What is great about these results is that the T240-X and T240-Y channels are enough not only to mitigate any remaining YARM seismic noise, but also to get rid of that nasty peak at 55 Hz induced by the MCL FF filter.
The caveat: I haven't measured the TF from the ETMY actuator to the YARM control signal. I need to do this and recompute the FIR filters with the prefiltered witnesses in order to move on to the IIR conversions and online FF!