ID | Date | Author | Type | Category | Subject
42 | Wed Oct 31 23:55:17 2007 | waldman | Other | OMC | QPD tests

The 4 QPDs for the OMC have been installed in the 056 at the test setup. All 4 QPDs work and have MEDM screens located under C2TPT. The breadboard-mounted QPDs are not very well centered, so their signal is somewhat crappy, but all 4 QPDs definitely see plenty of light. I include light and dark spectra below. QPDs 1-2 are table-mounted, and QPD 2 is labeled with a bit of blue tape. QPDs 3-4 are mounted on the OMC: QPD3 is the near-field detector and QPD4 is the far-field. In other words, QPD3 is closest to the input coupler and QPD4 is farthest.
Included below are some spectra of the QPDs with and without light. For QPDs 1 & 2, the light source is just room lights, while 3 & 4 have the laser in the nominal OMC configuration with a few mW as source. The noise at 100 Hz is about 100 microvolts/rtHz. If I recall correctly, the QPDs have 5 kOhm transimpedance (right, Rich?), so this is 20 nanoamps/rtHz of current noise at the QPD.
Attachment 1: QPD_SignalSpectrum.pdf
Attachment 2: QPD_SignalSpectrum.gif
8358 | Tue Mar 26 17:32:30 2013 | Chloe | Update | | QPD/ECDL Progress

I built the summing/subtracting circuit on the breadboard and hooked it up with one of the other QPDs I found (image of setup attached). I wasn't able to get this to read the correct signals when testing with a laser pointer, even after a couple of hours of troubleshooting. I will hopefully get this working in the next day or two.
I'm going to read up on ECDL material for Tara tonight and hopefully figure out what sort of laser diode we should purchase, since I'm meeting with Tara tomorrow.
Attachment 1: IMG-20130326-00245.jpg
1928 | Wed Aug 19 17:11:33 2009 | Jenne | Update | IOO | QPDs aligned
Quote: |
If Rob/Yoichi say the alignment is now good, the we absolutely must center the IOO QPDs and IP POS and IP ANG and MC TRANS today so that we have good references.
|
IOO_QPD_POS, IOO_QPD_ANG, MC_TRANS, IP_POS, IP_ANG have all been centered.
Also, the MCWFS have been centered.
I'm now working on making sure the beam is hitting all of the RF PDs around. |
10206 | Tue Jul 15 21:43:28 2014 | Jenne | Update | LSC | QPDs back

I removed the dumps in front of the trans QPDs. The Y-end QPD needed re-normalization, so I did that.
12217 | Mon Jun 27 15:47:17 2016 | Steve | Update | General | QPR clean room gloves

I got some QPR nitrile gloves. They are LIGO approved. White nitrile gloves are naturally anti-static (~10^9 ohms).
Their touch is not as good as latex gloves, but try to use them.
10993 | Tue Feb 10 02:55:29 2015 | Jenne | Configuration | Modern Control | Quacky filters

I discovered that somehow my Wiener filters that show up in Foton are not the same as what I have in Matlab-land.
I have plotted each of my 3 filters that I'm working with right now (T-240 X, Y and Z for PRCL Pitch) at several stages in the filter creation process. Each plot has:
- Blue trace is the Wiener filter that I want, after the vectfit process.
- Green trace is the frequency response of the SOS filters that are created by autoquack (really, quack_to_rule_them_all, which is called by autoquack). The only thing that happens in matlab after this is formatting the coefficients into a string for writing to foton.
- Red trace is the exported text file from foton.
It's not just a DC gain issue - there's a frequency dependent difference between these filters. Why??
It's not frequency warping from changing between analog and digital filters. The sample rate for the OAF model is 2048Hz, so the effect is small down at low frequencies. Also, the green trace is already discretized, so if it were freq warping, we'd see it in the green as well as red, which clearly we don't.
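The warping argument can be checked numerically. This is a quick sketch (the frequencies chosen are illustrative) of how far a digital feature at frequency w lands from its analog counterpart under the bilinear transform at the OAF sample rate of 2048 Hz:

```python
import numpy as np

fs = 2048.0                                # OAF model sample rate, Hz
f = np.array([1.0, 35.0, 100.0])           # frequencies of interest, Hz
w = 2 * np.pi * f                          # angular frequencies, rad/s

# Bilinear-transform frequency warping: a digital feature at angular
# frequency w corresponds to an analog frequency 2*fs*tan(w/(2*fs)).
w_warped = 2 * fs * np.tan(w / (2 * fs))
frac_shift = w_warped / w - 1              # fractional frequency shift
```

Even at 100 Hz the fractional shift is under 1%, consistent with the warping being far too small to explain a visible frequency-dependent discrepancy at low frequencies.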
Has anyone seen this before? I'm emailing my seismic friends who deal in quack more than I do (BLantz and JKissel, in particular) to see if they have any thoughts.
Also, while I'm asking questions, can autoquack clear filters? Right now I'm overwriting old filters with zpk([],[],1)'s, which isn't quite the same. (I need this because the Wiener code needs more than one filter module to fit all of the poles I need, and it decides for itself how many FMs it needs by comparing the length of the poles vector with 20. If one iteration needs 4 filter modules, but the next iteration only wants 3, I don't want to leave in a bogus 4th filter.)
Here are the plots:

(The giant peak at ~35 Hz in the Z-axis filter is what tipped me off that something funny was going on) |
3949 | Thu Nov 18 16:42:29 2010 | Joonho Lee | Configuration | Electronics | Quad Video for PMCT, RCT, RCR fixed.

The far right monitor in the control room is now displaying IMCR, PMCT, RCR, RCT.
Please note that the top left quad is displaying PMCT even though the screen is labeled PMCR.
Control room monitors #13 - #16 had been out of order since last week.
(The monitor numbering is shown at: http://lhocds.ligo-wa.caltech.edu:8000/40m/Electronics/VideoMUX )
I found that the connections between the cameras and the cables to the VIDEO MUX were missing, so I connected them.
Initially, the PMCT camera was sending its signal to the small monitor on the PSL table.
I split the signal so that one copy goes to the small monitor and another to the VIDEO MUX.
"PMCR" is shown on screen #13 in the control room, but it is actually showing the PMCT camera's signal.
This is a temporary video configuration. It will be upgraded along with the whole video system. |
2510 | Tue Jan 12 13:24:50 2010 | Haixing | Update | SUS | Quadrant Magnetic Levitation

I have tried to make the quadrant magnetic levitation work, but unfortunately I did not succeed. During the experiment, I made the following changes to the circuit and setup:
(1) I added small resistors (6K) in parallel to R11, R23, R35 and R47 (indicated in the following schematics) to increase
the control bandwidth from 20 Hz to 80 Hz.

(2) I changed RLED1, RLED2, RLED3, RLED4 from 2.2K to 1.5K in the LED driver such that the current of the
LED increases and in turn the displacement sensitivity increases.
(3) I changed R51 and R53 (in the differencing block for PD1 and PD2) from 5K to 1K. Correspondingly,
I increased R52 and R54 from 5K to 50K. These changes increase the gain in the differential control by a
factor of 50, which compensates the gain loss after increasing the control bandwidth.
(4) The trim pots in the coil driver block have the following values: 100K for pot1 and pot4, 1K for pot2 and pot3.
To increase the gain, I replaced R17, R30, R31, R41 with 102 Ohm resistors (we ran out of 100 Ohm chip resistors).
(5) I wrapped the OSEM flags in plastic tubes to block the light from the LED more efficiently. This also removed
the variation of the photocurrent in the photodetector when the levitated plate moves horizontally.
Possible issues that could cause the setup not to work at the moment:
(1) The feedback gain is probably still not enough. During the experiment, I couldn't feel any force change when the
flags crossed the zero point. The error signals and control signals have the right sign.
(2) The levitated weight may not be enough, so the working point is below the extremum of the DC attracting force.
This could give rise to a large negative spring which requires an unreasonable feedback gain to compensate.
(3) The DC attracting forces between the magnets differ from each other too much (mismatch) and can't
be compensated by the coil driving force.
(4) The control bandwidth may still not be large enough. Initially my design was 100 Hz, but I forgot to divide
the angular frequency by 2*pi, so the resulting circuit had a bandwidth of 20 Hz. Later I increased it to 80 Hz
by changing the resistors as mentioned above.
(5) The polarity of the coils could have the wrong sign. I checked with a Gauss meter, but the absence
of a force change at the zero-point crossing still makes me worry about this.
To continue this work, I will finish writing my document, summarize all the results, and outline what
needs to be done in the future. If everything goes well, I will be back in June and can spend more time on this
experiment. I can also work with the summer students on this project. |
16348 | Mon Sep 20 15:42:44 2021 | Ian MacMillan | Summary | Computers | Quantization Code Summary

This post serves as a summary and description of code to run to test the impacts of quantization noise on a state-space implementation of the suspension model.
Purpose: We want to use a state-space model in our suspension plant code. Before we can do this, we want to test whether the state-space model is prone to problems with quantization noise. We will compare two models, one using a standard direct form II filter and one using a state-space model, and then compare the noise from both.
Signal Generation:
First I built a basic signal generator that can produce a sine wave for a specified amount of time then can produce a zero signal for a specified amount of time. This will let the model ring up with the sine wave then decay away with the zero signal. This input signal is generated at a sample rate of 2^16 samples per second then stored in a numpy array. I later feed this back into both models and record their results.
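A minimal sketch of such a generator (the function and argument names here are illustrative, not the actual elog code):

```python
import numpy as np

def make_ringdown_input(f_sig, t_on, t_off, fs=2**16):
    """Sine wave for t_on seconds followed by zeros for t_off seconds.

    The model rings up during the sine segment and decays freely
    during the zero segment.
    """
    t = np.arange(0, t_on, 1.0 / fs)
    ring_up = np.sin(2 * np.pi * f_sig * t)   # drive the model
    ring_down = np.zeros(int(t_off * fs))     # let it decay away
    return np.concatenate([ring_up, ring_down])

# one second of drive, one second of free decay, at 2^16 samples/s
u = make_ringdown_input(f_sig=3.4283, t_on=1.0, t_off=1.0)
```

The resulting numpy array is then fed into both models.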
State-space Model:
The code can be seen here
The state-space model takes in the list of excitation values and feeds them through a loop that calculates the next value in the output.
Given that the state-space model follows the form
x'(t) = A x(t) + B u(t) and y(t) = C x(t) + D u(t),
the model has three parts: the first equation, an integration, and the second equation.
- The first equation takes the state x and the excitation u and generates the x-dot vector shown on the left-hand side of the first state-space equation.
- The second part integrates x-dot to obtain the x that is used in the next equation. This uses the velocity and acceleration to integrate to the next x that will be plugged into the second equation.
- The second equation takes the x vector we just calculated and multiplies it by the sensing matrix C. We don't have a D matrix, so this gives us the next output of our system.
This system is the coded form of the block diagram of the state-space representation shown in Attachment 1.
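The loop described above might look like the following sketch, using naive explicit Euler integration (the actual code may integrate differently; all names here are illustrative):

```python
import numpy as np

def ss_filter(A, B, C, u, fs=2**16):
    """Run input u through the state-space model x' = A x + B u,
    y = C x, integrating with a simple Euler step of size 1/fs."""
    dt = 1.0 / fs
    x = np.zeros(A.shape[0])
    y = np.empty(len(u))
    for n, un in enumerate(u):
        xdot = A @ x + B * un          # first state-space equation
        x = x + xdot * dt              # integrate to the next state
        y[n] = C @ x                   # second equation (no D matrix)
    return y

# example: a damped harmonic oscillator (10 Hz mode) driven by an impulse
A = np.array([[0.0, 1.0], [-(2 * np.pi * 10) ** 2, -5.0]])
B = np.array([0.0, 1.0])
C = np.array([1.0, 0.0])
u = np.zeros(1000)
u[0] = 1.0
y = ss_filter(A, B, C, u)
```

The per-sample loop is what makes the precision of each multiply and add matter for quantization noise.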
Direct-II Model:
The direct form II filter works in a much simpler way. Because it involves no integration and follows the block diagram shown in Attachment 2, we can use a single difference equation to find the next output. The only complication is that we also have to keep track of the w[n] seen in the middle of the block diagram. We use these two equations to calculate the output value:
y[n] = b0*w[n] + b1*w[n-1] + b2*w[n-2], where w[n] = x[n] - a1*w[n-1] - a2*w[n-2]
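The two difference equations can be coded directly. This is a generic direct form II second-order section, not the elog's exact code:

```python
import numpy as np

def df2_filter(b, a, x):
    """Direct form II second-order section:
    w[n] = x[n] - a1*w[n-1] - a2*w[n-2]
    y[n] = b0*w[n] + b1*w[n-1] + b2*w[n-2]
    Coefficients are assumed normalized so that a0 = 1."""
    b0, b1, b2 = b
    _, a1, a2 = a
    w1 = w2 = 0.0                      # w[n-1] and w[n-2]
    y = np.empty(len(x))
    for n, xn in enumerate(x):
        w0 = xn - a1 * w1 - a2 * w2    # the intermediate state w[n]
        y[n] = b0 * w0 + b1 * w1 + b2 * w2
        w1, w2 = w0, w1                # shift the delay line
    return y
```

For float64 inputs this matches scipy.signal.lfilter; writing the loop out by hand is what lets us control the precision of every operation later.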
Bit length Control:
To control the bit length of each of the models, I typecast all the inputs using numpy float types of the bit length that I want (np.float32, np.float64, etc.). This simulates the computer using only the specified bit length. I still have to go through the code and force it to use 128-bit by default; currently the default is 64-bit, so at the moment I am limited to 64-bit as the highest bit length. I also need to examine how numpy truncates floats to make sure it isn't doing anything unexpected.
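The typecasting trick is simple; a sketch (the helper name is illustrative, and only the widths numpy provides are available, which is why 64-bit is the ceiling here):

```python
import numpy as np

def quantize(x, bits):
    """Round a value (or array) to a given float bit length by
    typecasting to the corresponding numpy float type."""
    dtype = {16: np.float16, 32: np.float32, 64: np.float64}[bits]
    return dtype(x)

x = 1.0 / 3.0
x32 = quantize(x, 32)   # 1/3 rounded to 32-bit precision
x64 = quantize(x, 64)   # unchanged: python floats are already 64-bit
```

Note that numpy rounds to nearest on casting rather than truncating, which is one of the behaviors worth double-checking.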
Bode Plot:
The bode plot at the bottom shows the transfer function for both the IIR model and the state-space model. I generated about 100 seconds of white noise and then computed the transfer function as
T(f) = P_xy(f) / P_xx(f),
which is the cross-spectral density divided by the power spectral density. We can see that they match pretty closely at 64 bits. The IIR direct form II model seems to have more noise on the surface, but we will examine that in the next elog.
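The CSD-over-PSD estimate can be sketched with scipy as below (the second-order low-pass here is just a stand-in system, not the suspension model):

```python
import numpy as np
from scipy import signal

fs = 2**16
rng = np.random.default_rng(0)
x = rng.standard_normal(fs * 4)             # ~4 s of white noise input
b, a = signal.butter(2, 100, fs=fs)         # stand-in filter under test
y = signal.lfilter(b, a, x)

# T(f) = CSD(x, y) / PSD(x)
f, Pxy = signal.csd(x, y, fs=fs, nperseg=fs // 4)
f, Pxx = signal.welch(x, fs=fs, nperseg=fs // 4)
T = Pxy / Pxx
```

With enough averages, |T| converges to the filter's magnitude response, so the two models' estimates can be overlaid directly.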
|
Attachment 1: 472px-Typical_State_Space_model.svg.png
Attachment 2: Biquad_filter_DF-IIx.svg.png
Attachment 3: SS-IIR-TF.pdf
16355 | Wed Sep 22 14:22:35 2021 | Ian MacMillan | Summary | Computers | Quantization Noise Calculation Summary

Now that we have a model of how the SS and IIR filters work, we can get to the problem of how to measure the quantization noise in each of the systems. Den Martynov's thesis talks a little about this. From my understanding: he measured quantization noise by having two filters using variables with different numbers of bits, one filter with many more bits than the other. He fed the same input signal to both filters and recorded their outputs x_1 and x_2, where x_2 had the higher number of bits. He then took the difference x_1 - x_2. Since the CDS system uses the double format, he assumes that quantization noise scales with mantissa length, so he can extrapolate the quantization noise for any mantissa length.
Here is the Code that follows the following procedure (as of today at least)
This problem is a little harder than I had originally thought. I took Rana's advice and asked Aaron about how he had tackled a similar problem. We came up with a procedure explained below (though any mistakes are my own):
- Feed different white noise data into three copies of the same filter. This should yield, for each output power spectrum:
P_i = P_ideal + P_q, where P_i is the power spectrum of the output of the i-th filter, P_ideal is the spectrum of the noise filtered through an "ideal" filter with no quantization noise, and P_q is the power spectrum of the quantization noise. Since we are feeding random noise into the input, the power of the quantization noise should be the same for all three of our runs.
- Next, we have our three outputs y_1, y_2, and y_3 that follow the equations:
y_1 = y_ideal,1 + q_1
y_2 = y_ideal,2 + q_2
y_3 = y_ideal,3 + q_3
From these three outputs, we calculate the three pairwise difference spectra P_12, P_13, and P_23, which are calculated by:
P_12 = PSD(y_1 - y_2)
P_13 = PSD(y_1 - y_3)
P_23 = PSD(y_2 - y_3)
From these quantities, we can calculate three estimates Q1-bar, Q2-bar, and Q3-bar (since these are just estimates, we use a bar on top). These are calculated using:
Q1-bar = (P_12 + P_13 - P_23) / 2
Q2-bar = (P_12 + P_23 - P_13) / 2
Q3-bar = (P_13 + P_23 - P_12) / 2
Using these estimates, we can then estimate P_q by averaging the three of them into one estimate.
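The pairwise-difference ("three-cornered hat") estimate can be sketched as below. The original equations were lost to formatting, so treat these formulas as an assumed reconstruction of the standard three-cornered-hat form:

```python
import numpy as np
from scipy import signal

def three_corner_hat(y1, y2, y3, fs):
    """Estimate each channel's uncorrelated noise PSD from the three
    pairwise difference spectra (standard three-cornered-hat)."""
    def psd(d):
        return signal.welch(d, fs=fs, nperseg=len(d) // 8)

    f, P12 = psd(y1 - y2)
    _, P13 = psd(y1 - y3)
    _, P23 = psd(y2 - y3)
    Q1 = (P12 + P13 - P23) / 2     # noise estimate for channel 1
    Q2 = (P12 + P23 - P13) / 2     # noise estimate for channel 2
    Q3 = (P13 + P23 - P12) / 2     # noise estimate for channel 3
    return f, Q1, Q2, Q3
```

For three outputs carrying independent noise of equal power, each estimate converges to that common noise PSD.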
This procedure should give us a good estimate of the quantization noise. However, the graph in the attachments below shows that the "noise" follows the transfer function of the model. I would not expect this to be true, so I believe there is an error in the above procedure or in my code, which I am working on finding. I may have to rework this three-corner-hat approach.
I would expect the quantization noise to be flatter and not follow the shape of the transfer function of the model. Instead, we have what looks like just the result of random noise being filtered through the model.
Next steps:
The first real step is being able to quantify the quantization noise. After I fix the issues in my code, I will be able to start looking at optimal model design for both the state-space model and the direct form II model. I have been looking through the book "Quantization Noise" by Bernard Widrow and István Kollár, which offers some good insights on how to minimize quantization noise. |
Attachment 1: IIR64-bitnoisespectrum.pdf
16360 | Mon Sep 27 12:12:15 2021 | Ian MacMillan | Summary | Computers | Quantization Noise Calculation Summary

I have not been able to figure out a way to make the system that Aaron and I talked about work. I'm not even sure it is possible to pull the quantization noise out of the information I have in this way. Even the book uses a comparison to a high-precision filter as the way to calculate the quantization noise:
"Quantization noise in digital filters can be studied in simulation by comparing the behavior of the actual quantized digital filter with that of a reference digital filter having the same structure but whose numerical calculations are done extremely accurately."
-Quantization Noise by Bernard Widrow and Istvan Kollar (pg. 416)
Thus I will use a technique closer to that used in Den Martynov's thesis (see appendix B, starting on page 171). A summary of my understanding of his method is given here:
A filter is given raw unfiltered Gaussian data, and the filtered result can be written as y = y_ideal + q, where y_ideal is the raw noise filtered through an ideal filter and q is the difference, which in this case is the quantization noise. Thus I will input about 100-1000 seconds of the same white noise into a 32-bit and a 64-bit filter (hopefully I can increase the more precise one to 128-bit in the future), record their outputs, and subtract them from each other. This should give us the quantization error:
Δy = y_32 - y_64
and since y_32 = y_ideal + q_32 and y_64 = y_ideal + q_64, because both are the same filter up to precision:
Δy = q_32 - q_64
and since in this case we are assuming that the higher bit-rate process is essentially noiseless (q_64 ≈ 0), we get the quantization noise q_32 ≈ Δy.
If we make some assumptions, then we can actually calculate a more precise version of the quantization noise:
"Since aLIGO CDS system uses double precision format, quantization noise is extrapolated assuming that it scales with mantissa length"
-Denis Martynov's Thesis (pg. 173)
From this assumption, we can say that the noise difference between the 32-bit and 64-bit filter outputs is proportional to the difference between their mantissa lengths. By averaging over many different bit lengths, we can estimate a better quantization noise number.
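The subtraction scheme can be sketched as below (the second-order low-pass is a stand-in filter, not the actual suspension model):

```python
import numpy as np
from scipy import signal

fs = 2**14
sos = signal.butter(2, 10, fs=fs, output='sos')   # stand-in filter
x = np.random.default_rng(1).standard_normal(fs * 10)   # same white noise

# Run the same input through the same filter at two precisions.
y64 = signal.sosfilt(sos.astype(np.float64), x.astype(np.float64))
y32 = signal.sosfilt(sos.astype(np.float32), x.astype(np.float32))

# The difference is (approximately) the 32-bit quantization noise.
dy = y32.astype(np.float64) - y64
f, Pq = signal.welch(dy, fs=fs, nperseg=fs)
```

The residual dy is orders of magnitude below the filtered signal, which is the quantization-noise budget we then extrapolate by mantissa length.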
I am building the code to do this in this file |
16361 | Mon Sep 27 16:03:15 2021 | Ian MacMillan | Summary | Computers | Quantization Noise Calculation Summary

I have coded up the procedure in the previous post. The result does not look like what I would expect.
As shown in Attachment 1, I have the power spectra of the 32-bit output and the 64-bit output, the power spectrum of the difference of the two time series, and the difference of the two power spectra. Unfortunately, all of them follow the same general shape as the raw output of the filter.
I would not expect quantization noise to follow the shape of the filter. I would instead expect it to be more uniform; if anything, I would expect the quantization noise to increase with frequency, since a high-frequency signal run through a filter with high quantization noise will be more severely degraded.
This is one reason why I am confused by what I am seeing here. In all cases, including feeding the same and different white noise into both filters, I have found that the calculated quantization noise is proportional to the response of the filter. This seems wrong to me, so I will continue to play around with it to see if I can gain any intuition about what might be happening. |
Attachment 1: DeltaNoiseSpectrum.pdf
16362 | Mon Sep 27 17:04:43 2021 | rana | Summary | Computers | Quantization Noise Calculation Summary

I suggest that you
- change the corner frequency to 10 Hz as I suggested last week. This filter, as it is, is going to give you trouble.
- put in a sine wave at 3.4283 Hz with an amplitude of 1, rather than white noise. In this way, it's not necessary to do any subtraction. Just make a PSD of the output of each filter.
- be careful about window length and window function. If you don't do this carefully, your PSD will be polluted by window bleeding.
|
16366 | Thu Sep 30 11:46:33 2021 | Ian MacMillan | Summary | Computers | Quantization Noise Calculation Summary

First and foremost, I have the updated bode plot with the mode moved to 10 Hz; see Attachment 1. Note that the comparison measurement is a % difference, whereas in the previous bode plot it was just the difference. I also wrapped the phase so that jumps from -180 to 180 are moved down. This eliminates massive jumps in the % difference.
Next, I have two comparison plots: 32-bit and 64-bit. As mentioned above, I moved the mode to 10 Hz and excited both systems at 3.4283 Hz with an amplitude of 1. As we can see on the plot, the two models are practically the same when using 64 bits. With the 32-bit system, we can see that the noise in the IIR filter is much greater than in the state-space model at frequencies above our mode.
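The PSDs here come from scipy's Welch estimator. A minimal sketch of the settings described in the windowing note (variable names are illustrative):

```python
import numpy as np
from scipy import signal

fs = 2**14
num_samples = fs * 8
y = np.random.default_rng(2).standard_normal(num_samples)  # stand-in output

# Hann window, segment length of a quarter of the record, which with the
# default 50% overlap gives several averaged segments.
f, Pyy = signal.welch(y, fs=fs, window='hann',
                      nperseg=num_samples // 4)
```

The segment length trades frequency resolution against the number of averages; the tapered window suppresses the bleeding rana warned about.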
Note about windowing and averaging: I used a Hann window with averaging over 4 neighboring segments. I came to this number after looking at the results with less and more averaging. In the code, this can be seen as nperseg=num_samples/4, which is then fed into signal.welch. |
Attachment 1: SS-IIR-Bode.pdf
Attachment 2: PSD_32bit.pdf
Attachment 3: PSD_64bit.pdf
16481 | Wed Nov 24 11:02:23 2021 | Ian MacMillan | Summary | Computers | Quantization Noise Calculation Summary

I added mpmath to the quantization noise code. mpmath allows me to specify the precision that I am using in calculations. I added this to both the IIR filters and the state-space models, although I am only looking at the IIR filters here. I hope to look at the state-space model soon.
Notebook Summary:
I also added a new notebook which you can find HERE. This notebook creates a signal by summing two sine waves and windowing them.
Then that signal is passed through our filter that has been limited to a specific precision. In our case, we pass the same signal through a number of filters at different precisions.
Next, we take the output from the filter with the highest precision, which should have the lowest quantization noise by a significant margin, and subtract the outputs of the lower-precision filters from it. In summary, we are subtracting a clean signal from a noisy signal; because the underlying signal is the same, the only thing left after the subtraction should be noise. And since this system is purely digital and theoretical, the limiting noise should be quantization noise.
Now we have a time series of the noise for each precision level (except for our highest precision level, which we are defining as noiseless). From here we take a power spectrum of the result and plot it.
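An mpmath-based filter at an arbitrary binary precision can be sketched like this (the notebook's real code differs in detail; the function name and coefficients are illustrative):

```python
from mpmath import mp, mpf

def df2_fixed_precision(b, a, x, bits):
    """Second-order direct form II filter evaluated with mpmath at a
    specified binary working precision (mp.prec is set in bits)."""
    mp.prec = bits                       # working precision in bits
    b0, b1, b2 = (mpf(v) for v in b)
    _, a1, a2 = (mpf(v) for v in a)
    w1 = w2 = mpf(0)
    y = []
    for xn in x:
        w0 = mpf(xn) - a1 * w1 - a2 * w2   # w[n]
        y.append(b0 * w0 + b1 * w1 + b2 * w2)
        w1, w2 = w0, w1                    # shift the delay line
    return y
```

Running the same input through this at, say, 20 and 200 bits and subtracting the outputs gives the noisy-minus-clean time series described above. Note that mp.prec is a global setting, one of the sneaky-casting hazards of mixing mpmath with plain python floats.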
After this, we can calculate a frequency-dependent SNR and plot it. I also calculated values for the SNR at the frequencies of our two inputs.
This is the procedure taken in the notebook and the results are shown below.
Analysis of Results:
The first thing we can see is that the precision levels 256 and 128 bits are not shown on our graph. The 256-bit signal was our clean signal, so it was defined to have no noise and can't be plotted. The 128-bit signal should have some quantization noise, but I checked the output array and it contained all zeros. After further investigation, I found that the quantization noise was so small that when the result was handed over from mpmath to the general python code, those numbers were rounded to zero. To overcome this issue I would have to keep the array as an mpmath object the entire time. I don't think this is worthwhile, because matplotlib probably couldn't handle it, and it would be easier to just rewrite the code in C.
The next thing to notice is a sanity check: in general, low-precision filters yield higher noise than high-precision ones. However, this does not hold at the low end; we can see that 16-bit actually has the highest noise over most of the range. Chris pointed out that at low precision, quantization noise can become so large that it is no longer a linearly coupled noise source. He also noted that this is prone to happen for low-precision coefficients with features far below the Nyquist frequency, like I have here. This seems to explain the data, especially because the ambiguity appears at 16 bits and below, as he points out.
Another thing I must mention, even if just as a note to future readers, is that quantization noise is input-dependent: by changing the input signal I see different degrees of quantization noise.
Analysis of SNR:
One of the things we hoped to accomplish in the original plan was to play around with the input and see how the results changed. I mainly looked at how the amplitude of the input signal scaled the SNR of the output. Below is a table of the results. These results were taken from the SNR calculated at the first peak (see the last code block in the notebook), with the amplitude of the sine waves given at the top of each column. This amplitude was given to both sine waves, even though only the first one is reported. Currently, the notebook is set up for a measurement with input amplitude 10.
| Amplitude of input | 0.1 | 1 | 100 | 1000 |
| 4-bit SNR | 5.06e5 | 5.07e5 | 5.07e5 | 5.07e5 |
| 8-bit SNR | 5.08e5 | 5.08e5 | 5.08e5 | 5.08e5 |
| 16-bit SNR | 2.57e6 | 8.39e6 | 3.94e6 | 1.27e6 |
| 32-bit SNR | 7.20e17 | 6.31e17 | 1.311e18 | 1.86e18 |
| 64-bit SNR | 6.0e32 | 1.28e32 | 1.06e32 | 2.42e32 |
| 128-bit SNR | unknown | unknown | unknown | unknown |
As we can see from the table above, the SNR does not seem to depend systematically on the amplitude of the input; in multiple instances, the SNR dips or peaks in the middle of our amplitude range. |
Attachment 1: PSD_IIR_all.pdf
16482 | Wed Nov 24 13:44:19 2021 | rana | Summary | Computers | Quantization Noise Calculation Summary

This looks great. I think what we want to see mainly is just the noise in the 32-bit IIR filtering subtracted from the 64-bit one.
It would be good if Tega can look through your code to make sure there are NO sneaky places where python is doing some funny casting of the numbers. I didn't see anything obvious, but as Chris points out, these things can be really sneaky, so you have to be next-level paranoid to really be sure. Fox Mulder level paranoia.
And we want to see a comparison between what you get and what Denis Martynov put in an appendix of his thesis when comparing the Direct Form II with the low-noise form (also some slides from Matt Evans on this from about a decade ago). You should be able to reproduce his results. He used matlab + C, so I am curious to see if it can be done all in python, or if we really need to do it in C.
And then... we can make this a part of the IFOtest suite, so that we can point it at any filter module anywhere in LIGO, and it downloads the data and gives us an estimate of the digital noise being generated. |
16492 | Tue Dec 7 10:55:25 2021 | Ian MacMillan | Summary | Computers | Quantization Noise Calculation Summary

[Ian, Tega]
Tega and I have gone through the IIR filter code and optimized it to make sure there aren't any areas that force high precision to be down-converted to low precision.
For the new biquad filter we have run into an issue where the gain of the filter is much higher than it should be. Looking at Attachments 1 and 2, which are time-series comparisons of the inputs and outputs of the different filters, we see that the scale of the output of the direct form II filter shown in Attachment 1 is on the order of 10^-5, whereas the magnitude of the response of the biquad filter is on the order of 10^2. Other than this gain, the responses look the same.
I am not entirely sure how this gain came into the system, because we copied the C code that actually runs on the CDS system into python. There is a gain g that affects the input of the biquad filter, as shown on one slide of Matt Evans' slides. This gain could decrease the input signal and thus fix the overall gain; however, I have not found any way to calculate g.

With this gain problem we are left with the quantization noise shown in Attachment 4.
State Space:
I have adapted the state-space filter to act with a given precision level. However, my code is not optimized. It works by putting the input state through the first state-space equation, then integrating the result, which finally gets fed through the second state-space equation.
This is not optimized and gives us the resulting quantization noise shown in Attachment 5.
However, the state-space filter also has a gain problem, where its output is about 85 times the amplitude of the DF2 filter's. Also, since the state-space code is not operating in the most efficient way possible, I decided to port the code Chris made to run the state-space model to python. This code has a problem where it seems to be unstable; I will see if I can fix it.
|
Attachment 1: DF2_TS.pdf
Attachment 2: BIQ_TS.pdf
Attachment 4: PSD_COMP_BIQ_DF2.pdf
Attachment 5: PSD_COMP_SS_DF2.pdf
16498 | Fri Dec 10 13:02:47 2021 | Ian MacMillan | Summary | Computers | Quantization Noise Calculation Summary

I am trying to replicate the simulation done by Matt Evans in his presentation (see Attachment 1 for the slide in particular).
He defines his input as x(t) = sin(2*pi*(1 Hz)*t) + 1e-9 * sin(2*pi*(f_s/4)*t), so he has two inputs: one of amplitude 1 at 1 Hz and one of amplitude 10^-9 at one quarter of the sampling frequency, in this case 4096 Hz.
For his filter, he uses a fourth-order notch filter. To achieve this filter I cascaded two second-order notch filters (signal.iirnotch), both at 1 Hz, with quality factors of 1 and 1e6, as specified in slide 13 of his presentation.
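The cascade can be sketched like this (the sample rate here is an assumed value for illustration, not necessarily the one used in the notebook):

```python
import numpy as np
from scipy import signal

fs = 16384  # assumed sample rate

# Two second-order notches at 1 Hz, cascaded into a fourth-order notch.
b1, a1 = signal.iirnotch(1.0, Q=1.0, fs=fs)
b2, a2 = signal.iirnotch(1.0, Q=1e6, fs=fs)
sos = np.vstack([np.concatenate([b1, a1]),
                 np.concatenate([b2, a2])])   # second-order sections

x = np.random.default_rng(3).standard_normal(fs)
y = signal.sosfilt(sos, x)
```

Stacking the sections in sos form keeps the two biquads separate, which matters here since the point of the exercise is per-section quantization behavior.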
I used the same procedure outlined here. My results are posted below in attachment 2.
Analysis of results:
As we can see from the results posted below, the results don't match. There are a few problems that I noticed that may give us some idea of what went wrong.
First, there is a peak in the noise around 35 Hz. This peak is not shown at all in Matt's results and may indicate that something is inconsistent.
Second, there is no peak at 4096 Hz. This is clearly shown in Matt's slides, and it is present in the input spectrum, so it is strange that it does not appear in the output.
My first thought was that the 4 kHz signal was somehow showing up at about 35 Hz, but even when you remove the 4 kHz signal from the input, the 35 Hz peak is still there. The spectrum of the input, shown in Attachment 3, has no features at ~35 Hz.
The input filter, shown in Attachment 4, also has no features at ~35 Hz. Since neither the input nor the filter has features there, there must be either some quantization noise feature at that frequency or, more likely, some sampling or calculation effect.
To figure out what is causing this, I will continue to change things in the model until I find what is controlling it.
I have included a Zip file that includes all the necessary files to recreate these plots and results. |
Attachment 1: G0900928-v1_(dragged).pdf
Attachment 2: PSD_COMP_BIQ_DF2.pdf
Attachment 3: Input_PSD.pdf
Attachment 4: Input_Filter.pdf
Attachment 5: QuantizationN.zip
16836 | Mon May 9 15:32:14 2022 | Ian MacMillan | Summary | Computers | Quantization Noise Calculation Summary

I made a first pass at a tool to measure the quantization noise of specific filters in the 40m system; the code can be found here. It takes the input to the filter bank and the filter coefficients for all of the filters in the filter bank, then runs the input through all the filters and measures the quantization noise in each case. It does this by subtracting the 64-bit output from the 32-bit output. Note: the actual system is 64-bit, so I need to update the tool to subtract the 64-bit output from the 128-bit output using the long double format. This means it must be run on a computer that supports the long double format, which I checked, and Rossa does. The code outputs a number of plots that look like the one in Attachment 1. Koji suggested automatically generating a page for each of the filters that shows the filter and the results, as well as an SNR for the noise source. The code is formatted as a class so that it can be easily added to the IFOtest repo when it is ready.
I tracked down a filter realization that I thought might have lower quantization noise than the one currently used. The specifics will be in version 2 of the DCC document that I am updating, but a diagram is found in Attachment 2. Preliminary calculations seemed to show that it had lower quantization noise than the current filter realization. I added this filter realization to the C code and ran a simple comparison between all of them. The results in Attachment 3 are not as good as I had hoped. The input was a two-tone sine wave; the low-level broadband signal between 10 Hz and 4 kHz is the quantization noise. The blue trace shows the current filter realization, another trace shows the generic, most basic direct form II, and the orange one is the new filter, which I personally call the Aircraft Biquad because I found it in a paper by the Hughes Aircraft Company (see Fig. 2 of the paper). They call it the "modified canonic form realization", but there are about 20 filters in the paper that share that name, so in the DCC doc I have just given them numbers because it is easier.
What's next:
1) Revise the qnoisetool code so that it computes the correct 64-bit noise.
a) I also want to add the new filter to the simulation to see how it does
2) Make the output into a summary page the way Koji suggested.
3) Complete the updated DCC document. I need to reconcile the differences between my calculation and the actual result of the simulation. |
Attachment 1: SUS-ETMX_SUSYAW3_0.0.pdf
|
|
Attachment 2: LowNoiseBiquad2.pdf
|
|
Attachment 3: quant_noise_floor.pdf
|
|
17127
|
Fri Sep 2 13:30:25 2022 |
Ian MacMillan | Summary | Computers | Quantization Noise Calculation Summary | P. P. Vaidyanathan wrote a chapter in the book "Handbook of Digital Signal Processing: Engineering Applications" called "Low-Noise and Low-Sensitivity Digital Filters" (Chapter 5, pg. 359). I took a quick look at it and wanted to give some thoughts in case they are useful. The experts in the field would be Leland B. Jackson, P. P. Vaidyanathan, Bernard Widrow, and István Kollár. Widrow and Kollár wrote the book "Quantization Noise: Roundoff Error in Digital Computation, Signal Processing, Control, and Communications" (a copy of which is at the 40m). It is convenient that P. P. Vaidyanathan is at Caltech.
Vaidyanathan's chapter serves as a good introduction to the topic of quantization noise. He starts off with the basic theory, similar to my own document on the topic. From there, two main topics are relevant to our goals.
The first interesting thing is Error-Spectrum Shaping (pg. 387). I have never investigated this idea, but the general gist is that as poles and zeros move closer to the unit circle the SNR deteriorates, so this is a way of implementing error feedback that should alleviate the problem. See Fig. 5.20 for a full realization of a second-order section with error feedback.
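For intuition, here is the simplest first-order version of the error-feedback idea (my own sketch, not the full second-order section of Fig. 5.20): the roundoff error of each quantization is saved and added back into the next sample, which shapes the quantization noise away from low frequency.

```python
import numpy as np

def quantize_with_efb(x, q):
    """First-order error feedback: round each sample to step q and fold
    the roundoff error into the next sample. This pushes the quantization
    noise spectrum toward high frequency."""
    y = np.empty_like(np.asarray(x, dtype=float))
    e = 0.0
    for i, xi in enumerate(x):
        v = xi + e                   # add back the previous roundoff error
        y[i] = q * np.round(v / q)   # quantize to step q
        e = v - y[i]                 # roundoff error to feed forward
    return y
```

Because the per-sample errors telescope, the accumulated (DC) error over a block is bounded by a single quantization step, unlike plain rounding where it grows with the square root of the block length.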
The second starts on page 402 and is an overview of state-space filters, giving an example state-space realization (Fig. 5.26). I also tested this exact realization a while ago and found that it was better than the direct form II filter but not as good as the current low-noise implementation that LIGO uses. This realization is very close to the current one except that it uses one fewer addition block.
Overall I think it is a useful chapter. I like the idea of using some sort of error correction and I'm sure his other work will talk more about this stuff. It would be useful to look into.
One thought that I had recently is that if the quantization noise is uncorrelated between two different realizations, then connecting them in parallel and averaging their results (as shown in Attachment 1) may actually yield lower quantization noise. It would require double the computation power for filtering, but it may work. For example, combining the current LIGO realization and the realization given in this book might yield a lower quantization noise. This only works with two similarly low-noise realizations: since we would be averaging samples of two independent uniform distributions, going from one sample to two cuts the variance in half, and the ASD would show a 1/√2 reduction for realizations with the same level of quantization noise. It is only beneficial if the higher-noise realization has less than about 1.7 times (√3) the noise of the lower-noise one. I included a simple simulation to show this in the zip file in Attachment 2 for my own reference.
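A minimal numpy version of that simulation (my sketch, not the attached script), modeling each realization's quantization noise as independent uniform noise:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 2 ** 16
q = 1e-6    # arbitrary quantization step

# Quantization noise of two equally-noisy realizations, modeled as
# independent uniform noise on [-q/2, q/2)
n1 = rng.uniform(-q / 2, q / 2, N)
n2 = rng.uniform(-q / 2, q / 2, N)

avg = 0.5 * (n1 + n2)   # parallel realizations with averaged outputs

# Averaging independent, equal-variance noises halves the variance,
# i.e. a 1/sqrt(2) reduction in ASD. The break-even factor of ~1.7 is
# sqrt(3): var(avg) < var(n1) requires sigma2 < sqrt(3) * sigma1.
ratio = np.std(avg) / np.std(n1)
print(ratio)   # ~ 0.707
```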
Another thought that I had is that the transpose of this low-noise state-space filter (Fig. 5.26), or even of LIGO's current filter realization, would yield even lower quantization noise, because both transposes require one fewer calculation. |
Attachment 1: averagefiltering.pdf
|
|
Attachment 2: AveragingFilter.py.zip
|
1888
|
Tue Aug 11 23:55:04 2009 |
rana, rich | Summary | OMC | Quantum Efficiency and Dark Current measurements of eLIGO Photodiodes | Rich Abbott, Rana
Summary: We found that the 3mm InGaAs photodiodes from eGTRAN which are being used for the DC Readout in eLIGO are bad. The QE is ~50%. We will have to replace them ASAP.
Valera and Nic Smith have pointed out a factor of ~2 discrepancy between the estimated power transmission to the dark port in H1 and L1. So we decided to measure the QE of the accused diodes.
The data of the QE and dark current are attached here.
We used a 1064 nm CrystaLaser (which does not have a very stable power output). We attenuated the light with an ND1.0 for all measurements.
The photocurrent is estimated by reading out the voltage across one leg of the differential drive of the DC PD preamp. The photocurrent goes across a 100 Ohm resistor and then through two gain-of-1 stages to get to this testpoint, so the overall transimpedance gain is 100 Ohms for this measurement.
By far, the Ophir power meter is the biggest source of error. Its absolute calibration is only 5% and the variation across the sensor face is ~5%. There are some hot and cold spots on the face which can cause even more variation, but we tried to avoid these.
We also inserted the power meter very close to the time when we read the voltage, so that the photocurrent and power estimates are made within 10 seconds of each other. This should reduce the error from the laser's power fluctuations.
All diodes still had the glass case on. We measured the reflected power to be ~5-7% of the incident power. This reflected power is NOT accounted for in these estimates.
Punch line: The eGTRAN diodes that we currently use are definitely bad. The JDSU and EG&G 2mm diodes have a better QE. We should immediately purchase 3 mm versions and get them cut and measured to be ready for the Sep. 1 commissioning surge. |
Attachment 1: IMG_0135.png
|
|
3760
|
Fri Oct 22 03:37:56 2010 |
Kevin | Update | PSL | Quarter Wave Plate Measurements | [Koji and Kevin]
We measured the reflection from the PBS as a function of half wave plate rotation for five different quarter wave plate rotations. Before the measurement we reduced the laser current to 1 A, locked the PMC, and recorded 1.1 V transmitted through the PMC. During the measurements, the beam was blocked after the faraday isolator. After the measurements, we again locked the PMC and recorded 1.2 V transmitted. The current is now 2.1 A and both the PMC and reference cavities are locked.
I will post the details of the measurement tomorrow. |
3768
|
Sat Oct 23 02:25:49 2010 |
Kevin | Update | PSL | Quarter Wave Plate Measurements |
Quote: |
[Koji and Kevin]
We measured the reflection from the PBS as a function of half wave plate rotation for five different quarter wave plate rotations. Before the measurement we reduced the laser current to 1 A, locked the PMC, and recorded 1.1 V transmitted through the PMC. During the measurements, the beam was blocked after the faraday isolator. After the measurements, we again locked the PMC and recorded 1.2 V transmitted. The current is now 2.1 A and both the PMC and reference cavities are locked.
I will post the details of the measurement tomorrow.
|
I measured the reflected power from the PBS as a function of half wave plate rotation for five different quarter wave plate rotations.
The optimum angles that minimize the reflected power are 330° for the quarter wave plate and 268° for the half wave plate.
The following data was taken with 2.102 A laser current and 32.25° C crystal temperature.
For each of five quarter wave plate settings around the optimum value, I measured the reflected power from the PBS with an Ophir power meter. I measured the power as a function of half wave plate angle five times for each angle and averaged these values to calculate the mean and uncertainty for each of these angles. The Ophir started to drift when trying to measure relatively large amounts of power. (With approximately 1W reflected from the PBS, the power reading rapidly increased by several hundred mW.) So I could only take data near the minimum reflection accurately.
The data was fit to P = P0 + P1*sin^2(2pi/180*(t-t0)) with the angle t measured in degrees with the following results:
lambda/4 angle (°) | t0 (°)        | P0 (mW)     | P1 (mW)  | chi^2/ndf | V
318                | 261.56 ± 0.02 | 224.9 ± 0.5 | 2016 ± 5 | 0.98      | 0.900 ± 0.001
326                | 266.07 ± 0.01 | 178.5 ± 0.4 | 1998 ± 5 | 16.00     | 0.918 ± 0.001
330                | 268.00 ± 0.01 | 168.2 ± 0.3 | 2119 ± 5 | 1.33      | 0.926 ± 0.001
334                | 270.07 ± 0.02 | 174.5 ± 0.4 | 2083 ± 5 | 1.53      | 0.923 ± 0.001
342                | 273.49 ± 0.02 | 226.8 ± 0.5 | 1966 ± 5 | 1.41      | 0.897 ± 0.001
where V is the visibility, V = 1 - P_min/P_max. These fits are shown in attachment 1. We would like to understand better why we can only reduce the reflected light to ~150 mW. Ideally, we would have V = 1. I will redo these measurements with a different power meter that can measure up to 2 W and take data over a full period of the reflected power. |
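For reference, a sketch of this fit with scipy. The data below are synthetic, generated from the 330° best-fit values in the table (the real measurements would be substituted in); the 90° period comes from the half wave plate rotating the polarization at twice the plate angle.

```python
import numpy as np
from scipy.optimize import curve_fit

def model(t, P0, P1, t0):
    # P = P0 + P1 * sin^2((2*pi/180)*(t - t0)), t in degrees
    return P0 + P1 * np.sin((np.pi / 90.0) * (t - t0)) ** 2

# Synthetic data from the 330 deg best-fit values, with 5 mW scatter
rng = np.random.default_rng(1)
t = np.linspace(240.0, 300.0, 13)
P = model(t, 168.2, 2119.0, 268.0) + rng.normal(0.0, 5.0, t.size)

popt, pcov = curve_fit(model, t, P, p0=[150.0, 2000.0, 260.0])
P0_fit, P1_fit, t0_fit = popt
V = 1.0 - P0_fit / (P0_fit + P1_fit)   # visibility = 1 - P_min/P_max
```

Since sin² is periodic in t0 with period 90°, the initial guess should be within ~45° of the true minimum for the fit to land in the right branch.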
Attachment 1: fits.png
|
|
3776
|
Mon Oct 25 02:25:21 2010 |
Koji | Update | PSL | Quarter Wave Plate Measurements | Q1. Suppose the laser beam has a certain (i.e. arbitrary) polarization state but contains only TEM00. Also suppose the PSB is perfect (reflect all S and transmit all P). What results do you expect from your expereiment?
Q2. Suppose the above condition but the PBS is not perfect (i.e. reflects most of S but also small leakage of P to the reflection port.) How are the expected results modified?
Q3. In reality, the laser may also contain something dirty (e.g. depolarization in the laser crystal, higher-order modes in a certain polarization different from the TEM00's, etc.). What is actually the cause of the 170 mW rejection from the PBS? Can we improve the transmitted power through the PBS?
Q4. Why is the visibility for the lambda/4 at 330 deg better than the one at 326 deg? Yes, as I already explained to Kevin, I suppose it was caused by the lack of data points over a wider angle range.
Quote: |
I measured the reflected power from the PBS as a function of half wave plate rotation for five different quarter wave plate rotations.
The optimum angles that minimize the reflected power are 330° for the quarter wave plate and 268° for the half wave plate.
The following data was taken with 2.102 A laser current and 32.25° C crystal temperature.
For each of five quarter wave plate settings around the optimum value, I measured the reflected power from the PBS with an Ophir power meter. I measured the power as a function of half wave plate angle five times for each angle and averaged these values to calculate the mean and uncertainty for each of these angles. The Ophir started to drift when trying to measure relatively large amounts of power. (With approximately 1W reflected from the PBS, the power reading rapidly increased by several hundred mW.) So I could only take data near the minimum reflection accurately.
The data was fit to P = P0 + P1*sin^2(2pi/180*(t-t0)) with the angle t measured in degrees with the following results:
lambda/4 angle (°) | t0 (°)        | P0 (mW)     | P1 (mW)  | chi^2/ndf | V
318                | 261.56 ± 0.02 | 224.9 ± 0.5 | 2016 ± 5 | 0.98      | 0.900 ± 0.001
326                | 266.07 ± 0.01 | 178.5 ± 0.4 | 1998 ± 5 | 16.00     | 0.918 ± 0.001
330                | 268.00 ± 0.01 | 168.2 ± 0.3 | 2119 ± 5 | 1.33      | 0.926 ± 0.001
334                | 270.07 ± 0.02 | 174.5 ± 0.4 | 2083 ± 5 | 1.53      | 0.923 ± 0.001
342                | 273.49 ± 0.02 | 226.8 ± 0.5 | 1966 ± 5 | 1.41      | 0.897 ± 0.001
where V is the visibility, V = 1 - P_min/P_max. These fits are shown in attachment 1. We would like to understand better why we can only reduce the reflected light to ~150 mW. Ideally, we would have V = 1. I will redo these measurements with a different power meter that can measure up to 2 W and take data over a full period of the reflected power. |
|
3747
|
Wed Oct 20 21:33:11 2010 |
Kevin | Update | PSL | Quarter Wave Plate Optimization | [Suresh and Kevin]
We placed the quarter wave plate in front of the 2W laser and moved the half wave plate forward. To make both wave plates fit, we had to rotate one of the clamps for the laser. We optimized the angles of both wave plates so that the power in the reflection from the PBS was minimized and the transmitted power through the faraday isolator was maximized. This was done with 2.1 A injection current and 38°C crystal temperature.
Next, I will make plots of the reflected power as a function of half wave plate angle for a few different quarter wave plate rotations. |
15404
|
Wed Jun 17 16:27:51 2020 |
gautam | Update | VAC | Questions/comments on vacuum | I missed the vacuum discussion on the call today, but I have some questions/comments:
- Isn’t it true that we didn’t digitally monitor any of the TP diagnostic channels before 2018 December? I don’t have the full history but certainly there wasn’t any failure of the vacuum system connected to pump current/temp/speed from Sep 2015-Dec2018, whereas we have had 2 interruptions in 6 months because of flaky serial communications.
- According to the manuals, the turbo-pumps have their own internal logic to shut off the pump when either bearing temperature exceeds 60C or current exceeds 1.5A. I agree its good to have some redundancy, but do we really expect that our outer interlock loops will function if the internal ones fail?
- In what scenario do we expect that all our pressure gauge readbacks fail, but not the TP readbacks? If so, won’t the differential pressure conditions protect the vacuum envelope, and the TPs internal shutoffs will protect the pumps? Except during the pump down phase perhaps, when we want to give a little more headroom to the small TPs to stress them less?
At the very least, I think we should consider making the interlock code have levels (like interrupts on a micro controller). So if the pressure gauges are communicating and are reporting acceptable pressure readings, we should be able to reject unphysical readbacks from the TP controllers.
I still don’t understand why TP2 can’t back TP1, but we just disable all the software interlock conditions contingent on TP2 readbacks. This pump is far newer than TP3, and unless I’ve misunderstood something major about the vacuum infrastructure, I don’t really see why we should trust this flaky serial readbacks for any actionable interlocks, at least without some AND logic (since temperature, current and speed aren’t really independent variables).
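A toy sketch of the kind of AND logic suggested above. The thresholds, the two-of-three rule, and the argument names are all illustrative assumptions, not the actual interlock configuration:

```python
def tp_failure(current_A, temp_C, speed_krpm,
               i_max=1.4, t_max=57.0, s_min=30.0):
    """Require at least two of the three turbo-pump diagnostics to be
    abnormal before declaring a pump failure, so that a single glitched
    serial readback cannot trip the interlock by itself. (Illustrative
    thresholds, chosen to sit just inside the pump's internal 1.5 A /
    60 C shutoff limits; the speed floor is made up.)"""
    abnormal = [current_A > i_max,   # over-current
                temp_C > t_max,      # over-temperature
                speed_krpm < s_min]  # under-speed
    return sum(abnormal) >= 2
```

With this predicate, a single unphysical readback (e.g. a glitched current of 9999 A with nominal temperature and speed) does not trip anything, while a genuine failure, which shows up in at least two channels together, still does.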
I also think we should finally implement the email alert in the event the vacuum interlock is tripped. I can implement this if no one else volunteers.
This might also be a good reminder to get the documentation in order about the new vacuum system. |
15406
|
Thu Jun 18 11:00:24 2020 |
Jon | Update | VAC | Questions/comments on vacuum |
Quote: |
- Isn’t it true that we didn’t digitally monitor any of the TP diagnostic channels before 2018 December? I don’t have the full history but certainly there wasn’t any failure of the vacuum system connected to pump current/temp/speed from Sep 2015-Dec2018, whereas we have had 2 interruptions in 6 months because of flaky serial communications.
|
Looking at images of the old vac screens, the TP2/3 rotation speed and status string were digitally monitored. However I don't know if there were software interlocks predicated on those.
Quote: |
- According to the manuals, the turbo-pumps have their own internal logic to shut off the pump when either bearing temperature exceeds 60C or current exceeds 1.5A. I agree its good to have some redundancy, but do we really expect that our outer interlock loops will function if the internal ones fail?
|
The temperature and current interlocks are implemented precisely because the pumps can shut themselves off. The concern is not about damaging the pumps (their internal logic protects against that); it's that a pump could automatically shut down and back-vent the IFO to atmosphere. Another interlock (e.g., the pressure differentials) might catch it, but it would depend on the back-vent rate and the scenario has never been tested. The temperature and current interlocks are set to trip just before the pump reaches its internal shut-down threshold.
One way we might be able to reduce our reliance on the flaky serial readbacks is to implement rotation-speed hardware interlocks. The old vac documentation alludes to these, but as far as Chub and I could determine in 2018, they never actually existed. The older turbo controllers, at least, had an analog output proportional to speed which could be used to control a relay to interrupt the V4/5 control signals. I'll look into this for the new controllers. If it could be done, we could likely eliminate the layer of serial-readback interlocks altogether.
|
- I also think we should finally implement the email alert in the event the vacuum interlock is tripped. I can implement this if no one else volunteers.
|
That would be awesome if you're willing to volunteer. I agree this would be great to have. |
15407
|
Thu Jun 18 12:00:36 2020 |
gautam | Update | VAC | Questions/comments on vacuum | I agree there were MEDM fields, but I can't find any record of these channels being recorded till 2018 December, so I don't agree that they were being digitally monitored. You can also look back in the elog (e.g. here and here) and see that the display fields are just blank. I would then assume that no interlocks were dependent on these channels, because otherwise the vacuum interlocks would be perpetually tripped.
Looking at images of the old vac screens, the TP2/3 rotation speed and status string were digitally monitored. However I don't know if there were software interlocks predicated on those.
Sorry, but I'm having trouble imagining a scenario in which the pressure gauges wouldn't register this before the IFO volume is compromised. Is there some back-of-the-envelope calculation I can do to understand this? Since both the pressure gauges and the TP diagnostic channels are monitored via EPICS, the refresh rate is similar, so I don't see how we can have a pump temperature/speed/current threshold tripped but NOT have this registered on all the pressure gauges; it seems like a bit of a contrived scenario to me. Our thresholds currently seem to be arbitrary numbers anyway, or are they based on some expected backstreaming rate? Isn't this scenario degenerate with a leak elsewhere in the vacuum envelope that would be caught by the differential pressure interlocks?
The temperature and current interlocks are implemented precisely because the pumps can shut themselves off. The concern is not about damaging the pumps (their internal logic protects against that); it's that a pump could automatically shut down and back-vent the IFO to atmosphere. Another interlock (e.g., the pressure differentials) might catch it, but it would depend on the back-vent rate and the scenario has never been tested. The temperature and current interlocks are set to trip just before the pump reaches its internal shut-down threshold.
For the email alert, can you expose a soft channel that is a flag - if this flag is not 1, then the service will send out an email.
That would be awesome if you're willing to volunteer. I agree this would be great to have. |
15408
|
Thu Jun 18 14:13:03 2020 |
Jon | Update | VAC | Questions/comments on vacuum | I agree there were MEDM fields, but I can't find any record of these channels being recorded till 2018 December, so I don't agree that they were being digitally monitored. You can also look back in the elog (e.g. here and here) and see that the display fields are just blank. I would then assume that no interlocks were dependent on these channels, because otherwise the vacuum interlocks would be perpetually tripped.
Right, I doubt they were ever recorded or used for interlocks. But the readbacks did work at one point in the past. There's a photo of the old vac monitor screen on p. 19 of E1500239 (last updated 2017) which shows the fields once alive.
Sorry but I'm having trouble imagining a scenario how the pressure gauges wouldn't register this before the IFO volume is compromised. Is there some back of the envelope calculations I can do to understand this? Since both the pressure gauges and the TP diagnostic channels are being monitored via EPICS, the refresh rate is similar, so I don't see how we can have a pump temperature / speed / current threshold tripped but NOT have this be registered on all the pressure gauges, seems like a bit of a contrived scenario to me. Our thresholds currently seem to be arbitrary numbers anyway, or are they based on some expected backstreaming rate? Isn't this scenario degenerate with a leak elsewhere in the vacuum envelope that would be caught by the differential pressure interlocks?​
I don't disagree that the pressure gauges would register the change. What I'm not sure about is whether the change would violate any of the existing interlock conditions, triggering a shutdown. Looking at what we have now, the only non-pump-related conditions I see that might catch it are the diffpres conditions:
- abs(P2 - PTP2) > 1 torr (for a TP2 failure)
- abs(P3 - PTP3) > 1 torr (for a TP3 failure)
- abs(P1a - P2) > 1 torr (for either a TP2 or TP3 failure)
For the P1a-P2 differential, the threshold of 1 torr is the smallest value that in practice still allows us to pump down the IFO without having to disable the interlocks (P1a-P2 is the TP1 intake/exhaust differential). The purpose of the P2-PTP2/P3-PTP3 differentials is to prevent V4/5 from opening and suddenly exposing the spinning turbo to high pressure. I'm not aware of a real damage threshold calculation that any one has done; I think < 1 torr is lore passed down by Steve.
If a turbo pump fails, the rate at which it would backstream is unknown (to me, at least) and likely depends on the failure mode. The scenario I'm concerned about is one in which the backstreaming is slow compared to the conduction through the pumpspool and into the main volume. In that case, the pressure gauges will rise more or less together all the way up to atmosphere, likely never crossing the 1 torr differential pressure thresholds.
For the email alert, can you expose a soft channel that is a flag - if this flag is not 1, then the service will send out an email.
There's already a channel C1:Vac-error_status, where if the value is anything other than an empty string, there is an interlock tripped. Does that work? |
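A minimal sketch of the edge-triggered logic such an alert service could use. Only C1:Vac-error_status is an existing channel per the discussion above; the polling and mail-sending parts are indicated in comments only, and every name there beyond that channel is an assumption:

```python
def should_alert(prev_status, curr_status):
    """Alert only on the transition from 'no error' (empty string) to an
    interlock-tripped state, so a latched error does not generate a
    stream of emails."""
    return prev_status == "" and curr_status != ""

# In the real service, one might poll the channel in a loop, roughly:
#   from epics import caget                       # pyepics
#   curr = caget("C1:Vac-error_status", as_string=True)
#   if should_alert(prev, curr):
#       send_email(subject="40m vac interlock tripped", body=curr)
#   prev = curr
# where send_email is a hypothetical wrapper around smtplib or a relay.
```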
15410
|
Thu Jun 18 15:46:34 2020 |
gautam | Update | VAC | Questions/comments on vacuum | So why not just have a special mode for the interlock code during pumpdown and venting, and during normal operation we expect the main volume pressure to be <100uTorr so the interlock trips if this condition is violated? These can just be EPICS buttons on the Vac control MEDM screen. Both of these procedures are not "business as usual", and even if we script them in the future, it's likely to have some operator supervising, so I don't think it's unreasonable to have to switch between these modes. I just think the pressure gauges have demonstrated themselves to be much more reliable than these TP serial readbacks (as you say, they worked once upon a time, but that is already evidence of its flakiness?). The Pirani gauges are not ultra-reliable, they have failed in the past, but at least less frequently than this serial comm glitching. In fact, if these readbacks are so flaky, it's not impossible that they don't signal a TP shutdown? I just think the real power of having these multi-channel diagnostics is lost without some AND logic - a turbopump failure is likely to result in an increase in pump current and temperature increase and pump speed decrease, so it's not the individual channel values that should be determining if an interlock is tripped.
I definitely think that protecting the vacuum envelope is a priority - but I don't think it should be at the expense of commissioning time. But if you think these extra interlocks are essential to the safety of the vacuum system, I withdraw my request.
I don't disagree that the pressure gauges would register the change. What I'm not sure about is whether the change would violate any of the existing interlock conditions, triggering a shutdown. Looking at what we have now, the only non-pump-related conditions I see that might catch it are the diffpres conditions:
It would be better to have a flag channel; it might be useful for the summary pages too. I will make it myself if it is too much trouble.
There's already a channel C1:Vac-error_status, where if the value is anything other than an empty string, there is an interlock tripped. Does that work? |
15413
|
Fri Jun 19 07:40:49 2020 |
Jon | Update | VAC | Questions/comments on vacuum | I think we should discuss interlock possibilities at a 40m meeting. I'm reluctant to make the system more complicated, but perhaps we can find ways to reduce the reliance on the turbo pump readbacks. I agree they've proven to be the least reliable.
While we may be able to improve the tolerance to certain kinds of hardware malfunctions (and if so, we should), I don't see interlocks triggering on abnormal behavior of critical equipment as the root problem. As I see it, our bigger problem is with all the malfunctioning, mostly end-of-lifetime pieces of vacuum equipment still in use. If we can address the hardware problems, as I'm trying to do with replacements [ELOG 15412], I think that in itself will make the interlocking much less of an issue.
Quote: |
So why not just have a special mode for the interlock code during pumpdown and venting, and during normal operation we expect the main volume pressure to be <100uTorr so the interlock trips if this condition is violated? These can just be EPICS buttons on the Vac control MEDM screen. Both of these procedures are not "business as usual", and even if we script them in the future, it's likely to have some operator supervising, so I don't think it's unreasonable to have to switch between these modes. I just think the pressure gauges have demonstrated themselves to be much more reliable than these TP serial readbacks (as you say, they worked once upon a time, but that is already evidence of its flakiness?). The Pirani gauges are not ultra-reliable, they have failed in the past, but at least less frequently than this serial comm glitching. In fact, if these readbacks are so flaky, it's not impossible that they don't signal a TP shutdown? I just think the real power of having these multi-channel diagnostics is lost without some AND logic - a turbopump failure is likely to result in an increase in pump current and temperature increase and pump speed decrease, so it's not the individual channel values that should be determining if an interlock is tripped.
|
Ok, this can be added pretty easily. Its value will just be toggled between 1 and 0 every time the interlock server raises/clears the existing string channel. Adding the channel will require restarting the whole vac IOC, so I'll do it at a time when Jordan is on hand in case something fails to come back up.
Quote: |
It would be better to have a flag channel, might be useful for the summary pages too. I will make it if it is too much trouble.
|
|
15415
|
Fri Jun 19 09:57:35 2020 |
gautam | Update | VAC | Questions/comments on vacuum | For this particular email service, ideally the email should be sent out as soon as the interlock is tripped, so this would require a line of code to be added to the main interlock code. Which I guess would require a restart of the interlock service. So let me know when you guys plan to do the dry-pump tip seal replacement operation (when I presume valves will be closed anyways) so that we can do this in a minimally invasive way.
Quote: |
Ok, this can be added pretty easily. Its value will just be toggled between 1 and 0 every time the interlock server raises/clears the existing string channel. Adding the channel will require restarting the whole vac IOC, so I'll do it at a time when Jordan is on hand in case something fails to come back up.
|
|
8509
|
Mon Apr 29 23:02:48 2013 |
Koji | Configuration | LSC | Questions | Q. How much Schnupp asymmetry do we want in order to improve the signal ratio between PRCL/MICH at the REFL ports?
Q. How much can we increase the Schnupp asymmetry within the practical constraints?
Q. How does the PRCL/MICH ratio differ among the REFL ports?
=> My modeling (many years ago) shows the ratio of {115, 51, 26, 23} for REFL{11, 33, 55, 165}.
These numbers should be confirmed by modern simulation of the 40m with updated parameters.
I should definitely use 55MHz, but also prepare a better 165MHz option too.
Q. How are the TT/PRM motions affecting the lock stability? How can we quantify this effect? How can we mitigate this issue?
Q. Can we somehow change the sensing matrix by shifting the modulation frequency?
Q. Is normalization by POP22 or POP110 actually working well?
=> Time series measurement of error signals & servo inputs |
9850
|
Thu Apr 24 16:25:31 2014 |
ericq | Update | LSC | Quick CM servo prep | I added ~1m of cable to the LO side of the REFL11 Demodulator, which brought its PRCL demod phase to about 8 degrees. According to my simulations, PRCL and CARM have the same angle (but opposite sign) at resonance. There seems to be a severe lack of SMA cables in the lab, so I didn't tune it to be any closer. Cos(8 degrees)=.99, so I think it should be fine to use it for the CARM servo, since none of the other signals are going to be nearly as big. I plugged analog REFL 11 I back into the CARM servo IN1.
As for IN2, I threw together a temporary setup for using REFLDC as a complementary signal. I T'd off the REFLDC signal (which is the DC signal out of REFL55) and sent it into an SR560 to subtract an offset. The offset comes from a C1:IOO-TT4_LR output, low-passed at 1 Hz by a passive Pomona box, since there are 8 DAC channels actively running that were set up for the nonexistent tip-tilts 3 and 4. The output of the SR560 is sent to the CARM servo IN2.
I adjusted the offset by turning on only IN2 in the CARM servo MEDM screen, and looking at the CM_SLOW signal in data viewer. I adjusted gains and such to get it to look just like REFLDC with the PRC locked. There was good coherence and no appreciable phase difference from DC out to some kHz, albeit a dip in coherence to about .8-.9 from ~40 to 300Hz, for some reason. (This included turning on the unWhite FM in the REFLDC filter bank)
If this signal turns out to be useful, it will be relatively straightforward to put together a little box that does the offset subtraction nicely, but this should do for our immediate needs.
Lastly, I hung up this plot in the control room to give us information about the DC values of different signals as the CARM offset changes. This is helpful for seeing what our CARM offset is based on the transmission we see, when different signals start to have length dependence, where they start/stop being linear, etc. The TRX curve is scaled to a maximum of 600, REFLDC is normalized to input power = 1, and all the rest are arbitrarily scaled to fit on the plot. I've assumed 75 ppm loss on all mirrors in my simulation (PRM, BS, 2xITM, 2xETM), mostly to get some realistic profile of REFLDC.

|
12591
|
Wed Nov 2 12:05:00 2016 |
ericq | Update | ASC | Quick WFS thoughts | I poked around a bit thinking about what we need for a single AS WFS.
New things that we would need:
Things we have:
- Many Eurocard-style Whitening chassis, such as we use for all of the LSC PDs.
- Enough ADC (c1ioo has two ADCs, but doesn't even use 32 channels, so we can steal one, and throw it into c1lsc)
We'd have 12 new signals to acquire: 4 quadrants x DC, I, Q. In principle the DC part could go into a slow channel, but we have the ADC space to do it fast, and it'll be easier than mucking around with c1iscaux or whatever.
Open question: What to do about AA? A quick search didn't turn up any eurocard AA chassis like the ones we use for the LSC PDs. However, given the digital AA that happens from 64kHz->16kHz in the IOP, we've talked about disabling/bypassing the analog AA for the LSC signals. Maybe we can do the same for the QPD signals? Or, modify the post-demod audio-band amplifier in the demod chassis to have some simple, not-too-aggressive lowpass. |
11594
|
Mon Sep 14 16:50:12 2015 |
ericq | Update | LSC | Quick note | Just a heads up while I'm out for a bit: the delay line is currently installed in the 55MHz modulation path.
I'll be back later, and will revert the setup. |
11508
|
Fri Aug 14 21:40:26 2015 |
Ignacio | Update | LSC | Quick static offline subtractions of YARM | Plotted below are the resultant subtractions for YARM using different witness configurations:

The best subtraction happens with all the channels of both the GUR1 and T240 seismometers, but one gets just as good subtraction without using the z channels as witnesses.
Also, why is the T240 seismometer better at subtracting noise for YARM than what GUR1 alone can accomplish? Using only the X and Y channels of the T240 gave the third best subtraction (purple trace).
My plan for now is as follows:
1) Measure the transfer function from the ETMY actuator to the YARM control signal
2) Collect data for YARM when FF for MCL is on in order to see what kind of subtractions can be done. |
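A static offline Wiener (least-squares FIR) subtraction like the one above can be sketched as follows. The witness and target here are synthetic stand-ins for the seismometer and YARM channels; the tap count and noise levels are arbitrary choices for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
n, taps = 10000, 32
witness = rng.standard_normal(n)

# target = witness passed through an unknown FIR path + independent sensor noise
h_true = np.exp(-np.arange(taps) / 5.0)
target = np.convolve(witness, h_true, mode="full")[:n] + 0.1 * rng.standard_normal(n)

# Build the delayed-witness regression matrix and solve the normal equations
X = np.column_stack([np.roll(witness, k) for k in range(taps)])
X[:taps] = 0.0                        # discard rows containing wrapped samples
w = np.linalg.lstsq(X, target, rcond=None)[0]

residual = target - X @ w
print(residual.var() / target.var())  # large reduction expected
```

With a well-correlated witness the residual variance drops to roughly the sensor-noise floor, which is the figure of merit for the subtractions plotted above.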
Attachment 1: arms_wiener.png
|
|
8156
|
Mon Feb 25 13:01:39 2013 |
Koji | Summary | General | Quick, compact, and independent tasks | - IMC PDH demodulation phase adjustment
- Permanent setup for green transmission DC PDs on the PSL table |
1010
|
Tue Sep 30 19:50:27 2008 |
Jenne | Update | PSL | Quicky Summary - more details later | Quicky summary for now, more details later tonight / tomorrow morning:
PMC notch: It's tuned up, but it is out, and it is staying out. It looks like the 18.3kHz junk isn't being helped by the brick; in fact, the brick makes it worse. And the notch isn't enough to make the peak go away. Rana's and my conclusions about the PMC: the 18.3kHz resonance is associated with the way the PMC touches its mount. Depending on where we push (very gently, not much pressure) on the PMC, we can make the peak come and go. Also, if the PMC happens to be set nicely on its ball bearings, the peak doesn't appear. More notes on this later.
PMC's RF modulation depth: Since with the PMC's brick off, and the PMC sitting nicely on its ball bearings, we don't see any crazy oscillations, we were able to take the gain slider on the PMC screen all the way up to 30dB. To give us more range, we changed the modulation depth of the RF to 2V, from its previous value of 1V.
Phase of PMC servo: Since the phase of the PMC servo hasn't been set in a while, I eyeballed it, and set the phase to: Phase Flip = 180, Phase Slider = 4.8000 . I measured many points, and will plot a calibration curve later.
I also measured the actual value of the RF out of the PMC's LO board, when changing the RF output adjust slider. Again, will post the calibration later.
The attached PNG shows the PMC spectra from now and from Aug. 30 (ref). As you can see there's been some good reduction in the acoustic noise (red v. orange). The large change in the error signal is because of the much higher gain in the servo now. We'll have to redo this plot once Jenne measures the new UGF. |
Attachment 1: mcf.png
|
|
6494
|
Fri Apr 6 11:32:09 2012 |
Jenne | Update | Computers | RAID array is rebuilding.... | Suresh reported to Den, who reported to me (although no elogs were made.....) that something was funny with the FB. I went to look at it, and it's actually the RAID array rebuilding itself. I have called in our guru, Jamie, to have a look-see. |
6495
|
Fri Apr 6 14:39:21 2012 |
Jamie | Update | Computers | RAID array is rebuilding.... | The RAID (JetStor SATA 416S) is indeed resyncing itself after a disk failure. There is a hot spare, so it's stable for the moment. But we need a replacement disk:
RAID disks: 1000.2GB Hitachi HDT721010SLA360
Do we have spares? If not we should probably buy some, if we can. We want to try to keep a stock of the same model number.
Other notes:
The RAID has a web interface, but it was for some reason not connected. I connected it to the martian network at 192.168.113.119.
Viewing the RAID event log on the web interface silences the alarm.
I retrieved the manual from Alex, and placed it in the COMPUTER MANUALS drawer in the filing cabinet. |
1973
|
Tue Sep 8 15:14:26 2009 |
rana, alex | Configuration | DAQ | RAID update to Framebuilder: directories added + lookback increased | Alex logged in around 10:30 this morning and, at our request, adjusted the configuration of fb40m to have 20 days of lookback.
I wasn't able to get him to elog, but he did email the procedure to us:
1) create a bunch of new "Data???" directories in /frames/full
2) change the setting in /usr/controls/daqdrc file
set num_dirs=480;
my guess is that the next step is:
3) telnet fb0 8087
daqd> shutdown
I checked and we do, in fact, now have 480 directories in /frames/full and are so far using up 11% of our 13TB capacity. Lets try to remember to check up on this so that it doesn't get overfull and crash the framebuilder.
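As a quick consistency check on the numbers above (a sketch, assuming one fixed-length time slice per Data??? directory):

```python
num_dirs = 480          # from "set num_dirs=480;" in daqdrc
lookback_days = 20      # lookback Alex configured

hours_per_dir = lookback_days * 24 / num_dirs
print(hours_per_dir)    # -> 1.0, i.e. one hour of frames per directory
```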
|
6082
|
Wed Dec 7 18:47:36 2011 |
Jenne | Update | RF System | RAM Mon is now being demodulated | There were 2 open outputs on the splitter in the RAMmon (formerly known as Stochmon) box underneath the BS oplev table. The input to the splitter comes from the Thorlabs PD that we're using as our RAM monitoring PD. 2 of the outputs go to the RMS detection of 11 and 55 MHz. Now the other 2 (previously terminated) outputs go over to the LSC rack via SMA cable. The signal on both channels is ~200mV pk-pk, so -10dBm. One is plugged into the AS11 demod board (which didn't have a PD input yet), and the other goes to POP55's demod board, so POP55 is not what you think it is for now.
Koji is working on checking out the Rich box, which has 4 demodulators that we will use eventually. Right now we're just using the already-plugged-in demod boards so we can start looking at some trends of RAM. We're going to need to find some channels when we're ready for the switchover.
Zach is nearing completion of the mini-update to the temp sensing system. Once we have the new more sensitive temp sensor in place, we can have a look-see at the similarities between EOM temperature and RAM levels. |
6084
|
Thu Dec 8 00:04:50 2011 |
rana | Update | IOO | RAM Mon is now being demodulated | Monitoring good, but remember that the EOM alignment must be done carefully to minimize the RAM before we can use these trends. |
6059
|
Thu Dec 1 12:27:51 2011 |
Zach | Metaphysics | RF System | RAM diagnosis/suppression plan? | It seems like there is some confusion---or disagreement---amongst the lab about how to proceed with the RAM work (as Rana mentioned at the TAC meeting, we will henceforth refer to it only as "RAM" and never as "RFAM"; those who refuse to follow this protocol will be taken out back and shot).
I would like to provide a rough outline and then request that people reply with comments, so that we can get a collective picture of how this should work. I have divided this into two sections: 1) Methodology, which is concerned with the overall goals of the testing and the procedure for meeting them, and 2) General Issues, which are broadly important regardless of the chosen methodology.
1. Methodology
There are two broad goals:
- Characterization of extant RAM
- Measuring the RAM levels existing in an aLIGO-type interferometer without any suppression systems
- Modeling to estimate the effect on IFO control and corroboration with measurements where possible
- DC RAM levels contributing offsets to IFO operating point
- Quasi-DC RAM levels affecting long term detector tuning (e.g., sensing matrix, MICH -> DARM feedforward, etc.)
- Audio-frequency RAM contributing noise directly via error point modulation
- Modeling to scale/adapt results from 40m -> aLIGO
- Mitigation
- Developing and assessing systems for suppressing RAM
- Passive: thermal shielding and isolation
- Active: EOM temperature control
- Simple temperature stabilization
- RAM error signal
The question is: which is our goal? The first, the second, or both? If both, what priority is given to which and can/should they be done in parallel? Also, task distribution.
2. General Issues
These are loosely related, so they are in random order:
- Sensing
- Temperature
- What is the priority/urgency of a precision AC-bridge-readout temperature sensor?
- If priority/urgency is low, what is the priority/urgency of upgrading breadboard controller to protoboard version? The common answer will be "make the protoboard version now", but if the urgency of the final AC sensing is high, it may make sense to focus on finalizing that design (after all, other experiments are waiting on a precision temperature controller, and it is not cost-effective to make many temporary controllers as I have done for the 40m).
- Sensor noise issues
- What is the sensor-noise-limited temperature stabilization level?
- What is our willingness to tolerate the thermal low-passing of the EOM can itself (i.e., what is our sensitivity to gradients)?
- To answer the above questions, we need to perform stabilization tests with several sensors on the same can, with some in loop (averaged) and some out of loop.
- If we determine that gradients are a problem, we may need to:
- Design a casing for outside the EOM (inside the foam box) to make the heating uniform, or
- We may be able to get away with a more customized heater (instead of heating the can from one side as we do now).
- Optical RAM
- Stochmon is a nice diagnostic tool, but do we want something better? In particular, we want to have linear signals about a zero-DC-RAM point, which requires phase
- Where will this sensor be located?
- What kind of PD will it be? Broadband? Multi-resonant?
- What sort of electronics will we need? If we are going to use this as an error signal for controlling the EOM temperature, it is just as important as any other IFO readout, since it may couple into all of them.
- RF pickup is a BIG ISSUE HERE
- How will the demodulation phases be selected? It should be possible to take TF measurements in certain misaligned (i.e., non-resonant) conditions and adjust the relative phase between the RAM readouts and standard IFO RF readouts such that they are in phase, but this will require some thinking.
- Lots more, I'm sure
- Control
- Method (overlaps some with methodology portion)
- What is better, simple temperature stabilization or RAM feedback? (More likely, "how much better is RAM feedback?")
- If RAM feedback is difficult or impossible to implement effectively (see below), is temperature stabilization good enough?
- Regime
- Depending on extant RAM levels and on achievable sensing noise, it will be unwise and/or unnecessary to have a RAM control bandwidth above some relatively low frequency (~sub Hz)
- Gain where RAM suppression is not needed only injects noise into the system
- This cutoff frequency is largely determined by the thermal response of the system, but additional filtering will likely be necessary to reduce higher-frequency noise coupling (e.g., nonlinear downconversion of high-frequency signals into heater dissipation, etc.)
- Efficacy
- If we do RAM feedback, which signal (i.e. which frequency and quadrature) do we minimize?
- Do we achieve large common-mode reduction across all RF signals, or is there some differential component?
- In particular, do we make some or all other control signals noisier by stabilizing/minimizing RAM in one channel?
- If the answer is yes, can we derive an effective control signal from a linear combination of some or all individual RAM signals?
There are probably many other issues I have neglected, so please comment on this rough draft as you see fit! |
6069
|
Mon Dec 5 09:46:21 2011 |
Zach | Metaphysics | RF System | RAM diagnosis/suppression plan? | Since no one has made any comments, I will assume that everyone is either 100% satisfied with the outline or they have no interest in the project. Under this assumption, I will make decisions on my own and begin planning the individual steps in more detail.
In particular, I will assume that our goal comprises BOTH characterization of RAM levels and mitigation, and I will try to find the best way that both can be achieved as simultaneously as possible. |
10625
|
Fri Oct 17 17:52:55 2014 |
Jenne | Update | LSC | RAM offsets | Last night I measured our RAM offsets and looked at how those affect the PRMI situation. It seems like the RAM is not creating significant offsets that we need to worry about.
Words here about data gathering, calibration and calculations.
Step 1: Lock PRMI on sideband, drive PRM at 675.13Hz with 100 counts (675Hz notches on in both MICH and PRCL). Find peak heights for I-phases in DTT to get calibration number.
Step 2: Same lock, drive ITMs differentially at 675.13Hz with 2,000 counts. Find peak heights for Q-phases in DTT to get calibration number.
Step 3: Look up actuator calibrations. PRM = 19.6e-9/f^2 meters/count and ITMs = 4.68e-9/f^2 meters/count. So, I was driving PRM about 4pm, and the ITMs about 20pm.
Step 4: Unlock PRMI, allow flashes, collect time series data of REFL RF signals.
Step 5: Significantly misalign ITMs, collect RAM offset time series data.
Step 6: Close PSL shutter, collect dark offset time series data.
Step 7: Apply calibration to each PD time series. For each I-phase of PDs, calibration is (PRM actuator / peak height from step 1). For each Q-phase of PDs, calibration is (ITM actuator / peak height from step 2).
Step 8: Look at DC difference between RAM offset and dark offset of each PD. This is the first 4 rows of data in the summary table below.
Step 9: Look at what peak-to-peak values of signals mean. For PRCL, I used the largest pk-pk values in the plots below. For MICH I used a calculation of what a half of a fringe is - bright to dark. (Whole fringe distance) = (lambda/2), so I estimate that a half fringe is (lambda/4), which is 266nm for IR. This is the next 4 rows of data in the table.
Step 10: Divide. This ratio (RAM offset / pk-pk value) is my estimate of how important the RAM offset is to each length degree of freedom.
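The arithmetic in steps 3, 7, and 10 can be sketched numerically. The drive amplitudes come straight from the text, and the 11 MHz I-phase ratio uses the summary-table values below; no raw DTT peak heights are reproduced here:

```python
f = 675.13                            # drive frequency, Hz

# Step 3: convert drive counts to meters with the actuator calibrations
prm_drive = 19.6e-9 / f**2 * 100      # PRM driven at 100 counts -> ~4 pm
itm_drive = 4.68e-9 / f**2 * 2000     # ITMs driven at 2000 counts -> ~20 pm
print(prm_drive, itm_drive)

# Step 10: ratio of RAM offset to PDH pk-pk for the 11 MHz I-phase
ram_offset = 4e-11                    # m, REFL11 I RAM offset
pdh_pkpk = 5.5e-9                     # m, REFL11 I pk-pk from flashes
print(ram_offset / pdh_pkpk)          # ~7e-3
```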
Summary table:

                                   PRCL (I-phase)            MICH (Q-phase)
  RAM offsets
    11                             4e-11 m                   2.1e-9 m
    33                             1.5e-11 m                 ~2e-9 m
    55                             2.2e-11 m                 ~1e-9 m
    165                            ~1e-11 m                  ~1e-9 m
  Pk-pk (PDH or fringes)           PDH pk-pk from flashes    MICH fringes from calculation
    11                             5.5e-9 m                  266e-9 m
    33                             6.9e-9 m                  266e-9 m
    55                             2.5e-9 m                  266e-9 m
    165                            5.8e-9 m                  266e-9 m
  Ratio: (RAM offset) / (pk-pk)
    11                             7e-3                      8e-4
    33                             2e-3                      7e-3
    55                             9e-3                      4e-3
    165                            2e-3                      4e-3
Plots (Left side is several PRMI flashes, right side is a zoom to see the RAM offset more clearly):
|
6417
|
Wed Mar 14 16:33:20 2012 |
keiko | Update | LSC | RAM simulation / RAM pollution plot | In the last post, I showed that the SRCL element in the MICH sensor (AS55 I-mich) is changed by 1% due to RAM.
Here I calculate how this 1% residual in the MICH sensor (AS55 I-mich) shows up in the MICH sensitivity. The scenario is:
(1) We assume we are canceling SRCL in MICH by feedforward first (original matrix (2,3) element).
(2) SRCL in MICH (matrix (2,3) element) is changed by 1% due to RAM, but you keep the same feedforward with the same gain.
(3) You get 1% SRCL residual motion in the MICH sensor. This motion depends on how quiet or loud SRCL is. The assumed level is:
Pollution level = SRCL shot noise level in SRCL sensor x SRCL closed loop TF x 1% residual, shown in the following plot.
AS sensor = AS55I-mich --- SN level 2.4e-11 W/rtHz ------- MICH SN level 6e-17 m/rtHz
SRCL sensor = AS55 I-SRCL --- SN level 2e-11 W/rtHz --- SRCL SN level 5e-14 m/rtHz
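As a scalar version of the pollution formula above, evaluated at a single frequency (the closed-loop TF magnitude here is a placeholder value; the real calculation uses the frequency-dependent loop model):

```python
srcl_shot_noise = 5e-14   # m/rtHz, SRCL shot-noise level in the SRCL sensor
cl_tf_mag = 1.0           # |SRCL closed-loop TF| at one frequency (assumed)
residual = 0.01           # the 1% SRCL residual left in the MICH sensor

pollution = srcl_shot_noise * cl_tf_mag * residual
print(pollution)          # m/rtHz of SRCL motion appearing in the MICH channel
```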

Quote: |
Adding some more results with a more realistic RAM level assumption.
(4) With 0.1% RAM mod index of PM (normalized by (1) )
               PRCL         MICH           SRCL
  REFL11 I     0.99999      -0.001807      -0.000148
  AS 55 Im     0.000822     1.000002       0.000475
  AS 55 Is     1.068342     906.968167     1.00559
|
|
Attachment 1: Mar14pollution.png
|
|
6475
|
Mon Apr 2 18:24:34 2012 |
keiko | Update | LSC | RAM simulation for Full ifo | I extended my RAM script from DRMI (3DoF) to the full IFO (5DoF).
Again, it calculates the operating point offsets for each DoF from the optical model with RAM. Then the position offsets are added to the model and the LSC matrix is recalculated. The RAM level is assumed to be 0.1% of the PM modulation level, as usual, and the model is lossless for simplicity.
Original matrix without RAM:
REFL f1 : 1.000000 0.000000 0.000008 -0.000005 0.000003
AS f2 : 0.000001 1.000000 0.000005 -0.003523 -0.000001
POP f1 : -3956.958708 -0.000183 1.000000 0.019064 0.000055
POP f2 : -32.766392 -0.154433 -0.072624 1.000000 0.024289
POP f2 : 922.415913 -0.006625 1.488912 0.042962 1.000000
(MICH and SRCL uses the same sensor, with optimised demodulation phase for each DoF.)
Operation position offsets are:
PRCL -3.9125e-11 m
SRCL 9.1250e-12 m
CARM 5.0000e-15 m
and no position offsets for DARM and MICH (because they are differential sensors and not affected by RAM offsets).
Resulting matrix with RAM + RAM offsets, normalised by the original matrix:
REFL f1 : 0.001663 -0.000000 0.003519 0.000005 -0.000003
AS f2 : 0.000004 0.514424 0.000004 -0.001676 -0.000001
POP f1 : 7.140984 -0.001205 15.051807 0.019254 0.000417
POP f2 : 0.029112 -0.319792 0.042583 1.000460 0.024298
POP f2 : -0.310318 -0.014385 -1.761519 0.043005 0.999819
As you can see in the second matrix, the CARM and DARM rows are completely destroyed by the RAM offsets! The DARM signal is reduced by half, so the separation between DARM and MICH is degraded by about 50%.
I also would like to extend this script to use the DC readout, but don't know how to calculate the position offset for AS_DC because the error signal is not zero-crossing for AS_DC anymore. Do you have any suggestions for me?
|
6478
|
Tue Apr 3 01:52:15 2012 |
Zach | Update | LSC | RAM simulation for Full ifo |
Quote: |
I also would like to extend this script to use the DC readout, but don't know how to calculate the postion offset for AS_DC because the error signal is not zero-crossing for AS_DC anymore. Do you have any suggestions for me?
|
I don't think I understand the question. AS_DC should not have a zero crossing, correct? |
|