ID |
Date |
Author |
Type |
Category |
Subject |
5530
|
Fri Sep 23 16:56:07 2011 |
Mirko | Update | LSC | Desired MC modulation frequency measurement, tuning of modulation frequency | [ Mirko, Koji, Suresh ]
Looked into the modulation frequency that best passes through the input MC. With the MC locked, we looked at the RF output of the PD in reflection of the MC, at the beat between 11 MHz and 29.5 MHz, and minimized it by fine-tuning the 11 MHz frequency (which amounts to maximizing the 11 MHz transmission).
SB freq. [MHz] Beat power [dBm]
11.065650 -75
11.065770 -80 (diving into spec. analyzer instrument noise)
11.066060 -80 (surfacing out of spec. analyzer instrument noise)
Set the freq. to the middle of the last two points: 11.065910MHz at 16:26.
ToDo: How big a problem is the AM? |
7271
|
Fri Aug 24 14:46:08 2012 |
Jenne | Summary | General | Detailed alignment plan | Friday / pre-vent:
[done] Align the MC mirrors for the incident beam so that the mirrors can be the alignment reference [Koji]
[in progress] Center spots on MC mirrors [Jenne]
Put beam attenuator optics (PBS + waveplate) on PSL table, realign input beam to MC mirror centers
[In progress] See if we can design a set of nuts and bolts to use at bottom of tiptilt optic ring, to do small adjustments of pitch alignment [Steve]
After doors open:
Use CCD (Watek, with AGC on) to take images of everything we can think of, to see current status of clipping
Check that we get through the Faraday without clipping
Move PZT1 and MMT mirrors to get good spot positions on PR3, PR2. Make sure we're clearing the Faraday's housing
Install dichroic optics, perhaps completely readjust pitch alignment of those tiptilts (we will measure the spares later, and call that good enough for our phase mapping).
Use some kind of oplev setup to check pitch alignment of PR2, PR3.
Tweak (if necessary) PR2 & PR3 pitch to go through center of PRM, BS, hit center of ITMY
Check that we're not clipping on the BS cage anywhere
Use CCD to take images with Sensoray of everything we can think of, to confirm we don't have clipping anywhere. Want to see the edges of the beam on the targets, which would mean that the beam is hitting the center of the optic. If necessary, we'll stay open an extra day to get good camera images everywhere, so we have a good record of what's going on inside.
Note: While having good arm alignment would be good, we're willing to sacrifice some arm alignment to have good DRMI alignment, since we're re-venting and installing the new active tiptilts in another month or so.
Things I'm leaving for Jamie-the-Vent-Czar to plan:
Order of door opening
Beam dump assembly and placement
|
13109
|
Mon Jul 10 21:31:15 2017 |
Kaustubh | HowTo | Computer Scripts / Programs | Details on Cavity Scan Analysis | Summary:
The following elog describes the procedure followed for generating a sample simulation of a cavity scan, fitting an actual cavity scan, and calculating the relevant parameters from the cavity scan and fit data.
1. Cavity Scan Simulation:
- First, we define the sample cavity parameters, i.e., the reflectivities and transmissivities of the mirrors, the RoCs of the mirrors and the absolute cavity length.
- We then define a frequency range using numpy.linspace function for which we want to take a scan.
- We then define a function that returns the transmission power output of a Fabry-Perot cavity using the cavity equations (a sketch is given at the end of this section). For the TEMnm mode,
Pt = (t1*t2)^2 / (1 + (r1*r2)^2 - 2*r1*r2*cos(4*pi*f*L/c - 2*(n+m+1)*eta)),
where Pt is the transmission power ratio of output power to input power, t1, t2, r1, r2 are the transmissivities and reflectivities of the two mirrors, L is the absolute cavity length, f is the frequency of the input laser, c is the speed of light, and eta = arccos(±sqrt(g1*g2)) is the Gouy phase shift, with g1, g2 being the g-factors for the two cavity mirrors (g = 1 - L/R). 'n' and 'm' correspond to the TEMnm higher order mode.
- We now obtain a cavity scan by giving the above defined function the cavity parameters and adding the outputs for different higher order modes ('n', 'm' values). Appropriate weighting factors for the HOMs need to be chosen. The same function with appropriate coefficients can be used to also add the modulated sidebands to the total transmission power.
- To this obtained total power we can add some random noise using numpy's random.normal function. We then need to normalise the data with respect to the max. power transmission ratio.
- We can now perform fitting on the above data using the procedure stated in the next section and then plot the two data sets using matplotlib module.
- A similar code to do the above is given here.
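As a rough illustration of the simulation described above (not the actual linked script), here is a minimal numpy sketch; the cavity parameters, HOM weight, noise level and scan range below are arbitrary placeholders, not the real 40m values.

import numpy as np

c = 299792458.0  # speed of light [m/s]

def fp_transmission(f, L, t1, t2, r1, r2, g1, g2, n=0, m=0):
    """Power transmission ratio of a Fabry-Perot cavity for the TEM_nm mode."""
    gouy = (n + m + 1) * np.arccos(np.sign(g1) * np.sqrt(g1 * g2))  # one-way Gouy phase
    phi = 2 * np.pi * f * L / c - gouy                              # one-way propagation phase
    return (t1 * t2)**2 / (1 + (r1 * r2)**2 - 2 * r1 * r2 * np.cos(2 * phi))

# toy parameters (placeholders)
L = 37.8                          # cavity length [m]
t1 = t2 = np.sqrt(0.014)          # amplitude transmissivities
r1 = r2 = np.sqrt(1 - t1**2)      # amplitude reflectivities (lossless mirrors)
g1, g2 = 1.0, 1 - L / 57.6        # g-factors, g = 1 - L/R

f = np.linspace(0, 8e6, 200000)                                   # frequency scan [Hz]
scan = fp_transmission(f, L, t1, t2, r1, r2, g1, g2)              # carrier TEM00
scan += 0.1 * fp_transmission(f, L, t1, t2, r1, r2, g1, g2, n=1)  # a weak TEM10 mode
scan += np.random.normal(0, 1e-3, f.size)                         # measurement noise
scan /= scan.max()                                                # normalise to the maximum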
2. Fitting a Cavity Scan:
- The actual data for a cavity scan can be found in this elog entry or attached below in the zip folder.
- We read this data and separate the frequency data and the transmission data.
- Using the peakutils module's indexes function, we find the indices of the various peaks in the data set.
- These peaks are from the fundamental resonances, sideband resonances(both 11MHz and 55MHz) as well as a few HOMs.
- Each of these resonances follows the cavity equations and hence can be modelled as a Lorentzian within a small interval around the peak frequency. A detailed description of how this is possible is given here and in the attached zip folder ('Functionsused.pdf').
- We define a Lorentzian function, of the form L(f) = a / (1 + ((f - f0)/b)^2), where 'a' is the peak transmission value, 'b' is the 'linewidth' of the Lorentzian and f0 is the peak frequency about which the cavity equations behave like a Lorentzian (see the sketch after this list).
- We now, using the Lorentzian function, fit the various identified peaks using the curve_fit function of the scipy module. Remember to turn the 'absolute_sigma' parameter to 'True'.
- The parameters now obtained can be evaluated using the procedure given in the next section.
- The total transmission power is evaluated by feeding in the above obtained parameters back into the Lorentzian function and adding it for each peak.
- We can plot the actual data set and the data obtained using the fit of different peaks in a plot using matplotlib module. We can also plot the residuals for a better depiction of the fit quality.
- The code to analyse the above mentioned cavity scan data is given here and attached below in the zip folder.
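As a rough sketch of the peak finding and Lorentzian fitting described above (not the attached script): the file name, window size, peak threshold and initial guesses are placeholders, and peakutils/scipy are assumed to be installed.

import numpy as np
import peakutils
from scipy.optimize import curve_fit

def lorentzian(f, a, b, f0):
    """Peak height a, half-width b, centre frequency f0."""
    return a / (1 + ((f - f0) / b)**2)

# hypothetical two-column file: frequency [Hz], normalised transmission
freq, trans = np.loadtxt('cavity_scan.txt', unpack=True)

idx = peakutils.indexes(trans, thres=0.05, min_dist=100)  # indices of candidate peaks

fits = []
for i in idx:
    win = slice(max(i - 50, 0), i + 50)                   # small interval around the peak
    p0 = [trans[i], 1e3, freq[i]]                         # guesses: height, width [Hz], centre
    popt, pcov = curve_fit(lorentzian, freq[win], trans[win],
                           p0=p0, absolute_sigma=True)
    fits.append((popt, np.sqrt(np.diag(pcov))))

model = sum(lorentzian(freq, *p) for p, _ in fits)        # total fitted transmission
residual = trans - model                                  # residuals for plotting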
3. Calculating Physically Relevant Parameters:
- The data obtained from fitting the peaks in the previous section now needs to be analysed in order to obtain some physically relevant information such as the FSR value, the TMS value, the modulation depths of the sidebands and perhaps even the linear calibration of the frequency axis.
- First we need to identify the fundamental TEM00 resonances among all the peaks. We do this using the numpy.where function, finding the peaks with transmission values greater than 0.9 (or any suitable threshold).
- Using these indices we now calculate the FSR and the Finesse of the peaks. A description of the correlation between the fit parameters and the FSR and Finesse is given here.
- We define a linear fitting function for fitting the frequency values of the fundamental resonances against the resonance index i. The slope of this line gives us the value of the FSR and the error in it.
- The Finesse can be calculated by fitting the linewidth with a constant function.
- The cavity length can be calculated from the FSR as L = c / (2 * FSR).
- Now, the approximate positions of the sideband resonances are given by (11*10^6 Hz modulo the FSR) and (55*10^6 Hz modulo the FSR) away from the fundamental carrier resonances.
- The modulation depth 'm' is obtained from the ratio of the sideband to the carrier transmission power, Ps/Pc = (J1(m)/J0(m))^2, where Pc is the carrier transmission power, Ps is the transmission power of the sideband and Jv is the Bessel function of order 'v' (a sketch is given at the end of this section).
- We define a function 'Bessel Ratio' using which we'll fit the transmission power ratio of the carrier to the sideband for the multiple sideband resonances.
- We also check for linearity in the frequency data by fitting the frequencies corresponding to peaks in the actual data against the ones obtained after fitting.
- After this we attempt to identify the other HOMs. For this we first determine a rough estimate of the value of TMS using the already known parameters of the mirrors, i.e., the RoCs. We then look in small intervals (0.5 MHz) around the frequencies where we would expect the HOMs to be, i.e., 1*TMS, 2*TMS, 3*TMS... away from the fundamental resonances. These positions are all modulo the FSR.
- After identifying the HOMs, we take the difference from the fundamental resonance and then study these modulo the FSR.
- We perform a Linear Fit between these obtained values and (n+m). As 'n','m' are degenerate, we can simply perform the fit against some variable 'k' and obtain the value of TMS as the slope of the linear fit.
- The code to do the above stated analysis is given here.
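A rough sketch of the parameter extraction, continuing from the 'fits' list in the previous sketch; the 0.9 threshold is the one quoted above, the sideband power ratios are hypothetical placeholders, and Ps/Pc = (J1(m)/J0(m))^2 is the standard sideband-to-carrier relation.

import numpy as np
from scipy.optimize import curve_fit
from scipy.special import jv

c = 299792458.0

# unpack the Lorentzian fit parameters from the previous sketch
heights = np.array([p[0] for p, _ in fits])
widths  = np.array([p[1] for p, _ in fits])          # half-widths b [Hz]
centres = np.array([p[2] for p, _ in fits])

# 1) FSR: linear fit of the TEM00 peak frequencies against the resonance index
car = np.sort(centres[heights > 0.9])                # carrier (TEM00) resonances
coeffs, cov = np.polyfit(np.arange(car.size), car, 1, cov=True)
fsr, fsr_err = coeffs[0], np.sqrt(cov[0, 0])

# 2) finesse (FSR / mean FWHM of the carrier peaks) and cavity length L = c/(2*FSR)
finesse = fsr / np.mean(2 * widths[heights > 0.9])
length = c / (2 * fsr)

# 3) modulation depth from the sideband/carrier power ratio, Ps/Pc = (J1(m)/J0(m))**2
def bessel_ratio(x, m):
    return (jv(1, m) / jv(0, m))**2 * np.ones_like(x, dtype=float)

ps_over_pc = np.array([0.010, 0.011, 0.009])         # placeholder measured ratios
m_fit, m_cov = curve_fit(bessel_ratio, np.arange(ps_over_pc.size), ps_over_pc, p0=[0.2])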
Most of the above info and some smaller details can be found in the markdown readme file in this git repo. |
14114
|
Sun Jul 29 23:15:34 2018 |
pooja | Update | Cameras | Developing CNN | Aim: To develop a convolutional neural network that resolves mirror motion from video.
Input: the previously simulated video of beam spot motion in pitch, generated by applying 4 sine waves of frequencies 0.2, 0.4, 0.1, 0.3 Hz with amplitude-to-frame-size ratios of 0.1, 0.04, 0.05, 0.08, where random uniform noise of up to 0.05 has been added to the amplitudes and frequencies. This is divided into train (0.4), validation (0.1) and test (0.5) fractions.
Model topology:
- Number of filters = 2
- Kernel size = 2
- Size of pooling window = 2
- Dense layer of 4 nodes (activation: selu) -----> Output layer of 1 node (activation: linear)
Batch size = 32, Number of epochs = 128, loss function = mean squared error
Optimizer: Nadam (learning rate = 0.00001, beta_1 = 0.8, beta_2 = 0.85)
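A minimal Keras sketch of the topology listed above; the 64x64 single-channel input shape and the selu activation on the convolutional layer are assumptions (they are not stated above), and the fit call is commented out since the frame/target arrays are placeholders.

from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Conv2D, MaxPooling2D, Flatten, Dense
from tensorflow.keras.optimizers import Nadam

model = Sequential([
    Conv2D(filters=2, kernel_size=2, activation='selu', input_shape=(64, 64, 1)),
    MaxPooling2D(pool_size=2),
    Flatten(),
    Dense(4, activation='selu'),     # dense layer of 4 nodes
    Dense(1, activation='linear'),   # output layer of 1 node
])
model.compile(loss='mean_squared_error',
              optimizer=Nadam(learning_rate=1e-5, beta_1=0.8, beta_2=0.85))

# model.fit(frames_train, target_train, batch_size=32, epochs=128,
#           validation_data=(frames_val, target_val))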
Plots of the CNN output & applied signal are given in Attachment 1. The variation in loss value with epochs is given in Attachment 2.
This needs to be further analysed with increasing random uniform noise over the pixels and by training the CNN on simulated data with varying amplitudes and frequencies for the sine waves. |
13937
|
Sun Jun 10 15:04:33 2018 |
pooja | Update | Cameras | Developing neural network | Aim: To develop a neural network in order to correlate the intensity fluctuations in the scattered light to the angular motion of the test mass. A block diagram of the technique employed is given in Attachment 1.
I have used Keras to implement supervised learning with a neural network (NN). Initially I developed a python code that converts a video (59 sec) of scattered light, taken after an excitation (sine wave of frequency 0.2 Hz) was applied to ETMX pitch, into image frames (of size 480*720) and stores the 2D pixel values of the 1791 image frames captured into an hdf5 file. This array of shape (1791,36500) is given as the input to the neural network.
I have tried to implement a regular NN only, not a convolutional or recurrent NN, using the sequential model in Keras. I tried various numbers of dense layers and varied the number of nodes in each layer. I got a test accuracy of approximately 7% using the following network: two dense layers, the first with 750 nodes and a dropout of 0.1 (10% of the nodes not used) and the second with 500 nodes. To add nonlinearity to the network, both layers are given a tanh activation function. The output layer has 1 node and expects an output of shape (1791,1). The model was compiled with a loss function of categorical crossentropy and optimizer = RMSprop, since these have been used in most of the image analysis examples.
The model is then trained against the dataset of mirror motion, which was obtained by sampling the cosine-wave fit to the mirror motion so that the shapes of the input and output of the NN are consistent. I used a batch size (number of samples per gradient update) of 32 and 20 epochs (number of times the entire dataset passes through the NN). However, with this we got an accuracy of only 7.6%.
I think the above technique overfits, since the dense layers use all of the nodes during training apart from the dropout fraction. Also, the beam spot moves in the video, so it may be necessary to use a convolutional NN to extract the information.
The video file can be accessed from this link https://drive.google.com/file/d/1VbXcPTfC9GH2ttZNWM7Lg0RqD7qiCZuA/view.
Gabriele told us that he had used the beam spot motion to train the neural network. He also informed us that GPUs are necessary for this. So we have to figure out a better way to train the network.
gautam noon 11Jun: This link explains why the straight-up fully connected NN architecture is ill-suited for the kind of application we have in mind. Discussing with Gabriele, he informed us that training on a GPU machine with 1000 images took a few hours. I'm not sure what the CPU/GPU scaling is for this application, but given that he trained for 10000 epochs, and we see that training for 20 epochs on Optimus already takes ~30 minutes, it seems like a futile exercise to keep trying on CPU machines. |
13972
|
Fri Jun 15 09:51:55 2018 |
pooja | Update | Cameras | Developing neural network | Aim : To develop a neural network on simulated data.
I developed a python code that generates a 64*64 image of a white Gaussian beam spot at the centre of a black background. I applied a sine wave of frequency 0.2 Hz that moves the spot vertically (i.e. in pitch) and simulated this video at 10 frames/sec for 10 seconds (a sketch of the frame generation is given after the list below). I then saved this data into an hdf5 file, reshaped it into a 1D array and gave it as input to a neural network. Out of the 100 image frames, 75 were taken as the training dataset and 25 as test data. I varied several hyperparameters such as the learning rate of the optimizer, the number of layers, nodes, activation function etc. Finally, I was successful in reducing the mean squared error with the following network model:
- Sequential model of 2 fully connected layers with 256 nodes each and a dropout of 0.1
- loss function = mean squared error, optimizer = RMSprop (learning rate = 0.00001) and activation function that adds nonlinearity = relu
- batch size = 32 and number of epochs = 1000
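A minimal sketch of the frame generation described above; the spot width, motion amplitude in pixels and the output file name are assumptions, while the rest follows the numbers quoted in this entry (64*64 frames, 0.2 Hz, 10 frames/sec for 10 s).

import numpy as np
import h5py

nframes, size, fps = 100, 64, 10.0   # 10 s of video at 10 frames/sec
amp, f_sig = 10.0, 0.2               # spot motion amplitude [pixels] (assumed) and frequency [Hz]
sigma = 5.0                          # Gaussian spot width [pixels] (assumed)

y, x = np.mgrid[0:size, 0:size]
t = np.arange(nframes) / fps
frames = np.zeros((nframes, size, size))
for k, tk in enumerate(t):
    yc = size / 2 + amp * np.sin(2 * np.pi * f_sig * tk)   # spot centre moves vertically (pitch)
    frames[k] = np.exp(-((x - size / 2)**2 + (y - yc)**2) / (2 * sigma**2))

with h5py.File('beam_spot_sim.hdf5', 'w') as hf:           # hypothetical file name
    hf.create_dataset('frames', data=frames)

X = frames.reshape(nframes, -1)                            # flattened 1D input to the NN
target = amp * np.sin(2 * np.pi * f_sig * t)               # applied signal (training target)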
I have attached the plot of the output of the neural network (NN), the sine signal applied to simulate the video, and their residual error in Attachment 1. The plot of the variation in mean squared error (in log scale) as the number of epochs increases is given in Attachment 2.
I think this network worked easily since there is no noise in the input. Gautam suggested testing this network on simulated data with a noisy background.
|
14005
|
Fri Jun 22 10:42:52 2018 |
pooja | Update | | Developing neural networks | Aim: To find a model that trains on the simulated data of a Gaussian beam spot moving in the vertical direction under the application of a sinusoidal signal.
All the attachments are in the zip folder.
The simulated video of beam spot motion without noise (amplitude of sinusoidal signal given = 20 pixels) is given in this link https://drive.google.com/file/d/1oCqd0Ki7wUm64QeFxmF3jRQ7gDUnuAfx/view?usp=sharing
I tried several cases:
Case 1:
I added random uniform noise (ranging from 0 to 25.5, i.e. 10% of the maximum pixel value 255) using opencv to the 64*64 simulated images made in the last case (https://nodus.ligo.caltech.edu:8081/40m/13972), clipped the pixel values to the range 0 to 255, trained using the same network as in the previous elog, and it worked well. The variation in mean squared error with epochs is given in Attachment 1, and the applied signal, the output of the neural network (NN) (magnitude of the signal vs time) and the residual error are given in Attachment 2.
Case 2:
I simulated 128*128 images at 10 frames/sec by applying a sine wave of frequency 0.2Hz that moves the beam spot & resized them using opencv to 64*64. Then I trained with 300 cycles & tested with 1000 cycles using the following sequential model:
(i) Layers and number of nodes in each:
4096 (dropout = 0.1) -> 1024 (dropout = 0.1) -> 512 (dropout = 0.1) -> 256 -> 64 -> 8 -> 1
Activation : selu -> selu -> selu -> selu -> selu -> selu -> linear
(ii) loss function = mean squared error ( I used mean squared error to easily comprehend the result. Initially I had tried log(cosh) also but unfortunately I had stopped the run in between when test loss value had no improvement), optimizer = Nadam with default learning rate = 0.002
(iii) batch size = 32, no. of epochs = 400
I have attached the variation in loss function with epochs (Attachment 3). It was found that test loss value increases after ~50 epochs. To avoid overfitting, I added dropout to the layer of 256 nodes in the next model and removed the layer of 4096 nodes.
Case 3:
Same simulated data as case 2 trained with the following model,
(i) Layers and number of nodes in each:
1024 (dropout = 0.1) -> 512 (dropout = 0.1) -> 256 (dropout = 0.1) -> 64 -> 8 -> 1
Activation : selu -> selu -> selu -> selu -> selu -> linear
(ii) changed the learning rate from default value of 0.002 to 0.001. Rest of the hyperparameters same.
The variation in mean squared error in attachment 4 & NN output, applied signal & residual error (zoomed) in attachment 5. Here also test loss value increases after ~65 epochs but this fits better than the previous model as loss value is less.
Case 4:
Since in most of the examples in keras, training dataset was more than test dataset, I tried training 1000 cycles & testing with 300 cycles. The respective plots are attached as attachment 6 & 7. Here also, there is no significant improvement except that the test loss is increasing at a slower rate with epochs as compared to the last case.
Case 5:
Since most of the above cases looked like overfitting (https://machinelearningmastery.com/diagnose-overfitting-underfitting-lstm-models/, https://github.com/keras-team/keras/issues/3755), except that the test loss is less than the train loss value in the beginning, I tried implementing case 4 with the initial model of 2 layers of 256 nodes each but with the Nadam optimizer. The respective graphs are in attachments 8, 9 & 10 (zoomed). The loss value is slightly higher than in the previous models, as seen from the graph, but the test & train loss values converge after some epochs.
I forgot to give the y-label in some of the graphs; it is the magnitude of the applied sine signal used to move the beam spot. In most of the cases the network fits the data almost correctly and the test loss value is lower in the initial epochs. I think this is because of the dropout we added in the model & also because we are training on a clean dataset.
|
14021
|
Tue Jun 26 17:54:59 2018 |
pooja | Update | Cameras | Developing neural networks | Aim: To find a model that trains the simulated data of Gaussian beam spot moving in a vertical direction by the application of a sinusoidal signal. The data also includes random uniform noise ranging from 0 to 10.
All the attachments are in the zip folder.
I simulated images 128*128 at 10 frames/sec by applying a sine wave of frequency 0.2Hz that moves the beam spot, added random uniform noise ranging from 0 to 10 & resized the image frame using opencv to 64*64. 1000 cycles of this data is taken as train & 300 cycles as test data for the following cases. Optimizer = Nadam (learning rate = 0.001), loss function used = mean squared error, batch size = 32,
Case 1:
Model topology:
256 (dropout = 0.1) -> 256 (dropout = 0.1) -> 1
Activation : selu selu
Number of epochs = 240.
Variation in loss value of train & test datasets is given in Attachment 1 of the attached zip folder & the applied signal as well as the output of neural network given in Attachments 2 & 3 (zoomed version of 2).
The model fits well, but there is no training since the test loss is lower than the train loss value. I found on several sites that dropping some of the nodes during training while retaining all of them during testing could be the probable reason for this (https://stackoverflow.com/questions/48393438/validation-loss-when-using-dropout , http://forums.fast.ai/t/validation-loss-lower-than-training-loss/4581 ). So I removed dropout while training next time.
Case 2:
Model topology:
256 (dropout = 0.1) -> 256 (dropout = 0.1) -> 1
Activation : selu selu linear
Number of epochs = 200.
Variation in loss value of train & test datasets is given in Attachment 4 of the attached zip folder & the applied signal as well as the output of neural network given in Attachments 5 & 6 (zoomed version of 2).
But still no improvement.
Case 3:
I changed the optimizer to Adam and tried with the same model topology & hyperparameters as case 2 with no success (Attachments 7,8 & 9).
Finally I think this is because I'm training & testing on the same data. So I'm now training with the simulated video but moving it by a maximum of 2 pixels only and testing with a video of ETMY that we had captured earlier. |
14097
|
Sun Jul 22 14:01:07 2018 |
pooja | Update | Cameras | Developing neural networks on simulated video | Aim: To develop a neural network that resolves mirror motion from video.
Since the error was high for the same input as in my previous elog http://nodus.ligo.caltech.edu:8080/40m/14089,
I modified the network topology by tuning the number of nodes, layers and learning rate so that the model fitted the sum of 4 sine waves efficiently. I saved the weights of the final epoch and then, in a different program, loaded the saved weights & tested on a simulated video produced by moving the beam spot from the centre of the image by a sum of 4 sine waves whose frequencies and amplitudes change with time.
Input: simulated video of beam spot motion in pitch, produced by applying 4 sine waves of frequencies 0.2, 0.4, 0.1, 0.3 Hz with amplitude-to-frame-size ratios of 0.1, 0.04, 0.05, 0.08. This is divided into train (0.4), validation (0.1) and test (0.5).
Model topology:
Input --> Hidden layer (8 nodes, activation: selu) --> Output layer (1 node, activation: linear)
Batch size = 32, Number of epochs = 128, loss function = mean squared error
Optimizer: Nadam ( learning rate = 0.00001, beta_1 = 0.8, beta_2 = 0.85)
Normalized the target sine signal of NN by dividing by its maximum value.
Plots of the predicted output of the neural network, the applied input signal & the residual error are given in the 1st attachment. The weights of the model in the final epoch were saved to an h5 file and then loaded & tested with simulated data of 4 sine waves whose amplitudes and frequencies change with time from their initial values by random uniform noise ranging from 0 to 0.05. Plots of the predicted output of the neural network, the target signal of the sine waves & the residual error are given in the 2nd attachment. The actual signal can be recovered from the predicted output of the NN by multiplying by the normalization constant used before. However, even though the network fits the training & validation sets efficiently, it gives a comparatively large error on the test data of varying amplitude & frequency.
Gautam suggested trying to train on this noisy data of varying amplitudes and frequencies. The results using the same NN model are given in Attachment 3. It was found that tuning the number of nodes, layers or learning rate didn't improve the fitting much in this case.
|
14100
|
Tue Jul 24 06:11:50 2018 |
rana | Update | Cameras | Developing neural networks on simulated video | This looks like good progress. Instead of fixed sines or random noise, you should generate now a time series for the motion which is random noise but with a power spectrum similar to what we see for the ETM pitch motion in lock. You can use inverse FFT to get the time series from the open loop OL spectra (being careful about edge effects).
Quote: |
Aim: To develop a neural network that resolves mirror motion from video.
|
|
14101
|
Tue Jul 24 09:47:51 2018 |
gautam | Update | Cameras | Developing neural networks on simulated video | I was thinking a little more about the way we are training the network for the current topology - because the network has no recurrent layers, I guess it has no memory of past samples, and so it doesn't have any sense of the temporal axis. In fact, Keras by default shuffles the training data you give it randomly so the time ordering is lost. So the training amounts to requiring the network to identify the center of the Gaussian beam and output that. So in the training dataset, all we need is good (spatial) coverage of the area in which the spot is most likely to move? Or is the idea to develop some tools to generate video with spot motion close to that on the ETM in lock, so that we can use it with a network topology that has memory?
Quote: |
This looks like good progress. Instead of fixed sines or random noise, you should generate now a time series for the motion which is random noise but with a power spectrum similar to what we see for the ETM pitch motion in lock. You can use inverse FFT to get the time series from the open loop OL spectra (being careful about edge effects)
|
|
17563
|
Tue Apr 25 21:21:03 2023 |
Yehonathan | Update | BHD | Dewhitening noises | {Mayank, Paco, Yehonathan}
Dewhitening noise curves were taken using an SR785+SR560 for the PRMI noise budget. One representative channel was measured at each board; the suspensions were tripped before work was done. The input pins to the dewhitening boards were shorted using an exposed ribbon cable.
At each board, the measurement was taken with and without dewhitening filter on. The toggling of the dewhitening filter was done by turning on and off the SimDW filters at the coil filter screen of each suspension.
Attachment 1 summarizes the results.
ITMX dewhitening noise is much higher than the rest.
ITMY measurement turned out to be bogus since we mostly measured dark noise. The reason we made the gain so low in that measurement is that it was saturating the SR560 whenever we used gain>1. |
4712
|
Fri May 13 14:54:20 2011 |
Leo Singer | Update | Computers | Diaggui fixed on pianosa | I fixed diaggui on pianosa. Previously, it was not able to start because it depended on libreadline5, whereas Ubuntu distributed libreadline6. Now pianosa has both libreadline5 and libreadline6, so diaggui works. |
15188
|
Wed Feb 5 16:35:12 2020 |
gautam | Update | LSC | Diagnosis plan | The goal is to try and identify the source of the excess ALS noise as the CARM offset is reduced. The idea is to look at the MC_F spectrum (or the IMC error point) in a few conditions:
- Regular CARM --> MC2 actuation scheme, PRMI locked on 3f signals, CARM held off resonance.
- Regular CARM --> MC2 actuation scheme, PRMI locked on 3f signals, CARM held on resonance.
- Alternate CARM --> 7.5*ETMX + 1.5*ETMY, PRMI locked on 3f signals, CARM held on resonance.
- Control arms in X/Y basis, lock PRMI on 3f signals and bring the arms into resonance individually, look for excess ALS noise.
#1 vs #2 is like a control experiment, we expect to see the excess noise imprinted on the MC length and hence in MC_F (provided the sensing noise is low enough). #2 vs #3 will be informative of something like backscatter to the PSL increasing the frequency noise. #2/3 vs #4 will help isolate the problem to an individual arm's AUX PDH loop or some optomechanical effect.
I was looking back at some spectra from the last couple of nights but I don't really have an apples-to-apples comparison between the various actuation schemes (some ALS loops were engaged/disengaged), so I'll do a more systematic test tonight. Already, it looks like MC_F is not a good candidate to look for the excess frequency noise; I don't really see a big difference between conditions #1 and #2. According to this, we are looking for an increase at the level of a few 100 Hz/rtHz @ ~40 Hz, whereas MC_F is much noisier. |
15191
|
Thu Feb 6 01:16:58 2020 |
gautam | Update | LSC | Diagnosis results | Summary:
I did some more detailed tests to see if I could isolate where the excess ALS noise at low CARM offset is coming from, by measuring the spectrum of the IMC error point (in loop). The results, shown in Attachment #1 and #2, are inconclusive.
Details:
Since MC_F didn't show any signatures of elevated noise, I decided to hook up an SR785 to the A excitation bank TEST1 input of the IMC servo board to monitor the in-loop error signal. I initially took a few measurements spanning 800 Hz in frequency, and to my surprise, I found that there was elevated noise in the frequency band we see an increase in the ALS noise, even when the CARM feedback goes to the ETMs (so the IMC cavity is in principle isolated from the main interferometer). This is Attachment #1. So I re-took a couple of measurements (this time only for the case of CARM feedback to the ETMs), with a 200 Hz frequency span, and found no significant noise elevation. This is Attachment #2. I am led to conclude that the IMC error point level changes over time for reasons other than the CARM offset - it'd be nice to have a spectrogram of the IMC error point and compare excursions relative to the median level over a few 10s of minutes, but we don't have this data stream digitized by the CDS system - maybe I will hijack the MC_L channel temporarily to record this data stream. It seems a waste that we're not able to take full advantage of the measured <10pm RMS noise of the IR ALS system. |
15047
|
Mon Nov 25 22:10:26 2019 |
shruti | Update | NoiseBudget | Diagnostics | This is to help troubleshoot the excess noise measured earlier.
The following channels were measured at GPS times 1258586880 s and 1258597457 s, corresponding to low and high Power Recycling Gain (PRG) respectively.
Excess noise was seen between 25-110 Hz in the high PRG case when compared to the low PRG case in the following channels:
C1:LSC-CARM-IN1_DQ (shown in Attachment 1 where the reference is low PRG)
C1:ALS-Y_ERR_MON_OUT_DQ
C1:ALS-BEAT{X,Y}_FINE_PHASE_OUT_DQ
C1:SUS-ETM{X,Y} _SENSOR_{LL,LR,UL,UR}
C1:ALS-TRX_OUT_DQ
Surprisingly, it was also seen to a smaller extent in (refer Attachment 3)
C1:SUS-ITMX_SENSOR_{LL,LR,UL,UR}
A different type of noise spectrum, attributed to known electronic effects, was observed for
C1:SUS-ITMY_SENSOR_{LL,UL} (refer Attachment 2)
These did not show any significant change in the noise spectrum:
C1:LSC-DARM-IN1_DQ (shown in Attachment 1 where the reference is low PRG)
C1:ALS-X_ERR_MON_OUT_DQ
C1:ALS-TRY_OUT_DQ
C1:SUS-ITMY_SENSOR_{LL,LR,UL,UR}
C1:SUS-ITMY_SENSOR_{LR,UR} (refer Attachment 2)
Broadband noise in:
C1:LSC-PO{X,Y}11_I_ERR_DQ
|
5349
|
Tue Sep 6 21:33:21 2011 |
Jenne | Update | SUS | Diagonalizability of ITMX and ITMY is acceptable | [Rana and Kiwamu on ITMX, Jenne and Suresh on ITMY, Zombie/brains meeting on accepting the matrices]
Optic / Spectra / Matrix / "Badness":

ITMX (spectra: see attachment), Badness: 5.85983
     pit     yaw     pos     side    butt
UL   0.584   0.641   1.396  -0.578   0.558
UR   0.755  -1.359   0.120  -0.286   0.262
LR  -1.245  -0.139   0.604  -0.388   0.511
LL  -1.416   1.861   1.880  -0.681  -2.669
SD  -0.753   0.492   3.263   1.000  -1.523

ITMY (spectra: see attachment), Badness: 4.47727
     pit     yaw     pos     side    butt
UL   1.000   0.572   1.134  -0.059   0.951
UR   0.578  -1.428   0.916  -0.032  -1.024
LR  -1.422  -0.531   0.866  -0.009   1.086
LL  -1.000   1.469   1.084  -0.036  -0.939
SD  -0.662   0.822   1.498   1.000   0.265
OSEMs were tweaked. We have decided that both ITMs are okay in terms of their diagonalization. ITMY isn't stellar when you look at the spectra, but it's kind of close enough. Certainly the matrix looks fine.
Aside from checking on POX, I think we're now ready to close up. Check back later tonight for a final decision announced on the elog. |
12495
|
Wed Sep 14 20:27:03 2016 |
Lydia | Update | SUS | Diagonalization | Today the main optics were free swinging for several hours, so I attempted diagonalization in vacuum.
- ITMY still has bad phases. I looked at the spectra for this and other optics, and it looks like the other optics have the 60Hz line notched out for all coils while ITMY only has it notched on the side coil. (Using C1:SUS-ITMY_SENSOR channels). Where is this controlled from, and could it be the source of the issue?
- I tried using a different coil as the "standard," with the other coils compared against it in tfestimate. Default is UL, I tried UR and LL. The phase problems were still present for ITMY, but the script was still working fine for other optics.
- The phase difference between coils is different for different start times.
- A short segment of the time series for ITMY shows significantly more high frequency noise than for other optics at the same time.
- The ETMY matrix for vacuum has the wrong sign for UL coupling to pitch! The diagonalization results look OK on the graph, but the butterfly mode still has small peaks (See attachment 1). When the individual coil spectra are plotted, the angular degrees of freedom show very weak coupling for UL to pitch, and LL to yaw. We initially replaced the matrix on the MEDM screen with the one generated by the script. After realizing this, the PIT row was changed to 1 1 -1 -1 0, but the effectiveness of the damping on the locked transmission fluctuations was about the same both ways.
|
12497
|
Thu Sep 15 18:37:20 2016 |
Lydia | Update | SUS | Diagonalization | [Teng, Lydia]
- We fixed the 60Hz filter on ITMY. This improved the phase problems somewhat but one coil (UL) is still about 12 degrees out of phase compared to the others for all the dofs. Is there some other place where a filter could be applied to just one coil sensor? I pressed the "Load coefficients" button for UL, so maybe that will have helped.
- We want to interpret the coil signals to have an accurate measurement of each dof. This means what the input matrix should describe is the dependence of each dof on the OSEM signals, which is found by inverting the matrix which describes the sensitivity of each OSEM to changes in that degree of freedom.
- We looked at the spectra of the individual coils for ITMY and ETMY (See attachment 1 & 2). The coupling between some coils and applicable resonance peaks is very weak (~0.1 times the sensitivity of the other coils).
- However, when a certain degree of freedom, e.g. pitch, is deliberately driven using awggui, the response of the ITMY coils is clear on the StripTool and is about the same magnitude for all of the face OSEMS. So, it seems like the diagonalization script does not always succeed at measuring the relative sensitivity of the OSEMs to the degrees of freedom.
- This may be because the fundamental swing modes experienced by the free swinging pendulum are not the same as what we measure as pitch, yaw, etc. This could be possible if the wire tension is not the same on both sides. For ITMY, the spectra imply that the fundamental frequencies are actually at some linear combinations of pitch and yaw, swinging about a diagonal axis that results in a much weaker response for some of the OSEMs. Calling these peaks pitch and yaw may be inaccurate. Certainly they do not indicate the true relative sensitivity of the coils.
- We propose an alternate approach to measuring this sensitivity (a sketch is given below): drive one dof at a time with awggui, take a spectrum (less resolution is ok because we already know the drive frequency), and measure the sensing matrix values for that dof the same way as before, but using a spectral peak that describes motion that we know is purely pitch. Repeat this for all 4 dofs that we can actuate on, then compile these results into a sensing matrix and take the inverse.
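A rough numpy/scipy sketch of the proposed measurement (the sampling rate and the placeholder time series are assumptions; in practice the drive and the OSEM sensor channels would be recorded while each dof is excited with awggui):

import numpy as np
from scipy.signal import csd, welch

fs = 256.0                        # sampling rate [Hz] (assumed)
f_drive = 0.25                    # awggui drive frequency [Hz] (assumed)
dofs = ['POS', 'PIT', 'YAW', 'SIDE']
osems = ['UL', 'UR', 'LR', 'LL', 'SD']

S = np.zeros((len(osems), len(dofs)))   # S[i, j]: response of OSEM i to a unit drive of dof j
for j, dof in enumerate(dofs):
    # placeholders: replace with the recorded drive and OSEM time series for this dof
    drive = np.sin(2 * np.pi * f_drive * np.arange(int(1000 * fs)) / fs)
    sensors = np.tile(drive, (len(osems), 1))
    f, pdd = welch(drive, fs=fs, nperseg=int(256 * fs))
    k = np.argmin(np.abs(f - f_drive))
    for i in range(len(osems)):
        f, pxd = csd(sensors[i], drive, fs=fs, nperseg=int(256 * fs))
        S[i, j] = np.real(pxd[k] / pdd[k])   # signed transfer coefficient at the drive frequency

input_matrix = np.linalg.pinv(S)             # rows: dofs, columns: OSEM signals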
|
12499
|
Fri Sep 16 19:14:27 2016 |
Lydia | Update | SUS | Diagonalization | [Lydia, Teng]
We built matrices for ITMY and ETMY by driving one degree of freedom at a time with awggui, while the damping was on. These have been applied to the damping loops.
- Each segment of data is 1000s long and each dof was driven at 0.25 Hz.
- These matrices are much closer to the ideal matrix and have no wrong signs. We believe they represent the relative sensitivity of the OSEMs to the degrees of freedom much more accurately. This is because the free swinging modes are not actually pitch, yaw, etc, but some linear combination of these. However, the damping actuates on pitch, yaw, etc. So we should isolate the degrees of freedom by driving them one at a time instead of just looking at free swinging peaks.
- Attachment 1: An example of the dof spectra, calculated using the default input matrix, when ETMY YAW was driven at 0.25 Hz.
- Attachment 2: The same OSEM sensor data, with the dofs calculated using the matrix found from this data. There is still a significant peak in pitch, but the other dofs are significantly suppressed.
- Attachment 3: The same data again, but the dofs are measured with the input matrix calculated from the free swinging data. This achieves much less suppression than the new matrix. Obviously this is not exactly a fair comparison because the new matrix was generated with this data, but the method of measuring OSEM responses by driving peaks has a much closer relationship between what is measured (the OSEM response) and how the matrix is used (by damping loops which drive the coils in much the same way as awggui).
- The phase problems seem to be mostly solved. Both Y arm test masses have some phase warnings, but they mostly occur with side. This can happen because the ideal matrix elements are 0, so the real parts are small. If there is no strong coupling then there is no reason to expect the background spectrum to be in phase with the peak. Other phase differences are small; most less than 5 degrees, a couple between 5 and 10 degrees. This may still merit further investigation.
- Comparing the damping results for ITMY with the old (based on free swinging data) and new (based on driven data), we see the 1Hz peak suppressed by ~35% and the noise above 1Hz generally suppressed by ~25-30% . There is, however, significantly more movement between 0.5 and 1 Hz, maybe because the fundamental physical modes are not being directly measured and suppressed. Overall this seems like an improvement.
GPS times:
ITMY
Pitch:1158085097 Yaw: 1158086537 Pos: 1158089237 Side: 1158087977
ETMY
Pitch: 1158095897 Yaw: 1158097577 Pos: 1158099377 Side: 1158100817 |
12484
|
Mon Sep 12 20:15:22 2016 |
Lydia | Update | SUS | Diagonalization in air | [Lydia, Teng]
We ran the scripts to diagonalize the damping matrices using the free swinging data from Saturday night/Sunday morning. The actual entries used for damping have not been changed. However, we did generate updated matrices for all the main optics (not including the mode cleaner optics, which were not free swinging over the weekend).
- The scripts appear to be mostly working as intended, with a couple of issues:
- The plots made by makeSUSSpectra claim to be showing spectra of the individual OSEM readings, but are actually dofs calculated using the ideal input matrix.
- The existing parameters file (for the peak finding) was only fitting the lorentz peaks to a very narrow band of data, close to the bandwidth of the spectrum. Too narrow a band means that the initial guess must be very close, and also means there are not enough points to fit to.
- We modified a copy of the parameters file to use a wider band (~.1 Hz) for fitting, and also to use updated estimates of the mode frequencies.
- This was largely successful, but the ITMY POS peak is very close to the SIDE peak, and POS is also strongly coupled to SIDE, so the wider-bandwidth fitting can't separate the peaks. (See attachment 1)
- A longer time series, plus more accurate initial guesses for the resonance frequencies, would allow us to fit to a smaller (~.03 Hz) band without encountering the stated issues.
- A better way than manually examining plots to choose an initial frequency guess would be to automatically start at the overall maximum point in the spectrum between 0.4 and 1.5 Hz
- Most of the diagonalization results seem good: "Badness" numbers of 4-6 and secondary peaks very suppressed or absent in the spectra plotted in the dof basis (See attachment 2). ITMY, perhaps because of a related issue, has phase problems with the matrix elements that result in messages like "osem/dof 2/1 is imaginary."
|
12490
|
Tue Sep 13 19:18:43 2016 |
Lydia | Update | SUS | Diagonalization in air | [Lydia, Teng]
We continued to work on the diagonalization scripts today and devised a way of choosing starting parameters that seems to work much better, and is easier to use, than tuning up to 15 parameters by hand per optic.
- As before, the spectrum for each dof is estimated by using the "ideal" input matrix.
- The starting guess for the peak frequency for each dof is the bin which achieves the maximum value of the spectrum between 0.4 and 1.5 Hz.
- If another dof has a higher value at that frequency, the next highest peak is used. (Sometimes, for example, the peak in PIT at the POS frequency is stronger than the real POS peak!)
- The peak height is initially guessed to be the spectrum value at the initial frequency guess.
- The width parameter Q can still be read from a file, but for all the times we tried, the peaks were found successfully if Q was initially guessed to be 300, so there might be no need to do this. (A sketch of this guess logic is given after this list.)
- Spectra should still be examined to make sure the results make sense, and once we look at free swinging data in vacuum, we should compare the frequency results to the wiki values.
- Reasonably good matrix values are saved to peakFit/inMats/1157630417. We got good diagonalization results for all but ITMY (see below). The values used for damping have not been overwritten.
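A minimal numpy sketch of the initial-guess logic described above; the frequency vector and the per-dof spectra below are random placeholders standing in for the free-swing spectra.

import numpy as np

f = np.linspace(0, 8, 4097)                               # placeholder frequency vector [Hz]
spec = {d: np.abs(np.random.randn(f.size))                # placeholder spectra per dof
        for d in ['POS', 'PIT', 'YAW', 'SIDE']}

band = np.where((f > 0.4) & (f < 1.5))[0]

guesses = {}
for dof, asd in spec.items():
    for k in band[np.argsort(asd[band])[::-1]]:           # bins in the band, tallest first
        # accept the bin only if this dof dominates every other dof there
        if all(asd[k] >= other[k] for name, other in spec.items() if name != dof):
            guesses[dof] = {'f0': f[k], 'height': asd[k], 'Q': 300}
            break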
We still noticed phase problems with ITMY, which appear to be preventing good diagonalization (See Attachment 1). Almost every degree of freedom has a significant imaginary part in the sensing matrix. We looked at the phases of the cross spectra in DDT and saw that indeed, the OSEM signals do not have the appropriate relative phases at the peak frequencies, especially in PIT and YAW (see Attachment 2: the phase at the peak is about 30 degrees when it should be 180). These phases are different for data takes ~24 hours apart, but are still wrong. We also looked at this information for ETMY and saw the correct behavior. We temporarily moved the pitch and yaw sliders for ITMY and looked at the OSEM response on a striptool, and the signals moved in the expected way. Can anyone suggest a reason why this would be happening? Is there another stretch of data (besides this past weekend) which would be good to compare to?
|
851
|
Tue Aug 19 13:12:55 2008 |
Jenne | Update | SUS | Diagonalized PRM Input Matrix | NOTE: Use the values in elog #860 instead (20Aug2008)
Using the method described in LIGO-T040054-03-R (Shihori's "Diagonalization of the Input Matrix of the Suspension System"), I have diagonalized the input matrices for the PRM.
Notes about the method in the document:
- Must define the peak-to-peak voltage (measured via DataViewer) to be NEGATIVE for PitLR, PitLL, YawUR, YawLR, and POSITIVE for all others
- As Osamu noted in his 3 Aug 2005 elog entry, all of the negative signs in equations 4-9 should all be plus.
New PRM Input Matrices:
       POS       PIT       YAW
UL    1.000     1.000     1.000
UR    1.1877    1.0075   -1.0135
LR    0.8439   -0.9425   -0.9653
LL    0.9684   -1.0500    1.0216
|
16931
|
Tue Jun 21 08:36:50 2022 |
Anchal | Update | SUS | Diagonalized input matrices for LO1, freeSwing on ITMY and ITMX | Over the weekend, I ran a freeSwing test with sequential kicks in specific DOFs for LO1, ITMY, and ITMX. The LO1 results were successfully used to diagonalize the LO1 input matrix. There are still some issues for ITMY and ITMX. I could not run the LO2 test.
LO1
The free swing test ran successfully, the resonant frequencies for the different DOFs were extracted, and a new input matrix was calculated. The new matrix was only slightly different from before and it worked fine with the existing damping loops. The observed resonance frequencies were different from the previous values by POS: -6 mHz, PIT: -3 mHz, YAW: -9 mHz, SIDE: -2 mHz. Attached are the diagonalization results.
ITMX
The peculiarity of ITMX remained even after the second free swing test. The calculated input matrix is very different from existing one with sign flips across PIT and POS rows. I found that our LR osem is always bright in ITMX at the current alignment position. I see that LR osem comes in range when C1:SUS-ITMX_PIT_COMM is raised above 0.5. Maybe we should run this test when we know for sure ITMX is in correct position.
ITMY
In ITMY on the other hand, I found that SIDE OSEM was completely bright. This happened during the YAW kick to ITMY. We'll need to reduce kick amplitudes for ITMY and redo this test.
LO2
For LO2, I could not initiate the test. On reducing the alignment offsets for LO2 (so that it doesn't get stuck in the free swing test), the damping loops were not working. This is also clear evidence that the input matrix is different for different positions of the optic. We need to think about some other strategy for this test, maybe see if the ideal input matrix works at no offsets and use that to damp during the test.
|
17504
|
Mon Mar 13 14:48:37 2023 |
Anchal | Update | IMC | Diagonalizing YAW output matrix using a different method | I tried a different method today to see if it works. Following are the steps:
- Run WFS relief.
- Turn off the WFS loops.
- Calculate the effective current YAW matrix by transferring C1:IOO-MC#_YAW_GAIN to respective rows of the matrix read from C1:IOO-OUTMATRIX_Y. No need to change the matrix itself.
- This step should not be required. We should move these gains to the matrices as soon as we can.
- Put in the first column (corresponds to WFS1_YAW controller output) of this effective current YAW matrix to C1:IOO-LKIN_OUT_MTRX_4_1, C1:IOO-LKIN_OUT_MTRX_5_1, C1:IOO-LKIN_OUT_MTRX_6_1.
- This is the output matrix of LOCKIN in WFS screens.
- We are trying to actuate on what we think only affects WFS1_YAW and see if it is crosscoupled to WFS2_YAW or MC2_TRANS.
- Then we can cancel coupling to the other two sensors by changing our couple vector.
- Turn on locking at 0.5 Hz with gain 1.
- Turn on the BLP0.3 filter module. This is an 8th order 0.3 Hz butterworth filter.
- Adjust phases to get all signal in the I quadratures.
- Using the ratio of C1:IOO-WFS_LKIN_I5_OUT16 to C1:IOO-WFS_LKIN_I4_OUTPUT, subtract or add that much of the WFS2_YAW column (the second column) of the effective YAW matrix to the column that is put in the LOCKIN output matrix (a sketch of this update step is given after this list).
- I was able to subtract down to less than 10% cross coupling with the initial matrix I started with.
- Repeat until no cross-coupling is seen between WFS1_YAW and WFS2_YAW.
- Repeat the above steps for WFS2_YAW column by putting that into the LOCKIN output matrix. Use the column calculated in last step for adding or subtracting WFS1 actuation.
- I was able to make WFS2 column very clean with less than 1% measurable crosscoupling to other sensors.
- I repeated the step for WFS1 column again to remove the cross coupling to WFS2 further to less than 1%.
- For doing the above steps for the MC2_TRANS column, the initial effective matrix column was very bad. The outputs were higher in WFS1 and WFS2 than the MC2_TRANS output itself.
- So I made the first guess by taking a cross-product between the WFS1_YAW and WFS2_YAW columns estimated earlier.
- Then I repeated the above steps to minimize coupling to WFS1 or WFS2 sensors to less than 10% of MC2_TRANS.
- The three column vectors obtained represent the new output YAW matrix. I removed the normalization that would be applied by the C1:IOO-MC#_YAW filter gains from the rows of this matrix to get the output matrix that can be put into C1:IOO-OUTMATRIX_Y.
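A rough sketch of one nulling iteration as described above, assuming pyepics is available and using the channel names quoted in this entry; the starting column values, the number of iterations and the settling time are placeholders, and the per-loop gains would have to be folded in as described.

import time
import numpy as np
from epics import caget, caput

col_wfs1 = np.array([-4.094, -0.1259, -7.1811])   # trial WFS1_YAW column (MC1, MC2, MC3), placeholder
col_wfs2 = np.array([-3.0383, 0.27008, 0.74271])  # effective WFS2_YAW column, placeholder

for _ in range(5):                                 # iterate until the cross-coupling is small
    for i, v in enumerate(col_wfs1):               # load the trial column into the lockin output matrix
        caput('C1:IOO-LKIN_OUT_MTRX_%d_1' % (4 + i), v)
    time.sleep(60)                                 # wait for the 0.3 Hz low-pass to settle
    i4 = caget('C1:IOO-WFS_LKIN_I4_OUTPUT')        # demodulated response seen as WFS1_YAW
    i5 = caget('C1:IOO-WFS_LKIN_I5_OUT16')         # demodulated response seen as WFS2_YAW
    col_wfs1 = col_wfs1 - (i5 / i4) * col_wfs2     # cancel the coupling into WFS2_YAW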
Once this matrix was in, I quickly tested it by closing the loop and making gain sign flips where required. Then I took quick swept sine transfer functions to estimate the UGFs and scaled the columns of the output matrix to get a UGF of 2.5 Hz for the WFS1_YAW and WFS2_YAW loops and 0.1 Hz for the MC2_TRANS YAW loop when all filter gains are 1 and the overall gain C1:IOO-WFS_GAIN is 4. See attached plots.
Old matrix:
-4.094 , -3.0383 , 34.0917
-0.1259 , 0.27008, -16.081
-7.1811 , 0.74271, 28.9458
This was used with gains: 0.5 for WFS1_YAW loop, 0.6 for WFS2_YAW loop and 0.3 for MC2_TRANS_YAW loop.
New matrix:
-1.48948, -1.3029 , -4.93096
-0.05839, 0.15206, -3.66245
-2.82285, 0.92391, -4.68009
All loop gains 1.
Alex and Tomohiro are characterizing this matrix with step response and UGF measurements. |
17510
|
Tue Mar 14 15:46:06 2023 |
Tomohiro | Update | IMC | Diagonalizing YAW output matrix using a different method | Alex, Anchal, and I adjusted the numbers of the MC2_TRANS column in the YAW output matrix. We used the same method as in 40m/17504, but the amplitude of the oscillator for the Lock-In Amplifier was increased from 1 to 4.
The corrected numbers of the MC2_TRANS column in the output matrix are as follows:
MC1: -5.5196
MC2: -2.8778
MC3: -5.2232
We did the step response test for the corrected output matrix. The sum of the off-diagonal terms was 0.62, which is the minimum value. Attachment 1 shows the step response test result. From the figure, the reduction of the sum comes from the MC2_TRANS column being better diagonalized; this property can be seen in Attachment 2. |
17512
|
Thu Mar 16 13:31:25 2023 |
Tomohiro | Update | IMC | Diagonalizing YAW output matrix using a different method | Purpose
- To adjust the components of the WFS2 column in the YAW output matrix.
- To check the value of the off-diagonal components of the WFS1 column.
Method
Alex, Anchal, and I used the same method in 40m/17504 to adjust the components of the WFS2 column. And we did the same step response test to check the value of the off-diagonal components in the YAW output matrix.
Used script & file
All the scripts & files are stored in /opt/rtcds/caltech/c1/Git/40m/scripts/MC/WFS/ directory.
- DiagnoalizatingMethod.ipynb: for adjusting the components and replacing the new output matrix,
- toggleWFSoffsets.py: for doing the step response test,
- IOO_WFS_YAW_STEP_RESPONSE_TEST.py: for analyzing the step response result.
Result
We changed the WFS2 column as follows:
       From        To
MC1   -1.3029     -1.8548
MC2    0.15206    -0.1357
MC3    0.92391     0.40158
We can successfully diagonalize the WFS2 column. The sum of the off-diagonal components is slightly reduced. However, WFS1 has worse diagonalization.
The same step response test should be performed on a different day to see if the results change, because multiple causes could exist: the influence of the other changed columns, long-term drift, day-to-day changes, and so on. |
3353
|
Tue Aug 3 11:17:10 2010 |
kiwamu | Update | CDS | Diagrams for Cables needed for CDS test | Current Wiring Setup for the Suspension Controls
[wiring diagram attachment]
New Wiring Plan for the Suspension Controls with the New CDS
[wiring diagram attachment]
Missing Stuff for the CDS test
Ideally we can reuse the existing cables, but some of them may not be long enough for the new wiring.
The diagram below shows the extremely non-ideal case.
[cable diagram attachment]
Some more information will be summarized on the wiki later.
|
3964
|
Mon Nov 22 16:16:04 2010 |
josephb | Update | CDS | Did an SVN update on the CDS code | Problem:
The CDS oscillator part doesn't work inside subsystems.
Solution:
Rolf checked in an older version of the CDS oscillator which includes an input (which you just connect to a ground). This makes the parser work properly so you can build with the oscillator in a subsystem.
So I did an SVN checkout and confirmed that the custom changes we have here were not overwritten.
Edit:
Turns out the latest svn version requires new locations for certain codes, such as EPICS installs. I reverted back to version 2160, which is just before the new EPICs and other rtapps directory locations, but late enough to pick up the temporary fix to the CDS oscillator part. |
671
|
Tue Jul 15 10:09:42 2008 |
Eric | DAQ | Cameras | Did anyone kill the picture taking process on Mafalda? | Did anyone kill the process on Mafalda that was taking pictures of the end mirror of the x-arm last Friday? I need to know whether or not it crashed of its own accord. |
6108
|
Mon Dec 12 16:30:17 2011 |
Jenne | Update | Computers | Did someone just do something to fb?? | Dataviewer couldn't connect to the framebuilder, so I checked the CDS status screen, and all the fb-related things on each model went white, then red, then computer-by-computer they came back green. Now dataviewer works again. Is someone secretly doing shit while not in the lab??? Not cool man! |
6112
|
Tue Dec 13 11:51:33 2011 |
Jamie | Update | Computers | Did someone just do something to fb?? |
Quote: |
Dataviewer couldn't connect to the framebuilder, so I checked the CDS status screen, and all the fb-related things on each model went white, then red, then computer-by-computer they came back green. Now dataviewer works again. Is someone secretly doing shit while not in the lab??? Not cool man!
|
This happens on occasion, and I have reported it to the CDS guys. Something apparently causes the framebuilder to crash, but I haven't figured out what it is yet. I doubt this particular instance had anything to do with remote futzing. |
10716
|
Fri Nov 14 15:26:45 2014 |
Steve | Update | safety | Diego gets safety training | Diego Bersanetti received 40m specific safety training today. |
7186
|
Wed Aug 15 01:14:19 2012 |
Yaakov | Update | PEM | Differential Motion of X and Y Arm | Den and I measured the differential motion of the x and y arms using Guralp 1 at the end of the y arm, Guralp 2 at the beamsplitter, and the Streckeisen at the end of the x arm.
I calibrated the Streckeisen to the Guralp by calculating the relative gain of the seismometer signals at the microseism. The Guralp 1-y amplitude was 1.0237 times Guralp 2-y and Guralp 2-x was 38.54 times STS-x. The Guralp calibration (to go from counts to meters) I used was 0.61/1000/800/80/(2*pi*f) m/count.
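As a small illustration of how this calibration would be applied to get a differential-displacement spectrum (the sampling rate and the random time series are placeholders standing in for the real Guralp channels):

import numpy as np
from scipy.signal import welch

fs = 256.0                                      # sampling rate [Hz] (assumed)
# placeholders for Guralp 1-y (ETMY end) and Guralp 2-y (beamsplitter), in counts
gur1_y = np.random.randn(int(600 * fs))
gur2_y = np.random.randn(int(600 * fs))

# match the gains (GUR1-y = 1.0237 x GUR2-y) and take the spectrum of the difference
f, p_diff = welch(gur1_y - 1.0237 * gur2_y, fs=fs, nperseg=int(64 * fs))
f, p_diff = f[1:], p_diff[1:]                   # drop the DC bin before dividing by f

asd_counts = np.sqrt(p_diff)                                        # counts/rtHz
asd_metres = asd_counts * 0.61 / 1000 / 800 / 80 / (2 * np.pi * f)  # m/rtHz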
The differential motion should keep decreasing at low frequencies because the ground moves together at such long wavelengths. Instead it goes up, because the seismometer noise begins to dominate at low frequencies (below about 0.5 Hz). Another possible error source could be that the seismometers are not perfectly aligned along the arm.
[spectra attachments]
|
4216
|
Thu Jan 27 23:21:50 2011 |
rana | Summary | Green Locking | Digital Frequency Discriminator | That's some pretty fast work! I thought we would be taking up to a week to get that happening. I wonder what's the right way to measure the inherent frequency noise of this thing?
Also, should the comparator part have some hysteresis (a la Schmitt trigger) or is it best to just let it twirl as is? Is it sensitive to DC offsets on the input or is there a high pass filter? What's the correct low pass filter to use here so that we can have a low phase lag feedback to the ETM? |
4217
|
Fri Jan 28 09:03:38 2011 |
Aidan | Summary | Green Locking | Digital Frequency Discriminator |
Quote: |
That's some pretty fast work! I thought we would be taking up to a week to get that happening. I wonder what's the right way to measure the inherent frequency noise of this thing?
Also, should the comparator part have some hysteresis (ala Schmidt trigger) or is it best to just let it twirl as is? Is it sensitive to DC offsets on the input or is there a high pass filter? What's the correct low pass filter to use here so that we can have a low phase lag feedback to the ETM?
|
We could try inputting a 4 kHz carrier modulated with a depth of a few Hz at a modulation frequency of F1. Then we could take an FFT of the output of the discriminator and measure the width of the peak at F1 Hz. This seems like an arduous way to measure the frequency noise at a single frequency though.
It'll definitely be sensitive to DC offsets, but there is already a filter bank on the INPUT filter so we can shape that as necessary. We could probably band-pass that from [4.5 - 5.3 kHz] (which would correspond to a range of [73, 87] MHz going into a 2^14 frequency divider).
|
4218
|
Fri Jan 28 10:27:46 2011 |
Aidan, Joe | Summary | Green Locking | Digital Frequency Discriminator - calibration | One more thing ... we can calibrate the output of the LP filter to give a result in Hz with the following calibration:
LP_OUT = -1/(2*dt)*(LP_IN -1), where dt is 1/16384, the delay time of the delayed path.
Therefore LP_OUT = -8192*(LP_IN-1). |
4259
|
Tue Feb 8 10:23:02 2011 |
Aidan | Summary | Green Locking | Digital Frequency Discriminator - reference |
Here's the reference for the self-reference frequency detection idea. See Figure 2.
http://www.phys.hawaii.edu/~anita/new/papers/militaryHandbook/mixers.pdf |
4227
|
Sun Jan 30 17:15:09 2011 |
Aidan | Summary | Green Locking | Digital Frequency discriminator - frequency noise | I've had a go at trying to estimate the frequency noise of the digital frequency discriminator (DFD). I input a 234.5Hz (0.5Vpp) signal from a 30MHz function generator into the ADC. The LP output of the DFD measured 234.5Hz. However, this signal is clearly modulated by roughly +/- 0.2Hz at harmonics of 234.5Hz (as you can see in the top plot in the dataviewer screenshot below). So the frequency noise can be estimated as rms of approximately 0.2Hz.
This is supported by taking the spectra of the LP output and looking at the RMS. Most of the power in the RMS frequency noise (above the minimum frequency) comes from the harmonics of the input signal and the RMS is approximately 0.2Hz.
I believe this stems from the rather basic low-pass (three or four poles around 10Hz?) used in the LP filter bank to remove the higher-frequency components that exist after the mixing stage. (The currently loaded LP filter is not the same as the saved one in Foton - and that one won't load at the moment, so I'm forced to remember the shape of the current filter.)
The attached screen capture from data viewer shows the LP_OUT hovering around 234.5Hz. |
4213
|
Thu Jan 27 17:12:02 2011 |
Aidan, Joe | Summary | Green Locking | Digital Frequency to Amplitude converter | Joe and I built a very simple digital frequency to amplitude converter using the RCG. The input from an ADC channel goes through a filter bank (INPUT), is rectified and then split in two. One path is delayed by one DAQ cycle (1/16384 s) and then the two paths are multiplied together. Then the output from the mixer goes through a second filter bank (LP) where we can strip off twice the beat frequency. The DC output from the LP filter bank should be proportional to the input frequency.
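A minimal numerical sketch of that chain (the low-pass here is a generic ~10 Hz Butterworth stand-in for the LP filter bank, not the actual filter):
import numpy as np
from scipy.signal import butter, sosfiltfilt

fs = 16384.0                                   # model rate; the delay is one DAQ cycle
t = np.arange(0, 2.0, 1.0 / fs)
sos = butter(4, 10.0, fs=fs, output='sos')     # stand-in low-pass to strip the component at twice the input frequency

def dfd_dc(f_in):
    x = np.abs(np.sin(2 * np.pi * f_in * t))   # rectified INPUT
    mix = x[1:] * x[:-1]                       # prompt path times one-sample-delayed path
    lp = sosfiltfilt(sos, mix)
    return lp[len(lp) // 2:].mean()            # settled DC level out of the low-pass

for f in (100, 200, 300, 400, 500):
    print(f, dfd_dc(f))                        # the DC output changes monotonically with input frequency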
Input Channel: C1:GFD-INPUT_xxx
Output Channel: C1:GFD-LP_xxx
Joe compiled the code and we tested it by injecting a swept sine over [100, 500]Hz into the input filter bank. We confirmed that the output of the LP filter bank changed linearly as a function of the input frequency.
The next thing we need to do is add a DAC output. Once that's in place we should inject the output from a 4kHz VCO into the ADC. Then we can measure the transfer function of the loop with an SR785 (driving the VCO input and looking at the output of the DAC) and play around with the LP filter to make sure the loop is fast enough.
The model is to be found here:
/opt/rtcds/caltech/c1/core/advLigoRTS/src/epics/simLink/c1gfd.mdl
The attached figures show the model file in Simulink and a realtime dataviewer session while injecting a swept sine (from 500Hz to 100Hz) into the INPUT EXC channel. We've had some frame builder issues, so the excitation was not showing on the green trace and, for some reason, the names of the channels are swapped in dataviewer (WTF?): the lower red trace is actually displaying C1:GFD-LP_OUT_DAQ, but it says it is displaying C1:GFD-INPUT_OUT_DAQ - which is very screwy.
However, the basic principle (frequency to amplitude) seems to work. |
9353
|
Wed Nov 6 14:47:41 2013 |
Steve | Frogs | LSC | Din connectors added at 1Y2 | The north side of the LSC rack is full. I installed more DIN connectors with fuses on the south side of rack 1Y2.
Access to these may be a little awkward: you just remove the connector, wire it, and put it back in.
|
8332
|
Fri Mar 22 19:46:29 2013 |
Koji | Summary | LSC | Diode impedance test result | I've tested Perkin-Elmer InGaAs PDs at OMC Lab.
- The diode impedances were measured with the impedance measurement kit. Reverse bias of 5V was used.
- Diode characteristics were measured between 10MHz and 100MHz.
- 4-digit numbers are SN marked on the can
- Ls and Rs are the series inductance and resistance
- Cd is the junction capacitance.
- i.e. Series LCR circuit o--[Cd]--[Ls]--[Rs]--o
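A minimal sketch of the series LCR model over the measured band, using the SN 0782 values listed below (Ls ~ 1 nH taken from the group heading):
import numpy as np

Rs, Ls, Cd = 8.3, 1e-9, 219.9e-12              # SN 0782: series resistance, series inductance, junction capacitance
f = np.logspace(7, 8, 200)                     # 10 MHz - 100 MHz, the measured span
w = 2 * np.pi * f
Z = Rs + 1j * w * Ls + 1.0 / (1j * w * Cd)     # o--[Cd]--[Ls]--[Rs]--o in series
f0 = 1.0 / (2 * np.pi * np.sqrt(Ls * Cd))      # series resonance, ~340 MHz, above the measured band
print(f0, abs(Z[0]), abs(Z[-1]))               # |Z| falls from ~73 Ohm at 10 MHz to ~11 Ohm at 100 MHz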
C30665GH, Ls ~ 1nH
0782 Perkin-Elmer, Rs=8.3Ohm, Cd=219.9pF
1139 Perkin-Elmer, Rs=9.9Ohm, Cd=214.3pF
0793 Perkin-Elmer, Rs=8.5Ohm, Cd=212.8pF
C30642G, Ls ~ 12nH
2484 EG&G, Rs=12.0Ohm, Cd=99.1pF
2487 EG&G, Rs=14.2Ohm, Cd=109.1pF
2475 EG&G glass crack, Rs=13.5Ohm, Cd=91.6pF
6367 ?, Rs=9.99Ohm, Cd=134.7pF
1559 Perkin-Elmer, Rs=8.37Ohm, Cd=94.5pF
1564 Perkin-Elmer, Rs=7.73Ohm, Cd=94.5pF
1565 Perkin-Elmer, Rs=8.22Ohm, Cd=95.6pF
1566 Perkin-Elmer, Rs=8.25Ohm, Cd=94.9pF
1568 Perkin-Elmer, Rs=7.83Ohm, Cd=94.9pF
1575 Perkin-Elmer, Rs=8.32Ohm, Cd=100.5pF
C30641GH, Perkin Elmer, Ls ~ 12nH
8983 Perkin-Elmer, Rs=8.19Ohm, Cd=25.8pF
8984 Perkin-Elmer, Rs=8.39Ohm, Cd=25.7pF
8985 Perkin-Elmer, Rs=8.60Ohm, Cd=25.2pF
8996 Perkin-Elmer, Rs=8.02Ohm, Cd=25.7pF
8997 Perkin-Elmer, Rs=8.35Ohm, Cd=25.8pF
8998 Perkin-Elmer, Rs=7.89Ohm, Cd=25.5pF
9000 Perkin-Elmer, Rs=8.17Ohm, Cd=25.7pF
Note: Calculated Ls&Rs of straight wires
1mm Au wire with dia. 10um -> 1nH, 0.3 Ohm
20mm BeCu wire with dia. 460um -> 18nH, 0.01 Ohm |
10610
|
Wed Oct 15 17:09:49 2014 |
manasa | Update | General | Diode laser test preparation | [EricG, manasa]
The He-Ne laser oplev setup was swapped with a fiber-coupled diode laser from W Bridge. The laser module and its power supply are sitting on a bench in the east side of the SP table. |
10651
|
Wed Oct 29 18:07:28 2014 |
manasa | Update | General | Diode laser test preparation | I ran 3 BNC cables from the SP table to the 1X7 rack so that we can have 16-bit channels for the Ontrak PD that will be used to test oplev lasers. The BNC cables are plugged into Ch 29, 30 & 31, which were already created for this purpose (elog 10488) |
394
|
Sat Mar 22 22:39:02 2008 |
mevans | Summary | CDS | Direct Form 2 filters are bad | Here I show a comparison between the filter algorithm currently used in LIGO (Direct Form II), and an alternative algorithm designed to reduce numerical noise. The input signal is
x = sin(2 * pi * t) + 1e-9 * sin(2 * pi * (fs / 4) * t);
where fs = 16384 is the sample rate. The filter is a 4th-order notch at 1 Hz (f_poles = f_zeros = 1 Hz, Q_poles = 1, Q_zeros = 1e6). It is clear that the DF2 algorithm produces a noise floor that is, for this simple filter, only a factor of about 1e-11 /rtHz below the input drive amplitude (see plots). That should probably be scary given how many second-order sections we run our signals through. The low-noise form does a somewhat better job. The low-noise algorithm has the same memory and computational requirements as DF2, and our CDS guys have the code in hand. I suggest we start testing soon.
(The code is included below. You will need my Matlab library to run the top level test script.) |
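Separately from the attached Matlab code, here is a minimal sketch of the Direct Form II update under discussion (not the low-noise algorithm):
def df2_sos(x, b, a):
    # Direct Form II second-order section, b = (b0, b1, b2), a = (1, a1, a2).
    # The internal state w can sit many orders of magnitude above the output for
    # filters with features far below Nyquist, which is where the double-precision
    # rounding noise shown in this entry enters.
    w1 = w2 = 0.0
    y = []
    for xn in x:
        wn = xn - a[1] * w1 - a[2] * w2               # recursive (pole) half first
        y.append(b[0] * wn + b[1] * w1 + b[2] * w2)   # then the FIR (zero) half
        w2, w1 = w1, wn
    return y

# e.g. a mild low-pass section: df2_sos([1.0] + [0.0] * 7, (0.25, 0.5, 0.25), (1.0, -0.5, 0.25))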
11791
|
Thu Nov 19 17:06:57 2015 |
Koji | Configuration | CDS | Disabled auto-launching RT processes upon FE booting | We want to start up the RT processes one by one when the FE machines boot.
Therefore /diskless/root/etc/rc.local on FB was modified as follows. The last sudo line was commented out.
for sys in $(/etc/rt.sh); do
#sudo -u controls sh -c ". /opt/rtapps/rtapps-user-env.sh && /opt/rtcds/caltech/c1/scripts/start${sys}"
# NOTE: we need epics stuff AND iniChk.pl in PATH
# we use -i here so that the .bashrc is sourced, which should also
# source rtapps and rtcds user env (for epics and scripts paths)
# commented out Nov 19, 2015, KA
# see ELOG 11791 http://nodus.ligo.caltech.edu:8080/40m/11791
# sudo -u controls -i /opt/rtcds/caltech/c1/scripts/start${sys}
done
|
14474
|
Tue Mar 5 15:56:27 2019 |
gautam | Summary | Tip-TIlt | Discussion points about TT re-design | Chub, Koji and I have been talking about Udit's re-design. Here are a few points that were raised. Chub/Koji can add to/correct where necessary. Summary is that this needs considerable work before we can order the parts for a prototype and characterize it. I think the requirements may be stated as:
- The overall pendulum length should be similar to that of the SOS, i.e. ~0.3m (current length is more like 0.1m), such that the eigenfrequencies are lowered to more like ~1 Hz. Mainly we want to avoid any overlap with the stack eigenmodes. This may require an additional stiffening piece near the top of the tower, as we have for the SOS. What is a numerical way to spec this? (A simple-pendulum estimate is sketched after this list.)
- The center of the 2" optic should be 6" from the table.
- The mass of the optic + holder should be similar to the current design so we may use the same suspension wires (I believe they are a different thickness than that used for the SOS).
- Ensure we can extract any transmitted beams without clipping.
- Fine pitch adjustment capability should be yyy mrad (20mrad?).
- We should preserve the footprint of the existing TTs, given the space constraints in vacuum. Moreover, we should be able to use dog-clamps to fix the tower in place, so the base plate should be designed accordingly.
- Keep the machining requirements as simple as possible while achieving the above requirements - i.e. do we really need a rounded optic holder? Why not just rectangular? Similarly for other complicated features in the current design.
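A simple-pendulum estimate for that length spec (ignoring wire stiffness and the real moment of inertia):
import numpy as np

g = 9.81
for L in (0.10, 0.30):                         # roughly the current TT suspension length vs an SOS-like length, in m
    print(L, np.sqrt(g / L) / (2 * np.pi))     # ~1.6 Hz vs ~0.91 Hz pendulum mode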
Some problems with Udit's design as it stands:
- I noticed that the base of the TT and the center of the 2" optic are separated by 4". The SOS cage base and center of 3" optic are separated by 6". Currently, there is an adaptor piece that raises the TT height to match that of the SOS. If we are doing a re-design, shouldn't we just aim for the correct height in the first place?
- Udit doesn't seem to have taken into account the torque due to the optic+holder in the pitch balancing calculations he did. Since this is expected to be >> that of any rod/screw we use for fine pitch balancing, we need to factor that into the calculation.
- For the coarse pitch adjustment, we'd need to slide the wire clamping piece relative to the optic holding piece. Rather than do this stochastically and hope for the best, the idea was to use a threaded screw to realize this operation in a controlled way. However, Udit's design doesn't include the threaded hole.
- There are many complicated machining features which are unnecessary.
|
13117
|
Fri Jul 14 17:47:03 2017 |
gautam | Update | General | Disks from LLO have arrived | [jamie, gautam]
This morning, the disks from LLO arrived. Jamie and I have been trying to get things back up and running, but have not had much success today. Here is a summary of what we tried.
Keith Thorne sent us two disks: one has the daqd code and the second is the boot disk for the FE machines. Since Jamie managed to successfully compile the daqd code on FB1 yesterday, we decided to try the following: mount the boot disk KT sent us (using a SATA/USB adapter) on /mnt on FB1, get the FEs booted up, and restart the RT models.
Quote: |
I just want to mention that the situation is actually much more dire than we originally thought. The diskless NFS root filesystem for all the front-ends was on that fb disk. If we can't recover it we'll have to rebuilt the front end OS as well.
As of right now none of the front ends are accessible, since obviously their root filesystem has disappeared.
|
While on FB1, Jamie realized he actually had a copy of the /diskless/root directory, which is the NFS filesystem for the FEs, on FB1. So we decided to try and boot some of the FEs with this (instead of starting from scratch with the disks KT sent us). The way things were set up, the FEs were querying the FB machine as the DHCP server. But today, we followed the instructions here to get the FEs to get their IP address from chiara instead. We also added the line
/diskless/root *(sync,rw,no_root_squash,no_all_squash,no_subtree_check)
to /etc/exports, followed by exportfs -ra on FB1, at which point the FE machine we were testing (c1lsc) was able to boot up.
However, it looks like the NFS filesystem isn't being mounted correctly, for reasons unknown. We commented out some of the rtcds related lines in /etc/rc.local because they were causing a whole bunch of errors at boot (the lines that were touched have been tagged with today's date).
So in summary, the status as of now is:
- Front-end machines are able to boot
- There seems to be some problem during the boot process, leading to the NFS file system not being correctly mounted. The closest related thing I could find from an elog search is this entry, but I think we are facing a different problem.
- We wanted to see if we could start the realtime models (but without daqd for now), but we weren't even able to get that far today.
We will resume recovery efforts on Monday. |
1030
|
Tue Oct 7 10:49:29 2008 |
Alberto | Update | General | Displaced Photodiode | This morning I found that the photodiode of the PLL on the PSL table was no longer aligned to the beam. The PD support was not tight on its pedestal, so the PD had rotated and was completely off the beam.
It is possible that the BNC cable connected to the PD was pulled very strongly, or the PD was hit so that the support got unscrewed from its pedestal. In any case, it did not happen spontaneously.
I re-aligned the PD and observed again the beat between the two laser beams. Here are the values from the measurement of the signal from the PD:
I measured the DC values of the incident power, alternately blocking one of the two laser beams, and I measured the beat amplitude by letting them interfere and reading the peak-to-peak amplitude of the oscillating signal:
main beam DC: 200mV
secondary beam DC: 490mV
beat: 990mV
beat at the spectrum analyzer (after the two-way splitter of the PLL): -8.40dBm on a noise floor of the photodiode of -75dBm
the frequency of the beat is 8.55MHz and the temperature of the NPRO of the secondary beam, as read from the laser driver display, is 48.7357C.
Alberto |
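As a rough consistency check on those numbers (a sketch, taking the secondary-beam DC as 490mV and assuming perfect overlap would give a peak-to-peak beat of 4*sqrt(V1*V2)):
import numpy as np

V1, V2, beat_pp = 0.200, 0.490, 0.990          # volts, from the entry above
visibility = beat_pp / (4 * np.sqrt(V1 * V2))  # ~0.79, i.e. reasonable but imperfect beam overlap
print(visibility)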
1845
|
Thu Aug 6 17:51:21 2009 |
Chris | Update | General | Displacement Sensor Update | For the past week Dmass and I have been ordering parts and getting ready to construct our own modified version of EUCLID (figure). Changes to the EUCLID design could include the removal of the first lens, the replacement of the cat's eye retroreflector with a lens focusing the beam waist on a mirror in that arm of the Michelson, and the removal of the linear polarizers. A beam dump was added above the first polarizing beam splitter, and the beam at Photodetector 2 was attenuated with an additional polarizing beam splitter and beam dump. Another proposed alteration is to change the non-polarizing beam splitter from 50/50 to 33/66. By changing the reflectivity to 66%, less power coming into the non-polarizing beam splitter would be "lost" at the reference detector (1/3 instead of 1/2), and on the return trip less power would be lost at the polarizing beam splitter (1/6 instead of 1/4). Also, here's a noise plot comparing a few displacement sensors in current use to the shot-noise levels for the three designs I've been looking at. |
|