40m Log
ID   Date   Author   Type   Category   Subject
  3291   Mon Jul 26 11:15:23 2010   Gopal   HowTo   COMSOL Tips   Pictures from Transfer Function Tutorial on the Wiki

The attached pictures give a brief overview of my transfer function measurement procedure in COMSOL. For more details, please see the Wiki.

Attached: seven screenshots (Screen_shot_2010-07-23_at_2.57.14_PM.png through Screen_shot_2010-07-23_at_3.00.37_PM.png) stepping through the transfer function setup.

  3322   Thu Jul 29 17:11:16 2010   Gopal   Update   COMSOL Tips   Including Gravity in COMSOL

[Gopal, Jan]

For the past couple of days, Jan and I have been discussing a major issue in COMSOL: modeling both oscillatory and non-oscillatory forces simultaneously while using frequency domain analysis (FDA). It turns out that we had each run into the same problem at different times and on different projects. After discussing with an expert, Jan had concluded in the past that this simple task was impossible via direct means.

The issue could still be resolved if there were a way for us to work on the Weak Form of the differential equations describing the system:

  • Usually, one must define weight as a body load in the negative-z direction. However, this problematically instantiates a new force in COMSOL, which is automatically driven over the range of frequencies during FDA.
  • Instead, we could define gravity as an anti-restoring force, since we assume that the base of the stack is fixed.
  • In other words, Fg = (ρ*g/L)*x + (ρ*g/L)*y for a point mass which is constrained on the bottom (for small angles).
  • Working in Weak Form then, we'd never have to define an explicit gravity load -- it would just be an extra couple of terms in the differential equation, related entirely to the x- and y-displacements (well-defined for each mesh point). This would keep COMSOL from tacking the oscillatory drive onto gravity during FDA (a sketch of such a term is given below).
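
As a sketch of what such a weak-form contribution might look like (this is an assumption about the implementation, not something verified in COMSOL): for a displacement field (u_x, u_y) and the corresponding test functions (\tilde{u}_x, \tilde{u}_y), the anti-restoring gravity term above would add \int_V (\rho g / L)(u_x \tilde{u}_x + u_y \tilde{u}_y)\, dV to the weak equation, with no explicit body load for FDA to drive.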

According to current documentation, however, Weak Form analysis is not yet possible in COMSOL 4.0. Jan suggested moving my work over to ANSYS or waiting for a later COMSOL upgrade, but there's probably not enough time left in my SURF for either of these options. I suggested attempting a backwards-compatibility test with COMSOL 3.5; Jan and I will be exploring this option some time next week.

  3536   Tue Sep 7 20:44:54 2010   Yoichi   HowTo   COMSOL Tips   COMSOL example for calculating mechanical transfer functions

I added COMSOL example files to the 40m svn to demonstrate how to make transfer function measurements in COMSOL.

https://nodus.ligo.caltech.edu:30889/svn/trunk/comsol/MechanicalTF/

The directory also contains an (incomplete) explanation of the method in a PDF file.

  8190   Wed Feb 27 19:27:29 2013   Annalisa   HowTo   COMSOL Tips   Mirror support Eigenfrequency

 I studied the eigenfrequencies of a mirror support using COMSOL.

 

  8226   Mon Mar 4 20:03:42 2013   Annalisa   HowTo   COMSOL Tips   Study of mirror mount eigenfrequencies

 I studied the eigenfrequencies of a mirror mount designed with COMSOL.

I imposed fixed constraints for the base screws and for the screw connecting the base with the pedestal. Note that the central screw is connected to the base only over a small thickness, and the pedestal touches the base only through a thin annulus. This is done to better model the actual stress distribution.

Shown in fig. 2 is the lowest eigenfrequency of the mount.

I'm going to change the base and study how the eigenfrequencies vary, in order to find the configuration which maximizes the lowest eigenfrequency.

 

  8437   Wed Apr 10 15:49:22 2013   Annalisa   Configuration   COMSOL Tips   Yend table eigenfrequency simulation with COMSOL

I made a simulation with COMSOL for the Y-end table. Mainly, I tried to see how the lowest eigenmode changes with the number and size of the posts inside.

The lateral frame is just sitting on the table, held in place by its own weight. I also added a couple of screws to fix it better, but the resulting eigenfrequency didn't change much (less than 1 Hz).

In Fig. 1 I didn't put any post. Of course, the lowest eigenfrequency is very low (around 80 Hz).

Then I added 2 posts, one per side (Fig. 2 and Fig. 3), with different diameter.

In some cases the posts don't have a base but are fixed to the table only by a screw; this is just a boundary condition to keep them attached to the table.

Eventually I put 4 posts, 2 per side. 

The lowest eigenfrequency is always increasing.

At the end I also ran a simulation with four 1.6-inch-diameter posts without bases, and the eigenfrequency is slightly higher. I want to check it again, because I would expect the configuration shown in Fig. 5a to be more stable.

P.S.: All the posts are stainless steel.

 

  15650   Thu Oct 29 09:50:12 2020   anchal   Summary   Calibration   Preliminary calibration measurement taken

I went to the 40m yesterday at around 2:30 pm and Koji showed me how to acquire lock in the different arms and for the different lasers. Finally, we took a preliminary measurement by shaking ETMX at some discrete frequencies and looking at the spectrum of the beatnote frequency between the X-end laser's fiber-coupled IR and the main laser's IR pick-off.


Basic controls and measurement 101 at 40m

  • I learned a few things from Koji about how to align the cavity mirrors for green laser or IR laser.
  • I learned how to use ASS and how to align the green end laser to the cavity. I also found out about the window at ETMX chamber where we can directly see the cavity mode, cool stuff.
  • Koji also showed me around on how to use diaggui and awggui for taking measurements with any of the channels.

Preliminary measurement for calibration scheme

We verified that we can send discrete frequency excitation signals to ETMX actuators directly and see a corresponding peak in the spectrum of beatnote frequency between fiber-coupled X-end IR laser and main laser IR pickoff.

  • I sent excitation signal at 200 Hz, 250 Hz and 270 Hz at C1:SUS-ETMX_LSC_EXC channel using awggui with an amplitude of 100 cts and gain of 2.
  • I measured corresponding peaks in the beatnote spectrum using diaggui.
  • Page 1 shows the ASD data for the 4 measurements taken with Hanning window and averaging of 10.
  • Page 2 shows close-up spectrum data for the 4 measurements taken with a flattop window and averaging of 10.
  • I converted this frequency signal into displacement using the conversion factor \frac{L \lambda}{c} (equivalently, dividing by \nu_{FSR}/\frac{\lambda}{2}); see the short sketch below.
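
Below is a minimal python sketch of that conversion, assuming the 1064 nm IR wavelength and an arm length of about 37.8 m (the arm length and peak height here are illustrative numbers, not measured values):

c = 299792458.0          # m/s
lam = 1064e-9            # m, IR wavelength
L = 37.8                 # m, assumed arm length
df = 10.0                # Hz, example beatnote peak height
dx = df * L * lam / c    # m; dx = df * (lambda/2) / nu_FSR = df * L * lambda / c
print(dx)                # ~1.3e-12 m for a 10 Hz peak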

If the full interferometer had been locked, we could have used the DARM error signal output to calibrate it against this measurement.

Data

  16128   Mon May 10 10:57:54 2021   Anchal, Paco   Summary   Calibration   Using ALS beatnote for calibration, test

Test details:

  • We locked both arms and opened the shutter for Yend green laser.
  • After toggling the shutter on/off, we got a TEM00 mode of the green laser locked to the YARM.
  • We then cleared the phase Y history by clicking "CLEAR PHASE Y HISTROY" on C1LSC_ALS.adl (opened from sitemap > ALS > ALS).
  • We sent excitation signals at ITMY_LSC_EXC using awggui at 43 Hz, 77 Hz and 57 Hz.
  • We measured the power spectrum and coherence of C1:ALS-BEATY_FINE_PHASE_OUT_HZ_DQ and C1:SUS-ITMY_LSC_OUT_DQ.
  • The BEATY_FINE_PHASE_OUT_HZ is already calibrated in Hz. We assume this is done by multiplying the error signal of the digital PLL loop that tracks the beatnote phase by the VCO slope in Hz/cts.
  • We calibrated C1:SUS-ITMY_LSC_OUT_DQ by multiplying with
    \large 3 \times \frac{2.44 \, nm/cts}{f^2} \times \frac{c}{1064\,nm \times 37.79\, m} = \frac{54.58}{f^2} kHz/cts where f is in Hz.
    The 2.44/f^2 nm/cts actuator calibration is taken from elog 13984 (a quick check of this factor is sketched after this list).
  • We added the calibration as the Poles/zeros option in diaggui, using gain = 54.577e3 and poles as "0, 0".
  • We found that the ITMY_LSC_OUT_DQ calibration matches well at 57 Hz but overshoots (80 vs 40) at 43 Hz and undershoots (50 vs 80) at 77 Hz.
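
A quick python check of the factor quoted above (all numbers are the ones given in this entry):

act = 2.44e-9                     # m/cts / f^2, ITMY actuator gain from elog 13984
c, lam, L = 299792458.0, 1064e-9, 37.79
gain = 3 * act * c / (lam * L)    # Hz/cts / f^2
print(gain / 1e3)                 # ~54.6 kHz/cts, entered in diaggui as a gain with two poles at 0 Hz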

Conclusions:

  • If we had the DRFPMI locked, we could have used the beatnote spectrum as an independent measurement of the arm lengths to calibrate the interferometer output.
  • We can also use the beatnote to confirm or correct the ITM actuator calibrations. Maybe the shape is not exactly 1/f^2, unless we did something wrong here or the PLL bandwidth is too small.
  16315   Tue Sep 7 18:00:54 2021   Tega   Summary   Calibration   System Identification via line injection

[paco]

This morning, I spent some time restoring the jupyter notebook server running on allegra. This server was first set up by Anchal to be able to use the latest nds python API tools, which are handy for the calibration stuff. The process to restore the environment was to run "source ~/bashrc.d/*" to restore some of the aliases, variables, paths, etc. that made the nds server work. I then ran ssh -N -f -L localhost:8888:localhost:8888 controls@allegra from pianosa and carried on with the experiment.


[paco, hang, tega]

We started a notebook under /users/paco/20210906_XARM_Cal/XARM_Cal.ipynb in which the first part does the following:

  • Set up list of excitations for C1:LSC-XARM_EXC (for example three sine waveforms) using awg.py
  • Make sure the arm is locked
  • Read a reference time trace of the C1:LSC-XARM_IN2 channel for some duration
  • Start excitations (one by one at the moment, ramptime ~ 3 seconds, same duration as above)
  • Get data for C1:LSC-XARM_IN2 for an equal duration (raw data in Attachment #1)
  • Generate the excitation sine and cosine waveforms using numpy and demodulate the raw timeseries using a 4th order lowpass filter with fc ~ 10 Hz
  • Estimate the correct demod phase by computing arctan(Q / I) and rerunning the demodulation to dump the information into the I quadrature (Attachment #2); a short sketch of this demodulation step follows the list.
  • Plot the estimated ASD of all the quadratures (Attachment #3)
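
A minimal python sketch of the demodulation step above, assuming the raw timeseries is already in memory as a numpy array (the Butterworth shape and the example channel/frequency are illustrative; the actual notebook may differ):

import numpy as np
from scipy import signal

def demod_line(x, fs, f0, fc=10.0, order=4):
    # Demodulate timeseries x (sampled at fs) at line frequency f0,
    # then lowpass the I and Q quadratures with a 4th-order filter at fc ~ 10 Hz.
    t = np.arange(len(x)) / fs
    i_raw = 2 * x * np.cos(2 * np.pi * f0 * t)
    q_raw = 2 * x * np.sin(2 * np.pi * f0 * t)
    sos = signal.butter(order, fc, btype='low', fs=fs, output='sos')
    return signal.sosfilt(sos, i_raw), signal.sosfilt(sos, q_raw)

# Rotate the demod phase so the line ends up in the I quadrature:
# I, Q = demod_line(xarm_in2, fs=16384, f0=53.0)
# phi = np.arctan2(np.mean(Q), np.mean(I))
# I_rot = I * np.cos(phi) + Q * np.sin(phi)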

[paco, hang, tega]

Estimation of open loop gain:

  • Grab data from the C1:LSC-XARM_IN1 and C1:LSC-XARM_IN2 test points
  • Infer the excitation from their difference, i.e. C1:LSC-XARM_EXC = C1:LSC-XARM_IN2 - C1:LSC-XARM_IN1
  • Compute the open loop gain as follows: G(f) = csd(EXC,IN1)/csd(EXC,IN2), where csd computes the cross spectral density of the input arguments (see the sketch after this list)
  • For the uncertainty in G, dG, we repeat steps (1) to (3) with & without signal injection in the C1:LSC-XARM_EXC channel. In the absence of signal injection, the signal in C1:LSC-XARM_IN2 is of the form: Y_ref = Noise/(1-G), whereas with nonzero signal injection, the signal in C1:LSC-XARM_IN2 has the form: Y_cal = EXC/(1-G) + Noise/(1-G), so their ratio, Y_cal/Y_ref = EXC/Noise, gives the SNR, which we can then invert to give the uncertainty in our estimation of G, i.e. dG = Y_ref/Y_cal.
  • For the excitation at 53 Hz, our measurement of the open loop gain comes out to about 5 dB, which is consistent with previous measurements.
  • We seem to have an SNR in excess of 100 for a measurement time of 35 seconds and 1 count of amplitude, which gives a relative uncertainty in G of 0.1%
  • The analysis details are ongoing. Feedback is welcome.
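
A minimal python sketch of the open loop gain estimate described above (the channel arrays are assumed to be already fetched; the sampling rate and resolution are placeholders):

import numpy as np
from scipy import signal

fs = 16384                       # Hz, assumed sampling rate
nperseg = 4 * fs                 # 0.25 Hz resolution

def oltf(exc, in1, in2):
    # G(f) = csd(EXC, IN1) / csd(EXC, IN2)
    f, p1 = signal.csd(exc, in1, fs=fs, nperseg=nperseg)
    _, p2 = signal.csd(exc, in2, fs=fs, nperseg=nperseg)
    return f, p1 / p2

# exc = xarm_in2 - xarm_in1      # excitation inferred from the two test points
# f, G = oltf(exc, xarm_in1, xarm_in2)
# dG/G ~ Y_ref/Y_cal: the ratio of the IN2 spectra with the line off and on.
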
  16352   Tue Sep 21 11:13:01 2021   Paco   Summary   Calibration   XARM calibration noise

Here are some plots from analyzing the C1:LSC-XARM calibration. The experiment is done with the XARM (POX) locked, and a single line is injected at C1:LSC-XARM_EXC at f0 with some amplitude determined empirically using the diaggui and awggui tools. For the analysis detailed in this post, f0 = 19 Hz, amp = 1 count, and gain = 300 (anything larger in amplitude would break the lock, and anything lower in frequency would not show up because of loop suppression). Clearly, from Attachment #3 below, the calibration line can be detected with SNR > 1.

We read the test point right after the excitation, C1:LSC-XARM_IN2, which in a simplified loop will carry the excitation suppressed by 1 - OLTF, the open loop transfer function. The line is on for 5 minutes, and then we read for another 5 minutes with the excitation off to have a reference. Both the calibration and reference signal time series are shown in Attachment #1 (decimated by 8). The corresponding ASDs are shown in Attachment #2. Then we demodulate at 19 Hz, apply a 30 Hz 4th-order Butterworth LPF, and get I and Q timeseries (shown in Attachment #3). Even though they look similar, the Q is centered about 0.2 counts, while the I is centered about 0.0. From these time series, we can of course show the noise ASDs in Attachment #3.


The ASD uncertainty bands in the last plot are statistical estimates and depend on the number of segments used in estimating the PSD. A thing to note is that the noise features surrounding the signal ASD around f0 are translated into the ASD in the demodulated signals, but now around dc. I guess from Attachment #3 there is no difference in the noise spectra around the calibration line with and without the excitation. This is what I would have expected from a linear system. If there was a systematic contribution, I would expect it to show at very low frequencies.

  16353   Wed Sep 22 11:43:04 2021   rana   Summary   Calibration   XARM calibration noise

I would expect to see some lower frequency effects. i.e. we should look at the timeseries of the demod with the excitation on and off.

I would guess that the exc on should show us the variations in the optical gain below 3 Hz, whereas the exc off would not show it.

Maybe you should do some low pass filtering on the time series you have to see the ~DC effects? Also, reconsider your AA filter design: how do you quantitatively choose the cutoff frequency and stopband depth?

  16363   Tue Sep 28 16:31:52 2021   Paco   Summary   Calibration   XARM OLTF (calibration) at 55.511 Hz

[anchal, paco]

Here is a demonstration of the methods leading to the single (X)arm calibration with its budget uncertainty. The steps towards this measurement are the following:

  1. We put a single line excitation through the C1:SUS-ETMX_LSC_EXC at 55.511 Hz, amp = 1 counts, gain = 300 (ramptime=10 s).
  2. With the arm locked, we grab a long timeseries of the C1:LSC-XARM_IN1_DQ (error point) and C1:SUS-ETMX_LSC_OUT_DQ (control point) channels.
  3. We assume the single arm loop to have the four blocks shown in Attachment #1, A (actuator + sus), plant (mainly the cavity pole), D (detection + electronics), and K (digital control).
    1. At this point, Anchal made a model of the single arm loop including the appropriate filter coefficients and other parameters. See Attachments #2-3 for the split and total model TFs.
    2. Our line actually probes a TF from point b (error point) to point d (control point). We multiplied our measurement with the open loop TF from b to d from the model to get the complete OLTF.
    3. Our initial estimate from documents and elogs got the overall loop shape correct, but it was off by an overall gain factor. This could be due to a wrong assumption on the RFPD transimpedance or the analog gains of the AA or whitening filters. We have corrected for this factor in the RFPD transimpedance, but this needs to be checked (if we really care).
  4. We demodulate the decimated timeseries (final sampling rate ~ 2.048 kHz) to get I & Q for both the b and d signals. From these and our model for K, we estimate the OLTF. Attachment #4 shows timeseries for magnitude and phase.
  5. Finally, we compute the ASD for the OLTF magnitude. We plot it in Attachment #5 together with the ASD of the XARM transmission (C1:LSC-TRX_OUT_DQ) times the OLTF to estimate the optical gain noise ASD (this last step was a quick attempt at budgeting the calibration noise).
    1. For each ASD we used N = 24 averages, from which we estimate rms (statistical) uncertainties which are depicted by error bands (\pm \sigma) around the lines.

** Note: We ran the same procedure using dtt (diaggui) to validate our estimates at every point, as well as check our SNR in b and d before taking the ~3.5 hours of data.

  16369   Thu Sep 30 18:04:31 2021   Paco   Summary   Calibration   XARM OLTF (calibration) with three lines

[anchal, paco]

We repeated the same procedure as before, but with 3 different lines at 55.511, 154.11, and 1071.11 Hz. We overlay the OLTF magnitudes and phases with our latest model (which we have updated with Koji's help) and include the rms uncertainties as errorbars in Attachment #1.

We also plot the noise ASDs of the calibrated OLTF magnitudes at the line frequencies in Attachment #2. These curves are created by calculating the power spectral density of the timeseries of OLTF values at the line frequencies, generated from the demodulated XARM_IN and ETMX_LSC_OUT signals. We have overlaid the TRX noise spectrum here as an attempt to see if we can budget the noise measured in the values of G to the fluctuation in optical gain due to changing power in the arms. We multiplied the transmission ASD by the value of the OLTF at those frequencies as the transfer function from normalized optical gain to the total transfer function value.

It is weird that the fluctuation in transmission power at 1 mHz always crosses the total noise in the OLTF value for all calibration lines. This could be an artifact of our data analysis though.

Even if the contribution of the fluctuating power is correct, there is remaining excess noise in the OLTF to be budgeted.

  16373   Mon Oct 4 15:50:31 2021   Hang   Update   Calibration   Fisher matrix estimation on XARM parameters

[Anchal, Hang]

What: Anchal and I measured the XARM OLTF last Thursday.

Goal: 1. measure the 2 zeros and 2 poles in the analog whitening filter, and potentially constrain the cavity pole and an overall gain. 

          2. Compare the parameter distribution obtained from measurements and that estimated analytically from the Fisher matrix calculation.

          3. Obtain the optimized excitation spectrum for future measurements.   

How: we inject at C1:SUS-ETMX_LSC_EXC so that each digital count should be directly proportional to the force applied to the suspension. We read out the signal at C1:SUS-ETMX_LSC_OUT_DQ. We use an approximately white excitation in the 50-300 Hz band, and intentionally choose the coherence to be only slightly above 0.9 so that we can get some statistical error to be compared with the Fisher matrix's prediction. For each measurement, we use a bandwidth of 0.25 Hz and 10 averages (no overlapping between adjacent segments). 

The 2 zeros and 2 poles in the analog whitening filter and an overall gain are treated as free parameters to be fitted, while the rest are taken from the model by Anchal and Paco (elog:16363). The optical response of the arm cavity seems missing in that model, and thus we additionally include a real pole (for the cavity pole) in the model we fit. Thus in total, our model has 6 free parameters, 2 zeros, 3 poles, and 1 overall gain. 

The analysis codes are pushed to the 40m/sysID repo. 

===========================================================

Results:

Fig. 1 shows one measurement. The gray trace is the data and the olive one is the maximum likelihood estimation. The uncertainty for each frequency bin is shown in the shaded region. Note that the SNR is related to the coherence as 

        SNR^2 = [coherence / (1-coherence)] * (# of average), 

and for a complex TF written as G = A * exp[1j*Phi], one can show the uncertainty is given by 

        \Delta A / A = 1/SNR,  \Delta \Phi = 1/SNR [rad]. 
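
These two relations are easy to apply directly; a small python helper (purely illustrative):

import numpy as np

def tf_uncertainty(coherence, n_avg):
    # SNR^2 = coherence/(1 - coherence) * n_avg;
    # 1-sigma errors: dA/A = 1/SNR and dPhi = 1/SNR [rad]
    snr = np.sqrt(coherence / (1.0 - coherence) * n_avg)
    return snr, 1.0 / snr, 1.0 / snr

# e.g. coherence = 0.9 with 10 averages -> SNR ~ 9.5, ~10% in |G|, ~0.1 rad in phase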

Fig. 2. The gray contours show the 1- and 2-sigma levels of the model parameters using the Fisher matrix calculation. We repeated the measurement shown in Fig. 1 three times, and the best-fit parameters for each measurement are indicated by the red crosses. Although we only did a small number of experiments, the amount of scattering is consistent with the Fisher matrix's prediction, giving us some confidence in our analytical calculation.

One thing to note though is that in order to fit the measured data, we would need an additional pole at around 1,500 Hz. This seems a bit low for the cavity pole frequency. For aLIGO w/ 4km arms, the single-arm pole is about 40-50 Hz. The arm is 100 times shorter here and I would naively expect the cavity pole to be at 3k-4k Hz if the test masses are similar. 

Fig. 3. We then follow the algorithm outlined in Pintelon & Schoukens, sec. 5.4.2.2, to calculate how we should change the excitation spectrum. Note that here we are fixing the rms of the force applied to the suspension constant. 

Fig. 4 then shows how the expected error changes as we optimize the excitation. It seems in this case a white-ish excitation is already decent (as the TF itself is quite flat in the range of interest), and we only get some mild improvement as we iterate the excitation spectra (note we use the color gray, olive, and purple for the results after the 0th, 1st, and 2nd iteration; same color-coding as in Fig. 3).   

 

 

 

  16399   Wed Oct 13 15:36:38 2021   Hang   Update   Calibration   XARM OLTF

We did a few quick XARM OLTF measurements. We excited C1:LSC-ETMX_EXC with broadband white noise up to 4 kHz. The timestamps for the measurements are: 1318199043 (start) - 1318199427 (end).

We will process the measurement to compute the cavity pole and analog filter poles & zeros later.

  16957   Tue Jun 28 17:07:47 2022   Anchal   Update   Calibration   Added Beatnote channels in demodulation of c1cal

Today I added demodulation of C1:LSC-BEATX/Y_FINE_I/Q in the c1cal demodulation, where different degrees of freedom can be dithered. For McCal (formerly soCal), we'll dither the arm cavity, for which we can use any of the DOFs (like DARM) to send the dither to ETMX/ETMY. Then, with the green laser locked as well, we'll get the calibration signal from the beatnotes in the demodulated channels. We can also read right after the mixing in the c1cal model and try different poles for the integration.

I've also added medm screens in the sensing matrix part of LSC screen. These let you see demodulation of beatnote frequency signals.

  17010   Mon Jul 18 04:42:54 2022   Anchal   Update   Calibration   Error propagation to astrophysical parameters from detector calibration uncertainty

We can calculate how much detector calibration uncertainty affects the estimation of astrophysical parameters using the following method:

Let \overrightarrow{\Theta} be the set of astrophysical parameters (like component masses, distance, etc.), and \overrightarrow{\Lambda} be the set of detector parameters (like detector pole, gain, or simply the transfer function value for each frequency bin). If the true GW waveform is given by h(f; \overrightarrow{\Theta}), and the detector transfer function is given by \mathcal{R}(f; \overrightarrow{\Lambda}), then the detected gravitational waveform becomes:
g(f; \Theta, \Lambda) = \frac{\mathcal{R}(f; \overrightarrow{\Lambda_t})}{\mathcal{R}(f; \overrightarrow{\Lambda})} h(f; \overrightarrow{\Theta})

One can calculate the derivative of the waveform with respect to the different parameters and calculate the Fisher matrix as (see correction in 40m/17017):

\Gamma_{ij} = \left( \frac{\partial g}{\partial \mu_i} | \frac{\partial g}{\partial \mu_j}\right )

where the bracket denotes the inner product defined as:

\left( k_1 | k_2 \right) = 4 Re \left( \int df \frac{k_1(f)^* k_2(f)}{S_{det}(f)}\right)

where S_{det}(f) is strain noise PSD of the detector.

With the gamma matrix in hand, the error propagation from detector parameter fractional errors \frac{\Delta \Lambda_j}{\Lambda_j} to astrophysical parameter fractional errors \frac{\Delta \Theta_i}{\Theta_i} is given by (eq. 26 in Evan et al 2019 Class. Quantum Grav. 36 205006):

\frac{\Delta \Theta_j}{\Theta_j} = - \mathbf{H}^{-1} \mathbf{M} \frac{\Delta \Lambda_j}{\Lambda_j}

where \mathbf{H}_{ij} = \left( \frac{\partial g}{\partial \Theta_i} | \frac{\partial g}{\partial \Theta_j}\right ) and \mathbf{M}_{ij} = \left( \frac{\partial g}{\partial \Lambda_i} | \frac{\partial g}{\partial \Theta_j}\right ).
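
A minimal numpy sketch of this propagation (the derivative arrays, frequency grid, and noise PSD are assumed inputs; whether the result is a fractional or absolute error depends on whether the derivatives are taken with respect to the parameters or their logarithms):

import numpy as np

def inner(k1, k2, f, S_det):
    # (k1|k2) = 4 Re \int df k1(f)* k2(f) / S_det(f)
    return 4.0 * np.real(np.trapz(np.conj(k1) * k2 / S_det, f))

def calib_error_propagation(dg_dTheta, dg_dLambda, f, S_det, dLambda):
    # dg_dTheta, dg_dLambda: lists of waveform derivatives on the frequency grid f
    H = np.array([[inner(a, b, f, S_det) for b in dg_dTheta] for a in dg_dTheta])
    M = np.array([[inner(l, t, f, S_det) for t in dg_dTheta] for l in dg_dLambda])
    # M as defined above carries the Lambda index first, hence the transpose here:
    return -np.linalg.solve(H, M.T @ dLambda)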


Using the above-mentioned formalism, I looked into two ways of calculating the error propagation from detector calibration error to astrophysical parameter estimates:

Using detector response function model:

If we model the detector response function as a simple DC gain (4.2 W/nm) and one pole (500 Hz), we can plot the conversion of pole frequency error into astrophysical parameter errors. I took two cases:

  • Binary Neutron Star merger with star masses of 1.3 and 1.35 solar masses at 100 Mpc distance with a \tilde{\Lambda} of 500. (Attachment 1)
  • Binary black hole merger with black hole masses of 35 and 30 solar masses at 400 Mpc distance, with spins along the z direction of 0.5 and 0.8. (I do not fully understand the meaning of these spin components, but a pycbc waveform generation model still lets me calculate the effect of detector errors.) (Attachment 2)

The plots are shown on both log-log and linear axes to show the order-of-magnitude effect and how the error propagation slope differs between parameters. I'm still not sure which way is best to convey the information. The way to read these plots is: for a given error, say 4%, in the pole frequency determination, what is the expected error in component masses, merger distance, etc.

Note that the astrophysical parameter estimates are not sensitive to errors in the overall gain of the detector response.

Using the detector transfer function as a frequency-bin-wise multi-parameter function

Alternatively, we can choose to not fit any model to the detector transfer function and simply use the errors in magnitude and phase at each frequency point as an independent parameter in the above formalism. This then lets us see what is the error propagation slope for each frequency point. The hope is to identify which parts of the calibration function are more important to calibrate with low uncertainty to have the least effect on astrophysical parameter estimation. Attachment 3 and 4 show these plots for BNS and BBH cases mentioned above. The top panel is the error propagation slope at each frequency due to error in magnitude of the detector transfer function at that frequency and the bottom panel is the error propagation slope at each frequency due to error in phase of the detector transfer function.

The calibration error in magnitude and phase as a function of frequency would be multiplied by the curves and summed together, to get total uncertainty in each parameter estimation.


This is my first attempt at this problem, so I expect to have made some mistakes. Please let me know if you can point out any. For example, do the order of magnitude and shape of the error propagation make sense? Also, comments/suggestions on the inference of these plots would be helpful.

Finally, I haven't yet tried seeing how these curves change for different true values of the merger event parameters. I'm not yet sure what is the best way to extract some general information for a variety of merger parameters.

Future goals are to utilize this information in informing system identification method i.e. multicolor calibration scheme parameters like calibration line frequencies and strength.

Code location

  17011   Mon Jul 18 15:17:51 2022   Hang   Update   Calibration   Error propagation to astrophysical parameters from detector calibration uncertainty

1. In the error propagation equation, it should be \Delta \Theta = -H^{-1} M \Delta \Lambda, instead of the fractional error.

2. For the astro parameters, in general you would need t_c for the time of coalescence and \phi_c for the phase. See, e.g., https://ui.adsabs.harvard.edu/abs/1994PhRvD..49.2658C/abstract.

3. Fig. 1 looks very nice to me, yet I don't understand Fig. 3... Why would phase or amplitude uncertainties at 30 Hz affect the tidal deformability? The tide should be visible only > 500 Hz. 

4. For BBH, we don't measure individual spin well but only their mass-weighted sum, \chi_eff = (m_1*a_1 + m_2*a_2)/(m_1 + m_2). If you treat S1z and S2z as free parameters, your matrix is likely degenerate. Might want to double-check. Also, for a BBH, you don't need to extend the signal much higher than \omega ~ 0.4/M_tot ~ 10^4 Hz * (Ms/M_tot). So if the total mass is ~ 100 Ms, then the highest frequency should be ~ 100 Hz. Above this number there is no signal. 

 

  17017   Tue Jul 19 07:34:46 2022   Anchal   Update   Calibration   Error propagation to astrophysical parameters from detector calibration uncertainty

Addressing the comments as numbered:

  1. Yeah, that's correct: the equation is normally \Delta \Theta = -\mathbf{H}^{-1} \mathbf{M} \Delta \Lambda, but it takes the fractional-error form if I define \Gamma a bit differently, as I did in the code. Correcting my definition of \Gamma to:
    \Gamma_{ij} = \mu_i \mu_j \left( \frac{\partial g}{\partial \mu_i} | \frac{\partial g}{\partial \mu_j} \right )
    then the relation between fractional errors of detector parameter and astrophysical parameters is:
    \frac{\Delta \Theta}{\Theta} = - \mathbf{H}^{-1} \mathbf{M} \frac{\Delta \Lambda}{\Lambda}
    I prefer this as the relation between fractional errors is a dimensionless way to see it.
  2. Thanks for pointing this out. I didn't see these parameters used anywhere in the examples (in fact there is no t_c in the documentation even though it works). Using these did not affect the shape of the error propagation slope function vs frequency, but it reduced the slope for the chirp mass M_c by a couple of orders of magnitude.
    1. I used the get_t_merger(f_gw, M1, M2) function from Hang's work to calculate t_c by assuming f_{gw} must be the lowest frequency that comes within the detection band during inspiral. This function is:
      t_c = \frac{5}{256 \pi^{8/3}} \left(\frac{c^3}{G M_c}\right)^{5/3} f_{gw}^{-8/3}
      For my calculations, I've taken f_{gw} as 20 Hz.
    2. I used the get_f_gw_2(f_gw_1, M1, M2, t) function from Hang's work to calculate the evolution of the frequency of the IMR, defined as (both relations are sketched in code after this list):
      f_{gw}(t) = \left( f_{gw0}^{-8/3} - \frac{768}{15} \pi^{8/3} \left(\frac{G M_c}{c^3}\right)^{5/3} t \right)^{-3/8}
      where f_{gw0} is the frequency at t=0. I integrated this frequency evolution for t_c time to get the coalescence phase phi_c as:
      \phi_c = \int^{t_c}_0 2 \pi f_{gw}(t) dt
  3. In Fig 1, which representation makes more sense, the log-log or the linear axis plot? Regarding the effect of uncertainties on the tidal deformability below 500 Hz, I agree; I was also expecting more contribution from higher frequencies. I did find one bug in my code that I corrected, but it did not affect this point. Maybe the SNR of the chosen BNS parameters (which is ~28) is too low for tidal information to come through reliably anyway, and the curve is just an inverse of the strain noise PSD, that is, all the information is buried below the statistical noise. Maybe someone else can also take a look at the get_fisher2() function that I wrote to do this calculation.
  4. Now I have set the BBH parameters such that the spins of the two black holes are assumed to be the same along z. You were right, the gamma matrix was degenerate before. To your second point, I think the curve also shows that above ~200 Hz there is not much contribution to the uncertainty of any parameter, and it rolls off very steeply. I've reduced the y-span of the plot to show the details of the curve in the relevant region.
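
A small python sketch of the two relations used in point 2 (the chirp-mass value and starting frequency are just example numbers):

import numpy as np
from scipy.constants import G, c
from scipy.integrate import quad

M_sun = 1.989e30   # kg

def t_merger(f_gw, Mc):
    # t_c = 5/(256 pi^{8/3}) * (c^3/(G Mc))^{5/3} * f_gw^{-8/3}
    return 5.0 / (256 * np.pi**(8/3)) * (c**3 / (G * Mc))**(5/3) * f_gw**(-8/3)

def f_gw_of_t(f0, Mc, t):
    # f_gw(t) = (f0^{-8/3} - (768/15) pi^{8/3} (G Mc/c^3)^{5/3} t)^{-3/8}
    return (f0**(-8/3) - (768/15) * np.pi**(8/3) * (G * Mc / c**3)**(5/3) * t)**(-3/8)

Mc = 1.15 * M_sun                  # ~chirp mass of a 1.3 + 1.35 Msun BNS
tc = t_merger(20.0, Mc)            # coalescence time starting from f_gw = 20 Hz
# stop just short of t_c to avoid the (integrable) divergence of f_gw at merger
phic, _ = quad(lambda t: 2 * np.pi * f_gw_of_t(20.0, Mc, t), 0.0, 0.999 * tc)
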
Quote:

1. In the error propogation equation, it should be \Delta \Theta = -H^{-1} M \Delta \Lambda, instead of the fractional error. 

2. For the astro parameters, in general you would need t_c for the time of coalescence and \phi_c for the phase. See, e.g., https://ui.adsabs.harvard.edu/abs/1994PhRvD..49.2658C/abstract.

3. Fig. 1 looks very nice to me, yet I don't understand Fig. 3... Why would phase or amplitude uncertainties at 30 Hz affect the tidal deformability? The tide should be visible only > 500 Hz.

4. For BBH, we don't measure individual spin well but only their mass-weighted sum, \chi_eff = (m_1*a_1 + m_2*a_2)/(m_1 + m_2). If you treat S1z and S2z as free parameters, your matrix is likely degenerate. Might want to double-check. Also, for a BBH, you don't need to extend the signal much higher than \omega ~ 0.4/M_tot ~ 10^4 Hz * (Ms/M_tot). So if the total mass is ~ 100 Ms, then the highest frequency should be ~ 100 Hz. Above this number there is no signal.

 

  17029   Sun Jul 24 08:56:01 2022   Hang   Update   Calibration   Error propagation to astrophysical parameters from detector calibration uncertainty

Sorry I forgot to put tc & phic in the example. 

I modified astroFisherLib.py to include these parameters. Please note that their meaning is that we don't know when the signal happens and at which phase it merges.

It does not mean the time & phase from a reference frequency to the merger. This part is not free to vary because it is fixed by the intrinsic parameters.  

It might be good to have a quick scan through the Cutler & Flanagan 94 paper to better understand their physical meanings.

 

  10436   Thu Aug 28 11:02:53 2014   Steve   Update   Calibration-Repair   SR785 repair

SN 46,795 of 2003 is back.

  11641   Thu Sep 24 17:06:14 2015   ericq   Update   Calibration-Repair   C1CAL Lockins

Just a quick note for now: I've repopulated C1CAL with a limited set of lockin oscillators/demodulators, informed by the aLIGO common LSC model. Screens are updated too. 

Rather than trying to do the whole magnitude/phase decomposition, it just does the demodulation of the RFPD signals online; everything beyond that is up to the user to do offline.

Briefly testing with PRMI, it seems to work as expected. There is some beating evident from the fact that the MICH and PRCL oscillation frequencies are only 2 Hz apart; the demod low pass is currently at an arbitrary 1 Hz, so it doesn't filter the beat much.

Screens, models, etc. all svn'd.

  12040   Mon Mar 21 14:29:32 2016   Steve   Update   Calibration-Repair   1W Innolight laser repair diagnoses

 

Quote:
Quote:

After adjusting the alignment of the two beams onto the PD, I managed to recover a stronger beatnote of ~ -10dBm. I managed to take some measurements with the PLL locked, and will put up a more detailed post later in the evening. I turned the IMC autolocker off, turned the 11MHz Marconi output off, and closed the PSL shutter for the duration of my work, but have reverted these to their nominal state now. The are a few extra cables running from the PSL table to the area near the IOO rack where I was doing the measurements from, I've left these as is for now in case I need to take some more data later in the evening...I

Innolight 1W 1064 nm, SN 1634, was purchased on 9-18-2006 at CIT. It came to the 40m around 2010.

Its diodes should be replaced, based on its age and performance.

RIN and noise eater bad. I will get a quote on this job.

The Innolight manual's frequency noise plot is the same as the Lightwave's; see elog 11956.

Diagnoses from Glasgow:

“So far we have analyzed the laser. The pump diode is degraded. Next we would replace it with a new diode. We would realign the diode output beam into the laser crystal. We check all the relevant laser parameters over the whole tuning range. Parameters include single direction operation of the ring resonator, single frequency operation, beam profile and others. If one of them is out of spec, then we would take actions accordingly. We would also monitor the output power stability over one night. Then we repackage and ship the laser.”

  12045   Thu Mar 24 07:56:09 2016   Steve   Update   Calibration-Repair   NO Noise Eater for 1W Innolight

The 1W Innolight is NOT getting a Noise Eater, as was decided yesterday at the 40m meeting. Corrected 3-25-2016.

The repair quote, with the noise eater added, is in the 40m wiki.

Quote:

 

Quote:
Quote:

After adjusting the alignment of the two beams onto the PD, I managed to recover a stronger beatnote of ~ -10dBm. I managed to take some measurements with the PLL locked, and will put up a more detailed post later in the evening. I turned the IMC autolocker off, turned the 11MHz Marconi output off, and closed the PSL shutter for the duration of my work, but have reverted these to their nominal state now. The are a few extra cables running from the PSL table to the area near the IOO rack where I was doing the measurements from, I've left these as is for now in case I need to take some more data later in the evening...I

Innolight 1W 1064nm, sn 1634 was purchased in 9-18-2006 at CIT. It came to the 40m around 2010

It's diodes should be replaced, based on it's age and performance.

RIN and noise eater bad. I will get a quote on this job.

The Innolight Manual frequency noise plot is the same as Lightwave' elog 11956

Diagnoses from Glasglow:

“So far we have analyzed the laser. The pump diode is degraded. Next we would replace it with a new diode. We would realign the diode output beam into the laser crystal. We check all the relevant laser parameters over the whole tuning range. Parameters include single direction operation of the ring resonator, single frequency operation, beam profile and others. If one of them is out of spec, then we would take actions accordingly. We would also monitor the output power stability over one night. Then we repackage and ship the laser.”

 

  12070   Mon Apr 11 17:03:41 2016   Steve   Update   Calibration-Repair   1W Innolight repair completed

The laser is back. Test report is in the 40m wiki as New Pump Diode Mephisto 1000

It will go on the PSL table.

  13456   Tue Nov 28 17:27:57 2017   awade   Bureaucracy   Calibration-Repair   SR560 return, still not charging

I brought a bunch of SR560s over for repair from Bridge labs. This unit, picture attached (SN 49698), appears to still not be retaining charge. I’ve brought it back. 

  14759   Mon Jul 15 03:30:47 2019   Kruthi   Update   Calibration-Repair   White paper as a Lambertian scatterer

I made some rough measurements, using the setup I had used for CCD calibration, to get an idea of how good of a Lambertian scatterer the white paper is. Following are the values I got:

Angle (degrees)   Photodiode reading (V)   Ps (W)     BRDF (1/sr)   % error
12                0.864                    2.54E-06   0.334         20.5
24                0.926                    2.72E-06   0.439         19.0
30                1.581                    4.65E-06   0.528         19.0
41                0.94                     2.76E-06   0.473         19.8
49                0.545                    1.60E-06   0.423         22.5
63                0.371                    1.09E-06   0.475         28

Note: All the measurements are just rough ones and are prone to larger errors than estimated.

I also measured the transmittance of the white paper sample being used (it consists of 2 white papers wrapped together). It was around 0.002.
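
For reference, BRDF values like the ones in the table are normally computed as the scattered power divided by the incident power, the collection solid angle, and the cosine of the scattering angle; a minimal sketch under that assumption (the incident power, photodiode aperture, and distance below are placeholders, not the actual setup values):

import numpy as np

def brdf(P_s, P_i, r, a, theta_s_deg):
    # BRDF ~ P_s / (P_i * Omega * cos(theta_s)), with Omega = pi a^2 / r^2 the
    # solid angle of a photodiode of radius a at distance r (assumed geometry)
    omega = np.pi * a**2 / r**2
    return P_s / (P_i * omega * np.cos(np.radians(theta_s_deg)))

# e.g. brdf(P_s=2.54e-6, P_i=1.0, r=0.1, a=1e-3, theta_s_deg=12)   # placeholder numbers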

  14804   Wed Jul 24 04:20:35 2019   Kruthi   Update   Calibration-Repair   MC2 pitch and yaw calibration

Summary:  I calibrated MC2 pitch and yaw offsets to spot position in mm. Here's what I did:

  1. Changed the MC2 pitch and yaw offset values using  ezca.Ezca().write('IOO-MC2_TRANS_PIT_OFFSET', <pitch offset value> ) and ezca.Ezca().write('IOO-MC2_TRANS_YAW_OFFSET', <yaw offset value> )
  2. Waited for ~ 700-800 sec for system to adjust to the assigned values
  3. Took snapshots with the 2 GigEs I had installed - zoomed in and zoomed out. (I'll be using these to make a scatter loss map, verify the calibration results, etc)
  4. Ran the mcassDecenter script, which can be found in /scripts/ASS/MC. This enters the spot position in mm in the specified text file.

Results:  In the pitch/yaw vs pitch_offset/yaw_offset graph attached,

  • intercept_pitch = 6.63 (in mm) ,  slope_pitch = -0.6055 (mm/counts) 
  • intercept_yaw = -4.12 (in mm) ,  slope_yaw = 4.958 (mm/counts) 
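
A trivial python helper applying these fits (the slopes and intercepts are copied from above; everything else is illustrative):

def mc2_spot_mm(offset_cts, slope, intercept):
    # spot position [mm] = slope [mm/cts] * offset [cts] + intercept [mm]
    return slope * offset_cts + intercept

# pitch: mc2_spot_mm(off, slope=-0.6055, intercept=6.63)
# yaw:   mc2_spot_mm(off, slope=4.958, intercept=-4.12)
# offset that centers the spot: off0 = -intercept / slope
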
  15510   Sat Aug 8 07:36:52 2020   Sanika Khadkikar   Configuration   Calibration-Repair   BS Seismometer - Multi-channel calibration

Summary : 

I have been working on analyzing the seismic data obtained from the 3 seismometers present in the lab. I noticed while looking at the combined time series and the gain plots of the 3 seismometers that there is some error in the calibration of the BS seismometer. The EX and the EY seismometers seem to be well-calibrated as opposed to the BS seismometer.

The calibration factors have been determined to be :

BS-X Channel: \small {\color{Blue} 2.030 \pm 0.079 }

BS-Y Channel: \small {\color{Blue} 2.840 \pm 0.177 }

BS-Z Channel: \small {\color{Blue} 1.397 \pm 0.182 }


Details :

The seismometers each have 3 channels, i.e. X, Y, and Z, for measuring the displacements in all 3 directions. The X channels of the three seismometers should more or less be coherent in the absence of any seismic excitation, with the gain amongst all the similar channels being 1. The same goes for the Y and Z channels. After analyzing multiple datasets, it was observed that the values of all three channels of the BS seismometer differed very significantly from the corresponding channels of the EX and EY seismometers, and they were not properly calibrated even in the region where they were found to be coherent.


Method :

Note: All the frequency domain plots that have been calculated are for a sampling rate of 32 Hz. The plots were found to be extremely coherent in a certain frequency range, i.e. ~0.1 Hz to 2 Hz, so this frequency range is used to understand the relative calibration errors. The spread around the function is because of the error caused by coherence values differing from unity and the averages performed for the Welch function. 9 averages have been performed for the following analysis, keeping in mind the needed frequency resolution (~0.01 Hz) and the accuracy of the power calculated at every frequency.

  1. I first analyzed the regions in which the similar channels were found to be coherent, to have a proper gain analysis. The EY seismometer was found to be the most stable one, so it has been used as a reference. I looked at the coherence between similar channels of the 2 seismometers together with the bode plots. A transfer function estimator was used to analyze the relative calibration between all 3 pairs of seismometers. In the given frequency range EX and EY have a gain of 1, so their relative calibration is proper. The relative calibration between the BS and the EY seismometers is not proper, as the resultant gain is not 1. The attached plots show the discrepancies clearly:
  • BS-X & EY-X Transfer Function : Attachment #1
  • BS-Y & EY-Y Transfer Function : Attachment #2

          The gain in the given frequency range is ~3. The phase plot also shows a 180-degree phase as opposed to 0, so a negative sign would also be required in the calibration factor. Thus the calibration factor for the Y channel of the BS seismometer should be around ~3.

  • BS-Z & EY-Z Transfer Function : Attachment #3

The mean value of the gain in the given frequency range is the desired calibration factor, and the error is the mean of the errors for the chosen gain dataset, which is caused by the factors mentioned above.

Note: The standard error envelope plotted in the attached graphs is calculated as follows :

         1. Divide the data into n segments according to the resolution wanted for the Welch averaging to be performed later. 

         2. Calculate PSD for every segment (no averaging).

         3. Calculate the standard error for every value in the data segment by looking at the distribution formed by the n values we obtain by taking that respective value from every segment (see the sketch below).
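
A minimal python sketch of this per-bin standard error estimate (the window/detrending choices are illustrative; the real analysis may differ):

import numpy as np
from scipy import signal

def psd_with_stderr(x, fs, nperseg):
    # Split into non-overlapping segments, compute one periodogram per segment,
    # then report the mean PSD and the standard error of the mean per frequency bin.
    n_seg = len(x) // nperseg
    segs = np.reshape(x[:n_seg * nperseg], (n_seg, nperseg))
    f, p = signal.periodogram(segs, fs=fs, axis=-1)
    return f, p.mean(axis=0), p.std(axis=0, ddof=1) / np.sqrt(n_seg)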

Discussions :

The BS seismometer is a different model from the EX and EY seismometers, which might be a major reason why we need a special calibration for the BS seismometer while EX and EY are fine. The sign flip in the BS-Y channel may cause a lot of errors in future data acquisition. The time series plots in Attachment #4 show an evident DC offset present in the data. All of the information mentioned above indicates that there is some electrical or mechanical defect present in the seismometer, and it may require a reset. Kindly let me know if and when the seismometer is reset so that I can calibrate it again.

  227   Tue Jan 8 15:20:17 2008   Pkp   Update   Cameras   GigE update
[Tobin, Pinkesh]

Finally we got the camera doing something (other than giving out its attributes). The only thing that seems to work so far is a program called AAviewer, which converts the image into an ASCII format and displays it on the screen. If you want to play around with it, log into mafalda (131.215.113.23) via rana.ligo.caltech.edu. Access /cvs/cds/caltech/target/Prosilica/bin-pc/x86/ and there should be a few programs in there, one of which is AAviewer, which requires you to get an IP address (which is 131.215.113.103) for the camera right now. (You can also get the IP information via the ListCameras program). The camera is physically in the 40m near the network rack.

Other programs don't seem to be working, and it's probably due to the network/packet size issues. Since linux2 can change its packet size to a higher number, I will get it to compile on linux2 for now and then give it a shot.
  234   Thu Jan 10 13:45:52 2008   Pkp   Update   Cameras   GLIBC Error
So, I have tried to compile the camera files which are in /cvs/cds/caltech/target/Prosilica/examples for the past 2 days now and have been unable to get rid of the following error (specifically for ListCameras.cpp, as it doesn't require any other libraries, which would otherwise unnecessarily complicate things).

../../bin-pc/x86/libPvAPI.so: undefined reference to `__stack_chk_fail@GLIBC_2.4'
collect2: ld returned 1 exit status
make: *** [sample] Error 1

I used to get this error on mafalda too, but I had fixed it by installing the latest version of the glibc libraries. In spite of doing so on linux2, the error still persists. I suspect it has something to do with it being an FC3 machine. My own laptop, which also runs Ubuntu, works fine too. The problem with these Ubuntu machines is that they don't let me set the packet size to 9 kB, which is required by the camera. Linux2 does.

If anyone has any idea how to resolve this issue, please let me know.

Thanks
Pinkesh.
  236   Fri Jan 11 17:01:51 2008   pkp   Update   Cameras   GigE again
So, here I detail all the efforts on the GigE so far

(1) The GigE camera requires a minimum packet size of 9 kB, which is not available on mafalda or on my laptop (both of which run Ubuntu, and the camera programs compile there). The programs which require smaller sizes work perfectly fine on these machines. I tried to statically compile the files on these machines so that I could then port them to the other machines, but that fails because the static libraries given by the company don't work.

(2) On Linux2, which lets me set a packet size as high as 9 kB, it doesn't compile because of a GLIBC error. I tried updating glibc and it tells me that the existing version is the latest (which it clearly is not). So I tried to uninstall glibc and reinstall it, but rpm won't let me uninstall glibc, since there are a lot of dependencies. A dead end in essence.

Steps being taken

(1) Locally installing the whole library suite on linux2. Essentially install another version of gcc and g++ and see if that helps.
(2) If this doesn't work, then the only course of action I can take is to cannibalize linux2's GigE card and put it on mafalda. (I need permission for this.)

Once again any suggestions welcome.
  245   Thu Jan 17 15:11:13 2008   josephb   Update   Cameras   Working on Mafalda
1) I can statically compile the ListCamera code (which basically just goes out and finds what cameras are connected to the network) on Mafalda and use that compiled code to run on Linux2 without a problem. I simply needed to add explicit links to libpthread.a and librt.a.
(i.e. -Bstatic -L /usr/lib/ -lpthread -Bstatic -L /usr/lib -lrt)

With appropriate static libraries, it should be possible to port this code to other linux machines even if we can't get it to compile on the target machine itself.

2) I've modified the Snap.cpp file so that it uses a packet size of 1000 or less. This simply involves setting the "PacketSize" attribute with the built-in functions they provide in their library. After un-commenting some lines in that code, I was able to save tiff-type images from the camera of up to 400x240 pixels on Mafalda. The claimed maximum resolution for the camera is 752x480, but it doesn't seem to work with the current setup. The max number of pixels seems to be about 100 times the packet size, i.e. a packet size of 1000 will allow up to 400x240 (96,000) but not 500x240 (120,000). Not sure if this is an issue just with the Snap code or the general libraries used.

3) Will be working towards getting video running over the next day or so.
  266   Fri Jan 25 11:38:16 2008   josephb   Configuration   Cameras   Working GigE video on Linux - sort of
1) I have been able to compile the SampleViewer program which can stream the video from the Prosilica 750C camera. This was accomplished on my 64-bit laptop running Ubuntu, after about 3 hours of explicitly converting strings to wxStrings and back again within the C++ code. (There was probably an easier way to simply overload the functions that were being called, but I wasn't sure how to go about doing so.) By connecting it to the CDS network, I was able to immediately detect the camera and display the images.

Unfortunately, I have not yet been able to get it to compile on Mafalda with the x86 architecture. This may be due to the fact that it has wxWidgets version 2.8.7 while my laptop has 2.8.4. Certainly the failure at compile time looks different from the errors earlier, and seems to be within the wxWidget code rather than the SampleViewer code. I may simply need to uninstall 2.8.7 and install 2.8.4 of wxWidgets.

The modified code that will compile on my machine has been copied to /cvs/cds/caltech/target/Prosilica/examples/SampleViewer2b.

2) The Snap program (under /cvs/cds/caltech/target/Prosilica/examples/Snap) will now also take full resolution images even on Mafalda. This was achieved by reducing the packet size to 1000 and increasing the timeout to 400 ms, from the original 100 ms. Apparently, it takes on the order of 1 ms per packet as far as I can tell, so full resolution at 752x480 required something of order 360 packets.

To Do:
1) Get sample viewer to compile on Mafalda, and then statically compile it so it can be run from any Linux based machine.
2) Get a user friendly version of Snap up and running, statically compiled, with options for a continuous loop every X seconds and also to set desired parameters (such as height, width, file name to save to, save format, etc).
3) Figure out data analysis with the images in Matlab and an after the fact image viewer.

Attached is an example .tiff image from the Snap program.
  267   Fri Jan 25 13:36:13 2008   josephb   Configuration   Cameras   Working GigE video on Mafalda
Finally got the GigE camera sample viewer video running on Mafalda by updating to the latest API (version 1.16 from Dec 16, 2007) from Prosilica and then using the modified Sample Viewer code I had written. The API version previously in cvs was 1.14.

It can currently be run by ssh -X into Mafalda and going to /cvs/cds/caltech/target/Prosilica/bin-pc/x86 and running the SampleViewer executable found there.
  289   Thu Jan 31 16:53:41 2008   josephb   Configuration   Cameras   Improving camera user interface
There's a new and improved version of the Snap program that people are free to play with at the moment.

Located in /cvs/cds/caltech/target/Prosilica/40mCode/

The program Snap now has a -h or --help option which describes some basic command line arguments. The height (in pixels), width (in pixels), exposure time (in micro seconds), file name to be saved to (in .tiff format), and packet size can all be set. The format type (i.e. pixel format such as Mono8 or Mono16) doesn't work at the moment.

At the moment, it only runs on mafalda.

Currently in the process of adding a loop option which will take images every X seconds, saving them to a given file name and then appending the time of capture to the file name.

After that need to add the ability to identify and choose the camera you want (as opposed to the first one it finds).

Lastly, I've been finding on occasion that the frame fails to save. However, if you try again a few seconds later with the exact same parameters, it generally does save the second time. Not sure what's causing this, whether on the camera or network side of things.

I've attached two images, the first at default exposure time (15,000 microseconds) and the second at 1/5th that time (3,000 microseconds).
The command line used was "./Snap -E 3000 -F 'Camera_exp_3000.tiff' "
  292   Fri Feb 1 15:04:54 2008   josephb   Configuration   Cameras   Snap with looping functionality available
New GiGE camera code is available in /cvs/cds/caltech/target/Prosilica/40mCode/. Currently only runs on Mafalda.

Snap has expanded functionality to continuously loop infinitely or for a maximum number of images set by the user. File names generated with the loop option have the current Unix time and .tiff appended to them. So -f './test' will produce tiff files with format "test1234567.tiff". The -l option sets the number of seconds between images.

"./Snap -l 5 -i -f './test' " will cause the program to infinitely loop, saving images every 5 seconds. Using "-m 10" instead of "-i" will take a series of 10 images every 5 seconds (so taking a total of 50 seconds to run).

It also now defaults to 16-bit (in reality only 10 bit) output instead of 8 bit output. You can select between the two with -F 'Mono8' or -F 'Mono16'.

Use --help for a full list of options.

Note that if you ctrl-c out of the loop, you may need to run ./ResetCamera 131.215.113.104 (or whatever the IP is - use ./ListCameras to determine IP if necessary) in order to reset the camera because it doesn't close out elegantly at the moment.
  297   Tue Feb 5 15:32:29 2008   josephb   Configuration   Cameras   PMC and the GigE Camera
The PMC transmission video camera has been removed and replaced with the GigE GC750 camera for the moment.

An ND4.0 filter has been added in the path to that camera to reduce saturation for the moment.

The old camera has been placed on the elevated section inside the enclosure, and the cable for it is still on the table proper.

The Gige camera is currently running the Snap code on Linux3 with the following command line:

./Snap -E 2000 -l 60 -m 1440 -f './pmc_trans/pmc_trans'

So it's going to be taking tiff images every minute for the next 24 hours into the cvs/cds/caltech/target/Prosilica/40mCode/pmc_trans/ directory.

Attached is an example image with exposure set to 2000, loaded into matlab and plotted with the surf command. 2500 microseconds looked like it was still saturating, but this seems to be a good level (with a max of 58560 out of 65535).
  300   Wed Feb 6 16:50:47 2008   josephb   Configuration   Cameras   Regions of Interest and max frame rate
The Snap code has once again been modified such that setting the -l option to 0 will take images as fast as possible. Also, the -H and -W options set the height and width, while in principle the -Y and -X options set the position in pixels of the top edge and left edge of the image. It also seems possible to set these values such that the saved image wraps around; I'll be adding some command-line checking in the near future so that the user can't do this.

Doing some timed runs, using a -H 350 and -W 350 (as opposed to the full 752x480), 100 images can be saved in roughly 8 seconds, and 1000 images took about 73 seconds. This corresponds to a frame rate of about 12-13 frames per second (or a 12-13 Hz display). The size of this area was sufficient to cover the current PMC transmission beam.

The command line I used was

time ./Snap -l 0 -m 1000 -f 'test' -W 350 -H 350 -Y 50 -X 350 -E 2000

Interestingly enough, there would be bursts of failed frame saves if I executed commands in another terminal (such as using ls on the directory where the files were being stored).

As always, this code is available in /cvs/cds/caltech/target/Prosilica/40mCode/.
  301   Wed Feb 6 19:39:11 2008   rana   Configuration   Cameras   Regions of Interest and max frame rate
We really need to look into making the 40m CDS network have an all GigE backbone so that we can have cooler cameras as well as collect multiple datastreams...
  378   Fri Mar 14 12:06:29 2008   josephb   Configuration   Cameras   GC750 looking at ETMX while locked
The GC750 (CMOS) is currently looking at the front of ETMX. Unfortunately, it's being routed through a 10 Mbit connection (which I will be purchasing a replacement for today), so getting it to send images to Mafalda/Linux 2 or 3 isn't working well, but by using a local gigabit switch and a laptop I can get sufficient speed for full images with the sample viewer.

The attached image is at the full 752x480 resolution with a 10,000 microsecond exposure and the X-arm locked, although it looks like I still need to work on the focusing. Will be switching the GC750 with the GC650 (CCD) later today and comparing the resulting images.
  379   Fri Mar 14 14:59:51 2008   josephb   Configuration   Cameras   Comparison between GC650 (CCD) and GC750 (CMOS) looking at ETMX
Attached are images taken of ETMX while locked.

The first two are 300,000 microsecond exposure time, with approximately the same focusing/zoom. (The 750 is slightly more zoomed in than the 650 in these images.) The second two are 30,000 microsecond exposures.

The CMOS appears to be more sensitive to the reflected 1064 nm light (resulting in brighter images for the same exposure time). This may make a difference in applications where images need to be taken quickly and repeatedly.

Both seem to be resolving individual specks on the optic reasonably well.

The next test is to place both cameras on a Gaussian beam (in a couple of different modes, say 00, 11, and so forth), probably using the PMC.
  434   Tue Apr 22 08:34:22 2008 josephbConfigurationCamerasCurrent Network Diagram
The attached network diagram has also been added to the 40m Wiki at http://lhocds.ligo-wa.caltech.edu:8000/40m/Image_Processing_with_GigE_Cameras
  471   Thu May 8 16:40:36 2008 josephbConfigurationCamerasGige Camera currently on PSL table
Andrey and I were working on the PSL table today, sending a pickoff of a pickoff of the main beam (a microscope slide was added to pick off ~4% of the original pickoff) to the GC750 GigE camera.

Before we left, we scanned the area with the beam scan and didn't see any new stray beams, and nothing in any useful beam path should have changed. We also strung a Cat 6 cable from the control room switch out to the PSL table through the cable trays, and then above the PSL table.

Currently it's not as well aligned as it could be, and it also requires a very low exposure setting, -E 50 or so, to avoid saturation.
  481   Thu May 15 16:24:18 2008 josephbSummaryCameras 
The GC750 camera is currently looking at a very small pickoff of the PSL output (transmission of a Y1-1037-45-S mirror). The plan is to take images tomorrow with it and the GC650 from the same spot and do comparisons.

For those interested, the camera can be run with two programs from mafalda. Use ssh -X mafalda to log in, so that the live stream from the SampleViewer code can display. The programs can be found in:

/cvs/cds/caltech/target/Prosilica/40mCode/Snap

and

/cvs/cds/caltech/target/Prosilica/bin-pc/x86/SampleViewer

Type Snap --help for a list of options for that program. Click the circle-looking button in SampleViewer to start the live stream. Note that only one of the two programs can be running at a time, and at the moment the only way to change settings (such as the exposure length) is with Snap.
  482   Fri May 16 14:38:50 2008 josephbSummaryCamerasTwo cameras setup
I've changed yesterday's pickoff setup for the GigE cameras to include a 33% beamsplitter (the first one I could find). The reflection goes to the GC650 (CCD) while the transmission goes to the GC750 (CMOS). This means the CMOS camera has roughly twice the incident light of the GC650, which should be kept in mind in all comparisons. The distances from the beamsplitter are approximately the same for both cameras, but more accurate positioning might be useful.

It's very easy to get the GC650 camera into a bad state where you need to go out and cycle the power (simply unplug and re-plug the power supply either at the camera or at the outlet). If the ListCamera program doesn't see it, this is probably necessary.

Andrey added at 6.30PM: Actually the 650 camera keeps crashing constantly. Every time I attempt to capture an image, the camera fails.
  506   Fri May 30 12:03:08 2008 josephb, AndreyConfigurationCamerasHead to head comparison of cameras
Andrey and I (Joseph B.) have examined the output of the GC650 (CCD) and GC750 (CMOS) Prosilica cameras. We did several live motion tests (e.g. rotating the turning mirror, moving and rotating the camera, etc.) and also used a microscope slide to try to eliminate back reflections and interference.

Both the GC650 and GC750 produce dark lines in the images, some of which look parallel, while others are in much stranger shapes, such as circles and arcs.

When we physically move the GC750 camera, the spot moves around but the dark lines appear to be fixed to the camera itself and remain in the same location on the detector; i.e. coming back to the same spot shows the same circle. In reasonably well-behaved sections these lines are roughly 10% dips in power, and could in principle be subtracted out (see the sketch below). It's possible that the camera was damaged by too much incident light in the past, although going back to the earlier pmc_trans images, similar lines are already visible there.
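Since the lines stay fixed on the sensor, a flat-field style correction might do it. A rough Matlab sketch of the idea (the file names are hypothetical and this has not been tried on the real data):

ref = double(imread('gc750_flat.tiff'));                   % reference frame with smooth, unsaturated illumination (hypothetical file)
smooth = imfilter(ref, fspecial('gaussian', 51, 10), 'replicate');
pattern = ref ./ smooth;                                   % fixed pattern: the ~10% dips relative to a smoothed copy
img = double(imread('gc750_etmx.tiff'));                   % frame to be corrected (hypothetical file)
corrected = img ./ pattern;                                % divide out the dark lines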

Moving the GC650 camera physically seems to change the position of the lines (if one also rotates the turning mirror to get back to the same spot on the CCD). A slight change in angle appears to have a large effect on these dark bands, which can be either thin or very wide, bordering on the spot size. My guess (as the vendor suggested) is that the lines are produced by light interacting with the electronics behind the surface layer rather than by a surface defect. Placing a microscope slide between the turning mirror and the GC650, we were able to produce new fringes, but this didn't affect the underlying ones.

Placing a microscope slide between the last turning mirror and the GC750 does not affect its dark lines (although it does seem to add some new ones), nor does turning the final turning mirror, so in this case the lines seem unlikely to be caused by back reflection.

So it seems the CMOS may be more consistent, although we need to determine whether the current line problems are due to exposure to too much light at some point in the past (i.e. I broke it) or whether the cameras come that way from the factory.

Attached are the results of processing the images from our two cameras using Andrey's new Matlab script.
  511   Mon Jun 2 12:20:35 2008 josephbBureaucracyCamerasBeam scan has moved
The beamscan has been moved from the Rana lab back over to the 40m, to be used to calibrate the Prosilica cameras.
  512   Tue Jun 3 02:15:29 2008 AndreySummaryCamerasFitting results

There has been a lot of work recently on processing the images captured by the GC-650 and GC-750 cameras.

At the end of the week of May 30, Joseph and I (Andrey) installed the two cameras to capture images of the pick-off of the main beam on the PSL optical table. The cameras are located in the picked-off beam going towards the "PSL position QPD", after the 33/66 beamsplitter (33% reflection, 66% transmission).

Initially (on May 30) the GC-650 camera was taking images of the reflected beam, while the GC-750 camera was taking images of the transmitted beam. On Monday June 2 we switched the positions of the cameras, so the GC-650 is now in the transmitted beam path and the GC-750 in the reflected beam path.

Over the weekend I (Andrey Rodionov) succeeded in writing a Matlab program that performs a two-dimensional Gaussian fit of the captured images, and I used that program to fit the images from the cameras.

The program fits the camera data with a two-dimensional Gaussian surface:

Z = A * exp[ - 2 * (X - X_Shift)^2 / (Waist_X)^2 ] * exp[ - 2 * (Y - Y_Shift)^2 / (Waist_Y)^2 ] + CONST_Shift,

where A, X_Shift, Waist_X, Y_Shift, Waist_Y, and CONST_Shift are the 6 parameters of the fit.

Attached are PDF files showing the results: the images taken with our cameras, the two-dimensional Gaussian fits to these images, and the residual surfaces. The residuals are the differences between the measured beam profile and the fit. In the normalized version of the residual graph, the residuals are divided by the fit amplitude A, the factor in front of the exponentials. A sketch of such a fit is given below.
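For illustration, here is a Matlab sketch of such a fit (this is not Andrey's actual script; the file name and starting guesses are made up):

img = double(imread('gc750_transmitted.tiff'));            % captured beam image (hypothetical file)
[X, Y] = meshgrid(1:size(img,2), 1:size(img,1));           % pixel coordinate grids
% p = [A, X_Shift, Waist_X, Y_Shift, Waist_Y, CONST_Shift]
model = @(p) p(1) .* exp(-2*(X - p(2)).^2 ./ p(3)^2) ...
                  .* exp(-2*(Y - p(4)).^2 ./ p(5)^2) + p(6);
p0 = [max(img(:)), size(img,2)/2, 100, size(img,1)/2, 100, min(img(:))];
pfit = fminsearch(@(p) sum(sum((model(p) - img).^2)), p0); % least-squares fit of the 6 parameters
residual = img - model(pfit);                              % residual surface
residual_norm = residual / pfit(1);                        % residuals normalized by the amplitude A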
  515   Tue Jun 3 12:33:36 2008 AndreyUpdateCamerasAndrey, Josephb

Continuing our work with cameras,

1) we removed both cameras from their places on Monday afternoon and took beam scans with the beam-scan equipment (see elog entry 511) from the Bridge building,

2) and on Tuesday morning we put the GC-750 camera back into the transmitted beam path and the GC-650 camera into the reflected beam path. We plan to compare the images from the "reflection camera" for several different tilt angles of the camera.