ID   Date   Author   Type   Category   Subject
  14746   Wed Jul 10 22:32:38 2019   Milind   Update   Cameras   Convolutional neural networks for beam tracking

I trained a bunch (around 25 or so, to tune hyperparameters) of networks today. They were all CNNs. They all produced garbage. I also looked at LSTM networks with CNN encoders (see this very useful link) and gave some thought to what kind of architecture we want to use and how to go about programming it (in Keras; I will use TensorFlow if I feel I need more control). I will code it up tomorrow after some thought and discussion. I am not sure if abandoning CNNs is the right thing to do or if I should continue probing this with more architectures and tuning attempts. Any thoughts?

Right now, after speaking to Stuart (ldas_admin) I've decided on coding up the LSTM thing and then running that on one machine while probing the CNN thing on another.

 


Update on 10 July, 2019: I'm attaching all the results of training here in case anyone is interested in the future.

Quote:

I received access today. After some incredible hassle, I was able to set up my repository and code on the remote system. Following this, Gautam wrote to Gabriele to ask him which GPUs to use and whether there was a previously set up environment I could use directly. Gabriele suggested that I use pcdev2 / pcdev3 / pcdev11 as they have good GPUs. He also said that I could use source ~gabriele.vajente/virtualenv/bin/activate to use a virtualenv with tensorflow, numpy etc. preinstalled. However, I could not get that working, so I created my own virtual environment with the necessary tensorflow, keras, scipy, numpy etc. libraries at suitable versions. On ssh-ing into the cluster, it can be activated using source /home/millind.vaddiraju/beamtrack/bin/activate. How do I know everything works? Well, I trained a network on it, with the new data. Attached (see Attachment #1) is the prediction data for completely new test data. Yeah, it's not great, but I got to observe the time it takes for the network to train for 50 epochs:

  1. On pcdev5 CPU: one epoch took ~1500s which is roughly 25 minutes (see Attachment #2). Gautam suggested that I try to train my networks on Optimus. I think this evidence should be sufficient to decide against that idea.
  2. On my GTX 1060: one epoch took ~30 s, which works out to about 25 minutes to train a network for 50 epochs.
  3. On pcdev11 GPU (Titan X I think): each epoch took ~16s which is a far more reasonable time.

Therefore, I will carry out all training only on this machine from now on.

 


Note to self:

Steps to repeat what you did are:

  1. ssh in to the cluster using ssh albert.einstein@ssh.ligo.org as described here.
  2. activate the virtualenv as described above
  3. navigate to the code and run it.
Quote:

I attempted to train a bunch of networks on the new data to test if the code was alright, but realised quickly that training on my local machine is not feasible at all, as training for 10 epochs took roughly 6 minutes. Therefore, I have placed a request for access to the cluster and am waiting for a reply. I will now set up a bunch of experiments to tune hyperparameters for this data and see what the results are.

  14757   Sun Jul 14 00:24:29 2019   Kruthi   Update   Cameras   CCD Calibration

On Friday, I took images for different power outputs of LED. I calculated the calibration factor as explained in my previous elog (plots attached).

Vcc (V) | Photodiode reading (V) | Power incident on photodiode (W) | Power incident on GigE (W) | Slope (counts/µs) | Uncertainty in slope (counts/µs) | CF (W-sec/counts)
16 | 0.784 | 2.31E-06 | 3.89E-07 | 180.4029 | 1.02882 | 2.16E-15
18 | 0.854 | 2.51E-06 | 4.24E-07 | 207.7314 | 0.7656 | 2.04E-15
20 | 0.92 | 2.71E-06 | 4.57E-07 | 209.8902 | 1.358 | 2.18E-15
22 | 0.969 | 2.85E-06 | 4.81E-07 | 222.3862 | 1.456 | 2.16E-15
25 | 1.026 | 3.02E-06 | 5.09E-07 | 235.2349 | 1.53118 | 2.17E-15
Average CF: 2.14E-15 W-sec/counts

To estimate the uncertainty, I assumed an error of at most 20 mV in the photodiode reading (due to stray light or the difference in orientation of the GigE and the photodiode). Combining this with the uncertainty in the slope from the linear fit, I expect an uncertainty of at most 4%. Note: I haven't accounted for the error in the responsivity value of the photodiode.

GigE area: 10.36 sq. mm
PDA area: 61.364 sq. mm
Responsivity: 0.34 A/W
Transimpedance gain (at gain = 20 dB): 10^6 V/A +/- 0.1%
Pixel format used: Mono 8-bit

Johannes had reported a CF of 0.0858E-15 W-sec/counts for 12-bit images, measured with a laser source. That value and the one I got differ by a factor of 25. The difference in pixel formats and the coherence of the light used might be possible reasons.
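
For reference, here is a minimal sketch of the calibration-factor arithmetic described above, using the values from the first table row; the variable names are mine.

    # Sanity check of the calibration-factor arithmetic, using the first table row (Vcc = 16 V).
    # Assumes the transimpedance gain of 1e6 V/A and responsivity of 0.34 A/W listed above.
    responsivity = 0.34          # A/W
    transimpedance = 1e6         # V/A
    area_gige = 10.36            # mm^2
    area_pda = 61.364            # mm^2

    v_pd = 0.784                 # photodiode reading (V)
    slope = 180.4029             # counts/us from the linear fit

    p_pd = v_pd / (transimpedance * responsivity)   # power on the photodiode (W), ~2.31e-6
    p_gige = p_pd * (area_gige / area_pda)          # power on the GigE (W), ~3.89e-7
    cf = p_gige / (slope * 1e6)                     # W*s/count, ~2.16e-15 (slope converted to counts/s)
    print(p_pd, p_gige, cf)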

  14760   Mon Jul 15 14:09:07 2019   Milind   Update   Cameras   CNN LSTM for beam tracking

I've set up a network with a CNN encoder (front end) feeding into a single LSTM cell followed by the output layer (see Attachment #1). The network requires significantly more memory than the previous ones; one epoch of training takes around 30 s. Attached are the predicted yaw motion and its FFT. The FFT looks rather curious. I still haven't done any tuning, so these are only preliminary results.
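
For concreteness, below is a minimal Keras sketch of the kind of architecture described above (a small per-frame CNN encoder feeding a single LSTM layer and a two-unit output). The layer sizes, frame count, and image size are placeholders, not the actual configuration used.

    # Minimal CNN-encoder + LSTM sketch in Keras; layer sizes and input shape are illustrative only.
    from tensorflow.keras import layers, models

    n_frames, h, w = 10, 128, 128          # assumed input: a short sequence of grayscale frames
    encoder = models.Sequential([
        layers.Conv2D(8, 3, activation='relu', input_shape=(h, w, 1)),
        layers.MaxPooling2D(4),
        layers.Conv2D(16, 3, activation='relu'),
        layers.MaxPooling2D(4),
        layers.Flatten(),
    ])

    model = models.Sequential([
        layers.TimeDistributed(encoder, input_shape=(n_frames, h, w, 1)),  # CNN applied to each frame
        layers.LSTM(32),                   # single LSTM layer over the frame sequence
        layers.Dense(2),                   # predicted (pitch, yaw) spot position
    ])
    model.compile(optimizer='adam', loss='mse')
    model.summary()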

Quote:

Rana also suggested I try LSTMs today. I'll maybe code it up tomorrow. What I have in mind: a conv-layer encoder, a flatten layer, followed by an LSTM layer (why not plain RNNs? Well, LSTMs handle vanishing gradients, so why deal with the hassle).

Well, what about the previous conv nets?

What I did:

  1. Extensive tuning - of learning rate, batch size, dropout ratio, input size using a grid search
  2. Trained each network for 75 epochs and obtained weights, predicted motion and corresponding FFT, error etc.

What I observed:

  1. Loss curves look okay, validation loss isn't going up, so I don't think overfitting is the issue
  2. Training for over (even) 75 epochs seems to be pointless.

What I think is going wrong:

  1. Input size- relatively large input size: 350 x 350. Here, the input image size seems to be 128 x 128.
  2. Inadequate pre-processing.
    1. I have not applied any filters/blurs etc. to the frames.
    2. I have also not tried dimensionality reduction techniques such as PCA

What I will try now:

  1. Collect new data: with smaller amplitudes and different frequencies
  2. Tune the LSTM network for the data I have
  3. Try new CNN architectures with more aggressive max pooling and fewer parameters
  4. Ensembling the models (see this and this). Right now, I have multiple models trained either with same architecture and different hyperparameters or with different architectures. As a first pass, I intend to average the predictions of all the models and see if that improves performance.
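
A minimal sketch of the prediction-averaging ensemble mentioned in the last item above, assuming the trained Keras models have been saved to disk (the file names are hypothetical):

    # Average the predictions of several trained Keras models (file names are hypothetical).
    import numpy as np
    from tensorflow.keras.models import load_model

    model_files = ['cnn_a.h5', 'cnn_b.h5', 'cnn_c.h5']
    ensemble = [load_model(f) for f in model_files]

    def ensemble_predict(x):
        """Mean of the per-model predictions for a batch of input volumes x."""
        preds = np.stack([m.predict(x) for m in ensemble], axis=0)
        return preds.mean(axis=0)
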
  14768   Wed Jul 17 20:12:26 2019   Kruthi   Update   Cameras   Another GigE in place of analog camera

I've taken the MC2 analog camera down and put another GigE (unit 151) in its place. This is just temporary and I'll put the analog camera back once I finish the MC2 loss map calibration. I'm using a 25mm focal length camera lens with it and it gives a view of MC2 similar to the analog camera one. But I don't think it is completely focused yet (pictures attached).

...more to follow

gautam - Attachment #3 is my (sad) attempt at finding some point scatterers - Kruthi is going to play around with photUtils to figure out the average size of some point scatterers.

  14774   Thu Jul 18 22:03:00 2019   Kruthi   Update   Cameras   MC2 and cameras

[Kruthi, Yehonathan, Gautam]

This evening, Yehonathan and I aligned the MC2 cameras. As of now there are 2 GigEs in the MC2 enclosure. For the temporary GigE (which is in the analog camera's place), we are using an ethernet cable connection from the Netgear switch in 1x6. MC2 was misaligned and the autolocker wasn't able to lock the mode cleaner, so Gautam disabled the autolocker and manually changed the settings; the autolocker was able to take over eventually.

  14779   Fri Jul 19 16:47:06 2019   Milind   Update   Cameras   CNNs for beam tracking || Analysis of results

I did a whole lot of hyperparameter tuning for convolutional networks (without 3d convolution). Of the results I obtained, I am attaching the best results below.

Define "best"?

The lower the power of the error signal (difference between the true and predicted X and Y positions), essentially mse, on the test data, the better the performance of the model. Of the trained models I had, I chose the one with the lowest mse.

Attached results:

  1. Attachment 1: Training configuration
  2. Attachment 2: Predicted motion along the Y direction for the test data
  3. Attachment 3: Predicted motion along the Y direction for the training data
  4. Attachment 4: Learning curves
  5. Attachment 5: Error in test predictions
  6. Attachment 6: Video of image histogram plots
  7. Attachment 7: Plot of percentage of pixels with intensity over 240 with time

(Note: Attachment 6 and 7 present information regarding a fraction of the data. However, the behaviour remains the same for the rest of the data.)

Observations and analysis:

  1. Data:
    1. From attachments 2, 3, 5: maximum deviation from the true labels occurs at the peaks of the applied dither/motion. Possible reasons:
      1. Stupid Cropping? I checked (by watching the video of cropped frames, i.e visually) to ensure that the entire motion of the beam spot is captured. Therefore, this is not the case.
      2. Intensity variation: The intensity (brightness?) of the beam spot varies (decreases) significantly at the maximum displacement. This, I think, is creating a skewed dataset with very few frames with low intensity pixels. Therefore, I think it makes sense to even this out and get more data points (frames) with similar (lower) pixel intensities. I can think of two ways of doing this:
        1. Collect more data with lower amplitude of sinusoidal dither. I used an amplitude of 80 cts to dither the optic. Perhaps something like 40 is more feasible. This will ensure the dataset isn't too skewed.
        2. Increase exposure time. I used an exposure time of 500us to capture data. Perhaps a higher exposure time will ensure that the image of the beam spot doesn't fade out at the peak of motion.
    2. From attachment 5, saturated images?: We would like to gun for a maximum deviation of 10% (0.1 in this case) from the true values in the predicted labels (tbh, I'm not sure why this is a good baseline; I ought to give that some thought. I think the maximum deviation of the OpenCV thing I did at the start might also be a good baseline?). Clearly, we're not meeting that. One possible reason is that the video might be saturated (too many pixels at 255, bleeding into surrounding pixels), leading to loss of information. I set the exposure time to 500 us precisely to avoid this. However, I also created videos of the image histograms of the frames to make sure the frames weren't saturated (is there some better standard way of doing this?). From attachments 6 and 7, I think it's evident that saturation is not an issue. Consequently, I think increasing the exposure time and collecting more data is a good idea.
  2. The network:
    1. From attachment 4: Training post 25 epochs seems to produce overfitting, though it doesn't seem too terrible (from attachments 2 and 3). The network is still learning after 75 epochs, so I'll tinker with the learning rate, dropout and maybe put in annealing.
    2. I don't think there is a need to change the architecture yet. The model seems to generalize okay (validation error is close to training error), therefore I think it'll be a good idea to increase dropout for the fully connected layers and train for longer / with a higher learning rate.

 


 

P.S. I will also try the 2D convolution followed by the 1D convolution thing now. 

P.P.S. Gabriele suggested that I try average pooling instead of max pooling as this is a regression task. I'll give that a shot.

 

  14786   Sat Jul 20 12:16:39 2019   gautam   Update   Cameras   CNNs for beam tracking || Analysis of results
  1. Make the MSE a subplot on the same axes as the time series for easier interpretation.
  2. Describe the training dataset - what is the pk-to-pk amplitude of the beam spot motion you are using for training in physical units? What was the frequency of the dither applied? Is this using a zoomed-in view of the spot or a zoomed out one with the OSEMs in it? If the excursion is large, and you are moving the spot by dithering MC2, the WFS servos may not have time to adjust the cavity alignment to the nominal maximum value.
  3. What is the minimum detectable motion given the CCD resolution?
  4. Please upload a cartoon of the network architecture for easier visualization. What is the algorithm we are using? Is the approach the same as using the bright point scatterers to signal the beam spot motion that Gabriele demonstrated successfully?
  5. What is the significance of Attachment #6? I think the x-axis of that plot should also be log-scaled.
  6. Is the performance of the network still good if you feed it a time-shuffled test dataset? i.e. you have (pictures,Xcoord,Ycoord) tuples, which don't necessarily have to be given to the network in a time-ordered sequence in order to predict the beam spot position (unless the network is somehow using the past beam position to predict the new beam position).
  7. Is the time-sync problem Koji raised limiting this approach?
  14787   Sat Jul 20 14:43:45 2019   Milind   Update   Cameras   CNNs for beam tracking || Analysis of results

<Adding details>

See Attachment #2.

Quote:

Make the MSE a subplot on the same axes as the time series for easier interpretation.

Training dataset:

  1. Peak-to-peak amplitude in physical units: ?
  2. Dither frequency: 0.2 Hz
  3. Video data: zoomed-in video of the beam spot obtained from the GigE camera 198.162.113.153 at 500 us exposure time. Each frame has a resolution of 640 x 480, which I have cropped to 350 x 350. Attachment #1 is one such frame.
  4. Yes, therefore I am going to obtain video at lower amplitudes. I think that should help me avoid the problem of not-nominal-maximum value?
  5. Other details of the training dataset:
    1. Dataset created from four videos of duration ~30, 60, 60, 60 s at 25 FPS.
    2. 4032 training data points
      1. Input (one example / data point): 10 successive frames stacked to form a 3D volume of shape 350 x 350 x 10 (see the sketch after this list)
      2. Output (2 dimensional vector): QPD readings (C1:IOO-MC_TRANS_PIT_ERR, C1:IOO-MC_TRANS_YAW_ERR)
    3. Pre-processing: none
    4. Shuffling: Dataset was shuffled before every epoch
    5. No thresholding: Binary images are gonna be of little use if the expectation is that the network will learn to interpret intensity variations of pixels.
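
For concreteness, a minimal sketch of how such (10-frame volume, QPD reading) pairs could be assembled; the function and array names are mine, not the actual pipeline:

    # Sketch: build (10-frame volume, QPD reading) training pairs from a cropped video.
    # frames: array of shape (N, 350, 350); qpd: array of shape (N, 2) holding
    # (C1:IOO-MC_TRANS_PIT_ERR, C1:IOO-MC_TRANS_YAW_ERR) sampled at the frame times.
    import numpy as np

    def make_dataset(frames, qpd, depth=10):
        x, y = [], []
        for i in range(len(frames) - depth + 1):
            x.append(np.stack(frames[i:i + depth], axis=-1))   # shape (350, 350, depth)
            y.append(qpd[i + depth - 1])                       # label: QPD reading at the last frame
        return np.array(x), np.array(y)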

Do I need to provide any more details here?

Quote:

Describe the training dataset - what is the pk-to-pk amplitude of the beam spot motion you are using for training in physical units? What was the frequency of the dither applied? Is this using a zoomed-in view of the spot or a zoomed out one with the OSEMs in it? If the excursion is large, and you are moving the spot by dithering MC2, the WFS servos may not have time to adjust the cavity alignment to the nominal maximum value.

?

Quote:

What is the minimum detectable motion given the CCD resolution?

see attachment #4.

Quote:
  1. Please upload a cartoon of the network architecture for easier visualization. What is the algorithm we are using? Is the approach the same as using the bright point scatterers to signal the beam spot motion that Gabriele demonstrated successfully

 

I wrote what I think is a handy script to check whether the frames are saturated. It should be useful if/when I collect data with higher exposure times. I had assumed there was no saturation in the images because I'd set the exposure value to something low, but I thought it'd be useful to verify that. Attachment #3 has a log scale on the x axis.
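
A minimal version of such a saturation check (not the actual script) might look like this, computing the fraction of pixels above 240 in each frame:

    # Sketch: fraction of near-saturated pixels (value > 240) per frame of an 8-bit video.
    import numpy as np

    def saturation_fraction(frames, threshold=240):
        """frames: array of shape (N, H, W) of uint8 pixel values."""
        return (frames > threshold).reshape(len(frames), -1).mean(axis=1)

    # frac = saturation_fraction(frames)  # plot frac against time, or histogram each frame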

Quote:

What is the significance of Attachment #6? I think the x-axis of that plot should also be log-scaled.

 

Quote:
  1. Is the performance of the network still good if you feed it a time-shuffled test dataset? i.e. you have (pictures,Xcoord,Ycoord) tuples, which don't necessarily have to be given to the network in a time-ordered sequence in order to predict the beam spot position (unless the network is somehow using the past beam position to predict the new beam position).
  2. Is the time-sync problem Koji raised limiting this approach?

 

  14801   Tue Jul 23 21:59:08 2019   Jon   Update   Cameras   Plan for GigE cameras

This afternoon Gautam and I assessed what to do about restoring the GigE camera software. Here's what I propose:

  • Set up one of the new rackmount Supermicros as a dedicated camera feed server
  • All GigE cameras on a local subnet connected to the second network interface (these Supermicros have two)
  • Put the SnapPy, pypylon, and pylon5 binaries on the shared network drive. These all have to be built from source.
  • All other dependencies can be gotten through the package managers, so create requirements files for yum and pip to automatically install these locally.

I've started resolving the many dependencies of this code on rossa. The idea is to get a working environment on one workstation, then generate requirements files that can be used to set up the rest of the machines. I believe the dependencies have all been installed. However, many of the packages are newer versions than before, and this seems to have broken SnapPy. I'll continue debugging this tomorrow.

  14803   Wed Jul 24 02:06:05 2019   Kruthi   Update   Cameras   HDR images

I have been trying a couple of HDR algorithms; all of them seem to give very different results. I don't know how suitable these algorithms are for our purpose, because they are more concerned with the final display. I'm attaching the HDR image I got by modifying Jigyasa's code a bit (this image has been modified further to make it suitable for displaying). Here, I'm trying to compare the plots of images that look similar. The HDR image has a dynamic range of 700:1.

PS: 300us_image.png file actually looks very similar to HDR image on my laptop (might be an issue with elog editor?). So I'm attaching its .tiff version also to avoid any confusion.

  14806   Wed Jul 24 16:45:32 2019   Jon   Update   Cameras   Upgraded Pylon from 5.0.12 to 5.2.0

I upgraded Pylon, the C/C++ API for the GigE cameras, to the latest release, 5.2.0. It is installed in the same location as before, /opt/rtcds/caltech/c1/scripts/GigE/pylon5, so environment variables do not change. The old version, 5.0.12, still exists at /opt/rtcds/caltech/c1/scripts/GigE/backup_pylon5.

The package contains a GUI application (/bin/PylonViewerApp) for streaming video. The old version supports saving still images, but Milind discovered that the new version supports saving video as well. This required installing a supplementary package supporting MPEG-4 output.

Basler's GUI application is launched from the terminal using the alias pylon. I've tested it and confirm it can save both videos and still-image formats. I recommend also trying to grab images with this program and checking the bit resolution. It would be a useful diagnostic to know whether it's a bug in Joe B.'s code or something deeper in the camera settings.

  14807   Wed Jul 24 20:05:47 2019   Milind   Update   Cameras   CNNs for beam tracking || Tales of desperation

At the lab meeting today, Rana suggested that I use the Pylon app to collect more data if that's what I need. Following this, Jon helped me out by updating the pylon version and installing additional software to record video. Now I am collecting data at

  1. Higher exposure time: 600 us magically gives me a saturation percentage of around 1%, see Attachment #1 (i.e. around 1% of the pixels in the region containing the beam spot are over 240 in value). This is a consequence of my discussion with Gabriele, where we concluded that I was losing information due to the low exposure time I was using.
  2. For much longer: roughly 10 minutes
    1. at an amplitude of 40 cts for 0.2 Hz
    2. at an amplitude of 20 cts for 0.2 Hz
    3. at an amplitude of 10 cts for 0.2 Hz
    4. at an amplitude of 40 cts for 0.4 Hz
    5. at an amplitude of 20 cts for 0.2 Hz
    6. Random motion

Consequently, I have been dithering the MC2 optic since around 9:00 PM.

  14808   Wed Jul 24 20:23:52 2019   gautam   Update   Cameras   Upgraded Pylon from 5.0.12 to 5.2.0

Since there are multiple SURF projects that rely on the cameras:

  1. I moved the new installs Jon made to "new_pylon5" and "new_pypylon". The old installs were moved back to be the default directories.
  2. The bashrc alias for pylon was updated to allow the recording of videos (i.e. it calls the PylonViewerApp from new_pypylon).
  3. There is a script that can grab images at multiple exposures and save 12-bit data as uint16 numpy arrays to an HDF5 file. Right now, it is located at /users/kruthi/scripts/grabHDR.py. We can move this to a better place later, and also improve the script for auto adjusting the exposure time to avoid saturations.

My changes were necessary because the grabHDR.py script was throwing python exceptions, whereas it was running just fine before Jon's changes. We can move the "new_*" dirs to the default once the SURFs are gone.

Let's freeze the camera software config in this state until next week.

  14809   Thu Jul 25 00:26:47 2019   Milind   Update   Cameras   Convolutional neural networks for beam tracking

Somehow I never got around to doing the pixel sum thing for the new real data from the GigE. Since I have to do it for the presentation, I'm putting up the results here anyway. I've normalized this and computed the SNR with the true readings.

SNR = (power in true readings)/ (power in error signal between true and predicted values)

Attachment #2 is SNR of best performing CNN for comparison.
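
In code, the SNR defined above is just a ratio of mean-square values, e.g. (a sketch):

    # SNR = (power in true readings) / (power in error between true and predicted values)
    import numpy as np

    def snr(true, predicted):
        return np.mean(np.square(true)) / np.mean(np.square(true - predicted))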

  14810   Thu Jul 25 09:19:32 2019   Jon   Update   Cameras   Upgraded Pylon from 5.0.12 to 5.2.0

I'll keep developing the camera server on a parallel track using the "new_..." directory naming convention. One thing I forgot to note is that the new pylon/pypylon packages require Python 3, so will not work with any of the 2.7 scripts. All of the environment I need to set up is exclusively Python 3. I won't change anything in the Python 2.7 environment in current use.

Also, I found the source of the bit resolution issue: Joe B's code loads a set of initialization parameters from a config file. One of them is "Frame Type = Mono8" which sets the dynamic range of the stream. I'll look into how this should be changed. 

Quote:

Since there are multiple SURF projects that rely on the cameras:

  1. I moved the new installs Jon made to "new_pylon5" and "new_pypylon". The old installs were moved back to be the default directories.
  2. The bashrc alias for pylon was updated to allow the recording of videos (i.e. it calls the PylonViewerApp from new_pypylon).
  3. There is a script that can grab images at multiple exposures and save 12-bit data as uint16 numpy arrays to an HDF5 file. Right now, it is located at /users/kruthi/scripts/grabHDR.py. We can move this to a better place later, and also improve the script for auto adjusting the exposure time to avoid saturations.
  14814   Fri Jul 26 19:53:53 2019   Jon   Omnistructure   Cameras   GigE Camera Server

I've started setting up the last new rackmount SuperMicro as a dedicated server for the GigE cameras. The new machine is currently sitting on the end of the electronics test bench. It is assigned the hostname c1cam at IP 192.168.113.116 on the martian network. I've installed Debian 10, which will be officially supported until July 2024.

I've added the /cvs/cds NFS mount and plan to house all the client/server code on this network disk. Any dependencies that must be built from source will be put on the network disk as well. Any dependencies that can be gotten through the package manager, however, will be installed locally but in an automated way using a reqs file.

We should ask Chub to reorder several more SuperMicro rackmount machines, SSD drives, and DRAM cards. Gautam has the list of parts from Johannes' last order.

  14824   Fri Aug 2 16:46:09 2019   Kruthi   Update   Cameras   Clean up

I've put the analog camera back and disconnected the unit 151 GigE. But I ran out of time and wasn't able to replace the beamsplitter. I've put all the equipment back where I took it from. The chopper and beam dump mount, which Koji had gotten me for the scatterometer, are kept outside, on the table I was working on earlier, in the control room. The camera lenses, additional GigEs, wedge beamsplitter, 1050 nm LED and all related equipment are kept in the GigE box. This box was put back into the CCD cameras' cabinet near the X arm.

Note: To clean stuff up, I had entered the lab around 9.30pm on Monday. This might have affected Yehonathan's loss measurement readings (until then around 57 readings had been recorded).

Sorry for the late update.

  14856   Fri Aug 23 19:10:02 2019   Jon   Update   Cameras   GigE camera server is online

Following the death of rossa, which was hosting the only working environment for the GigE camera software, I've set up a new dedicated rackmount camera server: c1cam (details here). The Python server script is now configured as a persistent systemd service, which automatically starts on boot and respawns after a crash. The server depends on a set of EPICS channels being available to control the camera settings, so c1cam is also running a softIOC service hosting these channels. At the moment only the ETMX camera is set up, but we can now easily add more cameras.

Usage

Instructions for connecting to a live video feed are posted here. Any machine on the martian network can stream the feed(s). The only requirement is that the client machine have GStreamer 0.10 installed (all the control room workstations satisfy this).

Code Locations

As much as possible, the code and dependencies are hosted on the /cvs/cds network drive instead of installed locally. The client/server code and the Pylon5, PyPylon, and PyEpics dependencies are all installed at /cvs/cds/rtcds/caltech/c1/scripts/GigE. The configuration files for the soft IOC are located at /cvs/cds/caltech/target/c1cam.

Upgrade Goals

The 40m GigE camera code is a slightly-updated version of the 10+ year-old camera code in use at the sites. Consequently every one of its dependencies is now deprecated. Ultimately, we'd like to upgrade to the following:

  • Python 2.7 --> 3.7
  • Basler Pylon 5.0.12 --> 5.2.0
  • PyPylon 1.1.1 --> 1.4.0
  • GStreamer 0.10 --> 1.2

This is a long-term project, however, as many of these APIs are very different between Python 2 and 3.

  14883   Mon Sep 16 17:53:16 2019   aaron   Update   Cameras   MC2 trans camera (?) rotated

We noticed last week that the MC2 trans camera has pitch and yaw swapped; I rotated what I thought was the correct camera by 90 degrees clockwise (as viewed from above, like in the attachment), but I now have doubts. It's the camera on the right in the attachment.

  14884   Mon Sep 16 19:29:24 2019   Koji   Update   Cameras   MC2 trans camera (?) rotated

The left one is analog and 90deg rotated.

See also: This issue tracker

  15048   Tue Nov 26 13:33:33 2019   Yehonathan   Update   Cameras   MC2 Camera rotated by 90 degrees

MC2 analog camera was rotated by 90 degrees. Orientation correctness was verified by exciting the MC2 Yaw degree of freedom.

Attached before and after photos of the camera setup.

  15306   Sat Apr 18 13:32:31 2020   rana   Update   Cameras   GigE w better NIR sensitivity

There's this elog from Stephen about better 1064 sensitivity from Basler. We should consider getting one if he finds that its actual SNR is as good as we would expect from the QE improvement.

Might allow for better scatter measurements - not that we need more signal, but it could allow us to use shorter exposure times and reduce blurring due to the wobbly beams.

  15311   Thu Apr 23 09:52:02 2020   Jon   Update   Cameras   GigE w better NIR sensitivity

Nice, and we should also permanently install the camera server (c1cam) which is still sitting on the electronics bench. It is running an adapted version of the Python 2/Debian 8 site code. Maybe if COVID continues long enough I'll get around to making the Python 3 version we've long discussed.

Quote:

There's this elog from Stephen about better 1064 sensitivity from Basler. We should consider getting one if he finds that its actual SNR is as good as we would expect from the QE improvement.

  16060   Wed Apr 21 10:59:07 2021   rana   Summary   Cameras   note on new GigE cam @ 1064

Note from Stephen on more sensitive Baslers.

  16190   Mon Jun 7 15:37:01 2021   Anchal, Paco, Yehonathan   Summary   Cameras   Mon 7 in Control Room Died

We found Mon 7 in the control room dead this afternoon. Its front power-on green light is not lighting up. All other monitors are working as normal.

This monitor was used for looking at IMC camera analog feed. It is one of the most important monitors for us, so we should replace it with a different monitor.

Yehonathan and Paco disconnected the monitor and brought it down. We put it under the back table if anyone wants to fix it. Paco has ordered a BNC to VGA/HDMI converter to put in any normal monitor up there. It will happen this Wednesday. Meanwhile, I have changed the MON4 assignment from POP to Quad2 to be used for IMC.

  16204   Wed Jun 16 13:20:19 2021   Anchal, Paco   Summary   Cameras   Mon 7 in Control Room Replaced

We replaced Mon 7 with an LCD monitor from the back bench. It is fed the analog BNC signal converted to VGA with a converter box that Paco bought. We can swap in a different monitor if this one is needed back on the bench. For now, we definitely need a monitor up there to show the IMC camera.

  16774   Wed Apr 13 15:57:25 2022   Ian MacMillan   Update   Cameras   Camera Battery Test

Tested the Nikon batteries for the camera. They are supposed to be 7 V batteries, but they don't hold a charge. I confirmed this with a multimeter after charging them for days. Ordered new ones (Nikon EN-EL9).

  16776   Wed Apr 13 18:55:54 2022   Koji   Update   Cameras   Camera Battery Test

I believe that the Nikon has an exposure problem and that's why we bought the Canon.

 

  10436   Thu Aug 28 11:02:53 2014   Steve   Update   Calibration-Repair   SR785 repair

SN 46,795 of 2003 is back.

  11641   Thu Sep 24 17:06:14 2015   ericq   Update   Calibration-Repair   C1CAL Lockins

Just a quick note for now: I've repopulated C1CAL with a limited set of lockin oscillators/demodulators, informed by the aLIGO common LSC model. Screens are updated too. 

Rather than trying to do the whole magnitude/phase decomposition, it just does the demodulation of the RFPD signals online; everything beyond that is up to the user to do offline.

Briefly testing with PRMI, it seems to work as expected. There is some beating evident from the fact that the MICH and PRCL oscillation frequencies are only 2Hz apart; the demod low pass is currently at an arbitrary 1Hz, so it doesn't filter the beat much. 

Screens, models, etc. all svn'd.

  12040   Mon Mar 21 14:29:32 2016   Steve   Update   Calibration-Repair   1W Innolight laser repair diagnoses

 

Quote:
Quote:

After adjusting the alignment of the two beams onto the PD, I managed to recover a stronger beatnote of ~ -10dBm. I managed to take some measurements with the PLL locked, and will put up a more detailed post later in the evening. I turned the IMC autolocker off, turned the 11MHz Marconi output off, and closed the PSL shutter for the duration of my work, but have reverted these to their nominal state now. The are a few extra cables running from the PSL table to the area near the IOO rack where I was doing the measurements from, I've left these as is for now in case I need to take some more data later in the evening...I

The Innolight 1W 1064 nm, SN 1634, was purchased on 9-18-2006 at CIT. It came to the 40m around 2010.

Its diodes should be replaced, based on its age and performance.

RIN and noise eater bad. I will get a quote on this job.

The Innolight manual frequency noise plot is the same as Lightwave's (elog 11956).

Diagnosis from Glasgow:

“So far we have analyzed the laser. The pump diode is degraded. Next we would replace it with a new diode. We would realign the diode output beam into the laser crystal. We check all the relevant laser parameters over the whole tuning range. Parameters include single direction operation of the ring resonator, single frequency operation, beam profile and others. If one of them is out of spec, then we would take actions accordingly. We would also monitor the output power stability over one night. Then we repackage and ship the laser.”

  12045   Thu Mar 24 07:56:09 2016   Steve   Update   Calibration-Repair   NO Noise Eater for 1W Innolight

The 1W Innolight is NOT getting a Noise Eater, as decided yesterday at the 40m meeting. (Corrected 3-25-2016)

Repair quote with adding noise eater is in 40m wiki

Quote:

 

Quote:
Quote:

After adjusting the alignment of the two beams onto the PD, I managed to recover a stronger beatnote of ~ -10dBm. I managed to take some measurements with the PLL locked, and will put up a more detailed post later in the evening. I turned the IMC autolocker off, turned the 11MHz Marconi output off, and closed the PSL shutter for the duration of my work, but have reverted these to their nominal state now. The are a few extra cables running from the PSL table to the area near the IOO rack where I was doing the measurements from, I've left these as is for now in case I need to take some more data later in the evening...I

Innolight 1W 1064nm, sn 1634 was purchased in 9-18-2006 at CIT. It came to the 40m around 2010

It's diodes should be replaced, based on it's age and performance.

RIN and noise eater bad. I will get a quote on this job.

The Innolight Manual frequency noise plot is the same as Lightwave' elog 11956

Diagnoses from Glasglow:

“So far we have analyzed the laser. The pump diode is degraded. Next we would replace it with a new diode. We would realign the diode output beam into the laser crystal. We check all the relevant laser parameters over the whole tuning range. Parameters include single direction operation of the ring resonator, single frequency operation, beam profile and others. If one of them is out of spec, then we would take actions accordingly. We would also monitor the output power stability over one night. Then we repackage and ship the laser.”

 

  12070   Mon Apr 11 17:03:41 2016   Steve   Update   Calibration-Repair   1W Innolight repair completed

The laser is back. Test report is in the 40m wiki as New Pump Diode Mephisto 1000

It will go on the PSL table.

  13456   Tue Nov 28 17:27:57 2017   awade   Bureaucracy   Calibration-Repair   SR560 return, still not charging

I brought a bunch of SR560s over for repair from Bridge labs. This unit, picture attached (SN 49698), appears to still not be retaining charge. I’ve brought it back. 

  14759   Mon Jul 15 03:30:47 2019   Kruthi   Update   Calibration-Repair   White paper as a Lambertian scatterer

I made some rough measurements, using the setup I had used for CCD calibration, to get an idea of how good of a Lambertian scatterer the white paper is. Following are the values I got:

Angle (degrees) | Photodiode reading (V) | Ps (W) | BRDF (1/sr) | % error
12 | 0.864 | 2.54E-06 | 0.334 | 20.5
24 | 0.926 | 2.72E-06 | 0.439 | 19.0
30 | 1.581 | 4.65E-06 | 0.528 | 19.0
41 | 0.94 | 2.76E-06 | 0.473 | 19.8
49 | 0.545 | 1.60E-06 | 0.423 | 22.5
63 | 0.371 | 1.09E-06 | 0.475 | 28

Note: All the measurements are just rough ones and are prone to larger errors than estimated.

I also measured the transmittance of the white paper sample being used (it consists of 2 white papers wrapped together). It was around 0.002

  14804   Wed Jul 24 04:20:35 2019   Kruthi   Update   Calibration-Repair   MC2 pitch and yaw calibration

Summary:  I calibrated MC2 pitch and yaw offsets to spot position in mm. Here's what I did:

  1. Changed the MC2 pitch and yaw offset values using ezca.Ezca().write('IOO-MC2_TRANS_PIT_OFFSET', <pitch offset value>) and ezca.Ezca().write('IOO-MC2_TRANS_YAW_OFFSET', <yaw offset value>) (see the sketch after this list)
  2. Waited ~700-800 s for the system to settle to the assigned values
  3. Took snapshots with the 2 GigEs I had installed - zoomed in and zoomed out. (I'll be using these to make a scatter loss map, verify the calibration results, etc)
  4. Ran the mcassDecenter script, which can be found in /scripts/ASS/MC; it writes the spot position in mm to the specified text file.
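
Roughly, steps 1 and 2 amount to a loop like the following sketch (the offset values and wait time are illustrative; the snapshot and mcassDecenter steps are left as comments):

    # Sketch of the offset-stepping loop; the offset values and wait time are illustrative.
    import time
    import ezca

    ez = ezca.Ezca()
    pitch_offsets = [-2, -1, 0, 1, 2]        # example pitch offset values (counts)

    for off in pitch_offsets:
        ez.write('IOO-MC2_TRANS_PIT_OFFSET', off)
        time.sleep(750)                      # wait ~700-800 s for the system to settle
        # ...take the zoomed-in / zoomed-out GigE snapshots here...
        # ...then run scripts/ASS/MC/mcassDecenter to log the spot position in mm...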

Results:  In the pitch/yaw vs pitch_offset/yaw_offset graph attached,

  • intercept_pitch = 6.63 (in mm) ,  slope_pitch = -0.6055 (mm/counts) 
  • intercept_yaw = -4.12 (in mm) ,  slope_yaw = 4.958 (mm/counts) 
  15510   Sat Aug 8 07:36:52 2020   Sanika Khadkikar   Configuration   Calibration-Repair   BS Seismometer - Multi-channel calibration

Summary : 

I have been working on analyzing the seismic data obtained from the 3 seismometers present in the lab. I noticed while looking at the combined time series and the gain plots of the 3 seismometers that there is some error in the calibration of the BS seismometer. The EX and the EY seismometers seem to be well-calibrated as opposed to the BS seismometer.

The calibration factors have been determined to be :

BS-X Channel: 2.030 ± 0.079

BS-Y Channel: 2.840 ± 0.177

BS-Z Channel: 1.397 ± 0.182


Details :

The seismometers each have 3 channels, i.e. X, Y, and Z, for measuring displacements in all three directions. In the absence of any seismic excitation, the X channels of the three seismometers should be more or less coherent, with a gain of 1 between similar channels; the same holds for the Y and Z channels. After analyzing multiple datasets, it was observed that all three channels of the BS seismometer differed very significantly from the corresponding channels of the EX and EY seismometers, and they were not correctly calibrated even in the region where they were found to be coherent.


Method :

Note: All the frequency-domain plots have been calculated for a sampling rate of 32 Hz. The channels were found to be extremely coherent in a certain frequency range, ~0.1 Hz to 2 Hz, so this range is used to assess the relative calibration errors. The spread around the curve comes from the error caused by coherence values differing from unity and from the averaging performed for the Welch estimate. 9 averages were used for the following analysis, keeping in mind the needed frequency resolution (~0.01 Hz) and the accuracy of the power calculated at every frequency.

  1. I first analyzed the regions in which the similar channels were found to be coherent to have a proper gain analysis. The EY seismometer was found to be the most stable one so it has been used as a reference. I saw the coherence between similar channels of the 2 seismometers and the bode plots together. A transfer function estimator was used to analyze the relative calibration in between all 3 pairs of seismometers. In the given frequency range EX and EY have a gain of 1 so their relative calibration is proper. The relative calibration in between the BS and the EY seismometers is not proper as the resultant gain is not 1. The attached plots show the discrepancies clearly : 
  • BS-X & EY-X Transfer Function : Attachment #1
  • BS-Y & EY-Y Transfer Function : Attachment #2

          The gain in the given frequency range is ~3. The phase plotting also shows a 180-degree phase as opposed to 0 so a negative sign would also be required in the calibration factor. Thus the calibration factor for the Y channel of the BS seismometer should be around ~3. 

  • BS-Z & EY-Z Transfer Function : Attachment #3

The mean value of the gain in the given frequency range is the desired calibration factor and the error would be the mean of the error for the gain dataset chosen which is caused due to factors mentioned above.

Note: The standard error envelope plotted in the attached graphs is calculated as follows :

         1. Divide the data into n segments according to the resolution wanted for the Welch averaging to be performed later. 

         2. Calculate PSD for every segment (no averaging).

         3. Calculate the standard error for every value in the data segment by looking at distribution formed by the n number values we obtain by taking that respective value from every segment.
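
A minimal sketch of the segment-wise PSD / standard-error recipe above, using scipy (the sampling rate and number of segments follow the values quoted earlier; everything else is a placeholder):

    # Sketch: per-frequency standard error of the PSD from n non-overlapping segments.
    import numpy as np
    from scipy.signal import periodogram

    def psd_with_stderr(x, fs=32.0, nseg=9):
        seg_len = len(x) // nseg
        segs = np.reshape(x[:nseg * seg_len], (nseg, seg_len))
        f, psds = periodogram(segs, fs=fs, axis=-1)     # one PSD per segment, no averaging
        return f, psds.mean(axis=0), psds.std(axis=0) / np.sqrt(nseg)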

Discussions :

The BS seismometer is a different model than the EX and EY seismometers, which might be a major reason why we need a special calibration for the BS seismometer while EX and EY are fine. The sign flip in the BS-Y channel may cause a lot of errors in future data acquisitions. The time series plots in Attachment #4 show an evident DC offset in the data. All of the information mentioned above indicates that there is some electrical or mechanical defect present in the seismometer, which may require a reset. Kindly let me know if and when the seismometer is reset so that I can calibrate it again.

  15650   Thu Oct 29 09:50:12 2020   anchal   Summary   Calibration   Preliminary calibration measurement taken

I went to 40m yesterday at around 2:30 pm and Koji showed me how to acquire lock in different arms and for different lasers. Finally, we took a preliminary measurement of shaking the ETMX at some discrete frequencies and looking at the beatnote frequency spectrum of X-end laser's fiber-coupled IR and Main laser's IR pick-off.


Basic controls and measurement 101 at 40m

  • I learned a few things from Koji about how to align the cavity mirrors for green laser or IR laser.
  • I learned how to use ASS and how to align the green end laser to the cavity. I also found out about the window at ETMX chamber where we can directly see the cavity mode, cool stuff.
  • Koji also showed me around on how to use diaggui and awggui for taking measurements with any of the channels.

Preliminary measurement for calibration scheme

We verified that we can send discrete frequency excitation signals to ETMX actuators directly and see a corresponding peak in the spectrum of beatnote frequency between fiber-coupled X-end IR laser and main laser IR pickoff.

  • I sent excitation signal at 200 Hz, 250 Hz and 270 Hz at C1:SUS-ETMX_LSC_EXC channel using awggui with an amplitude of 100 cts and gain of 2.
  • I measured corresponding peaks in the beatnote spectrum using diaggui.
  • Page 1 shows the ASD data for the 4 measurements taken with Hanning window and averaging of 10.
  • Page 2 shows close up Spectrum data for the 4 measurements taken with flattop window and averaging of 10.
  • I converted this frequency signal into displacement using the conversion factor \frac{\lambda/2}{\nu_{FSR}} = \frac{L \lambda}{c}.

If the full interferometer had been locked, we could have used the DARM error signal output to calibrate it against this measurement.

Data

  16128   Mon May 10 10:57:54 2021   Anchal, Paco   Summary   Calibration   Using ALS beatnote for calibration, test

Test details:

  • We locked both arms and opened the shutter for Yend green laser.
  • After toggling the shutter on.off, we got a TEM00 mode of green laser locked to YARM.
  • We then cleared the phase Y history by clicking "CLEAR PHASE Y HISTROY" on C1LSC_ALS.adl (opened from sitemap > ALS > ALS).
  • We sent excitation signal at ITMY_LSC_EXC using awggui at 43Hz, 77Hz and 57Hz.
  • We measured the power spectrum and coherence of C1:ALS-BEATY_FINE_PHASE_OUT_HZ_DQ and C1:SUS-ITMY_LSC_OUT_DQ.
  • The BEATY_FINE_PHASE_OUT_HZ channel is already calibrated in Hz. We assume this is done by multiplying the error signal of the digital PLL loop that tracks the beatnote phase by the VCO slope in Hz/cts.
  • We calibrated C1:SUS-ITMY_LSC_OUT_DQ by multiplying with
    3 \times \frac{2.44 \, nm/cts}{f^2} \times \frac{c}{1064\,nm \times 37.79\, m} = \frac{54.77}{f^2} \, kHz/cts, where f is in Hz.
    The 2.44/f^2 nm/cts is taken from elog 13984.
  • We added the calibration as Poles/zeros option in diaggui using gain=54.577e3 and poles as "0, 0".
  • We found that ITMY_LSC_OUT_DQ calibration matches well at 57Hz but overshoots (80 vs 40) at 43 Hz and undershoots (50 vs 80) at 77Hz.

Conclusions:

  • If we had DRFPMI locked, we could have used the beatnote spectrum as independent measurement of arm lengths to calibrate the interferometer output.
  • We can also use the beatnote to confirm or correct the ITM actuator calibrations. Maybe the shape is not exactly 1/f^2, unless we did something wrong here or the PLL bandwidth is too low.
  16315   Tue Sep 7 18:00:54 2021   Tega   Summary   Calibration   System Identification via line injection

[paco]

This morning, I spent some time restoring the jupyter notebook server running on allegra. This server was first set up by Anchal to be able to use the latest nds python API tools, which are handy for the calibration work. The process to restore the environment was to run "source ~/bashrc.d/*" to restore some of the aliases, variables, paths, etc. that made the nds server work. I then ran ssh -N -f -L localhost:8888:localhost:8888 controls@allegra from pianosa and carried on with the experiment.


[paco, hang, tega]

We started a notebook under /users/paco/20210906_XARM_Cal/XARM_Cal.ipynb on which the first part was doing the following;

  • Set up list of excitations for C1:LSC-XARM_EXC (for example three sine waveforms) using awg.py
  • Make sure the arm is locked
  • Read a reference time trace of the C1:LSC-XARM_IN2 channel for some duration
  • Start excitations (one by one at the moment, ramptime ~ 3 seconds, same duration as above)
  • Get data for C1:LSC-XARM_IN2 for an equal duration (raw data in Attachment #1)
  • Generate the excitation sine and cosine waveforms using numpy and demodulate the raw timeseries using a 4th order lowpass filter with fc ~ 10 Hz
  • Estimate the correct demod phase by computing arctan(Q / I) and rerunning the demodulation to dump the information into the I quadrature (Attachment #2).
  • Plot the estimated ASD of all the quadratures (Attachment #3)
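
A sketch of the demodulation step described above (the line frequency, sampling rate, and filter parameters are placeholders):

    # Sketch: digital demodulation of a single line at f0 in a raw timeseries x.
    import numpy as np
    from scipy.signal import butter, filtfilt

    def demodulate(x, f0, fs, fc=10.0, order=4):
        t = np.arange(len(x)) / fs
        b, a = butter(order, fc / (fs / 2))              # 4th-order low-pass at fc
        i = filtfilt(b, a, x * np.cos(2 * np.pi * f0 * t))
        q = filtfilt(b, a, x * np.sin(2 * np.pi * f0 * t))
        phi = np.arctan2(np.mean(q), np.mean(i))         # demod phase; re-demodulate with it
        return i, q, phi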

[paco, hang, tega]

Estimation of open loop gain:

  • Grab data from the C1:LSC-XARM_IN1 and C1:LSC-XARM_IN2 test points
  • Infer the excitation from their difference, i.e. C1:LSC-XARM_EXC = C1:LSC-XARM_IN2 - C1:LSC-XARM_IN1
  • Compute the open loop gain as follows: G(f) = csd(EXC,IN1)/csd(EXC,IN2), where csd computes the cross spectral density of the input arguments (see the sketch after this list)
  • For the uncertainty in G, dG, we repeat steps (1) to (3) with & without signal injection in the C1:LSC-XARM_EXC channel. In the absence of signal injection, the signal in C1:LSC-XARM_IN2 is of the form: Y_ref = Noise/(1-G), whereas with nonzero signal injection, the signal in C1:LSC-XARM_IN2 has the form: Y_cal = EXC/(1-G) + Noise/(1-G), so their ratio, Y_cal/Y_ref = EXC/Noise, gives the SNR, which we can then invert to give the uncertainty in our estimation of G, i.e dG = Y_ref/Y_cal.
  • For the excitation at 53 Hz, our measurement of the open loop gain comes out to about 5 dB, which is consistent with previous measurements.
  • We seem to have an SNR in excess of 100 for a measurement time of 35 seconds and an amplitude of 1 count, which gives a relative uncertainty in G of 0.1%.
  • The analysis details are ongoing. Feedback is welcome.
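
A sketch of the open-loop-gain estimate from step 3, using scipy's cross-spectral-density routine (the channel arrays and parameters are placeholders):

    # Sketch: G(f) = csd(EXC, IN1) / csd(EXC, IN2), following the recipe above.
    from scipy.signal import csd

    def open_loop_gain(exc, in1, in2, fs, nperseg=4096):
        f, p_exc_in1 = csd(exc, in1, fs=fs, nperseg=nperseg)
        _, p_exc_in2 = csd(exc, in2, fs=fs, nperseg=nperseg)
        return f, p_exc_in1 / p_exc_in2
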
  16352   Tue Sep 21 11:13:01 2021   Paco   Summary   Calibration   XARM calibration noise

Here are some plots from analyzing the C1:LSC-XARM calibration. The experiment is done with the XARM (POX) locked, a single line is injected at C1:LSC-XARM_EXC at f0 with some amplitude determined empirically using diaggui and awggui tools. For the analysis detailed in this post, f0 = 19 Hz, amp = 1 count, and gain = 300 (anything larger in amplitude would break the lock, and anything lower in frequency would not show up because of loop supression). Clearly, from Attachment #3 below, the calibration line can be detected with SNR > 1.

We read the test point right after the excitation C1:LSC-XARM_IN2 which, in a simplified loop will carry the excitation suppressed by 1 - OLTF, the open loop transfer function. The line is on for 5 minutes, and then we read for another 5 minutes but with the excitation off to have a reference. Both the calibration and reference signal time series are shown in Attachment #1 (decimated by 8). The corresponding ASDs are shown in Attachment #2. Then, we demodulate at 19 Hz and a 30 Hz, 4th-order butterworth LPF, and get an I and Q timeseries (shown in Attachment #3). Even though they look similar, the Q is centered about 0.2 counts, while the I is centered about 0.0. From this time series, we can of course show the noise ASDs in Attachment #3.


The ASD uncertainty bands in the last plot are statistical estimates and depend on the number of segments used in estimating the PSD. A thing to note is that the noise features surrounding the signal ASD around f0 are translated into the ASD in the demodulated signals, but now around dc. I guess from Attachment #3 there is no difference in the noise spectra around the calibration line with and without the excitation. This is what I would have expected from a linear system. If there was a systematic contribution, I would expect it to show at very low frequencies.

  16353   Wed Sep 22 11:43:04 2021   rana   Summary   Calibration   XARM calibration noise

I would expect to see some lower frequency effects. i.e. we should look at the timeseries of the demod with the excitation on and off.

I would guess that the exc-on case should show us the variations in the optical gain below 3 Hz, whereas the exc-off case would not.

Maybe you should do some low pass filtering on the time series you have to see the ~DC effects? Also, reconsider your AA filter design: how do you quantitatively choose the cutoff frequency and stopband depth?

  16363   Tue Sep 28 16:31:52 2021   Paco   Summary   Calibration   XARM OLTF (calibration) at 55.511 Hz

[anchal, paco]

Here is a demonstration of the methods leading to the single (X)arm calibration with its budget uncertainty. The steps towards this measurement are the following:

  1. We put a single line excitation through the C1:SUS-ETMX_LSC_EXC at 55.511 Hz, amp = 1 counts, gain = 300 (ramptime=10 s).
  2. With the arm locked, we grab a long timeseries of the C1:LSC-XARM_IN1_DQ (error point) and C1:SUS-ETMX_LSC_OUT_DQ (control point) channels.
  3. We assume the single arm loop to have the four blocks shown in Attachment #1, A (actuator + sus), plant (mainly the cavity pole), D (detection + electronics), and K (digital control).
    1. At this point, Anchal made a model of the single arm loop including the appropriate filter coefficients and other parameters. See Attachments #2-3 for the split and total model TFs.
    2. Our line would actually probe a TF from point b (error point) to point d (control point). We multiplied our measurement with open loop TF from b to d from model to get complete OLTF.
    3. Our initial estimate from documents and elog made overall loop shape correct but it was off by an overall gain factor. This could be due to wrong assumption on RFPD transimpedance or analog gains of AA or whitening filters. We have corrected for this factor in the RFPD transimpedance, but this needs to be checked (if we really care).
  4. We demodulate the decimated timeseries (final sampling rate ~2.048 kHz) and obtain I & Q for both the b and d signals. From these and our model for K, we estimate the OLTF. Attachment #4 shows the timeseries of magnitude and phase.
  5. Finally, we compute the ASD for the OLTF magnitude. We plot it in Attachment #5 together with the ASD of the XARM transmission (C1:LSC-TRX_OUT_DQ) times the OLTF to estimate the optical gain noise ASD (this last step was a quick attempt at budgeting the calibration noise).
    1. For each ASD we used N = 24 averages, from which we estimate rms (statistical) uncertainties which are depicted by error bands (\pm \sigma) around the lines.

** Note: We ran the same procedure using dtt (diaggui) to validate our estimates at every point, as well as check our SNR in b and d before taking the ~3.5 hours of data.

  16369   Thu Sep 30 18:04:31 2021   Paco   Summary   Calibration   XARM OLTF (calibration) with three lines

[anchal, paco]

We repeated the same procedure as before, but with 3 different lines at 55.511, 154.11, and 1071.11 Hz. We overlay the OLTF magnitudes and phases with our latest model (which we have updated with Koji's help) and include the rms uncertainties as errorbars in Attachment #1.

We also plot the noise ASDs of the calibrated OLTF magnitudes at the line frequencies in Attachment #2. These curves are created by calculating the power spectral density of the timeseries of OLTF values at the line frequencies, generated from the demodulated XARM_IN and ETMX_LSC_OUT signals. We have overlaid the TRX noise spectrum here as an attempt to see if the noise measured in the values of G can be budgeted to the fluctuation in optical gain due to changing power in the arms. We multiplied the transmission ASD by the value of the OLTF at those frequencies, as the transfer function from normalized optical gain to the total transfer function value.

It is weird that the fluctuation in transmission power at 1 mHz always crosses the total noise in the OLTF value for all calibration lines. This could be an artifact of our data analysis, though.

Even if the contribution of the fluctuating power is correct, there is remaining excess noise in the OLTF to be budgeted.

  16373   Mon Oct 4 15:50:31 2021   Hang   Update   Calibration   Fisher matrix estimation on XARM parameters

[Anchal, Hang]

What: Anchal and I measured the XARM OLTF last Thursday.

Goal: 1. measure the 2 zeros and 2 poles in the analog whitening filter, and potentially constrain the cavity pole and an overall gain. 

          2. Compare the parameter distribution obtained from measurements and that estimated analytically from the Fisher matrix calculation.

          3. Obtain the optimized excitation spectrum for future measurements.   

How: we inject at C1:SUS-ETMX_LSC_EXC so that each digital count should be directly proportional to the force applied to the suspension. We read out the signal at C1:SUS-ETMX_LSC_OUT_DQ. We use an approximately white excitation in the 50-300 Hz band, and intentionally choose the coherence to be only slightly above 0.9 so that we can get some statistical error to be compared with the Fisher matrix's prediction. For each measurement, we use a bandwidth of 0.25 Hz and 10 averages (no overlapping between adjacent segments). 

The 2 zeros and 2 poles in the analog whitening filter and an overall gain are treated as free parameters to be fitted, while the rest are taken from the model by Anchal and Paco (elog:16363). The optical response of the arm cavity seems missing in that model, and thus we additionally include a real pole (for the cavity pole) in the model we fit. Thus in total, our model has 6 free parameters, 2 zeros, 3 poles, and 1 overall gain. 

The analysis codes are pushed to the 40m/sysID repo. 

===========================================================

Results:

Fig. 1 shows one measurement. The gray trace is the data and the olive one is the maximum likelihood estimation. The uncertainty for each frequency bin is shown in the shaded region. Note that the SNR is related to the coherence as 

        SNR^2 = [coherence / (1-coherence)] * (# of average), 

and for a complex TF written as G = A * exp[1j*Phi], one can show the uncertainty is given by 

        \Delta A / A = 1/SNR,  \Delta \Phi = 1/SNR [rad]. 
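
In code, the two relations above translate directly (a sketch):

    # Fractional TF uncertainty from coherence, per the relations above.
    import numpy as np

    def tf_uncertainty(coherence, n_avg):
        snr = np.sqrt(coherence / (1.0 - coherence) * n_avg)
        return 1.0 / snr, 1.0 / snr        # (dA/A, dPhi in radians)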

Fig. 2. The gray contours show the 1- and 2-sigma levels of the model parameters using the Fisher matrix calculation. We repeated the measurement shown in Fig. 1 three times, and the best-fit parameters for each measurement are indicated in the red-crosses. Although we only did a small number of experiments, the amount of scattering is consistent with the Fisher matrix's prediction, giving us some confidence in our analytical calculation. 

One thing to note though is that in order to fit the measured data, we would need an additional pole at around 1,500 Hz. This seems a bit low for the cavity pole frequency. For aLIGO w/ 4km arms, the single-arm pole is about 40-50 Hz. The arm is 100 times shorter here and I would naively expect the cavity pole to be at 3k-4k Hz if the test masses are similar. 

Fig. 3. We then follow the algorithm outlined in Pintelon & Schoukens, sec. 5.4.2.2, to calculate how we should change the excitation spectrum. Note that here we are fixing the rms of the force applied to the suspension constant. 

Fig. 4 then shows how the expected error changes as we optimize the excitation. It seems in this case a white-ish excitation is already decent (as the TF itself is quite flat in the range of interest), and we only get some mild improvement as we iterate the excitation spectra (note we use the color gray, olive, and purple for the results after the 0th, 1st, and 2nd iteration; same color-coding as in Fig. 3).   

 

 

 

  16399   Wed Oct 13 15:36:38 2021 HangUpdateCalibrationXARM OLTF

We did a few quick XARM OLTF measurements. We excited C1:LSC-ETMX_EXC with broadband white noise up to 4 kHz. The timestamps for the measurements are: 1318199043 (start) - 1318199427 (end).

We will process the measurement to compute the cavity pole and analog filter poles & zeros later.

  16957   Tue Jun 28 17:07:47 2022 AnchalUpdateCalibrationAdded Beatnote channels in demodulation of c1cal

Today I added demodulation of C1:LSC-BEATX/Y_FINE_I/Q in the c1cal demodulation stage, where different degrees of freedom can be dithered. For McCal (formerly soCal), we'll dither the arm cavity, for which we can use any of the DOFs (like DARM) to send the dither to ETMX/ETMY. Then, with the green laser locked as well, we'll get the calibration signal from the beatnotes in the demodulated channels. We can also read out right after the mixing in the c1cal model and try different poles for the integration.

I've also added medm screens in the sensing matrix part of the LSC screen. These let you see the demodulated beatnote frequency signals.
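For reference, a minimal standalone sketch of the demodulation/integration step described above (this is not the c1cal model; the sampling rate, dither frequency, and pole frequencies are made-up numbers):

import numpy as np
from scipy import signal

fs = 16384.0                    # model rate [Hz] (assumed)
f_dither = 311.1                # dither line frequency [Hz] (illustrative)
t = np.arange(0, 10, 1 / fs)

# synthetic stand-in for a beatnote channel carrying the dither line plus noise
rng = np.random.default_rng(0)
x = 5e-3 * np.cos(2 * np.pi * f_dither * t + 0.3) + 1e-2 * rng.standard_normal(len(t))

# mix down with the dither oscillator (I and Q)
i_mix = x * np.cos(2 * np.pi * f_dither * t)
q_mix = x * np.sin(2 * np.pi * f_dither * t)

# "integration" = low-pass filter after the mixer; try different pole frequencies
for f_pole in (0.5, 2.0):
    b, a = signal.butter(1, f_pole, fs=fs)
    I = 2 * signal.lfilter(b, a, i_mix)[-1]            # factor 2 recovers the line amplitude
    Q = 2 * signal.lfilter(b, a, q_mix)[-1]
    print(f_pole, np.hypot(I, Q), np.arctan2(-Q, I))   # ~line amplitude and phase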

  17010   Mon Jul 18 04:42:54 2022 AnchalUpdateCalibrationError propagation to astrophysical parameters from detector calibration uncertainty

We can calculate how much detector calibration uncertainty affects the estimation of astrophysical parameters using the following method:

Let \overrightarrow{\Theta} be the set of astrophysical parameters (like component masses, distance, etc.) and \overrightarrow{\Lambda} be the set of detector parameters (like detector pole, gain, or simply the transfer function value for each frequency bin). If the true GW waveform is given by h(f; \overrightarrow{\Theta}), and the detector transfer function is given by \mathcal{R}(f; \overrightarrow{\Lambda}), then the detected gravitational waveform becomes:
g(f; \overrightarrow{\Theta}, \overrightarrow{\Lambda}) = \frac{\mathcal{R}(f; \overrightarrow{\Lambda_t})}{\mathcal{R}(f; \overrightarrow{\Lambda})} h(f; \overrightarrow{\Theta})
where \overrightarrow{\Lambda_t} denotes the true detector parameter values.

One can calculate a derivative of waveform with respect to the different parameters and calculate Fisher matrix as (see correction in 40m/17017):

\Gamma_{ij} = \left( \frac{\partial g}{\partial \mu_i} | \frac{\partial g}{\partial \mu_j}\right )

where the bracket denotes the inner product defined as:

\left( k_1 | k_2 \right) = 4 \, \mathrm{Re} \left( \int df \, \frac{k_1(f)^* k_2(f)}{S_{det}(f)}\right)

where S_{det}(f) is the strain noise PSD of the detector.

With the gamma matrix in hand, the error propagation from detector parameter fractional errors \frac{\Delta \Lambda_j}{\Lambda_j} to astrophysical parameter fractional errors \frac{\Delta \Theta_i}{\Theta_i} is given by (eq. 26 in Evan et al 2019 Class. Quantum Grav. 36 205006):

\frac{\Delta \Theta}{\Theta} = - \mathbf{H}^{-1} \mathbf{M} \frac{\Delta \Lambda}{\Lambda}

where \mathbf{H}_{ij} = \left( \frac{\partial g}{\partial \Theta_i} | \frac{\partial g}{\partial \Theta_j}\right ) and \mathbf{M}_{ij} = \left( \frac{\partial g}{\partial \Lambda_i} | \frac{\partial g}{\partial \Theta_j}\right ).
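As a concreteness check, here is a minimal toy sketch of this recipe (written for absolute errors, per the correction in 40m/17011). The waveform, PSD, detector response, and parameter values below are made-up illustrations, not what was used for the attached plots, and the helper names are hypothetical.

import numpy as np

f = np.linspace(20.0, 1024.0, 2000)
df = f[1] - f[0]
S_det = 1e-46 * (1 + (50.0 / f)**4 + (f / 300.0)**2)        # toy strain noise PSD

def inner(k1, k2):
    # (k1|k2) = 4 Re int df k1(f)^* k2(f) / S_det(f)
    return 4.0 * np.real(np.sum(np.conj(k1) * k2 / S_det) * df)

def g(theta, lam):
    # detected waveform: toy Newtonian inspiral times R(f; Lambda_t) / R(f; Lambda)
    Mc, dist = theta                         # chirp mass [s], distance (arbitrary norm.)
    h = (Mc**(5 / 6) / dist) * f**(-7 / 6) * np.exp(1j * (3 / 128) * (np.pi * Mc * f)**(-5 / 3))
    R_true = 1.0 / (1 + 1j * f / 500.0)      # "true" response: single pole at 500 Hz
    R_assumed = 1.0 / (1 + 1j * f / lam[0])  # assumed response with pole at lam[0]
    return (R_true / R_assumed) * h

theta0 = np.array([1.2e-4, 1.0])             # toy [chirp mass in seconds, distance]
lam0 = np.array([500.0])                     # assumed pole frequency [Hz]

def numderiv(func, x, i):
    # central finite difference of func w.r.t. component i of the vector x
    dx = 1e-6 * abs(x[i])
    xp, xm = x.copy(), x.copy()
    xp[i] += dx
    xm[i] -= dx
    return (func(xp) - func(xm)) / (2 * dx)

dg_dtheta = [numderiv(lambda th: g(th, lam0), theta0, i) for i in range(len(theta0))]
dg_dlam = [numderiv(lambda la: g(theta0, la), lam0, i) for i in range(len(lam0))]

H = np.array([[inner(a, b) for b in dg_dtheta] for a in dg_dtheta])   # H_ij = (dg/dTheta_i | dg/dTheta_j)
M = np.array([[inner(a, b) for b in dg_dtheta] for a in dg_dlam])     # M_ij = (dg/dLambda_i | dg/dTheta_j)

dLam = np.array([10.0])                      # e.g. a 10 Hz error on the assumed pole
dTheta = -np.linalg.inv(H) @ M.T @ dLam      # systematic shift in Theta (M.T makes the shapes conform)
frac = dTheta / theta0                       # fractional errors, if preferred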


Using the above mentioned formalism, I looked into two ways of calculating the error propagation from detector calibration errors to astrophysical parameter estimates:

Using detector response function model:

If we model the detector response function as a simple DC gain (4.2 W/nm) with one pole (500 Hz), we can plot how an error in the pole frequency converts into astrophysical parameter errors. I took two cases:

  • Binary neutron star merger with component masses of 1.3 and 1.35 solar masses at 100 Mpc distance, with a tidal deformability \tilde{\Lambda} of 500. (Attachment 1)
  • Binary black hole merger with component masses of 35 and 30 solar masses at 400 Mpc distance, with spins along the z direction of 0.5 and 0.8. (I do not fully understand the meaning of these spin components, but a pycbc waveform generation model still lets me calculate the effect of detector errors.) (Attachment 2)

The plots are shown on both loglog and linear axes to convey the order of magnitude of the effect and how the error propagation slope differs between parameters. The way to read these plots is: for a given error, say 4% in the pole frequency determination, what is the expected error in the component masses, merger distance, etc. I'm still not sure which way is the best to convey the information.

Note that the astrophysical parameter estimates are not sensitive to an error in the overall gain of the detector response.

Using the detector transfer function as a frequency-bin-wise multi-parameter function

Alternatively, we can choose not to fit any model to the detector transfer function and simply treat the errors in magnitude and phase at each frequency point as independent parameters in the above formalism. This then lets us see the error propagation slope for each frequency point. The hope is to identify which parts of the calibration function are more important to calibrate with low uncertainty so as to have the least effect on astrophysical parameter estimation. Attachments 3 and 4 show these plots for the BNS and BBH cases mentioned above. The top panel is the error propagation slope at each frequency due to an error in the magnitude of the detector transfer function at that frequency, and the bottom panel is the same for an error in the phase of the detector transfer function.

The calibration error in magnitude and phase as a function of frequency would be multiplied by these curves and summed over frequency to get the total uncertainty in each parameter estimate.
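Continuing the toy sketch above (same f, inner, g, theta0, lam0, H, dg_dtheta), a minimal version of this bin-wise variant: treat the log-magnitude and phase of the assumed response at each frequency bin as independent parameters, so each dg/dLambda_k has support only at bin k.

g0 = g(theta0, lam0)
Hinv = np.linalg.inv(H)

slope_mag = np.empty((len(theta0), len(f)))
slope_phase = np.empty((len(theta0), len(f)))
for k in range(len(f)):
    dg_dlnA = np.zeros_like(g0)
    dg_dlnA[k] = -g0[k]                   # d g / d lnA_k (response magnitude error at bin k)
    dg_dphi = np.zeros_like(g0)
    dg_dphi[k] = -1j * g0[k]              # d g / d phi_k (response phase error at bin k)
    m_mag = np.array([inner(dg_dlnA, d) for d in dg_dtheta])
    m_phi = np.array([inner(dg_dphi, d) for d in dg_dtheta])
    slope_mag[:, k] = -(Hinv @ m_mag) / theta0    # fractional Theta error per unit lnA error at bin k
    slope_phase[:, k] = -(Hinv @ m_phi) / theta0  # fractional Theta error per rad of phase error at bin k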


This is my first attempt at this problem, so I expect to have made some mistakes. Please let me know if you can point out any. For instance, do the order of magnitude and shape of the error propagation curves make sense? Also, comments/suggestions on the inference of these plots would be helpful.

Finally, I haven't yet tried seeing how these curves change for different true values of the merger event parameters. I'm not yet sure what the best way is to extract general information for a variety of merger parameters.

Future goals are to utilize this information to inform the system identification method, i.e., multicolor calibration scheme parameters like calibration line frequencies and strengths.

Code location

  17011   Mon Jul 18 15:17:51 2022 HangUpdateCalibrationError propagation to astrophysical parameters from detector calibration uncertainty

1. In the error propagation equation, it should be \Delta \Theta = -H^{-1} M \Delta \Lambda, instead of the fractional error. 

2. For the astro parameters, in general you would need t_c for the time of coalescence and \phi_c for the phase. See, e.g., https://ui.adsabs.harvard.edu/abs/1994PhRvD..49.2658C/abstract.

3. Fig. 1 looks very nice to me, yet I don't understand Fig. 3... Why would phase or amplitude uncertainties at 30 Hz affect the tidal deformability? The tide should be visible only > 500 Hz. 

4. For BBH, we don't measure individual spin well but only their mass-weighted sum, \chi_eff = (m_1*a_1 + m_2*a_2)/(m_1 + m_2). If you treat S1z and S2z as free parameters, your matrix is likely degenerate. Might want to double-check. Also, for a BBH, you don't need to extend the signal much higher than \omega ~ 0.4/M_tot ~ 10^4 Hz * (Ms/M_tot). So if the total mass is ~ 100 Ms, then the highest frequency should be ~ 100 Hz. Above this number there is no signal. 

 

  17017   Tue Jul 19 07:34:46 2022 AnchalUpdateCalibrationError propagation to astrophysical parameters from detector calibration uncertainty

Addressing the comments as numbered:

  1. Yeah, that's correct; that equation is normally \Delta \Theta = -\mathbf{H}^{-1} \mathbf{M} \Delta \Lambda, but it is different if I define \Gamma a bit differently, as I did in the code. Correcting my definition of \Gamma to:
    \Gamma_{ij} = \mu_i \mu_j \left( \frac{\partial g}{\partial \mu_i} | \frac{\partial g}{\partial \mu_j} \right )
    the relation between the fractional errors of the detector parameters and the astrophysical parameters becomes:
    \frac{\Delta \Theta}{\Theta} = - \mathbf{H}^{-1} \mathbf{M} \frac{\Delta \Lambda}{\Lambda}
    I prefer this because the relation between fractional errors is a dimensionless way to see it.
  2. Thanks for pointing this out. I didn't see these parameters used anywhere in the examples (in fact there is no t_c in the documentation even though it works). Using them did not affect the shape of the error propagation slope function vs frequency, but it reduced the slope for the chirp mass M_c by a couple of orders of magnitude. (A numerical sketch of this t_c / \phi_c calculation is given after this list.)
    1. I used the get_t_merger(f_gw, M1, M2) function from Hang's work to calculate t_c, taking f_{gw} to be the lowest frequency that enters the detection band during the inspiral. This function is:
      t_c = \frac{5}{256 \pi^{8/3}} \left(\frac{c^3}{G M_c}\right)^{5/3} f_{gw}^{-8/3}
      For my calculations, I've taken f_{gw} as 20 Hz.
    2. I used the get_f_gw_2(f_gw_1, M1, M2, t) function from Hang's work to calculate the evolution of the GW frequency, defined as:
      f_{gw}(t) = \left( f_{gw0}^{-8/3} - \frac{768}{15} \pi^{8/3} \left(\frac{G M_c}{c^3}\right)^{5/3} t \right)^{-3/8}
      where f_{gw0} is the frequency at t=0. I integrated this frequency evolution over the duration t_c to get the coalescence phase \phi_c as:
      \phi_c = \int^{t_c}_0 2 \pi f_{gw}(t) dt
  3. In Fig. 1, which representation makes more sense, the loglog or the linear axis plot? Regarding the effect of uncertainties on the tidal amplitude below 500 Hz, I agree that I was also expecting more contribution from higher frequencies. I did find one bug in my code that I corrected, but it did not affect this point. Maybe the SNR of the chosen BNS parameters (which is ~28) is too low for the tidal information to be recovered reliably, and the curve is just an inverse of the strain noise PSD, i.e., all the information is buried below the statistical noise. Maybe someone else can also take a look at the get_fisher2() function that I wrote to do this calculation.
  4. Now I have set the BBH parameters such that the spins of the two black holes along z are assumed to be the same. You were right, the gamma matrix was degenerate before. To your second point, I think the curve also shows that above ~200 Hz there is not much contribution to the uncertainty of any parameter, and it rolls off very steeply. I've reduced the y-span of the plot to see the details of the curve in the relevant region.
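A numerical sketch of the t_c / \phi_c calculation in item 2 above, using the quoted formulas (get_t_merger mirrors the name from Hang's work, but this is my own re-implementation; get_phi_c is a hypothetical name for the phase integral):

import numpy as np

G = 6.674e-11
c = 2.998e8
Msun = 1.989e30

def chirp_mass(m1, m2):
    # chirp mass in the same units as m1, m2
    return (m1 * m2)**(3 / 5) / (m1 + m2)**(1 / 5)

def get_t_merger(f_gw, m1, m2):
    # time to coalescence from GW frequency f_gw [Hz]; masses in solar masses
    Mc = chirp_mass(m1, m2) * Msun
    return 5 / (256 * np.pi**(8 / 3)) * (c**3 / (G * Mc))**(5 / 3) * f_gw**(-8 / 3)

def get_phi_c(f_gw0, m1, m2, npts=200000):
    # coalescence phase: integrate 2*pi*f_gw(t) from 0 to just below t_c
    Mc = chirp_mass(m1, m2) * Msun
    t_c = get_t_merger(f_gw0, m1, m2)
    K = (768 / 15) * np.pi**(8 / 3) * (G * Mc / c**3)**(5 / 3)
    t = np.linspace(0.0, 0.999 * t_c, npts)          # stop just short of the (integrable) singularity
    f_gw = (f_gw0**(-8 / 3) - K * t)**(-3 / 8)
    return np.trapz(2 * np.pi * f_gw, t)

print(get_t_merger(20.0, 1.3, 1.35), get_phi_c(20.0, 1.3, 1.35))   # BNS case, f_gw0 = 20 Hz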
Quote:

1. In the error propagation equation, it should be \Delta \Theta = -H^{-1} M \Delta \Lambda, instead of the fractional error. 

2. For the astro parameters, in general you would need t_c for the time of coalescence and \phi_c for the phase. See, e.g., https://ui.adsabs.harvard.edu/abs/1994PhRvD..49.2658C/abstract.

3. Fig. 1 looks very nice to me, yet I don't understand Fig. 3... Why would phase or amplitude uncertainties at 30 Hz affect the tidal deformability? The tide should be visible only > 500 Hz.

4. For BBH, we don't measure individual spin well but only their mass-weighted sum, \chi_eff = (m_1*a_1 + m_2*a_2)/(m_1 + m_2). If you treat S1z and S2z as free parameters, your matrix is likely degenerate. Might want to double-check. Also, for a BBH, you don't need to extend the signal much higher than \omega ~ 0.4/M_tot ~ 10^4 Hz * (Ms/M_tot). So if the total mass is ~ 100 Ms, then the highest frequency should be ~ 100 Hz. Above this number there is no signal.

 
