Message ID: 14787     Entry time: Sat Jul 20 14:43:45 2019     In reply to: 14786     Reply to this: 14807
Author: Milind 
Type: Update 
Category: Cameras 
Subject: CNNs for beam tracking || Analysis of results 

<Adding details>

See Attachment #2.

Quote:

Make the MSE a subplot on the same axes as the time series for easier interpretation.

Training dataset:

  1. Peak-to-peak amplitude in physical units: ?
  2. Dither frequency: 0.2 Hz
  3. Video data: zoomed-in video of the beam spot obtained from GigE camera 198.162.113.153 at 500 us exposure time. Each frame has a resolution of 640 x 480, which I have cropped to 350 x 350. Attachment #1 is one such frame.
  4. Yes, therefore I am going to obtain video at lower amplitudes. I think that should help me avoid the problem of the alignment not being at the nominal maximum value?
  5. Other details of the training dataset:
    1. Dataset created from four videos of duration ~ 30, 60, 60, 60 s at 25 FPS.
    2. 4032 training data points
      1. Input (one example/data point): 10 successive frames stacked to form a 3D volume of shape 350 x 350 x 10
      2. Output (2 dimensional vector): QPD readings (C1:IOO-MC_TRANS_PIT_ERR, C1:IOO-MC_TRANS_YAW_ERR)
    3. Pre-processing: none
    4. Shuffling: Dataset was shuffled before every epoch
    5. No thresholding: binary images would be of little use if the expectation is that the network will learn to interpret intensity variations of pixels.
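The frame-stacking in 2.1 and 2.2 above can be sketched as follows. This is a minimal sketch, not the actual dataset code: the function name is mine, and labeling each 10-frame volume with the QPD reading at the latest frame is my assumption.

```python
import numpy as np

def make_examples(frames, qpd, depth=10):
    """Stack `depth` successive frames into one input volume per example.

    frames: array of shape (N, H, W)  -- cropped grayscale frames (here 350 x 350)
    qpd:    array of shape (N, 2)     -- (PIT_ERR, YAW_ERR) sampled per frame
    Returns (X, y) with X of shape (N - depth + 1, H, W, depth).
    """
    n = len(frames) - depth + 1
    # Each volume is `depth` consecutive frames stacked along the last axis.
    X = np.stack([np.stack(frames[i:i + depth], axis=-1) for i in range(n)])
    # Assumption: label each volume with the QPD reading at its latest frame.
    y = qpd[depth - 1:]
    return X, y
```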

Do I need to provide any more details here?

Quote:

Describe the training dataset - what is the pk-to-pk amplitude of the beam spot motion you are using for training in physical units? What was the frequency of the dither applied? Is this using a zoomed-in view of the spot or a zoomed out one with the OSEMs in it? If the excursion is large, and you are moving the spot by dithering MC2, the WFS servos may not have time to adjust the cavity alignment to the nominal maximum value.

?

Quote:

What is the minimum detectable motion given the CCD resolution?

See Attachment #4.
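For reference, the minimum detectable motion scales as one pixel of apparent motion referred back through the imaging system. The numbers below are placeholders only, not measured values for this camera/telescope:

```python
# Rough estimate of minimum detectable spot motion (one-pixel resolution limit).
# Both numbers are placeholder assumptions, not measured parameters of this setup.
pixel_pitch_um = 5.0   # placeholder: physical CCD pixel size in microns
magnification = 0.1    # placeholder: magnification of the imaging telescope

# One pixel of image motion corresponds to this much motion at the optic:
min_motion_um = pixel_pitch_um / magnification
```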

Quote:
  1. Please upload a cartoon of the network architecture for easier visualization. What is the algorithm we are using? Is the approach the same as using the bright point scatterers to signal the beam spot motion that Gabriele demonstrated successfully

 

I wrote a script to check whether the frames are saturated, which should be useful if/when I collect data at higher exposure times. I had assumed there was no saturation in the images because I'd set the exposure value to something low, but I thought it worth verifying. Attachment #3 has a log scale on the x axis.
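The core of such a saturation check can be sketched as below. This is my own minimal version, not the actual script: it just reports the fraction of pixels pinned at the sensor's full-scale value (255 for 8-bit frames).

```python
import numpy as np

def saturation_fraction(frame, full_scale=255):
    """Fraction of pixels at or above the sensor's full-scale value.

    frame: 2D array of pixel intensities (assumed 8-bit, hence 255 default).
    A value near zero suggests the exposure leaves headroom; values well
    above zero indicate clipping that would corrupt intensity information.
    """
    return np.count_nonzero(frame >= full_scale) / frame.size
```

Running this per frame over a video (and histogramming the intensities, as in Attachment #3) shows whether a given exposure time clips the beam spot.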

Quote:

What is the significance of Attachment #6? I think the x-axis of that plot should also be log-scaled.

 

Quote:
  1. Is the performance of the network still good if you feed it a time-shuffled test dataset? i.e. you have (pictures,Xcoord,Ycoord) tuples, which don't necessarily have to be given to the network in a time-ordered sequence in order to predict the beam spot position (unless the network is somehow using the past beam position to predict the new beam position).
  2. Is the time-sync problem Koji raised limiting this approach?

 

Attachment 1: frame0.pdf  88 kB  Uploaded Sat Jul 20 15:50:01 2019
Attachment 2: subplot_yaw_test.pdf  30 kB  Uploaded Sat Jul 20 17:28:36 2019
Attachment 3: intensity_histogram.mp4  264 kB  Uploaded Sat Jul 20 18:01:34 2019
Attachment 4: network2.pdf  12 kB  Uploaded Tue Jul 23 22:05:28 2019