Message ID: 14694     Entry time: Tue Jun 25 00:25:47 2019     In reply to: 14682     Reply to this: 14697   14809
Author: Milind 
Type: Update 
Category: Cameras 
Subject: Convolutional neural networks for beam tracking 

In the previous meeting, Koji pointed out (once again) that I should determine whether the displacement values and frames are synchronized before training a network; Pooja looked into this previously. Koji also suggested that I first predict the motion (a series of x and y coordinates) and then slide the resulting plots around until I get the best match with the original motion. This is not possible with a neural-network-based approach, however: the network learns exactly what it is shown, so it will learn any mismatch between the labels and the frames and predict exactly that. I therefore came up with what Koji described as a "hacky" method to achieve the same thing using the OpenCV work described previously in this elog (the only addition being a mask to block out the OSEMs and work only with the beam spot).

Hacky technique to sync frames and labels:

  1. I ran the OpenCV algorithm on the data to obtain the plot of predicted motion in Attachment #2. As is evident, the predicted motion is only an approximation of the actual motion and also displays a shift. However, a plot of the Fourier transform of the signal (see Attachment #1) shows that the same frequency components are present in both, although the predominant component is 0.22 Hz rather than the 0.2 Hz stated by Pooja in her elog; I wonder if this is of any consequence. The predicted motion can therefore be slid around until it overlaps "well" with the applied sinusoidal dither signal.
  2. Defining "well": I computed an error signal as the difference between the predicted signal and the actual motion, with each signal normalized by subtracting its mean and then dividing by its maximum value (see Attachment #3). The lower the power of the resulting signal, the better the synchronization of the predicted and actual signals. Note: to achieve this overlap, datapoints are removed from either the start or the end of the signals, which effectively reduces the number of data points available for training by 36 (see Attachment #4; positive and negative shifts merely indicate whether the predicted signal is moved right or left). A minimal sketch of this shift search follows the list.
  3. Attachments #5 and #6 show the results of shifting the data by 36 samples. It is evident that the overlap between the prediction and the actual values is far greater.
  4. Well, what now? I will use the mapping between labels and frames obtained by the above steps to train a neural network.
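
A minimal sketch of the shift search in steps 2 and 3, assuming predicted and actual are equal-length 1-D NumPy arrays sampled at the same rate (the function names and the max_shift bound are illustrative, not the exact script used):

import numpy as np

def normalise(x):
    # Step 2: subtract the mean, then scale by the peak absolute value.
    x = x - np.mean(x)
    return x / np.max(np.abs(x))

def best_shift(predicted, actual, max_shift=50):
    # Slide the predicted signal over the actual one and return the integer
    # shift (in samples) that minimises the mean power of the residual.
    # Positive shifts move the predicted signal to the right.
    p = normalise(np.asarray(predicted, dtype=float))
    a = normalise(np.asarray(actual, dtype=float))
    best_s, best_power = 0, np.inf
    for s in range(-max_shift, max_shift + 1):
        if s >= 0:
            residual = p[:len(p) - s] - a[s:]
        else:
            residual = p[-s:] - a[:s]
        power = np.mean(residual ** 2)
        if power < best_power:
            best_s, best_power = s, power
    return best_s, best_power

For the y-motion data above, this search bottoms out at a shift of 36 samples, i.e. the minimum visible in Attachment #4.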

[Koji, Milind - 21/06/2019]

  1. Well, the above is fine, but why is contour detection really necessary? Why not take a weighted sum of all the pixel values (in a rectangular region obtained, say, after blocking out the OSEMs) to see what the centroid motion is? Black areas (zero pixel intensity) contribute nothing to this sum anyway. Perhaps that could be used for the sliding instead of the above (fallible!) approach, especially for cases in which the beam "spot" is just a collection of random speckles (a sketch of this centroid computation follows the list).
    1. Something like this was done by Pooja where she computed the sum of pixel intensities in a rectangular region containing the beam spot. However, she did this for very noisy data and observed intensity variation at a frequency double that of the applied signal.
    2. Results of applying a median filter and doing the same are presented in Attachment #7. Clearly, they can't be used for this sliding task.
    3. Results of computing the weighted sum of all the coordinates (with pixel intensities as the weights) are presented in Attachment #8. Clearly, for this data and this task, the contour approach is the better method. Further, these results serve to prove Rana's point that such simple, unsophisticated, naive approaches will not produce the desired results; they shall therefore be presented in this very context in the report that is due.
  2. The contour detection technique does not work if the beam spot is just a collection of speckles. In that case Koji suggested using a bounding convex hull instead of a contour. Alternatively, for a bunch of speckles I can first perform dilation to reduce it to the same problem (see the second sketch below the list).
  3. Using gpstime for time stamping: to determine the absolute time at which a frame is grabbed. However, the delay between recording the time and grabbing the frame needs to be determined for this, which should be doable using Linux/Python commands.
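
A minimal sketch of point 1's weighted-sum centroid (with the optional median filter from point 1.2) and of the dilate-then-hull idea from point 2. This assumes 8-bit grayscale frames; the mask and all kernel/threshold values are illustrative:

import numpy as np
import cv2

def weighted_centroid(frame, mask=None, median_ksize=None):
    # Intensity-weighted mean of the pixel coordinates. mask is an optional
    # boolean array that zeroes out the OSEM regions; median_ksize, if given,
    # applies a median filter first (must be an odd integer).
    img = frame.copy()
    if median_ksize is not None:
        img = cv2.medianBlur(img, median_ksize)
    img = img.astype(np.float64)
    if mask is not None:
        img = img * mask
    total = img.sum()
    ys, xs = np.indices(img.shape)
    return (xs * img).sum() / total, (ys * img).sum() / total

def speckle_hull(frame, thresh=20):
    # Point 2: when the spot is a collection of speckles, threshold, dilate
    # to merge the speckles, then take the convex hull of what remains.
    _, binary = cv2.threshold(frame, thresh, 255, cv2.THRESH_BINARY)
    dilated = cv2.dilate(binary, np.ones((5, 5), np.uint8), iterations=2)
    points = cv2.findNonZero(dilated)
    return cv2.convexHull(points) if points is not None else None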
Quote:

Worked further on this. I skimmed through a few resources to look for details of what pre-processing can be done. Here (I am planning to convert all these resources, particularly those I come across for GANs, into either a README on the repo or a wiki soon) are some of the useful things I found during today's reading. The work I skimmed through today mostly pointed to the use of a median filter for pre-processing, if any is to be done. I am presently using the Sequential() API in Keras to set up the neural network. I will train it tomorrow.

 


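As a reference for the quoted plan, here is a minimal sketch of a Sequential() model of the kind described there, assuming 128x128 grayscale frames in and an (x, y) displacement out; the layer sizes are placeholders, not the actual architecture:

from keras.models import Sequential
from keras.layers import Conv2D, MaxPooling2D, Flatten, Dense

model = Sequential([
    Conv2D(16, (3, 3), activation='relu', input_shape=(128, 128, 1)),
    MaxPooling2D((2, 2)),
    Conv2D(32, (3, 3), activation='relu'),
    MaxPooling2D((2, 2)),
    Flatten(),
    Dense(64, activation='relu'),
    Dense(2),  # linear output: predicted (x, y) displacement
])
model.compile(optimizer='adam', loss='mse')
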
Upcoming work (in the order of priority):

  1. Data acquisition: with the mode cleaner locked and Kruthi having focused the camera on the beam spot, I will obtain data for training both the GANs and the convolutional networks. I really hope that some of the work done above carries over to the new data. Rana suggested that I automate this by writing a script, which I will do after a discussion with Gautam tomorrow.
  2. Network training for beam-spot tracking: I will begin training the convolutional network with the data pre-processed as described above. I will also simultaneously prepare data acquired from the GigE and train networks on that. Note: I planned to experiment with frame-wise predictions and hence did some of the work described above; however, I will restrict the number of those experiments and perform more that use 3D convolutions. Rana also pointed out that it would be interesting to have the network output the uncertainty in its predictions. I am not sure how this can be done, but I will look into it.
  3. Simulation:
    1. Putting the physics in: previously, I worked on adding point scatterers. Next I shall add the effect of surface roughness and incorporate the BRDF. Just as Gautam did, Rana also recommended that I go through Hiro Yamamoto's work to improve my understanding of this.
    2. GANs: I will put together a README (which I will turn into a wiki later) for all the material I am using to develop my ideas about GAN training. Currently, my understanding is that a GAN's generator network takes a noise vector as input and produces the fakes (a toy sketch follows this list); this clearly isn't the only arrangement, as GANs are used for several applications such as image generation from text. I am referring to these papers to set up the necessary architecture.
  4. PMC autolocker: I will convert the existing autolocker script to Python. Rana also suggested that it would be interesting to see which hyperparameter settings lock the PMC the fastest. I will write a script for this and plot a 3D surface of the average time taken to lock the PMC as a function of the PZT scan speed and the servo gain, to determine the optimal setting of these "hyperparameters" (a sketch of such a scan follows this list).
  5. Cleaning up/formalizing code: Rana pointed out that any code that changes channel values must return them to their original settings once the script finishes running. I have overlooked this and will add such code to all the files I have created thus far (see the context-manager sketch after this list). Further, while most of my code is well documented and frequently pushed to GitHub, I will make sure to push any code I might have missed.
  6. Talk to Jon!: Gautam suggested that I speak to Jon about the machine requirements for setting up a dedicated machine to run the camera server, and about connecting the GigE to a monitor now that we have a feed. Koji also suggested that I talk to him about figuring out the hardware needed to ensure that the GigE clock is synchronized with the rest of the system.
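
For points 4 and 5, a sketch of the hyperparameter scan with channel restoration, assuming pyepics for channel access; the channel names and the try_lock() routine are placeholders, not the real PMC channels or autolocker:

import time
from contextlib import contextmanager

import numpy as np
from epics import caget, caput  # pyepics

# Placeholder channel names -- the real PMC channels differ.
SCAN_SPEED = 'C1:PSL-PMC_RAMP_SPEED'
SERVO_GAIN = 'C1:PSL-PMC_GAIN'

@contextmanager
def restore_channels(*channels):
    # Point 5: save the current channel values and put them back when the
    # script exits, even if it fails partway through.
    saved = {ch: caget(ch) for ch in channels}
    try:
        yield
    finally:
        for ch, val in saved.items():
            caput(ch, val)

def try_lock():
    # Placeholder for one autolocker attempt (sweep the PZT, engage the
    # servo, wait for the transmission to settle).
    raise NotImplementedError

def mean_lock_time(speed, gain, n_trials=5):
    # Average time to acquire lock at one (scan speed, servo gain) setting.
    with restore_channels(SCAN_SPEED, SERVO_GAIN):
        caput(SCAN_SPEED, speed)
        caput(SERVO_GAIN, gain)
        times = []
        for _ in range(n_trials):
            t0 = time.time()
            try_lock()
            times.append(time.time() - t0)
    return np.mean(times)

# Grid over the two "hyperparameters" for the 3D surface plot of point 4.
speeds = np.linspace(0.1, 2.0, 10)
gains = np.linspace(1.0, 30.0, 10)
surface = np.array([[mean_lock_time(s, g) for g in gains] for s in speeds])

And a toy illustration of the noise-vector-to-fake flow from point 3.2 (sizes arbitrary): the generator is just a network mapping a noise vector to an image-shaped output; the discriminator and training loop are omitted.

from keras.models import Sequential
from keras.layers import Dense, Reshape

generator = Sequential([
    Dense(128, activation='relu', input_shape=(100,)),  # 100-D noise vector
    Dense(64 * 64, activation='tanh'),
    Reshape((64, 64, 1)),  # fake "frame"
])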

 

Attachment 1: Spectra.pdf (13 kB, uploaded Mon Jun 24 14:07:37 2019)
Attachment 2: normalised_comparison_y.pdf (21 kB, uploaded Mon Jun 24 14:15:19 2019)
Attachment 3: residue_normalised_y.pdf (20 kB, uploaded Mon Jun 24 14:15:26 2019)
Attachment 4: error_power_sliding.pdf (12 kB, uploaded Mon Jun 24 15:12:46 2019)
Attachment 5: normalised_comparison_y.pdf (21 kB, uploaded Mon Jun 24 15:16:56 2019)
Attachment 6: residue_normalised_y.pdf (20 kB, uploaded Mon Jun 24 15:17:40 2019)
Attachment 7: intensum.pdf (59 kB, uploaded Mon Jun 24 16:50:05 2019)
Attachment 8: centroid.pdf (64 kB)