40m Log
Message ID: 14697     Entry time: Tue Jun 25 22:14:10 2019     In reply to: 14694     Reply to this: 14706
Author: Milind 
Type: Update 
Category: Cameras 
Subject: Convolutional neural networks for beam tracking 

I discussed this with Gautam, who asked me to come up with a list of the signals I will need and to design the data-acquisition task at a high level before proceeding. I'm working on that right now. We came up with an elementary sketch of what the script will do:

  1. Check that the MC is locked.
  2. Choose an exposure value.
  3. Choose a frequency and amplitude for the applied sinusoidal dither (see Gabriele's warning below).
  4. Apply sinusoidal dither to optic.
  5. Timestamping: record the GPS time, the instantaneous channel values, and a frame. These frames can later be assembled into a sequence and a network trained on it. (NEED TO COME UP WITH SOMETHING CLEVERER THAN THIS!)

Tomorrow I will try to prepare a dummy script for this before the meeting at noon. Gautam asked me to familiarize myself with awg and cdsutils (I have already used ezca) in order to write the script. This will also help me with the following two tasks:

  1. The IFO test scripts that Rana asked me to work on a while ago
  2. The PMC autolocker scripts that Rana asked me to work on
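A minimal sketch of what the dummy acquisition script could look like. This is my own illustration, not the real script: the ezca/awg/pypylon calls are stubbed out so the control flow can be checked standalone, and the channel name, threshold, and GPS times are placeholders.

```python
# Hypothetical sketch of the acquisition loop (steps 1-5 above).
# A real version would use ezca for channel reads, awg for the dither,
# and SnapPy/pypylon for frames; those are stubbed here.

MC_LOCK_CHANNEL = "C1:IOO-MC_TRANS_SUM"  # placeholder lock indicator
LOCK_THRESHOLD = 1.0e4                   # placeholder threshold

def read_channel(chan):
    """Stand-in for an ezca channel read; returns a stub value."""
    return 2.0e4

def mc_is_locked():
    """Step 1: check that the MC is locked."""
    return read_channel(MC_LOCK_CHANNEL) > LOCK_THRESHOLD

def acquire(exposure_us, dither_freq_hz, dither_amp_cts, n_frames):
    """Steps 2-5: set exposure, apply dither, record timestamped data."""
    if not mc_is_locked():
        raise RuntimeError("MC not locked; aborting acquisition")
    # set_exposure(exposure_us)                    # via SnapPy/pypylon
    # start_dither(dither_freq_hz, dither_amp_cts) # via awg
    records = []
    for i in range(n_frames):
        records.append({
            "gpstime": 1245000000 + i,  # placeholder; use a real GPS clock
            "channels": {MC_LOCK_CHANNEL: read_channel(MC_LOCK_CHANNEL)},
            "frame_index": i,           # stand-in for the captured frame
        })
    # stop_dither() and restore all original channel settings here
    return records
```

The restore-on-exit step at the end is there because of Rana's point below that scripts must return channels to their original settings.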
Quote:
 

Upcoming work (in the order of priority):

  1. Data acquisition: With the mode cleaner locked and Kruthi having focused the beam spot, I will obtain data for training both the GANs and the convolutional networks. I hope that some of the work done above can be extended to the new data. Rana suggested that I automate this with a script, which I will write after a discussion with Gautam tomorrow.

 

I got to speak to Gabriele about the project today. He suggested that if I use Rana's memory-based approach, I should take care that the network does not falsely learn to predict a sinusoid at all points in time; and that if I use the frame-wise approach, I should somehow incorporate the fact that certain magnitudes and frequencies of motion are simply not physically possible. Rana and Gautam emphasized this as well.
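One way the physical-plausibility constraint could be encoded (my own sketch, not something we have settled on): measure how much of the predicted motion's power falls outside a plausible frequency band, and penalize it in the training loss. The band edges and frame rate below are illustrative, not measured values.

```python
import numpy as np

def out_of_band_power(pred, fs, f_lo, f_hi):
    """Fraction of predicted-motion power outside [f_lo, f_hi] Hz.

    pred: 1-D array of predicted beam-spot positions, one per frame
    fs:   camera frame rate in Hz
    The returned fraction could be added (weighted) to the training
    loss to discourage physically impossible predicted motion.
    """
    spec = np.abs(np.fft.rfft(pred - np.mean(pred))) ** 2
    freqs = np.fft.rfftfreq(len(pred), d=1.0 / fs)
    total = spec.sum()
    if total == 0:
        return 0.0
    in_band = spec[(freqs >= f_lo) & (freqs <= f_hi)].sum()
    return float(1.0 - in_band / total)
```

For example, a pure 0.2 Hz sinusoid sampled at 25 Hz gives a value near 0 for the band 0.1-1 Hz and near 1 for the band 1-5 Hz.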

I am pushing the code that I wrote for

  1. Kruthi's exposure-variation / CCD-calibration experiment
  2. modified camera_client_movie.py code (currently at /opt/rtcds/caltech/c1/scripts/GigE/SnapPy_pypylon)
  3. interact.py (to interact with the GigE in viewing or recording mode) (currently at /opt/rtcds/caltech/c1/scripts/GigE/SnapPy_pypylon)

to the GigEcamera repository.

 

Gautam also asked me to look at Jigyasa's report and elog 13443 to come up with the specs of a machine that would accommodate a dedicated camera server.

 

Quote:
 
  1. Network training for beam spot tracking: I will begin training the convolutional network on the data pre-processed as described above. I will simultaneously prepare data acquired from the GigE and train networks on that. Note: I planned to experiment with framewise predictions and hence did some of the work described above; however, I will limit those experiments and run more that use 3D convolution. Rana also pointed out that it would be interesting to have the network output the uncertainty in its predictions. I am not sure how this can be done, but I will look into it.
  2. Cleaning up / formalizing code: Rana pointed out that any code that changes channel values must restore them to their original settings once the script finishes running. I have overlooked this and will add such code to all the files I have created thus far. Further, while most of my code is well documented and frequently pushed to GitHub, I will make sure to push any code that I might have missed.
  3. Talk to Jon!: Gautam suggested that I speak to Jon about the machine requirements for a dedicated camera server and about connecting the GigE to a monitor now that we have a feed. Koji also suggested that I ask him about the hardware needed to ensure that the GigE clock is synchronized with the rest of the system.

 

 
