Message ID: 14697
Entry time: Tue Jun 25 22:14:10 2019
In reply to: 14694
Reply to this: 14706

Author: Milind
Type: Update
Category: Cameras
Subject: Convolutional neural networks for beam tracking
I discussed this with Gautam and he asked me to come up with a list of signals that I would need and then design the data acquisition task at a high level before proceeding. I'm working on that right now. We came up with a very elementary sketch of what the script will do (a rough code sketch follows the list):
- Check that the MC is locked.
- Choose an exposure value.
- Choose a frequency and amplitude for the applied sinusoidal dither (see the warning from Gabriele below).
- Apply the sinusoidal dither to the optic.
- Timestamping: record the GPS time, the instantaneous channel values and a frame. These frames can later be put together in a sequence and a network trained on them. (NEED TO COME UP WITH SOMETHING CLEVERER THAN THIS!)
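A minimal sketch of what such an acquisition loop might look like, assuming ezca for channel access and pypylon for grabbing frames from the GigE; the channel names, lock threshold, output format and the dither step are placeholders and not the final design:

# Rough sketch only: channel names, thresholds and the dither mechanism are
# assumptions; the real script will use awg/cdsutils for the excitation.
import time
import numpy as np
from ezca import Ezca            # EPICS channel access wrapper
from pypylon import pylon        # Basler GigE camera interface

MC_TRANS_CHANNEL = 'IOO-MC_TRANS_SUM'    # assumed channel for the MC lock check
LOCK_THRESHOLD = 1.0e4                   # assumed transmission level when locked

ez = Ezca('C1')

def mc_is_locked():
    # Crude lock check: MC transmission above an assumed threshold
    return ez.read(MC_TRANS_CHANNEL) > LOCK_THRESHOLD

def grab_frame(camera):
    # Grab a single frame from the GigE camera as a numpy array
    result = camera.GrabOne(1000)        # 1 s timeout
    frame = result.Array
    result.Release()
    return frame

camera = pylon.InstantCamera(pylon.TlFactory.GetInstance().CreateFirstDevice())
camera.Open()

if mc_is_locked():
    # TODO: set the exposure and start the sinusoidal dither on the optic (awg),
    # then log (timestamp, channel values, frame) tuples at a fixed rate.
    for _ in range(100):
        t = time.time()                          # placeholder; the real script records GPS time
        pit = ez.read('SUS-MC2_ASCPIT_OUT16')    # assumed readback channel for the dithered optic
        frame = grab_frame(camera)
        np.savez('frame_%.3f.npz' % t, t=t, pit=pit, frame=frame)
        time.sleep(0.1)

camera.Close()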
Tomorrow I will try to prepare a dummy script for this before the meeting at noon. Gautam asked me to familiarize myself with awg and cdsutils (I have already used ezca before) to write the script. This will also help me with the following two tasks:
- the IFO test scripts that Rana asked me to work on a while ago
- the PMC autolocker scripts that Rana asked me to work on
Quote:
Upcoming work (in order of priority):
- Data acquisition: With the mode cleaner locked and Kruthi having focused onto the beam spot, I will obtain data for training both the GANs and the convolutional networks. I really hope that some of the work done above can be extended to the new data. Rana suggested that I automate this by writing a script, which I will do after a discussion with Gautam tomorrow.
I got to speak to Gabriele about the project today. He suggested that if I use Rana's memory-based approach, I had better be careful to ensure that the network does not falsely learn to predict a sinusoid at all points in time, and that if I use the frame-wise approach, I should try to somehow incorporate the fact that certain magnitudes and frequencies of motion are simply not physically possible (a toy example of such a constraint is sketched below). Rana and Gautam emphasized this as well.
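One illustrative way (not a decided design) of baking such a physical bound into frame-wise predictions is to scale a tanh output by an assumed maximum plausible displacement, so the network simply cannot predict unphysically large motion; the bound and the toy architecture below are assumptions:

# Toy sketch: bound the predicted beam-spot displacement to +/- MAX_DISP_PX pixels
# by scaling a tanh output. MAX_DISP_PX is an assumed bound, not a measured one.
from tensorflow.keras import layers, models

MAX_DISP_PX = 10.0   # assumed maximum plausible spot motion in pixels

inp = layers.Input(shape=(64, 64, 1))                 # one pre-processed frame
x = layers.Conv2D(16, 3, activation='relu')(inp)
x = layers.MaxPooling2D()(x)
x = layers.Conv2D(32, 3, activation='relu')(x)
x = layers.GlobalAveragePooling2D()(x)
raw = layers.Dense(2, activation='tanh')(x)           # (dx, dy) in [-1, 1]
out = layers.Lambda(lambda t: MAX_DISP_PX * t)(raw)   # scaled to the physical range
model = models.Model(inp, out)
model.compile(optimizer='adam', loss='mse')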
I am pushing the code that I wrote for
- Kruthi's exposure variation / CCD calibration experiment
- the modified camera_client_movie.py code (currently at /opt/rtcds/caltech/c1/scripts/GigE/SnapPy_pypylon)
- interact.py (to interact with the GigE in viewing or recording mode) (currently at /opt/rtcds/caltech/c1/scripts/GigE/SnapPy_pypylon)
to the GigEcamera repository.
Gautam also asked me to look at Jigyasa's report and elog 13443 to come up with the specs of a machine that would accommodate a dedicated camera server.
Quote:
- Network training for beam spot tracking: I will begin training the convolutional network with the data pre-processed as described above. I will also simultaneously prepare data acquired from the GigE and train networks on that. Note: I planned to experiment with frame-wise predictions and hence did some of the work described above. However, I will restrict the number of experiments on that and perform more of those that use 3D convolution. Rana also pointed out that it would be interesting to have the network output an uncertainty in its predictions. I am not sure how this can be done, but I will look into it.
- Cleaning up / formalizing code: Rana pointed out that any code that messes with channel values must return them to their original settings once the script is finished running. I have overlooked this and will add code to do this to all the files I have created thus far. Further, while most of my code is well documented and frequently pushed to GitHub, I will make sure to push any code that I might have missed.
- Talk to Jon!: Gautam suggested that I speak to Jon about the machine requirements for setting up a dedicated machine to run the camera server and about connecting the GigE to a monitor now that we have a feed. Koji also suggested that I talk to him about somehow figuring out the hardware to ensure that the GigE clock is the same as the rest of the system.
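Regarding the quoted point about having the network output an uncertainty in its predictions: one common approach (not something decided for this project) is to have the network predict both a mean and a log-variance for each output and train with a Gaussian negative log-likelihood, so the variance acts as a learned uncertainty. A minimal sketch with an assumed toy architecture:

# Illustrative only: predict (dx, dy) plus a per-output log-variance and train
# with a Gaussian negative log-likelihood. Architecture and shapes are placeholders.
import tensorflow as tf
from tensorflow.keras import layers, models

def gaussian_nll(y_true, y_pred):
    # y_pred packs [mean (2), log_variance (2)] for the (dx, dy) targets
    mean, log_var = y_pred[:, :2], y_pred[:, 2:]
    return tf.reduce_mean(0.5 * (log_var + tf.square(y_true - mean) * tf.exp(-log_var)))

inp = layers.Input(shape=(64, 64, 1))
x = layers.Conv2D(16, 3, activation='relu')(inp)
x = layers.MaxPooling2D()(x)
x = layers.Conv2D(32, 3, activation='relu')(x)
x = layers.GlobalAveragePooling2D()(x)
mean = layers.Dense(2)(x)         # predicted beam-spot displacement
log_var = layers.Dense(2)(x)      # predicted log-variance (uncertainty)
out = layers.Concatenate()([mean, log_var])

model = models.Model(inp, out)
model.compile(optimizer='adam', loss=gaussian_nll)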