Message ID: 14760
Entry time: Mon Jul 15 14:09:07 2019
In reply to: 14734
Reply to this: 14779
Author: Milind
Type: Update
Category: Cameras
Subject: CNN LSTM for beam tracking

I've set up a network with a CNN encoder (front end) feeding into a single LSTM cell, followed by the output layer (see attachment #1). This network requires significantly more memory than the previous ones and takes around 30 s per epoch of training. Attached are the predicted yaw motion and its FFT; the FFT looks rather curious. I haven't done any tuning yet, so these are only preliminary results.
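For reference, a minimal sketch of what this architecture looks like, assuming Keras; the layer widths, sequence length and frame size below are placeholders, not the values I actually used:

from tensorflow.keras import layers, models

seq_len, h, w = 10, 350, 350          # placeholders: frames per sample, frame size

# CNN encoder (front end) applied to a single frame
cnn = models.Sequential([
    layers.Conv2D(8, 3, activation='relu', input_shape=(h, w, 1)),
    layers.MaxPooling2D(4),
    layers.Conv2D(16, 3, activation='relu'),
    layers.MaxPooling2D(4),
    layers.Flatten(),
])

model = models.Sequential([
    layers.TimeDistributed(cnn, input_shape=(seq_len, h, w, 1)),  # encode each frame
    layers.LSTM(32),                  # single LSTM layer
    layers.Dense(1),                  # predicted yaw displacement
])
model.compile(optimizer='adam', loss='mse')

The TimeDistributed wrapper runs the same CNN encoder over every frame of a short sequence, and the LSTM integrates the encoded frames over time before the dense layer outputs the yaw estimate.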
Quote:
Rana also suggested I try LSTMs today. I'll maybe code it up tomorrow. What I have in mind: a conv layer encoder, flatten, followed by an LSTM layer (why not plain RNNs? Well, LSTMs handle vanishing gradients, so why put up with that hassle?).
Well, what about the previous conv nets?
What I did:
- Extensive tuning of the learning rate, batch size, dropout ratio and input size using a grid search (a rough sketch of the search loop follows this list)
- Trained each network for 75 epochs and obtained the weights, predicted motion, corresponding FFT, error, etc.
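Roughly, the grid search had the structure sketched below; build_model and the training/validation arrays are hypothetical placeholders standing in for the actual training code:

import itertools

learning_rates = [1e-4, 1e-3]
batch_sizes    = [32, 64]
dropouts       = [0.0, 0.3]
input_sizes    = [128, 350]

results = {}
for lr, bs, dr, sz in itertools.product(learning_rates, batch_sizes, dropouts, input_sizes):
    # build_model is a hypothetical helper; x_train etc. are assumed to be resized to sz elsewhere
    model = build_model(input_size=sz, dropout=dr, learning_rate=lr)
    hist = model.fit(x_train, y_train, batch_size=bs, epochs=75,
                     validation_data=(x_val, y_val), verbose=0)
    results[(lr, bs, dr, sz)] = min(hist.history['val_loss'])

best = min(results, key=results.get)
print('best (lr, batch size, dropout, input size):', best, results[best])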
What I observed:
- Loss curves look okay and the validation loss isn't going up, so I don't think overfitting is the issue
- Training for more than (or even up to) 75 epochs seems to be pointless (see the early-stopping sketch below)
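If training beyond ~75 epochs really buys nothing, one option (assuming Keras; model and the data arrays are placeholders from the training script) is to stop on a plateau in the validation loss:

from tensorflow.keras.callbacks import EarlyStopping

# stop when the validation loss stops improving and keep the best weights seen so far
early_stop = EarlyStopping(monitor='val_loss', patience=10, restore_best_weights=True)
model.fit(x_train, y_train, epochs=75, batch_size=64,
          validation_data=(x_val, y_val), callbacks=[early_stop])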
What I think is going wrong:
- Input size: the inputs are relatively large at 350 x 350, whereas the input image size here seems to be 128 x 128.
- Inadequate pre-processing (a sketch of what I have in mind follows this list):
  - I have not applied any filters/blurs etc. to the frames.
  - I have also not tried dimensionality reduction techniques such as PCA.
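For concreteness, the pre-processing I have in mind would look roughly like the sketch below (OpenCV Gaussian blur plus a PCA projection of the flattened frames; the kernel size and number of components are guesses, not tuned values):

import cv2
import numpy as np
from sklearn.decomposition import PCA

def preprocess(frames, n_components=64):
    """frames: (N, H, W) uint8 array of camera frames; returns PCA features and the fitted PCA."""
    blurred = np.stack([cv2.GaussianBlur(f, (5, 5), 0) for f in frames])  # light Gaussian blur
    flat = blurred.reshape(len(frames), -1).astype(np.float32) / 255.0    # flatten and scale
    pca = PCA(n_components=n_components)                                  # linear dim. reduction
    return pca.fit_transform(flat), pca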
What I will try now:
- Collect new data: with smaller amplitudes and different frequencies
- Tune the LSTM network for the data I have
- Try new CNN architectures with more aggressive max pooling and fewer parameters
- Ensembling the models (see this and this). Right now, I have multiple models trained either with the same architecture and different hyperparameters or with different architectures. As a first pass, I intend to average the predictions of all the models and see if that improves performance (sketched below).
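The first pass at ensembling would just be an unweighted average of the saved models' predictions, along the lines of the sketch below (the file names and test array are placeholders, and averaging directly like this only works if the models share an input size):

import numpy as np
from tensorflow.keras.models import load_model

model_paths = ['model_a.h5', 'model_b.h5', 'model_c.h5']       # hypothetical file names
preds = [load_model(p).predict(x_test) for p in model_paths]   # x_test: held-out frames
ensemble_pred = np.mean(preds, axis=0)                         # simple unweighted average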