40m Log
Entry  Thu Sep 20 08:50:14 2012, Masha, Update, MachineLearning, Machine Learning Update 
    Reply  Thu Sep 20 22:52:38 2012, Den, Update, MachineLearning, Feedback controller 
       Reply  Fri Nov 2 13:20:35 2012, Masha, Update, MachineLearning, Feedback controller (attachments: standard_BATCH_0p35_ref_plant_lc.png, standard_QUICKPROP_0p35_ref_plant_lc.png, standard_RPROP_0p35_ref_plant_lc.png, standard_INCREMENTAL_0p35_ref_plant_lc.png, standard_INCREMENTAL_0p9_0p9_ref_plant_lc.png)
Message ID: 7661     Entry time: Fri Nov 2 13:20:35 2012     In reply to: 7424
Author: Masha 
Type: Update 
Category: MachineLearning 
Subject: Feedback controller 



Quote:

I have uploaded a directory, neural_plant, to my directory. The most important file is reference_plant.c, which compiles with the command

 We would appreciate some plots. Learning curves of a recurrent NN working as a plant would be interesting. For a harmonic oscillator, your RNN should not contain any hidden layers: only 1 input node and 1 output node, with 2 delays at each. The activation function should be linear. If your code is correct, this configuration will match the oscillator perfectly. The question is how long it takes to adapt.

Does FANN support regularization? I think this would make your controller more stable. Try more advanced algorithms than gradient descent for adaptation; they will increase convergence speed. For example, look at the fminunc function in Matlab.

Hi everyone,

I've been on break this week, so in addition to working at my lab here, I've done some NN work. In response to Den's reply to my last post, I've added learning-curve plotting.

I've explored all of the currently documented capabilities of FANN (Fast Artificial Neural Network, a C library). Most likely there are additions to the library floating around in open-source communities, but I have yet to look into those. There is extensive documentation on the FANN website (http://leenissen.dk/fann/html/files/fann-h.html), but I'll cut it down to the basics here:

FANN Neural Network Architectures

standard: This creates a fully connected network. Useful for small networks, as in the reference plant case.

sparse: This creates a sparsely connected network (not all connections between neurons exist). Useful for large networks, but not for the reference plant case, since the number of neurons is relatively small.

shortcut: This creates connections that skip over hidden layers. Not useful in the harmonic oscillator case, since there are no hidden layers, and probably not useful in a better-modeled reference plant either, since shortcut connections reduce the non-linear capacity of the model.

FANN Training  

TRAIN_INCREMENTAL: updates the weights after every training pattern, rather than after each epoch. This is faster than the other algorithms for the reference plant.

TRAIN_BATCH: updates the weights after training on the whole set. This should not be used on batches of data for the reference plant, since the time-history dependence of the plant is much shorter than the entire data set.

TRAIN_RPROP: a batch training algorithm (resilient backpropagation) that adapts the step size of each weight individually instead of using a single fixed learning parameter.

TRAIN_QUICKPROP: adapts the learning parameter and uses second-derivative information, instead of just the first derivative, for backpropagation.

FANN Activation Functions

FANN offers a number of activation functions, including FANN_ELLIOT, which is essentially the "sigmoid-like" activation function Den and I used this summer; it can be computed with only multiplications, additions, and a division (no exponential), so it is fast. The steepness parameter of each activation function can also be set.

FANN Parameters

As usual, the learning parameter can be set. Over the summer we worked with lower learning parameters, but for the harmonic-oscillator reference plant, since the error is already low after the first iteration, higher learning parameters (0.9, for example) work better. This is a very isolated case, however; in general, lower parameters produce better results, even though convergence is slower.

The learning momentum is another parameter that can be set: the momentum factor is a coefficient in the weight-update equation that mixes the previous weight change into the current one. For the reference plant, a high learning momentum (0.9) is optimal, although in most cases a lower momentum is better so that the learning curve doesn't oscillate badly.


FANN does not explicitly include regularization, but a form of early stopping can be implemented by checking the MSE at each iteration against the MSE n iterations earlier, where n is a window parameter, and stopping training if there is no significant decrease (the threshold is also a parameter). The error bound I specified during training was 0.0001.

The best result for the reference plant was obtained using FANN_TRAIN_INCREMENTAL, a "standard" architecture, a learning rate of 0.9 (as explained above), and a learning momentum of 0.9 (these values should NOT be used for highly non-linear, more complicated systems).

I have included plots of the learning curves; each title gives the architecture, the learning algorithm, the learning parameter, and, where I set it explicitly, the learning momentum.

All of my code (and more plots!) can be found in /users/masha/neural_plant

On the whole, FANN has rather limited capabilities, especially in terms of learning algorithms, of which it has only 4 (plus all of the parameter and rate adjustments one can make). It is, however, much more intuitive to code with, and faster, than the Matlab NN library, although the latter has more algorithms. I'll browse around for more open-source packages.




