Hi everyone, I've been on break this week, so in addition to working at my lab here, I've done some NN stuff. In response to Den's response to my last post, I've included learning-curve plotting capabilities, and I've explored all of the currently documented capabilities of FANN.
FANN offers three ways to create a network (a minimal creation sketch follows this list):

standard: creates a fully connected network; useful for small networks, as in the reference plant case.

sparse: creates a sparsely connected network (not all of the connections between all neurons exist at all times); useful for large networks, but not for the reference plant, since the number of neurons is relatively small.

shortcut: creates some connections which skip over various hidden layers. Not useful in the harmonic oscillator case, since there are no hidden layers, and probably not useful in a better-modeled reference plant either, since this reduces the non-linear capabilities of the model.
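For reference, here is a minimal sketch of the three creation calls in FANN's C API (the layer sizes are placeholders, not the actual reference plant architecture; compiles with -lfann):

    #include "fann.h"

    int main(void)
    {
        /* standard: fully connected; here 1 input, 3 hidden, 1 output (placeholder sizes) */
        struct fann *full = fann_create_standard(3, 1, 3, 1);

        /* sparse: only 50% of the possible connections are created */
        struct fann *sparse = fann_create_sparse(0.5, 3, 1, 3, 1);

        /* shortcut: adds connections that skip over hidden layers */
        struct fann *cut = fann_create_shortcut(3, 1, 3, 1);

        fann_destroy(full);
        fann_destroy(sparse);
        fann_destroy(cut);
        return 0;
    }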
FANN has four training algorithms (a usage sketch follows this list):

TRAIN_INCREMENTAL: updates the weights after every training pattern, rather than after each epoch. This is the fastest of the four for the reference plant.

TRAIN_BATCH: updates the weights after training on the whole set. This should not be used for the reference plant, since the time-history dependence of the plant is much shorter than the entire data set.

TRAIN_RPROP: a batch training algorithm which adapts the learning parameter as it goes.

TRAIN_QUICKPROP: also adapts the learning parameter, and uses second-derivative information, instead of just first-derivative, for backpropagation.
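To record the learning curves, one can train one epoch at a time and log the MSE. A sketch, assuming a FANN-format training file (train.data, the output filename, and the epoch count are all placeholders):

    #include <stdio.h>
    #include "fann.h"

    int main(void)
    {
        /* no hidden layers, as in the harmonic oscillator reference plant */
        struct fann *ann = fann_create_standard(2, 1, 1);
        struct fann_train_data *data = fann_read_train_from_file("train.data");

        /* pick one of the four algorithms */
        fann_set_training_algorithm(ann, FANN_TRAIN_INCREMENTAL);

        /* fann_train_epoch returns the MSE after each epoch -> learning curve */
        FILE *curve = fopen("learning_curve.txt", "w");
        for (unsigned int epoch = 1; epoch <= 500; epoch++)
            fprintf(curve, "%u %f\n", epoch, fann_train_epoch(ann, data));
        fclose(curve);

        fann_destroy_train(data);
        fann_destroy(ann);
        return 0;
    }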
FANN offers a bunch of activation functions, including FANN_ELLIOT, which is essentially the "sigmoid-like" activation function Den and I used this summer; it is cheap to evaluate, running in roughly the time of a multiplication and an addition. The function's steepness parameter can also be set, as sketched below.
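Selecting the Elliott activation and its steepness takes two setter calls per layer type. A sketch (the 0.5 steepness is a placeholder value, and ann is a network created as in the sketches above):

    /* Elliott "sigmoid-like" activation on hidden and output layers */
    fann_set_activation_function_hidden(ann, FANN_ELLIOT);
    fann_set_activation_function_output(ann, FANN_ELLIOT);

    /* steepness of the activation (placeholder value) */
    fann_set_activation_steepness_hidden(ann, 0.5);
    fann_set_activation_steepness_output(ann, 0.5);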
As usual, the learning parameter can be set. Over the summer we worked with lower learning parameters, but for the harmonic oscillator reference plant, where the error is already low after the first iteration, higher learning parameters (0.9, for example) work better. This is a very isolated case, though: in general, lower parameters produce better results, even if convergence is slower. The learning momentum is another parameter that can be set; the momentum factor is a coefficient in the weight-update equation that adds a fraction of the previous weight update to the current one. For the reference plant, a higher learning momentum (0.9) is optimal, although in most cases a lower momentum is preferable so that the learning curve doesn't oscillate terribly. Both setters are sketched below.
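Both knobs are one-line setters; a sketch with the values that happened to work for the reference plant (note that, per the FANN docs, the momentum term is used during incremental training):

    /* ann is an existing struct fann *, as in the sketches above */
    fann_set_learning_rate(ann, 0.9f);     /* high rate: fine here, risky in general */
    fann_set_learning_momentum(ann, 0.9f); /* high momentum: can make the curve oscillate */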
FANN does not explicitly include learning-curve plotting, so I generated the curves myself.
I have included plots of the learning curves; each title gives the architecture, the learning algorithm, the learning parameter, and the learning momentum (when I modified it explicitly). All of my code (and more plots!) can be found in /users/masha/neural_plant. On the whole, FANN has rather limited capabilities, especially in terms of learning algorithms, of which it has only four (plus all of the changes one can make to parameters and rates). It is, however, much more intuitive to code with, and faster, than the Matlab NN library, although the latter has more algorithms. I'll browse around for more open-source packages.

Best,
Masha