The main thing that I did this week was write a C block that, given static weights, classifies seismic signals into one of three categories: Earthquake, Truck, or Quiet. I have successfully debugged the C block so that it runs without segmentation faults, etc., and have made two versions: one that uses a recurrent neural network, and one that uses a time-delayed input vector (rather than keeping recurrent weights). I've timed my code, and it runs very fast (to the point where clock() differences from <time.h> read 0.000000 seconds per iteration). This is good news, because it means the classification can run in real time, whether we sample at 2048 Hz or, as Rana suggested, at 256 Hz (my current weights are for 256 Hz, and I can decimate the incoming signal, which is at 2048 Hz right now).

To optimize the code, since at its core it is mostly matrix multiplications, I considered how the data is actually stored in memory and tried to minimize pointer movement. An array declared in C as A[num_row][num_col] is stored (on the stack or heap) contiguously as row_1 / row_2 / ... / row_num_row, so it makes sense to move across a matrix from left to right and then down, as though reading a page. Likewise, a matrix-vector multiplication cannot be done in fewer than O(N^2) operations (every weight must be read at least once), so the double for loop is essentially unavoidable; however, traversing the matrices in the order described above keeps that loop cache-friendly. The code is also fast because, rather than computing an actual e^-u for the sigmoidal activation function, it uses a parametrized hyperbola; this requires only basic arithmetic operations, which is much faster than exponentiation (which I believe the C math library computes by a Taylor-like series expansion). The weight vectors for this block are static, since they were made from training data where the signal type is already known.
I am not currently satisfied with the performance of the block on data read from a file, so I am retraining my network. I realized that what is currently happening is that, given a time-dependent desired output vector, the network first trains to output a "quiet" vector, then a "disturbance" vector, and then retrains to output a "quiet" vector and completely forgets how to classify the disturbance. I am trying to get around this problem by shifting my earthquake data time series, so that when I train in batch (on all of my data) there is probably an earthquake present at all time points, and the network does not train only on "quiet" at certain iterations. Likewise, I realized that I should perform several epochs (not just one) over the data. I tried this last night, and the training MSE decreased by about 1 per epoch (when on average it is about 40, and 20 at best).

After I input the static weight vectors (which shouldn't take long, since my code is very generalized), the C block can be added to the c1pem frame, and a channel can be made for the seismic disturbance class. I've made sure to follow all of the C block rules when writing my code, both in terms of function input/output and in terms of not using any C libraries.

As for neural networks for control, I talked to Denis about the controller block, and he realized that, instead of adding noises, we should first try using a reference plant with a lower-Q pendulum and a real plant with a higher-Q pendulum (since we want the pendulum motion to be damped). I've tried training the controller block several times, but each time so far the plant pendulum has started oscillating greatly. My current guess at fixing this is to train more.

Also, Jenne and I made a cable for Guralp 1 (I soldered, she explained how to do it), and it seems to work well, as mentioned in my previous E-log. Hopefully it can be used to permanently keep the seismometer all the way down the arm.