As promised in the previous e-log, this entry is all about the current online seismic noise classification system.
While we already had the BLRMS system in place (which I helped build), Den realized that we would need better filters for the BLRMS channels: we wanted a strong cut-off, but also a short step response so that we could classify seismic signals quickly. Similarly, a step response that oscillates is undesirable, since the filter output could overshoot and then dip back down, causing a post-truck signal to be falsely classified as another truck. Thus, after experimenting with many different filters, Den chose to use a combination of
chebyl("LowPass", 1, 1, 0.03)*chebyl("LowPass", 1, 1, 0.03)
as our low-pass filter. The step response and Bode plot are below.

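For reference, here is a rough C sketch of how a cascaded pair of first-order low-pass sections behaves when applied sample by sample. This is only a stand-in: the production filter is the cheby1 design above, installed through foton, and the 16 Hz sample rate and bilinear-transform coefficients below are illustrative assumptions rather than the real filter module.

/* Minimal sketch (not the production filter): two cascaded first-order
 * IIR low-pass sections acting on a BLRMS stream.  The real filter is
 * the cheby1("LowPass", 1, 1, 0.03) pair designed in foton; the sample
 * rate and coefficients here are illustrative assumptions only. */
#include <math.h>
#include <stdio.h>

#ifndef M_PI
#define M_PI 3.14159265358979323846
#endif

typedef struct {
    double b0, b1, a1;   /* y[n] = b0*x[n] + b1*x[n-1] - a1*y[n-1] */
    double x1, y1;       /* previous input/output samples           */
} FirstOrderLP;

static void lp_init(FirstOrderLP *f, double fc, double fs)
{
    /* Bilinear transform of an analog first-order low-pass, with
     * frequency pre-warping so the -3 dB point lands near fc. */
    double w = tan(M_PI * fc / fs);
    f->b0 = w / (1.0 + w);
    f->b1 = f->b0;
    f->a1 = (w - 1.0) / (1.0 + w);
    f->x1 = f->y1 = 0.0;
}

static double lp_step(FirstOrderLP *f, double x)
{
    double y = f->b0 * x + f->b1 * f->x1 - f->a1 * f->y1;
    f->x1 = x;
    f->y1 = y;
    return y;
}

int main(void)
{
    const double fs = 16.0;      /* assumed BLRMS sample rate (Hz) */
    FirstOrderLP s1, s2;
    lp_init(&s1, 0.03, fs);
    lp_init(&s2, 0.03, fs);

    /* Print the step response of the cascade every 10 seconds. */
    for (int n = 0; n < 16 * 60; n++) {
        double y = lp_step(&s2, lp_step(&s1, 1.0));
        if (n % 160 == 0)
            printf("t = %5.1f s   y = %.4f\n", n / fs, y);
    }
    return 0;
}

The point of cascading two identical first-order sections shows up in the printed step response: the roll-off steepens without introducing any overshoot, at the cost of a somewhat slower rise.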
The next step was to write C code implementing the feedforward neural network with my newly generated weights. I then added this code to the c1pem model and normalized the inputs. Below is an overview of the model, and a close-up of the C block section.


The above close-up includes the process of normalization (dividing by the square of the incoming signal), feeding the result through the neural network, and classifying the output.
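To make that pipeline concrete, here is a minimal C sketch of what such a feedforward pass looks like. The layer sizes, the sigmoid activation, the output ordering, and the weight values are all placeholder assumptions for illustration; the real C block uses the trained weights and whatever architecture the training produced.

/* Minimal sketch of a feedforward pass like the one in the c1pem C block.
 * The layer sizes, activation (sigmoid) and weight values here are
 * placeholders, NOT the trained weights used online. */
#include <math.h>
#include <stdio.h>

#define N_IN     3   /* normalized BLRMS inputs (assumed)            */
#define N_HIDDEN 4   /* hidden-layer size (assumed)                  */
#define N_OUT    3   /* output neurons: [EQ, QUIET, TRUCK] (assumed) */

static double sigmoid(double x) { return 1.0 / (1.0 + exp(-x)); }

/* Placeholder weights/biases; the real ones come from training. */
static const double W1[N_HIDDEN][N_IN] = {
    { 0.5, -0.2,  0.1}, {-0.3,  0.8,  0.4},
    { 0.2,  0.1, -0.7}, { 0.6, -0.5,  0.3}};
static const double b1[N_HIDDEN] = {0.1, -0.2, 0.0, 0.3};
static const double W2[N_OUT][N_HIDDEN] = {
    { 0.7, -0.4,  0.2,  0.5}, {-0.6,  0.3,  0.8, -0.1},
    { 0.1,  0.9, -0.3,  0.4}};
static const double b2[N_OUT] = {0.0, 0.1, -0.1};

/* One feedforward pass: normalized inputs in, class activations out. */
static void nn_forward(const double in[N_IN], double out[N_OUT])
{
    double hidden[N_HIDDEN];
    for (int j = 0; j < N_HIDDEN; j++) {
        double s = b1[j];
        for (int i = 0; i < N_IN; i++)
            s += W1[j][i] * in[i];
        hidden[j] = sigmoid(s);
    }
    for (int k = 0; k < N_OUT; k++) {
        double s = b2[k];
        for (int j = 0; j < N_HIDDEN; j++)
            s += W2[k][j] * hidden[j];
        out[k] = sigmoid(s);
    }
}

int main(void)
{
    /* Inputs are BLRMS values already normalized by the seismometer
     * norm before they reach the network. */
    double in[N_IN]  = {0.8, 0.1, 0.3};
    double out[N_OUT];
    nn_forward(in, out);
    printf("outputs: EQ=%.3f QUIET=%.3f TRUCK=%.3f\n",
           out[0], out[1], out[2]);
    return 0;
}

In the model, a pass like this would run on the normalized BLRMS values every cycle, and its three outputs feed the classification stage described next.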
Each seismometer channel set (GUR1X, GUR1Y, GUR1Z, GUR2X, GUR2Y, GUR2Z, STS1X, STS1Y, STS1Z) now has channels (and corresponding DQ channels) of the following form:
SEIS_CLASS: The class of seismic noise. 1.0 means Earthquake, 0.5 means Quiet, and 0.0 means Truck. (The channel takes only these three discrete values.)
SEIS_CLASS_EQ, SEIS_CLASS_TRUCK, SEIS_CLASS_QUIET: These channels represent the confidence of the neural network's classification. The channel corresponding to the current class reads 1, while the other two channels read a value between 0 and 1: the ratio of that class's output neuron to the winning neuron's output. To illustrate, suppose the neural network classified an earthquake. Ideally, the output neurons would read [1, 0, 0], and SEIS_CLASS would equal 1.0 for earthquake. In practice, the output neurons might read something like [0.9, 0.3, 0.5]; SEIS_CLASS is still 1.0, SEIS_CLASS_EQ is 1.0, SEIS_CLASS_TRUCK is 0.5 / 0.9, and SEIS_CLASS_QUIET is 0.3 / 0.9. The lower the other two signals are, the better: it means we are more confident in our classification. (A sketch of this computation is below.)
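As a sketch, with the output ordering [EQ, QUIET, TRUCK] assumed and not necessarily matching the real C block, the class code and confidence channels could be computed like this:

/* Sketch of turning the three output neurons into SEIS_CLASS plus the
 * confidence channels, following the description above.  The output
 * ordering [EQ, QUIET, TRUCK] is an assumption. */
#include <stdio.h>

enum { EQ = 0, QUIET = 1, TRUCK = 2 };

static void classify(const double out[3], double *seis_class,
                     double *conf_eq, double *conf_quiet, double *conf_truck)
{
    /* Winning class = largest output neuron. */
    int best = EQ;
    if (out[QUIET] > out[best]) best = QUIET;
    if (out[TRUCK] > out[best]) best = TRUCK;

    /* Discrete class code: 1.0 = Earthquake, 0.5 = Quiet, 0.0 = Truck. */
    *seis_class = (best == EQ) ? 1.0 : (best == QUIET) ? 0.5 : 0.0;

    /* Confidence channels: 1 for the winning class, ratio to the
     * winning neuron for the other two (lower is better). */
    *conf_eq    = out[EQ]    / out[best];
    *conf_quiet = out[QUIET] / out[best];
    *conf_truck = out[TRUCK] / out[best];
}

int main(void)
{
    /* The earthquake example from above: outputs [0.9, 0.3, 0.5]. */
    double out[3] = {0.9, 0.3, 0.5};
    double cls, c_eq, c_q, c_t;
    classify(out, &cls, &c_eq, &c_q, &c_t);
    printf("SEIS_CLASS=%.1f  EQ=%.2f  QUIET=%.2f  TRUCK=%.2f\n",
           cls, c_eq, c_q, c_t);   /* 1.0  1.00  0.33  0.56 */
    return 0;
}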
The MEDM screen for this system (in the RMS system) has the following form for all seismometer channels (this one is GUR1X):

These are the screens I edited earlier in the summer, with modifications. The bottom filter banks represent the norm of the seismometer signal, which we use to normalize the inputs to the neural network.
Here is a close-up of the most important part:

The orange meter on the right points to the current signal type. Here it reads Truck, which makes sense because it's the middle of the day and there are a lot of trucks around. The bars on the left represent our confidence in the classification: the signal is classified as a truck, so the "Truck" bar is saturated. The quiet bar is very low, which is good, since it means the neural network is confident the signal is not quiet. The earthquake bar has some magnitude, since earthquake and truck signals are not fully linearly separable.
How has this been performing? Firstly, all of the seismometer channels give the same classification readout, which is good. Last night all of the classes were "quiet", with one "earthquake" that occurred when Den jumped around GUR1 to simulate an EQ. This morning it read "truck", as expected. The filters are still not fine enough to resolve individual trucks, but I will continue to monitor the performance over the coming days.
If anyone has ideas on how better to represent this information, please let me know. This was the first thing that came into my head that would work with my MEDM monitor options, and I'm open to suggestions!