SUS Lab eLog, Page 18 of 38
ID | Date | Author | Type | Category | Subject
684 | Mon Jul 29 14:22:04 2013 | Giorgos | DailyProgress | SUS | ADC & DAC bits/volts conversion, HE output and saturation of transfer functions


The bits/volts conversion factor is different for our ADC and DAC. Specifically, I measured the voltage at the ADC input and the DAC output and, by comparing them to the corresponding computer readings in bits, I found the relationship to be 1.64 bits/mV for the ADC and 3.3 bits/mV for the DAC.
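For convenience, the two measured factors can be wrapped in small helper functions (the function names are mine, introduced only for illustration):

```python
# Sketch: converting between volts and raw counts using the conversion
# factors measured above (1.64 bits/mV for the ADC, 3.3 bits/mV for the DAC).
ADC_BITS_PER_MV = 1.64
DAC_BITS_PER_MV = 3.3

def adc_mv_to_bits(mv):
    """Voltage at the ADC input (mV) -> expected change in the digital reading."""
    return mv * ADC_BITS_PER_MV

def dac_bits_to_mv(bits):
    """Digital output (bits) -> expected voltage change at the DAC (mV)."""
    return bits / DAC_BITS_PER_MV

print(adc_mv_to_bits(100))   # a 100 mV swing -> about 164 counts at the ADC
```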

HE sensors output range

We also measured the output of the HE sensors to fluctuate by at most 100 mV in response to movement of the plate. Given that, a small displacement of the plate that produces roughly 30 mV would bring approximately an 18-bit change in the ADC reading. With the inherent noise and fluctuation of the bit readings, it is therefore difficult to detect small movements of the plate; it is necessary to boost the HE output after subtracting the HE sensors' offset.
The HE sensor signal goes through a voltage-offset stage and then a high-pass filter. We will adjust the resistor values only in the first stage, so that the voltage offset more accurately corrects the inherent offset of the sensors and amplifies the output further. Currently, as described in my first research report, the gain is 1; we will now aim for a gain of 50. I calculated the expression for Vout in the voltage-offset configuration to be:

Vout = -(R4/R3)·Vin + 5·[R2(R4+R3)] / [R3(R1+R2)]

A gain of 50 would also amplify the inherent offset of the sensors, so we need to correct for that as well. I calculated that if we use R4 = 50·R3 and R1 = 19.4·R2, we get the desired gain, while the offset term, 5·[R2(R4+R3)]/[R3(R1+R2)] = 12.5 V, appropriately corrects for the offset.
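A quick numerical check of these ratios against the Vout expression above (the absolute resistor values below are placeholders; only the ratios R4 = 50·R3 and R1 = 19.4·R2 matter):

```python
# Sketch: checking the proposed resistor ratios against
# Vout = -(R4/R3)*Vin + 5*[R2*(R4+R3)] / [R3*(R1+R2)].
# R3 and R2 are placeholder values; only the ratios matter.
R3 = 1e3
R4 = 50 * R3       # sets the gain magnitude to R4/R3 = 50
R2 = 1e3
R1 = 19.4 * R2     # sets the offset term to 12.5 V

gain = R4 / R3
offset = 5 * (R2 * (R4 + R3)) / (R3 * (R1 + R2))
print(gain, offset)   # -> 50.0 and ~12.5 V
```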

Transfer functions saturation

We measured the transfer function of our damping filter: (s+Zo)/[(s+Po)(s+P1)], where Zo < Po < P1. We noticed that the source-amplitude setting of the spectrum analyzer affected the measured transfer functions. I extracted and plotted the transfer functions for three different source amplitudes, 10 mV, 100 mV and 500 mV, which are shown in this order below. We are unsure why this happens.
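For reference, the magnitude and phase of such a filter can be evaluated directly on the imaginary axis; the corner values below are placeholders (the log does not give them) chosen only to respect Zo < Po < P1:

```python
import math

# Sketch: evaluating the damping-filter transfer function
# H(s) = (s + Z0) / ((s + P0) * (s + P1)) at s = j*2*pi*f.
# Corner frequencies are hypothetical placeholders with Z0 < P0 < P1.
Z0, P0, P1 = 2 * math.pi * 0.1, 2 * math.pi * 1.0, 2 * math.pi * 10.0

def damping_tf(f_hz):
    """Return (magnitude in dB, phase in degrees) of H at frequency f_hz."""
    s = 1j * 2 * math.pi * f_hz
    h = (s + Z0) / ((s + P0) * (s + P1))
    return 20 * math.log10(abs(h)), math.degrees(math.atan2(h.imag, h.real))

mag, ph = damping_tf(1.0)
print(mag, ph)
```

Note that for a linear filter this result is independent of drive amplitude, so the amplitude dependence we see on the analyzer points to saturation or some other nonlinearity in the measurement chain.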




Attachment 3: TF_500mV.png
685 | Tue Jul 30 09:20:18 2013 | Giorgos | DailyProgress | SUS | HE output & DAC output/voltage follower

HE output

We tested the HE outputs and found that all of them were fine, except for the S1 board, which saturated. This board was also the one with the odd phase response (when measured earlier in the summer). We realized that some components had poor solder joints and easily fixed the problem.

DAC output - Voltage Follower

The DAC output was lower than expected. We measured the DAC output in volts for different feedback filters and compared it to the value we would expect by converting the digital feedback signal from bits to volts. However, when the ribbon cable connecting the DAC to the actuation conditioning boards (coils) is disconnected, we get sensible readings; the loading by the ribbon seems to limit our output. It may be that the output impedance of the DAC is large, causing a big voltage drop. We will therefore use a voltage-follower configuration with an OP27, so that the current is not sourced from the DAC. We introduced a voltage follower in the 10 actuation boards (7 coils & 3 DC magnetic offset). Below you can see the voltage-follower configuration.
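The loading effect can be illustrated with a simple divider calculation (the impedance values below are hypothetical, chosen only to show the mechanism):

```python
# Sketch: why a large source (output) impedance drops the delivered voltage,
# and why a buffer helps.  All impedance values are hypothetical.
def loaded_voltage(v_open, r_source, r_load):
    """Voltage seen by the load when driven through a source impedance."""
    return v_open * r_load / (r_source + r_load)

v = 5.0                                  # open-circuit DAC output, V (hypothetical)
print(loaded_voltage(v, 1e3, 100.0))     # heavy loading: most of the 5 V is lost
print(loaded_voltage(v, 1.0, 100.0))     # low-impedance (buffered) drive: ~5 V
```

A follower presents a low output impedance to the ribbon/coil side, so nearly the full open-circuit voltage is delivered regardless of the load.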


Simulink Model

I also created a draft Simulink model of our system. The matrix that determines the coupling of the six degrees of freedom still remains to be calculated.


688 | Thu Aug 1 11:43:15 2013 | Giorgos | DailyProgress | SUS | Sensing Matrix and Calibration

Sensing Matrix
I calculated the entries of our sensing matrix in S x = y, where S is the 6x6 sensing matrix, x is the 6x1 vector of signals from the six degrees of freedom, and y is the 6x1 vector of signals sensed by the six HE sensors. Haixing told me to ignore the 7th sensor (N), because in practice we can levitate the system using only the remaining six. The sensing matrix contains many unknown coefficients, which we will find by calibrating the system and adjusting the values. I will post the matrix once we know the entries are correct.


Coil - Force

We want to get an idea of how much force is applied to the plate for different signals from the digital filter. We measure the DC motors' output (in mV), and we also have previous measurements of how the DC motors' voltage correlates with the applied force. Therefore, knowing how the digital-filter output corresponds to the DC motors' voltage offset, we can infer the relationship between the digital signal and the force on the plate. We first worked with the top coils: AC1, AC2, and AC3. While changing the output for a single coil, we measured the response of all the motors to capture cross-coupling effects. While measuring the response of the bottom motors, we have to make sure there is always contact between them and the plate; otherwise, the data will not show how the force on the motors changes.

Calibration of Coils - Force
Coil drive (V) | AC1: B1  B2  B3 | AC2: B1  B2  B3 | AC3: B1  B2  B3   (strain-gauge readings in mV)
  0 | -276  170  246 | -268  -44  285 | -268  -32  259
 -9 | -263  -28  281 | -265  -38  256 | -265  -27  249
 -7 | -262  -32  281 | -265  -40  259 | -267  -28  251
 -5 | -263  -36  283 | -265  -41  260 | -267  -29  253
 -3 | -264  -38  283 | -266  -43  263 | -266  -28  254
 -1 | -267  -41  283 | -267  -44  264 | -266  -30  258
  1 | -268  -44  285 | -268  -45  266 | -266  -30  260
  3 | -271  -46  286 | -268  -46  268 | -265  -29  263
  5 | -272  -49  287 | -270  -47  271 | -266  -31  265
  7 | -274  -52  288 | -272  -45  276 | -265  -31  268
  9 | -274  -54  287 | -272  -44  277 | -264  -34  275

When we apply feedback, current runs through the coils and creates/adjusts the ambient magnetic field. However, the magnetic field the HE sensors sense from the levitated plate is strongly coupled with the one created by the coils. We also want to know how sensitive the sensors are to the coils when we apply feedback, so we again took measurements.

Calibration of Coils - Sensors
Source (V) | AC1 driven: AC1  AC2  AC3 | AC2 driven: AC1  AC2  AC3 | AC3 driven: AC1  AC2  AC3   (sensor readings in bits)
  0 |  3400  -2800  2050 | 3400  -2820  2040 | 3440  -2790   2040
 -9 |  -680  -2810  2050 | 3430  -7140  2050 | 3430  -2790  -2040
 -7 |   240  -2810  2050 | 3440  -6170  2090 | 3440  -2790  -1140
 -5 |  1160  -2800  2050 | 3430  -5220  2075 | 3430  -2790   -220
 -3 |  2075  -2810  2050 | 3430  -4250  2060 | 3410  -2820    690
 -1 |  2980  -2800  2050 | 3440  -3300  2050 | 3430  -2810   1600
  1 |  3880  -2810  2040 | 3440  -2350  2060 | 3410  -2820   2510
  3 |  4780  -2840  2040 | 3430  -1400  2060 | 3410  -2800   3420
  5 |  5680  -2840  2040 | 3420   -450  2050 | 3420  -2800   4310
  7 |  6590  -2840  2050 | 3400    520  2050 | 3420  -2820   5230
  9 |  7480  -2840  2040 | 3420   1490  2040 | 3410  -2800   6130

We also want to know how sensitive the sensors are to a vertical displacement of the plate:

HE output - plate displacement
Displacement  | AC1   | AC2   | AC3    (readings in bits)
Group1  0''   | 3290  | -3110 | 1975
Group1  0.3'' | 3510  | -2830 | 2190
Group2  0''   | 3280  | -3110 | 1940
Group2  0.3'' | 3540  | -2850 | 2160

In all our measurements we noticed some hysteresis, so we had to follow the same order when recording the data. In case of a mistake, we had to repeat from the beginning.

692 | Sat Aug 3 14:50:51 2013 | Giorgos | DailyProgress | SUS | Coupling from coil's signal

Coupling from coils' signal

On Thursday, I measured the HE sensors' sensitivity to the magnetic field produced by the feedback coils. Unfortunately, I discovered that there is significant coupling; the sensors feel the plate's displacement and the magnetic field from the coils about equally. This effect changes our feedback loop, which now looks as drawn below. (The line between the GP and GC boxes is mistakenly showing up after converting from …)



I calculated the output of the plate based on this configuration and found that:

If our GY is big, I can rewrite this equation as x = Gp·F, also assuming that the transfer function of the plant Gp is very small (as desired). In this case our system will not be stable, because essentially no feedback is used. To reduce the coupling from the coils' signal, we changed the arrangement of the plant. Up to this point, the AC feedback coils were at the same place as the AC sensors. So we decided to switch the DC and AC coils (not physically) and provide the feedback from the DC coils instead, which lie further away from the sensors. In this way, we hope the sensors will not feel as strong a field from the feedback coils as before. Our DC actuation boards did not have the transfer function (low-pass filter) that we included in the AC actuation boards, so we had to make adjustments in the digital feedback: we added dewhitening for the DC coils (now used for feedback) and whitening for the AC coils (now used for the DC magnetic offset).





693 | Sun Aug 4 22:44:36 2013 | Giorgos | Summary | SUS | Volts to N conversion factors & Correction of Transfer Function

As I mentioned in my previous post, the signal from the plate's displacement is strongly coupled with the coils' signal, making our system unstable. In fact, I calculated the transfer function of this "feedback loop" through the coils and found it to be about 2 mV per V, roughly the magnitude of the feedback signal from the plate. We now use the DC coils to provide the feedback, and we want to find the conversion between volts applied to the DC coils and force; for this we only care about certain strain-gauge readings.


In the figure above, the circle represents the plate. Sensors and coils are in black and lie above the plate, while motors are in purple and lie below. As you can see from the arrangement, the DC coils are above the DC motors, so it is safe to ignore readings from strain gauges that are not at, or neighboring, the coils. I then calculated the conversion factors between volts applied to the coils and force applied to the plate.

In a prior post I showed the measurements between volts applied to the coils and the strain gauges' readings in mV [mV = Volts · slope (mV/V)].
I also posted the measurements between weight/force and the strain gauges' readings in mV [mV = Force · slope (mV/N)].
I found how volts in the coils correlate with applied force by combining the two equations:

F = Volts · slope(mV/V) / slope(mV/N) = Volts · slope(N/V)
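The combination of the two measured slopes can be written as a one-line helper (the slope values below are hypothetical placeholders, not our measured numbers):

```python
# Sketch: combining the two measured slopes to get a volts-to-force factor,
# as in F = Volts * slope(mV/V) / slope(mV/N).  Placeholder slope values.
def volts_to_newtons(slope_mv_per_v, slope_mv_per_n):
    """Return the N/V conversion factor for one coil/strain-gauge pair."""
    return slope_mv_per_v / slope_mv_per_n

factor = volts_to_newtons(slope_mv_per_v=0.5, slope_mv_per_n=250.0)
print(factor)   # -> 0.002 N/V, the order of magnitude of the factors below
```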

To give an example, consider the AC1 coil. I have measured the response of the B1, B2, and B3 strain gauges. I also know how the B1 and B2 strain gauges responded to the weights I put on AC1 (here I ignore the third reading, from the B3 strain gauge, because it is further away, as seen in the figure above). Thus I get two estimates (one through each of the B1 and B2 motors) of how the AC1 coil signal correlates with the force it applies. These numbers should in principle agree, or at least be close. Here are my findings:

ΔN (by DC3 coil) = 0.0239 (N/V) * V (by DC3 coil) ; measured through B2
ΔN (by AC3 coil) = 0.00055 (N/V) * V (by AC3 coil) ; measured through B2
ΔN (by AC3 coil) = 0.0023   (N/V) * V (by AC3 coil) ; measured through B3
ΔN (by DC2 coil) = 0.0023 (N/V) * V (by DC2 coil) ; measured through B3
ΔN (by AC2 coil) = 0.0022   (N/V) * V (by AC2 coil) ; measured through B1
ΔN (by AC2 coil) = 0.00198 (N/V) * V (by AC2 coil) ; measured through B3
ΔN (by DC1 coil) = 0.0016   (N/V) * V (by DC1 coil) ; measured through B1
ΔN (by AC1 coil) = 0.00078 (N/V) * V (by AC1 coil) ; measured through B1
ΔN (by AC1 coil) = 0.00363 (N/V) * V (by AC1 coil) ; measured through B2

The coefficient for DC3 does not seem to fit the trend shown by the rest of the data.

Correction of coupling signal

I thought that, knowing the signal from the coils, we could feed its opposite to the sensors to cancel its effect. In practice, we would like our feedback loop to look like the left part of the figure below. I can rearrange it to show more clearly that the Gc and -Gc paths simply add and cancel. We can do this cancellation within our digital feedback loop: specifically, we can add the term -GcGs to cancel the coupling signal of the coils. Haixing agreed and we will try this tomorrow.




697 | Tue Aug 6 09:27:43 2013 | Giorgos | DailyProgress | SUS | Negation of Cross Coupling with Feedback

Negation of Cross Coupling with Feedback

In my previous post, I commented on how it is possible to cancel the coupling from the coils' signal. This can work well only if we know the amount of coupling. To measure it quantitatively, Haixing removed the plate from our setup, so that any signal reported by the sensors would be produced directly by the magnetic field of the coils; this is what we want to subtract. Haixing also believes this technique is effective enough that we no longer need to move the coils further from the sensors. That being said, we switched back to the original roles of the AC coils (feedback) and DC coils (DC magnetic offset).

In previous measurements, I calculated the correlation between the coil voltage and the sensor voltage for the AC1 coil/sensor pair (they are next to each other; we ignore the rest of the coil-sensor cross-couplings) and found it to be around -11.28 dB for our DC (f ≈ 0 Hz) signal. We also measured the transfer function between the AC1 coil and sensor and found it to be around -11.133 dB at low frequency; the two are in close agreement. Then we introduced a factor -GcGs in our feedback (Gc was measured to be about 400 mV/V and Gs is known from the whitening filter) and measured the transfer function again. The magnitude dropped to -40 dB (shown below).


At low frequencies we need this value to drop even further, to approximately -70 dB, since the transfer function of the plate is around 2 mV/V, or -54 dB. Further, we only cancelled the coils' coupling in the low-frequency range; we should modify our feedback to improve the system's behavior over the whole frequency range.
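For bookkeeping, the amplitude ratios quoted in these entries convert to dB as 20·log10(ratio); a quick check of the numbers above:

```python
import math

# Sketch: converting the amplitude ratios quoted above to dB.
def ratio_to_db(ratio):
    return 20 * math.log10(ratio)

print(ratio_to_db(0.002))   # plate transfer function, 2 mV/V -> about -54 dB
print(ratio_to_db(0.4))     # Gc of about 400 mV/V -> about -8 dB
```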

Cross Coupling between Coils-Sensors

Here I summarize my findings for the calculated cross-coupling: for AC1 (coil-sensor pair) I found -11.28 dB, for AC2 -10.808 dB, and for AC3 -11.258 dB.

Simulink Model

For some unknown reason, the Simulink model for the feedback needs at least two filter modules and one subsystem in order to work; otherwise it fails to run. We also need to include a time delay so that the coil's output is not fed back instantaneously. I finished a generic Simulink model for all six degrees of freedom, although all the coefficients are still unknown. Even so, I will post it along with a description of what each component does.

702 | Thu Aug 8 11:32:00 2013 | Giorgos | Summary | SUS | Compensation for Cross-Coupling and Vector Fitting

Cross-Coupling Measurements

Before measuring the cross-coupling effects, we put the plate back into the setup. Since the plate has magnets attached to it, we expect it to affect the magnetic field produced by the coils. Therefore, a more realistic measurement of the cross-coupling, one that resembles real-time feedback control, should take place in the presence of the plate. Of course, we want the plate to be still during the measurements, so that the Hall-effect sensors are not affected by its displacement (if the plate is still, it only affects the sensor readings at 0 Hz, but we are looking at the transfer function over the whole frequency range). To fix the plate, we pressed all motors against it, ignoring any small fluctuations.


In this configuration, we used the Diagnostic Tools software to measure the transfer function between the AC coils' output and the AC sensors' readings. Then we used Vector Fitting to obtain a fitted transfer function, with zeros, poles, and gain, that reproduces the behavior of the cross-coupling. In a few words, Vector Fitting approximates the data by a pole-residue model f(s) ≈ Σ_n c_n/(s - a_n) + d and uses a scaling guess function σ(s) = Σ_n c̃_n/(s - a_n) + 1, whose zeros are the poles of the desired function. Finding the zeros of σ(s) is equivalent to finding the poles of f(s), and vector fitting obtains a linear least-squares fit by iterating on trial pole values and solving for the linear coefficients. More information can be found at http://www.sintef.no/Projectweb/VECTFIT/Algorithm/.

We developed a code for the vector fitting and found the best model with 20 pairs of poles (40 poles, since they come in complex-conjugate pairs) and about 10 iterations. Finally, we applied compensation feedback given by the modelled transfer function so as to cancel the cross-coupling effects. Below, one can see the cross-coupling measurements before (left figure) and after (right figure) applying the modelled function. We modelled the coupling transfer functions only for the ACi_sensor - ACi_coil (i = 1, 2, 3) pairs, because cross-coupling between, e.g., the AC1 coil and the AC2 sensor was very small (as is evident in the figures). Using this technique, we reduced the coupling from roughly -10 dB to roughly -70 dB.
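The linear step of the algorithm (with trial poles held fixed, the residues enter linearly and follow from a least-squares solve) can be illustrated with a toy example; this is not our fitting code, and all numbers below are synthetic:

```python
# Toy illustration of the linear step of vector fitting: fit
# f(s) = c1/(s + a1) + c2/(s + a2) with the poles held fixed, recovering
# the residues from the 2x2 normal equations (solved by Cramer's rule).
a1, a2 = 1.0, 4.0                        # trial (fixed) poles
s_pts = [0.5 * k for k in range(1, 21)]  # sample points on the real axis

def f_true(s):                           # synthetic "measured" data
    return 3.0 / (s + a1) + 5.0 / (s + a2)

phi1 = [1.0 / (s + a1) for s in s_pts]   # basis function for residue c1
phi2 = [1.0 / (s + a2) for s in s_pts]   # basis function for residue c2
y = [f_true(s) for s in s_pts]

# Normal equations  M c = b
m11 = sum(p * p for p in phi1)
m12 = sum(p * q for p, q in zip(phi1, phi2))
m22 = sum(q * q for q in phi2)
b1 = sum(p * v for p, v in zip(phi1, y))
b2 = sum(q * v for q, v in zip(phi2, y))

det = m11 * m22 - m12 * m12
c1 = (b1 * m22 - b2 * m12) / det
c2 = (m11 * b2 - m12 * b1) / det
print(c1, c2)   # recovers the residues 3 and 5
```

The full algorithm additionally relocates the poles on each iteration using the zeros of σ(s), which is the nonlinear part the toy example omits.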





703 | Fri Aug 9 08:43:37 2013 | Giorgos | Summary | SUS | Simulink Model (1 DOF) - Step Response

After minimizing the coupling effects, I created a 1-DOF Simulink model to test whether the setup is stable in the vertical direction. The model appears below. We have not yet included eddy-current damping (simulations showed no great changes).


We created noise with a pulse generator and recorded the step response of the system with the oscilloscope; it appears at Output (1).


The behavior of the system appears stable, so we tried to levitate the plate starting from AC3. The plate would jiggle back and forth, and after some time the DAC would saturate. Going back to the Simulink model, the relationship between the saturation limits and the noise amplitude seems to determine the stability of the system as well. Looking at other factors that might affect the system, we consider the following:

  • The force on the plate depends directly on its location. The coils are not strong enough and the magnetic field they create is non-uniform around the plate. Prior to this summer, Haixing acquired measurements of the correlation between the plate's distance from the coils and the force exerted on it, and we intend to use this information in our model.
  • ADC and DAC background noise may limit the sensitivity of our system.
  • We need a more precise measurement of the system's rigidity. For a linear degree of freedom, such as the vertical direction, the rigidity can be measured as force/distance. Haixing's measurements of the force exerted as a function of distance from the plate will be used.
Attachment 2: StepResponseofACs.bmp
705 | Mon Aug 12 07:59:58 2013 | Giorgos | Summary | SUS | DAC and ADC saturation issues

Instability due to Saturation Issues

Using the Simulink model, I realized that saturation of the ADC & DAC prevents us from acquiring stability, since the signal quickly builds up after a couple of passes around the feedback loop. Explicitly, if the gain of the digital feedback filter is large, either the output signal from the DAC saturates, or a big feedback signal is produced which then, together with the plate's signal, saturates the ADC. To prevent saturation of the DAC, I suggested implementing our gain outside the feedback filter, at the coil signal conditioning stage. Similarly, to avoid saturation of the ADC, the gain of the sensor signal conditioning stage is to be altered. In the end, we are looking for a stable system with balanced signals that allows a maximum vertical displacement of the plate of around 0.1 mm.

Gains of the feedback loop

There are three different gains in our feedback loop: the gain of the digital feedback filter, the gain of the sensor signal conditioning (also constrained by saturation limits around 12 V because of the OP27), and the gain of the coil signal conditioning, which acts twice (through the coupling and through the plate). I calculated the desired gain for each part, such that the plate ideally will not move beyond 0.1 mm. To achieve this, we introduced a gain of 25 in the coil conditioning board (changed resistors to R6 = 51 kΩ and R5 = 2 kΩ), which, along with the 100 nF capacitor used, gives a new pole around 30 Hz (1/RC ≈ 200 rad/s). Modeling the new configuration for very low noise power (0.003 N·m/s), we get stable behavior.
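These numbers can be checked directly (assuming, as seems implied, that the 100 nF capacitor sits across R6 so that the pole is set by R6·C):

```python
import math

# Sketch: checking the coil-conditioning numbers quoted above.
# Assumption: the pole is set by R6 and the 100 nF capacitor.
R6, R5 = 51e3, 2e3
C = 100e-9

gain = R6 / R5                  # resistor ratio -> 25.5, i.e. the "gain of 25"
omega = 1.0 / (R6 * C)          # pole in rad/s -> ~196 rad/s
f_pole = omega / (2 * math.pi)  # -> ~31 Hz, consistent with "around 30 Hz"
print(gain, omega, f_pole)
```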



Including larger noise (0.03 N·m/s), however, destabilizes the system once more.


709 | Tue Aug 13 21:09:44 2013 | Giorgos | Summary | SUS | Overcoming Saturation: Feedback through DC Coils and Mu-metal

In my previous post, I explained the ADC and DAC saturation issues we faced. To prevent saturation of the DAC, we will implement our gain after the feedback filter (we already introduced a gain of 25 at the coil conditioning stage) and use a gain of approximately 1/25 for the digital filter. In this way, even if the ADC saturates at 10 V, the feedback filter will send at most 400 mV (10 V / 25) to the DAC. However, the possibility that the ADC saturates still exists.

To make matters worse, we cannot even exploit the full 10 V range of our ADC. The reason is that the undesired cross-coupling between coils and sensors is almost as large as the signal produced by the plate. Although vector fitting was very successful at cancelling cross-coupling, this method can only be implemented inside the digital control, after the ADC. That being said, if we are to stay within the 10 V range, only 5 V can come from the plate; the remaining 5 V will inevitably come from cross-coupling. Practically, this means the plate's signal is successfully sensed and transferred to the digital control only for very small displacements, beyond which the ADC saturates and feedback control is impossible. We use two ways to tackle this obstacle:

  1. We revert to using the DC coils for feedback control. They are located further away from the AC sensors and cross-coupling is smaller, so a larger proportion of the ADC's 10 V range can be dedicated to the signal from the plate.
  2. We use mu-metal to cover all sensors and coils. We hope that mu-metal's high permeability will shield off the magnetic field that comes from cross-coupling. It should leave the magnetic field changes produced by the plate's movement almost intact, since the plate's magnets are located directly below and above the sensors, the only place we did not cover with mu-metal.

We measured the cross-coupling before and after the use of mu-metal and noticed a drastic reduction (about a factor of 3). The figures below show the measured transfer function before and after the use of mu-metal.


Note also that the bottom strain gauges' readings are damaged and no longer worth using to determine when the plate is near equilibrium. This is probably the result of excessive weight on them from the plate. The top strain gauges still seem to work fine.

710 | Tue Aug 13 22:49:45 2013 | Giorgos | DailyProgress | SUS | Model of 3 DoF

DCs/ACs Matrices

In the previous post ("Feedback through DC coils"), I described the reasons for using the DC coils for our feedback control. This trick significantly decreases cross-coupling before the feedback control and so allows a larger signal from the plate without the ADC ever saturating. However, it comes at a cost: the feedback force is not applied directly at our sensors. Specifically, each DC coil strongly affects the two sensors surrounding it (which are equally far away). This creates the need for a matrix describing how the force from each DC coil affects each AC sensor and, conversely, how each sensor should "distribute" its signal to the different DC coils in the feedback control. Using the geometry of our plate and the arrangement of the coils and sensors, I calculated these two matrices, which should also be inverses of each other. Here is what I found:

To measure how much of the coils' signal each sensor reads: [A][DC1;DC2;DC3]=[AC1;AC2;AC3], where A=[2 -1 2; 2 2 -1; -1 2 2]. Similarly, to convert each sensor's signal to the feedback channels of the DC coils, [B][AC1;AC2;AC3]=[DC1;DC2;DC3], where B=[2 2 -1; -1 2 2;2 -1 2].
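Multiplying the two matrices as quoted above shows that B is proportional to the inverse of A (A·B = 9·I, so B/9 is the exact inverse); a quick pure-Python check:

```python
# Sketch: checking the two geometry matrices quoted above by direct
# 3x3 multiplication (no external libraries needed).
A = [[ 2, -1,  2],
     [ 2,  2, -1],
     [-1,  2,  2]]
B = [[ 2,  2, -1],
     [-1,  2,  2],
     [ 2, -1,  2]]

product = [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
           for i in range(3)]
print(product)   # -> [[9, 0, 0], [0, 9, 0], [0, 0, 9]], i.e. A @ B = 9 * I
```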

Simulink for 3 DoF

Then I created a Simulink model for three degrees of freedom (sensors: AC1-3, coils: DC1-3). The effect of noise was catastrophic for the stability of the model. So I first tried to replicate 1-DoF stability by "nulling" the DC2 and DC3 coils. Below, one can see the Bode plots of the model (resonance around 2 Hz) and the displacement of the plate in 1 DoF (via AC1).



Attached is also the ADC input, which clearly shows that the signal is well below saturation. Three different colours represent the signals from the three sensors AC1, AC2, AC3 (all affected by the DC1 coil).

Attachment 3: ADCinput_forDC1only_in3DOF_13Aug13.bmp
711 | Wed Aug 14 00:05:00 2013 | Giorgos | DailyProgress | SUS | 3DoF Stability in Simulink

Today I achieved stability in Simulink for 3 DoF, including noise at the Hall-effect sensors and the coil conditioning. We had measured the noise at the ADC to be at most 20 mV, but that value is amplified by the gain (91) of the HE conditioning boards, so I included noise of 20/91 mV. I attached the final model and the script: Maglev_3DoF_Giorgos.slx.

I also used vector fitting to find the transfer functions of the coupling from the DC1 coil to all sensors. An example of the successful fit is shown in the figures below (DC1 coil to W sensor). The figure on the left shows the modelling of the coupling and the deviation between the fit and the data. The right figure shows a Bode plot of the modelled and measured transfer functions.


I also calculated the amount of cross-coupling and noise inside our system in order to find the allowed gain to avoid any saturation.

Since the OP27 in the coil conditioning board also saturates at 10V, the DAC should provide no more than 400mV; beyond that point, the gain of 25 we introduced in the coils would saturate the OP27.

We had also found the cross-coupling to be around 0.01 for the two nearby sensors and 0.001 for the third one (in the 3-DoF case we ignore all others). If our DAC never exceeds 400 mV, cross-coupling contributes at most 0.0084 V (8.4 mV) at the ADC.
Similarly, if the coils get at most 10 V, the maximum force they provide is 0.02 N, which translates to a maximum plate displacement of 0.0113 m (1.13 cm). Such a displacement would produce 0.1989 V at the ADC. Adding noise, our signal is only 0.2273 V, well below the saturation of the ADC.
Inside the feedback filter, the cross-coupling is cancelled down to -60 dB (0.001 V/V), so only 0.0012 V remains given a 400 mV DAC output. The signal is thus 0.2201 V. To avoid saturation of the DAC, we can afford a maximum gain of 1.8.

Attachment 2: mag_lev_20130809Giorgos3Channels.m
%%%%%%%%%%%%%%%%%% Magnet %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

mag_mass = 0.5;
mag = 1/mag_mass;

%%%%%%%%%%%%%%%%% Negative Spring %%%%%%%%%%%%%%%%%%%%%%

omegam = 2*pi*0.3;            % resonant frequency
K = -mag_mass*omegam^2;      % F=-K*x

... 54 more lines ...
714 | Fri Aug 16 18:37:56 2013 | Giorgos | DailyProgress | SUS | Bringing Sensors closer to Plate, Correcting Offset and Gain

Yesterday we tried levitating for one degree of freedom; we failed. The plate would move back and forth about the equilibrium, but would not settle there for more than a second. Haixing suggested moving the sensors closer to the plate, so that our signal is larger; this way we are less sensitive to noise. However, that means we must recalibrate the HE offsets, the cross-coupling measurements, and the gains of our feedback loop.

So today we moved the AC1, AC2, AC3 sensors closer to the plate and removed the DC1, DC2, DC3 sensors from the setup, since they are of no use. We also replaced the gain of 91 of the HE sensors with a gain of 11 (using 11 kΩ and 1 kΩ resistors).
Then we measured the response of the HE sensors and found that the signal changes by about 120 mV for each mm; that is about a factor of 6 better than before.

Since the sensors are now at a different location, they have a different voltage offset; this should be larger, since they are now closer to the levitated plate. To measure the new offset, we moved the motors such that the equilibrium lies between the top and bottom motors. We then measured the offset readings of the sensors with the plate stuck at the top and at the bottom and averaged the two. Taking into account the offset we had already applied, we replaced the resistors in our circuit such that the residual offset of the AC1-3 sensors is no greater than 50 mV. For AC1: R1 = 6.2 kΩ, R2 = 5.1 kΩ; for AC2: R1 = 13 kΩ, R2 = 11 kΩ; for AC3: R1 = 20 kΩ, R2 = 16 kΩ.

715 | Sat Aug 17 18:51:39 2013 | Giorgos | DailyProgress | SUS | Noise Reduction when sensors are closer


We moved all 7 HE sensors closer to the plate. The AC sensors were burnt out, so we replaced them with the DC ones, which we had removed yesterday because they were of no use.
In their current position the sensors feel a different magnetic field, and their reading is also different (around 100 mV, compared to 2.4 V before). The smaller value sounds counterintuitive, because the sensors are now closer and the field around them is larger, but it only indicates the direction of the field. The HE sensors read 2.5 V in the absence of a field; far away their reading was 2.4 V, so they only produced a 100 mV (2.5 V - 2.4 V) signal. Now that they are closer, their offset is huge (roughly 2.5 V). We are worried they might actually saturate once the plate starts moving.

For future reference, here are the values of the resistors we use:

AC1: R1= 110K, R2=2.2K

AC2: R1=130K, R2=2.2K

AC3: R1=62K, R2=1.2K

S1: R1=7.5K, R2=2.2K

S2: R1=20K, R2=7.5K

N: R1=20K, R2=6.2K

E: R1=4.3K, R2=1.5K

Noise Reduction and Reduced Cross-Coupling

The gain of all the sensor conditioning boards is now 11, so the noise is amplified less (by about a factor of 8, from 91/11). Indeed, the improved noise performance was apparent: the signal fluctuated by only a couple of mV (see figure below).


The cross-coupling at the sensors would now also be smaller (200 mV/V for feedback through the AC coils would now become almost 20 mV/V; 20 mV/V for feedback through the DC coils would now be about 2 mV/V).
We will measure the cross-coupling and try to levitate tomorrow for all six degrees of freedom.

642 | Sun Jun 23 14:01:00 2013 | Giorgos Mamakoukas | DailyProgress | SUS | Thursday, June 20th 2013 - Gain of circuitry and calculations of proper resistors

We started by covering the basics of op-amps and their gain. We then talked about differential amplifiers and calculated Vout in terms of the four resistances of the circuit and Vin1, Vin2. We found the gain (-R4/R3) and the offset, which should ideally be zero. For a gain of about 50, we introduced an uncertainty for the R2 resistor, which in turn leads to an uncertainty in the output voltage:

ΔV/V = (ΔR2/R2) · (1/G)
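Plugging in a typical tolerance shows how much the gain suppresses the resistor uncertainty (the 1% tolerance below is an assumed example value, not a measured one):

```python
# Sketch: output-voltage uncertainty from the resistor tolerance,
# using dV/V = (dR2/R2) * (1/G) from above.  The 1% tolerance is assumed.
def output_uncertainty(dr2_over_r2, gain):
    return dr2_over_r2 / gain

print(output_uncertainty(0.01, 50))   # 1% resistor, gain 50 -> 0.0002 (0.02%)
```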

Then we calculated the resistances for the desired outcome, wired the circuit on the signal-conditioning PCB, powered it, and made sure it gave sensible readings on a digital display of the output voltage.


In fact, for one board Vin = 0.012 V and Vout = 0.058 V, and for another Vin = 0.014 V and Vout = 0.69 V (where Vin2 = 5 V), showing a gain of roughly fifty. Finally, we connected one strain-gauge "wire" that supports the levitated plate to our Wheatstone-bridge circuitry and read an output voltage of about 144 mV (ideally, and with some feedback, it will be 0). Pressing on the strain-gauge wire increased our voltage reading, clearly showing that the system responds to pressure on the wire.

Tomorrow, we will replace R2 with an adjustable resistor until we get our voltage reading to be 0 (or close to it). Once we find the right value, we will use a fixed resistor as R2 and finish the system.

I am still trying to find how to insert equations

643 | Sun Jun 23 14:39:28 2013 | Giorgos Mamakoukas | DailyProgress | SUS | Finished 6 PCB Boards

On Friday, we tried to get zero offset for the voltage output of the strain gauge by replacing our fixed-value R2 resistors with adjustable ones. At R2 = 22.74 kΩ, we achieved a 0.01 mV offset, which we would then correct with our DC motors.

We measured the voltage required to drive our motors at a slow, steady pace; 5 V seemed sufficient. To get that voltage we used a voltage divider. The internal resistance of the motors was roughly 100 Ω, so we added a 500 Ω resistor in series to get close to the desired Vout from our 15 V source. Our motors were "broken", however: they would move at all times once connected and pause only when the control buttons were pressed; we ordered new ones.

Since we had successfully built the circuit for one strain gauge and DC motor, we wired the remaining 5 PCB boards. At the end of the day, we were happy to look at the following:


  645   Mon Jun 24 23:32:04 2013   Giorgos Mamakoukas   DailyProgress   SUS   Power PCB boards and design of the box

We wired up additional PCB boards to power the strain-gauge PCB boards and amplify their signal. We looked at the schematics and picked resistor values to obtain negative (-15V, using an LM2991 adjustable regulator) and positive (+15V, using an LM2941 adjustable regulator) voltage rails. We built six of these PCB boards, missing only a very few components, which we ordered.

Then, we looked at the box in which we will place the "amplifying" PCB boards, and figured out how to drill holes in its cover to route all the wires through. Our final measurements are shown below (Photo0446.jpg).

  286   Tue Aug 9 10:25:25 2011   Haixing   DailyProgress   SUS   maglev circuit board

In order to use Labview for maglev, we need an analog interface for the input (OSEM) and output (coil).
I have designed a new board based upon the old circuit design we had previously for the maglev; here we
keep only the LED drive and coil drive parts. The LED drive is a second-order low-pass filter with Sallen-Key topology
and a corner frequency around 4.5Hz, designed by Rana. The coil drive is a buffer stage with a gain of 2, where
we use a BUF634 to boost the current of the quad opamp L1125.
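For reference, the Sallen-Key corner-frequency relation; the component values below are illustrative picks that land near the quoted 4.5 Hz, not the ones on the board:

```python
import math

def sallen_key_lowpass_fc(r1, r2, c1, c2):
    """Corner frequency of a Sallen-Key low-pass filter:
    f_c = 1 / (2*pi*sqrt(R1*R2*C1*C2))."""
    return 1.0 / (2.0 * math.pi * math.sqrt(r1 * r2 * c1 * c2))

# Illustrative equal-component choice near the 4.5 Hz corner quoted above:
fc = sallen_key_lowpass_fc(35e3, 35e3, 1e-6, 1e-6)  # ~4.5 Hz
```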

The schematics for the LED drive is given by the figure below:


The schematics for the coil drive is given by the figure below:


The final board is


The Altium file for this board is in the attachment titled: analog_circuit.zip


Attachment 4: analog_circuit.zip
  323   Thu Aug 18 21:38:41 2011   Haixing   DailyProgress   SUS   issues in digital control of single DOF

Thank you very much for your reply.

>> - The previous entry showed that the sampling rate is 1ms.

Yes, indeed it was. Actually, even in the current setup, the sampling rate for the channel is set to be 1kHz.
Jan told me the highest sampling frequency that we can get is of the order of 100kHz.

>> - If the loop is really running at 50ms, you should see an error output from the "timed-loop structure".

When we changed the loop time constant from 1ms to 100ms, there seemed to be no change at all; the control signal
still behaves the same way. Maybe we do not know how to set it up correctly. However, there is no error output from the "timed-loop structure".
We will look into this more carefully. Right now, we really have very poor knowledge of the digital system. I will come up to the 40m
tomorrow to bother you with a few more questions, if you will be around.

  342   Tue Sep 6 00:08:07 2011   Haixing   DailyProgress   SUS   Levitation of two degrees of freedom

After fine-tuning the working point, we successfully levitated two degrees of freedom of the
plate---the second stage before we can levitate the entire plate, which has three degrees
of freedom that need to be controlled (the other three are stable). The configuration is shown
schematically in the figure below. The two controlled degrees of freedom of the levitated plate are the
tilt motion [differential motion of magnets 2 and 3] and the pitch motion [the motion of magnet 4].
The plate is held by a spring where magnet 1 is located. The control force is applied
to magnet 2 and magnet 4. We chose these two degrees of freedom
because they are weakly coupled to each other---we do not need to worry about their cross coupling
at the moment.

The photo for the levitated plate and the control signal on the oscilloscope are:

levitation.png            oscilloscope.png
(The green and blue curves are the control signals (green without low-pass filter); the yellow and magenta curves are
the sensor outputs from the OSEMs.)

The experimental setup is shown in the figure below. We use a National Instruments card PCI-6289 for ADC and DAC.
The feedback controller is a hybrid of digital [for tuning the gain and the integral controller] and analog
[for lead compensation].


We replaced the old computer with a new one. Now we can simultaneously run four channels with a time
loop of nearly 2ms [we had 5ms with the old computer]. For the feedback, we used a simple proportional
and integral (PI) controller in Labview [the derivative control is realized with an analog circuit instead],
with a proportional gain of 1 and an integration time of 0.1 min.

Before we try to levitate the third degree of freedom, we will first measure the open-loop transfer function for this
scheme to get a better understanding of the plant. In addition, we will measure the
cross coupling among the different sensors.

  577   Thu Oct 4 11:34:03 2012   Haixing   Summary   SUS   Update on Maglev

Here I give an update on the current status of the maglev project.


The setup:

The Solidworks schematic (false-colored) for the setup is shown in the figure below, from top to bottom:


  1. Top fixed plate (in gray): It is used to mount the coil bobbins and also three linear DC motors (not shown in this schematic) for pushing the floating plate to the working position.
  2. Six coil bobbins (white cylinders): Three of the coils (in red-orange) are for counteracting the DC mismatch of the magnets; the others are for controlling the vertical motion of the floating plate. On the bottom of each coil bobbin there is a magnet which attracts the magnet mounted on the floating plate [mentioned later]. In the hollow center of each bobbin there is a hall-effect sensor, which has a large linear dynamic range and is used to acquire first-stage locking of the floating plate before switching to the short-range optical-lever sensing.
  3. Floating plate (in green): This is the central part of the setup; six magnets are mounted on the plate (push-fit). We want to levitate it and lock it around the local extrema of the magnetic force between the magnets on the bobbins and on the floating plate, which ideally has very low rigidity and achieves a low-resonant-frequency levitation (for seismic isolation).
  4. Corner reflectors (in gold): There are three corner reflectors, which together with the laser form the optical lever sensing the six degrees of freedom of the floating plate. The sensing of the different DOFs is coupled, and we need to diagonalize the sensing matrix. This is also where the long-range hall-effect sensing comes into play: it allows us to first lock the floating plate, after which we can diagnose the coupling in the optical-lever sensing.
  5. Small-coil bobbins (small white cylinders): These are for the first-stage sensing and control of the horizontal motion of the floating plate before switching to the optical-lever sensing. In each of them there is also a hall-effect sensor.
  6. Collimator (on the gray mirror mount): This fixes the optical fiber for the 635nm laser light, which is part of the optical-lever sensing mentioned earlier.
  7. Mirror (on the green mirror mount): In the next-stage experiment, this will be one of the mirrors for the Fabry-Perot cavity. On the side of the mirror we have mounted two small magnets, which are for sensing and controlling the angular degree of freedom of the floating plate.
  8. Middle fixed plate (in between the poles): This plate mounts the two small-coil bobbins (for sensing and controlling the angular DOF). In addition, three linear DC motors (not shown in this schematic) are mounted on this plate; together with the three DC motors on the top fixed plate, they let us place the floating plate near the working position and also prevent it from sticking to the (very strong) magnets on the coil bobbins.
  9. Quadrant photodiode (QPD) boxes (in blue on the bottom fixed plate): In each box there is a QPD which senses the laser light reflected from the corner reflector on the floating plate. We are using the Hamamatsu 4-element photodiode S4349. Together with the corner reflectors, they form the short-range optical-lever sensing.

Current status:


  1. Mechanical parts: Two parts are not yet ready: the small bobbins and the auxiliary components attached to the linear DC motors, due to later modifications of the earlier design; all the other parts are in place. The coil bobbins are now wound with coils.
  2. Optical parts: Apart from the mirror mounts, the main components are now ready: Laser diode: LPS-635-FC - 635 nm, 2.5 mW, A Pin Code, SM Fiber-Pigtailed Laser Diode, FC/PC; Diode mount: TCLDM9 - TE-Cooled Mount For 5.6 & 9 mm Lasers; Driver: LDC201CU - Benchtop LD Current Controller, ±100 mA; Coupler (for splitting): FCQ632-FC - 1x4 SM Coupler, 632 nm, 25:25:25:25 Split, FC/PC; Collimator: F280SMA-B - 633 nm, f = 18.24 mm, NA = 0.15 SMA Fiber Collimation Pkg; Collimator adapter: AD11NT - Ø1" (Ø25.4 mm) Unthreaded Adapter for Ø11 mm Collimators.
  3. Analog electronic parts: The PCB boards for the hall-effect sensors and the QPD boxes have been fabricated and now need to be stuffed. The coil drivers are not yet ready.
  4. Digital parts: The binary input-output box has not yet been powered up. The PCB board for the chassis power just arrived and needs to be stuffed. Rana and I worked out the schematics for the AA/AI but not yet the PCB layout.

Plan (Assembly stage):


  1. Mechanical parts: The small bobbins and the auxiliary components for the linear DC motors need to be fabricated; the lead time is around three to four weeks. During this period, I will mostly work on the electronics and also try to get the digital part ready (for this I need help from Rana and Jamie). In addition, I will design an enclosure to cover the setup, to reduce noise from air currents and acoustics.
  2. Optical parts: I plan to work on these once I finish the electronics. The tasks are: (i) designing the optical layout; (ii) testing and diagnosing the different components, especially the laser diode.
  3. Electronics: I will mostly focus on this part in the near term: (i) stuffing the PCB boards for the hall-effect sensors and the QPD boxes; (ii) modifying the old coil-driver circuits to accommodate this new setup with more inputs and outputs; (iii) powering up the binary input-output box and testing it for prototyping; (iv) working together with Rana and Jamie on the AA/AI.
  4. Digital part: This will rely heavily on the help of Jamie and Rana.


Plan (Testing stage):


  1. Acquiring lock of the floating plate using the hall-effect sensors. This is relatively easy compared with the optical-lever sensing and control, as the different degrees of freedom are not strongly coupled.
  2. Characterizing the cross coupling among the different degrees of freedom in the optical-lever sensing scheme.
  3. Measuring the resonant frequency of the levitation, and testing its tunability by locking the plate at different locations, to see how low a frequency we can achieve.



  579   Fri Oct 5 15:17:47 2012   Haixing   Noise Hunting   SUS   Estimate of gas damping and associated noise in maglev

Here I make an estimate of the gas damping for the maglev.

The damping rate:

I use the formula given by Cavalleri et al. in the article titled Gas damping force noise on a macroscopic test body in an infinite gas reservoir [PLA 374, 3365–3369 (2010)]. According to Eq. (18) of their paper, the viscous damping coefficient for a rigid body is given by


Here p≈10^5 Pa is the air pressure, S is the surface area of the rigid body (0.05 m^2 for the floating plate in our case), m_0≈28 u is the air molecular mass, and T≈273 K is the environmental temperature. Plugging in the numbers, we get


Given the mass of the floating plate, around 0.56 kg, we get a mechanical damping rate of:


which is very large. This means the floating plate is a strongly over-damped oscillator if the resonant frequency is around the design value of 0.1 Hz. To reach a quality factor of even order unity, we would need to pump down to about one hundredth of atmospheric pressure.

The displacement noise:

We can use the fluctuation-dissipation theorem to estimate the displacement noise. The force spectrum is given by


The displacement noise around the mechanical resonant frequency reads:


Given a resonant frequency to be around 0.1 Hz, we have


This is well below the seismic noise, which is approximately three orders of magnitude higher.
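Since the intermediate equations above are attached as images, here is a hedged numerical sketch of the fluctuation-dissipation step. Temperature, mass, and resonant frequency are the entry's values; the damping rate gamma is an illustrative stand-in, not the actual Eq. (18) result:

```python
import math

k_B = 1.380649e-23  # Boltzmann constant, J/K

def thermal_force_asd(temp, mass, gamma):
    """One-sided thermal force amplitude spectral density from the
    fluctuation-dissipation theorem: sqrt(4*k_B*T*m*gamma)  [N/sqrt(Hz)]."""
    return math.sqrt(4.0 * k_B * temp * mass * gamma)

def displacement_asd_on_resonance(temp, mass, gamma, f0):
    """On resonance |x/F| = 1/(m*gamma*omega0) for viscous damping,
    so the displacement ASD is sqrt(S_F)/(m*gamma*omega0)  [m/sqrt(Hz)]."""
    w0 = 2.0 * math.pi * f0
    return thermal_force_asd(temp, mass, gamma) / (mass * gamma * w0)

# Entry values: T = 273 K, m = 0.56 kg, f0 = 0.1 Hz; gamma is ILLUSTRATIVE,
# chosen only to be >> omega0 (the over-damped regime described above).
gamma = 100.0  # s^-1, stand-in for the Cavalleri et al. Eq. (18) estimate
x_asd = displacement_asd_on_resonance(273.0, 0.56, gamma, 0.1)
```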


  844   Tue Sep 16 09:50:06 2014   Horng Sheng Chia   Summary   Crackle   SURF report: Optimization of Michelson Interferometer Signals in Crackle Noise Detection

 Optimization of Michelson Interferometer Signals in Crackle Noise Detection

The search for crackle noise has been limited by the presence of various sources of noise, including laser frequency noise, laser intensity noise, and the misalignment of the end mirrors in the Michelson interferometer. We developed an optimization algorithm that finds the parameters minimizing the coupling of these noises into the output. These parameters include the microscopic and macroscopic length differences of the Michelson arms, and the angular alignment of the end mirrors.


  153   Thu Jul 8 16:41:29 2010   Jan   Misc   Seismometry   Ranger Pics

This Ranger is now lying in pieces in West Bridge:



First we removed the lid. You can see an isolation cylinder around the central metal
part. This cylinder, together with the interior of the Ranger, can be taken out of the dark
metal enclosure.


Magnets are fastened all around the isolation cylinder. One of the magnets was missing
(purposefully?). The magnets are oriented so that a repelling force forms between
them and the suspended mass. The purpose of the magnets is to decrease the
resonance frequency of the suspended mass.


Next, we opened the bottom of the cylinder. You can now see the suspended mass.
In some of the following pictures you can also find a copper ring (flexure) that was
screwed partially to the mass and partially to the cylinder. Another flexure ring is
screwed to the top of the test mass. I guess the rings are used to fix the horizontal
position of the mass without producing a significant force in the vertical direction. The
bottom part also carries the calibration coil.


After desoldering the wires from the calibration coil, we could completely remove the mass
from the isolation cylinder. We then found out how the mass is suspended, where the readout
coil sits, etc.:

DSC02509.JPG DSC02513.JPG

  154   Sat Jul 17 13:41:45 2010   Jan   Misc   Seismometry   Ranger

Just wanted to mention that the Ranger is reassembled. It was straightforward, except that the Ranger did not work when we first put the pieces together. The last (important) screws you turn fasten the copper rings to the mass (at bottom and top). We observed a nice oscillation of the mass around equilibrium, but only before the very last screw was tightened. Since the copper rings are for horizontal alignment of the mass, I guess the mass drifts a little towards the walls of the Ranger while the screws are being turned, until it eventually touches the walls. You can fix this problem because the two copper rings are not perfectly identical in shape, and/or they are not perfectly circular. So I just changed the orientation of one copper ring, and the mass kept oscillating nicely when all screws were fastened.

  157   Tue Jul 27 00:22:04 2010   Jan   Misc   Seismometry   Ranger

The Ranger is in West Bridge again. This time we will keep it as long as it takes to add capacitive sensors to it.

  169   Wed Jan 19 17:13:11 2011   Jan   Computing   Seismometry   First steps towards full NN Wiener-filter simulator

I put some things together today and programmed the first lines of a 2D NN Wiener-filter simulation. 2D is ok since it is for aLIGO, where the focus lies on surface displacement. A Wiener filter is ok (instead of an adaptive one) since I don't want to get slowed down by the usual finding-the-minimum-of-the-cost-function-and-staying-there problem. We know how to deal with that in principle; one just needs to program a few more things than I need for a Wiener filter.

My first results are based on a simplified version of surface fields. I assume that all waves have the same seismic speed. It is easy to add more complexity to the simulation, but I want to understand filtering simple fields first. I am using the LMS version of Wiener filtering based on estimated correlations.


The seismic field is constructed from a number of plane waves. Again, one day I will see what happens in the case of spherical waves, but let's forget about it for now. I calculate the seismic field as a function of time, calculate the NN of a test mass, position a few seismometers on the surface, add Gaussian noise to all seismometer data and the test-mass displacement, and then do the Wiener filtering to estimate the NN from the seismic data. A snapshot of the seismic field after one second is shown in the contour plot.


Seismometers are placed randomly around the test mass at (0,0) except for one seismometer that is always located at the origin. This seismometer plays a special role since it is in principle sufficient to use data from this seismometer alone to estimate NN (as explained in P0900113). The plot above shows coherence between seismometer data and test-mass displacement estimated from the simulated time series.


The seismometers measure displacement with SNR~10. This is why the seismometer data look almost harmonic in the time series (green curve). The fact that every seismometer signal is harmonic is a consequence of the seismic waves all having the same speed: an arbitrary sum of such waves produces harmonic displacement at any point of the surface (although with varying amplitude and phase). The figure shows that the Wiener filter is doing a good job. The question is whether we can do any better. The answer is 'yes', depending on the instrumental noise of the seismometers.


So what do I mean? Isn't the Wiener filter always the optimal filter? No, it is not. It is the optimal filter only if you have/know nothing else but the seismometer data and the test-mass displacement. The title of the last plot shows two numbers. These are related to coherence via 1/(1/coh^2-1); the higher the number, the higher the coherence. The first number is calculated from the coherence between the estimated NN displacement of the test mass and the true NN displacement. Since there is other stuff shaking the mirror, I can only know in simulations what the true NN is. The second number is calculated from the coherence between the seismometer at the origin and the true NN. It is higher! This means that the one seismometer at the origin is doing better than the Wiener filter using data from the entire array. How can this be? It can, because the two numbers are not derived from coherence between the total test-mass displacement and seismometer data, but only between the NN displacement and seismometer data. Even though this can only be done in simulation, it means that even in reality you should only use the one seismometer at the origin. This strategy is based on our a priori knowledge of how NN is generated by seismic fields. I am currently simulating a rather simple seismic field, so it still needs to be checked whether this conclusion holds for more realistic seismic fields.

But even in this simulated case, the Wiener filter performs better if you simulate a shitty seismometer (e.g. SNR~2 instead of 10). I guess this is because averaging over many instrumental-noise realizations (from many seismometers) gains you more than the ability to reproduce the NN signal from a single seismometer's data.
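The Wiener-filter step described above can be sketched as a zero-lag multichannel filter; the synthetic field below is a toy stand-in (one common 'seismic' signal, five sensors, SNR ~ 10), not the actual simulation code:

```python
import numpy as np

rng = np.random.default_rng(0)
n, n_sens = 20000, 5

# Toy common 'seismic' signal plus per-sensor instrumental noise (SNR ~ 10)
t = np.arange(n)
common = np.sin(2 * np.pi * 0.01 * t)
sensors = common[None, :] + 0.1 * rng.standard_normal((n_sens, n))
target = 3.0 * common + 0.01 * rng.standard_normal(n)  # stand-in for NN

# Zero-lag Wiener filter, w = C_ss^{-1} c_st, estimated from the data itself
C = sensors @ sensors.T / n
c = sensors @ target / n
w = np.linalg.solve(C, c)

residual = target - w @ sensors  # what is left after subtraction
```

Averaging over the five noisy channels is exactly why the array filter wins when the sensors are noisy.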

  170   Thu Jan 20 16:05:14 2011   Jan   Computing   Seismometry   Wiener v. seismometer 0

So I was curious about comparing the performance of the array-based NN Wiener filter versus the single seismometer filter (the seismometer that sits at the test mass). I considered two different instrumental scenarios (seismometers have SNR 10 or SNR 1), and two different seismic scenarios (seismic field does or does not contain high-wavenumber waves, i.e. speed = 100m/s). Remember that this is a 2D simulation, so you can only distinguish between the various modes by their speeds. The simulated field always contains Rayleigh waves (i.e. waves with c=200m/s), and body waves (c=600m/s and higher).

There are 4 combinations of instrumental and seismic scenarios. I already found yesterday that the array Wiener filter is better when seismometers are bad. Here are two plots, left figure without high-k waves, right figure with high-k waves, for the SNR 1 case:


'gamma' is the coherence between the NN and either the Wiener-filtered data or data from seismometer 0. There is not much of a difference between the two figures, so mode content does not play a very important role here. Now the same figures for seismometers with SNR 10:


Here, the single-seismometer filter is much better. A value of 10 in the plots means that the filter gets about 95% of the NN power correctly; a value of 100 means about 99.5%. For the high-SNR case, the single-seismometer filter's advantage over the Wiener filter shrinks when the seismic field contains high-k waves. I am not sure why this is the case.
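For reference, the mapping used in the plot titles, gamma = 1/(1/coh^2 - 1), and its inverse:

```python
def gamma_from_coherence(coh2):
    """Plot value from squared coherence: gamma = 1/(1/coh^2 - 1)."""
    return 1.0 / (1.0 / coh2 - 1.0)

def coherence_from_gamma(g):
    """Inverse mapping: coh^2 = g/(1+g)."""
    return g / (1.0 + g)

# gamma = 10  -> coh^2 = 10/11  ~ 0.91 (amplitude coherence ~ 0.95)
# gamma = 100 -> coh^2 = 100/101 ~ 0.99 (amplitude coherence ~ 0.995)
```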

The next steps are
A) Simulate spherical waves
B) Simulate wavelets with plane wavefronts (requires implementation of FFT and multi-component FIR filter)
C) Simulate wavelets with spherical wavefronts

Other goals of this simulation are
A) Test PCA
B) Compare filter performance with quality of spatial spectra (i.e. we want to know if the array needs to be able to measure good spatial spectra in order to do good NN filtering)

  171   Fri Jan 21 12:43:17 2011   Jan   Computing   Seismometry   cleaned up code and new results

It turns out that I had to do some clean-up of my NN code:

1) The SNRs were wrong. The problem is that after summing all kinds of seismic waves and modes, the total field should have a certain spectral density, which is specified by the user. Now the code works, and the seismic field has the correct spectral density no matter how it is constructed.

2) I started with a pretty unrealistic scenario. The noise on the test mass, and by this I mean everything but the NN, was too strong. Since this is a simulation of NN subtraction, we should rather assume that NN is much stronger than anything else.

3) I filtered out the wrong kind of NN. I am now projecting the NN onto the direction of the arm, and then I let the filter try to subtract it. It turns out, and it is fairly easy to prove with paper and pencil, that a single seismometer can never be used to subtract NN. This is because of a phase offset between the seismic displacement at the origin and the NN at the origin. It is easy to show that the single-seismometer method only works for the vertical NN, or underground for body waves.
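Point 3 can be checked numerically: for two signals in quadrature, the best zero-lag coefficient comes out (numerically) zero, so a single-seismometer, zero-lag filter subtracts essentially nothing. The signals below are illustrative stand-ins:

```python
import numpy as np

t = np.linspace(0, 1, 10000, endpoint=False)
ground = np.cos(2 * np.pi * 10 * t)   # seismometer at the origin
nn = np.sin(2 * np.pi * 10 * t)       # horizontal NN, 90 degrees out of phase

# Best zero-lag coefficient, and how much power it removes
w = np.dot(ground, nn) / np.dot(ground, ground)
residual = nn - w * ground
```

Over an integer number of periods the cos/sin overlap vanishes, so w ~ 0 and the residual keeps essentially all of the NN power.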


This plot is just the proof of the phase offset between horizontal NN and ground displacement at the origin. The offset depends on the wave content of the seismic field:


The S0 points in the following plot are now obsolete. As you can see, the Wiener filter performs excellently now because of the high NN/rest ratio of the TM displacement. The numbers in the title now tell you how much NN power is subtracted, so a '1' is pretty nice...


One question is why the filter performance varies from simulation to simulation. Can't we guarantee that the filter always works? Yes, we can. One just needs to understand that the plot shows the subtraction efficiency. It can happen that a seismic field does not produce much NN, and then we don't need to subtract much. Let's check whether the filter performance correlates with the NN amplitude:


As you can see, it seems that most of the performance variation can be explained by the changing amplitude of the NN itself. The filter cannot subtract much only in cases where you don't really need to subtract, and it subtracts nicely when the NN is strong.

  172   Sat Jan 22 20:19:51 2011   Jan   Computing   Seismometry   Spatial spectra

All right. The next problem I wanted to look at was whether the ability of the seismic array to produce spatial spectra is somehow correlated with its NN-subtraction performance. Whatever the result is, its implications are very important. Array design is usually done to maximize the accuracy of spatial spectra, so the general question is what our guidelines are going to be: information theory or good spatial spectra? I was always advertising the information-theory approach, but it is scary if you think about it, because the array may theoretically be good for nothing useful to seismology, yet still provide the information that we need for our purposes.

Ok, who wins? Again, the current state of the simulation produces plane waves all at the same frequency, but with varying speeds. The challenge is really the mode variation (i.e. varying speeds) and not so much varying frequencies; you can always switch to FFT methods as soon as you inject waves at a variety of frequencies. Also, I am simulating arrays of 20 seismometers that are randomly located (within a 30m*30m area), including one seismometer that is always at the origin. One of my next steps will be to study the importance of array design. So here is how well these arrays can do in terms of measuring spatial spectra:


The circles indicate seismic speeds of {100,250,500,1000}m/s and the white dots the injected waves (representing two modes, one at 200m/s, the other at 600m/s). The results are not good at all (as bad as the maps from the geophone array we had at LLO). It is not really surprising that the results are bad, since the seismometer locations are random, but I did not expect them to be so pathetic. Now, what about NN-subtraction performance?


The numbers indicate the count of simulation runs. The two spatial spectra above have indices 3 (left figure) and 4 (right figure). So you see that everything is fine with the NN subtraction even though the spatial spectra can be really bad. This is great news, since we are now deep in information theory. We should not get too excited at this point, as we still need to make the simulation more realistic, but I think we have produced a first powerful clue that the strategy of monitoring seismic sources instead of the seismic field may actually work.

  173   Sun Jan 23 09:03:43 2011   Jan   Computing   Seismometry   phase offset NN<->xi

I just want to follow up on my conclusion that a single seismometer cannot be used to filter horizontal NN at the surface. The reason is that there is a 90° phase delay of the NN compared to the ground displacement at the test mass. The first reaction to this should be: "Why the heck a phase delay? Wouldn't gravity perturbations become important before the seismic field reaches the TM?" The answer is surprising, but it is "No". The way NN builds up from plane waves does not show anything like a phase advance. Then you may say that whatever is true for plane waves must be true for any other field, since you can always expand your field into plane waves. This, however, is not true, for reasons I am going to explain in a later post. All right, but to say that the seismic displacement is 90° ahead of the NN really depends on which direction of NN you look at. The interferometer arm has a direction e_x, and the plane seismic wave propagates along e_k. Depending on e_k, you may get an additional "-" sign between the seismic displacement and the NN in the direction of e_x. This is the real show killer. If there were a universal 90° between seismic displacement and NN, then we could use a single seismometer to subtract NN: we would just take its data from 90° in the past. But now the problem is that we would need to look either 90° into the past or the future depending on the propagation direction of the seismic wave. And here are two plots of a single-wave simulation, the first with -pi/2<angle(e_x,e_k)<pi/2, the second with pi/2<angle(e_x,e_k)<3*pi/2:



  174   Sun Jan 23 10:27:07 2011   Jan   Computing   Seismometry   spiral v. random

A spiral shape is a very good choice of array configuration for measuring spatial spectra, since it produces little aliasing. How important is the array configuration for NN subtraction? Again: plane waves, wave speeds {100,200,600}m/s, 2D, SNR~10. The array response looks like Stonehenge:


A spiral array is doing a fairly good job to measure spatial spectra:


The injected waves are now represented by dots with radii proportional to the wave amplitudes (there is always a total of 12 waves, so some dots are not large enough to be seen). The spatial spectra are calculated from covariance matrices, so theory says the spatial spectra should get better with matched-filtering methods (another thing to look at next week...).
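A minimal sketch of a covariance-based (conventional/Bartlett) spatial spectrum, with a single illustrative plane wave at 200 m/s and a 20-element random array as described above; the real simulation differs in detail:

```python
import numpy as np

rng = np.random.default_rng(1)
f = 10.0                                     # Hz, illustrative
k_true = np.array([2 * np.pi * f / 200.0, 0.0])   # 200 m/s wave along x
pos = rng.uniform(-15, 15, size=(20, 2))          # 30m x 30m random array

# Narrow-band snapshots: one plane wave with random amplitude, plus sensor noise
n_snap = 200
phases = np.exp(1j * pos @ k_true)
snaps = phases[:, None] * rng.standard_normal(n_snap) + 0.05 * (
    rng.standard_normal((20, n_snap)) + 1j * rng.standard_normal((20, n_snap)))
R = snaps @ snaps.conj().T / n_snap          # sample covariance matrix

def bartlett(R, pos, k):
    """Conventional spatial spectrum: P(k) = a(k)^H R a(k) / N^2."""
    a = np.exp(1j * pos @ k)
    return (a.conj() @ R @ a).real / len(a) ** 2

p_true = bartlett(R, pos, k_true)                          # at the injected k
p_off = bartlett(R, pos, np.array([0.0, 2 * np.pi * f / 100.0]))  # a wrong k
```

Scanning `bartlett` over a grid of wavevectors produces exactly the kind of map shown in the plots.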

Now the comparison between NN subtraction using 20 seismometers, 19 of them randomly placed and one at the origin, and NN subtraction using 20 seismometers in a spiral:


A little surprising to me is that the NN-subtraction performance is not substantially better with the spiral configuration of seismometers. The subtraction results show less variation, but this could simply be because the random configuration changes between simulation runs. So the result is that we don't need to worry much about array configuration, at least when all waves have the same frequency. We will need to look at this again when we start injecting wavelets with more complicated spectra; then it is more challenging to ensure that we obtain information at all wavelengths. The next question is how much the NN subtraction depends on the number of seismometers.

  175   Sun Jan 23 12:59:18 2011   Jan   Computing   Seismometry   Less seismometers, less subtraction?

Considering equal areas covered by seismic arrays, the number of seismometers relates to the density of seismometers and therefore to the amount of aliasing when measuring spatial spectra. In the following, I considered four cases:

1) 10 seismometers randomly placed (as usual, one of them always at the origin)
2) 10 seismometers in a spiral winding one time around the origin
3) The same number winding twice around the origin (in which case the array does not really look like a spiral anymore):

4) And since isotropy issues start to become important, the fourth case is a circular array with one of the 10 seismometers at the origin and the others evenly spaced on the circle.
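The spiral configurations in cases 2 and 3 can be generated parametrically; the radii below are illustrative, and `spiral_array` is a hypothetical helper, not the simulation's own function:

```python
import math

def spiral_array(n_sensors, windings, r_max, r_min=0.5):
    """Archimedean spiral: radius grows linearly with angle, winding
    'windings' times around the origin; one sensor is kept at (0, 0)."""
    pts = [(0.0, 0.0)]
    for i in range(n_sensors - 1):
        t = (i + 1) / (n_sensors - 1)
        r = r_min + (r_max - r_min) * t
        th = 2.0 * math.pi * windings * t
        pts.append((r * math.cos(th), r * math.sin(th)))
    return pts

single = spiral_array(10, windings=1, r_max=15.0)   # case 2
double = spiral_array(10, windings=2, r_max=15.0)   # case 3
```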

Just as a reminder: there was not much of a difference in NN-subtraction performance between the spiral and random arrays in the case of 20 seismometers. Now we can check whether this is still the case for a smaller number of seismometers, and what the difference is between 10 and 20 seismometers. Initially we flirted with the idea of using a single seismometer for NN subtraction, which does not work (for horizontal NN from surface fields), but maybe we can do it with a few seismometers around the test mass instead of 20 covering a large area. Let's check.

Here are the four performance graphs for the four cases (in the order given above):



All in all, the subtraction still works very well. We only need to subtract, say, 90% of the NN, but we still see average subtractions of more than 99%. That's great, but I expect these numbers to drop quite a bit once we add spherical waves and wavelets to the field. Although all arrays perform pretty well, the twice-winding spiral seems to be the best choice. Intuitively this makes a lot of sense: NN drops as 1/r^3 with the distance r to the TM, so you want to gather information more accurately from regions very close to the TM, which leads to the idea of increasing the seismometer density near the TM. I am not sure, though, whether this explanation is the correct one.


  176   Mon Jan 24 15:50:38 2011   Jan   Computing   Seismometry   Multi-frequency and spherical

I had to rebuild some of the guts of my simulation to prepare it for the big changes to come later this week, so I only have two results to report today. The code can now take arbitrary waveforms. I tested it with spherical waves: I injected 12 spherical waves into the field, all originating 50m from the test mass with arbitrary azimuths. The 12 waves are distributed over 4 frequencies, {10,14,18,22}Hz, with equal spectral density (so 3 waves per frequency). The displacement field is far more complex than the plane-wave fields and looks more like a rough endoplasmic reticulum:


The spatial spectra are not so much different from the plane-wave spectra:


The white dots now indicate the back-azimuth of the injected waves, not their propagation direction. And we can finally compare subtraction performance for plane-wave and spherical-wave fields:


Here the plane-wave simulation was done with 12 plane waves at the same 4 frequencies as the spherical waves, and in both cases I chose a 20-seismometer 4*pi spiral array. Note that the subtraction performance is pretty much identical even though the NN was generally stronger in the spherical-wave simulation (dots 5 and 20 in the right figure lie somewhere in between the upper right group of dots in the left figure). This makes me wonder whether I shouldn't switch to some absolute measure of subtraction performance, so that the absolute value of the NN does not matter anymore. In the end, we don't want to achieve a subtraction factor, but a subtraction level (i.e. the target sensitivity of the GW detectors).

Anyway, the result is very interesting. I always thought that spherical waves (i.e. local sources) would make everything far more complicated. In fact, they do not. And the fact that the field consists of waves at 4 different frequencies also does not do much harm (subtraction performance decreased a little). Remember that I am currently using a single-tap FIR filter, if you want. I thought that you would need more taps once you have more frequencies. I was wrong. The next step is the wavelet simulation. This will eventually lead to a final verdict about single-tap vs. multi-tap filtering.


  177   Tue Jan 25 11:57:23 2011 JanComputingSeismometrywavelets

Here is the hour of truth (I think). I ran simulations of wavelets. These are not anymore characterized by a specific frequency, but by a corner frequency. The spectra of these wavelets almost look like a pendulum transfer function, where the resonance frequency now has the meaning of a corner frequency. The width of the peak at the corner frequency depends on the width of the wavelets. These wavelets propagate (without dispersion) from somewhere at some time into and out of the grid. There are always 12 wavelets at four different corner frequencies (same as for the other waves in my previous posts). The NN now has the following time series:
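A simple stand-in for such wavelets is a sine-Gaussian: its spectrum is a peak at f0 whose width is set by the envelope width tau, roughly matching the pendulum-like spectra described above. The specific waveform, sampling rate, and arrival-time distribution below are assumptions of this sketch:

```python
import numpy as np

def sine_gaussian(t, t0, f0, tau):
    """Sine-Gaussian wavelet: an assumed stand-in for the simulated
    wavelets. f0 plays the role of the corner frequency; tau sets the
    width of the spectral peak."""
    return np.exp(-((t - t0) / tau) ** 2) * np.sin(2 * np.pi * f0 * (t - t0))

fs = 256.0
t = np.arange(0, 8, 1 / fs)
rng = np.random.default_rng(1)

# 12 wavelets, 3 per corner frequency, at random arrival times
nn = np.zeros_like(t)
for f0 in np.repeat([10.0, 14.0, 18.0, 22.0], 3):
    nn += sine_gaussian(t, rng.uniform(1, 7), f0, tau=0.2)
```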


You can see that from time to time a stronger wavelet passes by and leads to a pulse-like excitation of the NN. Now, the first news is that the achieved subtraction factor drops significantly compared to the stationary cases (plane waves and spherical waves):


And the 4*pi, 10 seismometer spiral dropped below an average factor of 0.88. But I promised to introduce an absolute figure to quantify subtraction performance. What I am now doing is to subtract the filtered array NN estimation from the real NN and take its standard deviation. The standard deviation of the residual NN should not be larger than the standard deviation of the other noise that is part of the TM displacement. In addition to NN, I add a 1e-16 stddev noise to the TM motion. Here is the absolute filter performance:
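The absolute figure of merit can be sketched in a few lines (the NN level and the 99% recovery are toy numbers for illustration only; the 1e-16 noise stddev is the one quoted above):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 4096

nn_true = rng.normal(0, 1e-15, n)      # toy "true" NN time series (assumed level)
tm_noise = rng.normal(0, 1e-16, n)     # the 1e-16 stddev noise added to TM motion

# Toy assumption: the filtered array estimate recovers 99% of the NN
nn_estimate = 0.99 * nn_true
residual = nn_true - nn_estimate

# Absolute criterion: the residual stddev should not exceed the stddev
# of the other noise contributing to the TM displacement
good_enough = residual.std() <= tm_noise.std()
```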


As you can see, subtraction still works sufficiently well! I am now pretty much puzzled since I did not expect this at all. Ok, subtraction factors decreased a lot, but they are still good enough. REMINDER: I am using a SINGLE-TAP (multi input channel) Wiener filter to do the subtraction. It is amazing. Ideas to make the problem even more complex and to challenge the filter even more are welcome.


  178   Wed Jan 26 10:34:53 2011 JanSummarySeismometryFIR filters and linear estimation

I wanted to write down what I learned from our filter discussion yesterday. There seem to be two different approaches, but the subject is sufficiently complex to be wrong about details. Anyway, I currently believe that one can distinguish between real filters that operate during run time, and estimation algorithms that cannot be implemented in this way since they are acausal. For simplicity, let's focus on FIR filter and linear estimation to represent the two cases.

A) FIR filters


An FIR filter has M tap coefficients per channel. If the data is sampled, then you would take the past M samples (including the sample at present time t) of each channel, run them through the FIR, and subtract the FIR output from the test-mass sample at time t. This can also be implemented in a feed-forward system so that the test-mass data is not sampled. Test-mass data is only used initially to calculate the FIR coefficients, unless the FIR is part of an adaptive algorithm. For adaptive filters, you would factor out of the FIR anything that you know already (e.g. your best estimates of transfer functions) and only let it do the optimization around this starting value.
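The mechanics of this can be sketched as follows (a generic causal multichannel FIR, not the actual subtraction code; the two-channel test signals are made up):

```python
import numpy as np

def fir_predict(channels, taps):
    """Causal multichannel FIR prediction: channels is (N_ch, T),
    taps is (N_ch, M). The prediction at time t uses the past M
    samples of every channel."""
    n_ch, M = taps.shape
    T = channels.shape[1]
    pred = np.zeros(T)
    for c in range(n_ch):
        # full convolution truncated to T keeps the filter causal
        pred += np.convolve(channels[c], taps[c])[:T]
    return pred

# Sanity check: with a single tap (M=1) per channel the FIR reduces
# to a static linear combination of the channels
x = np.vstack([np.sin(np.linspace(0, 10, 200)),
               np.cos(np.linspace(0, 10, 200))])
w = np.array([[0.5], [0.25]])
y = fir_predict(x, w)
```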

The FIR filter can only work if transfer functions do not change much over time. This is not the case though for Newtonian noise. Imagine the following case:


where you have two seismometers around a test mass along a line, one of them closer to the test mass than the other. We need to monitor the vertical displacement to estimate NN parallel to the line (at least when surface fields are dominant). If a plane wave propagates upwards, perpendicular to the line, then there will be no NN parallel to this line (because of symmetry). The seismic signals at S1 and S2 are identical. Now a plane wave propagating parallel to the line will produce NN. If the distance between the seismometers happens to equal the wavelength of the plane wave, then again the seismometers will show identical seismic signals, but this time there is NN. An FIR filter would give the same NN prediction in these two cases, but the NN is actually different (being absent in the first case). So it is pretty obvious that an FIR alone cannot handle this situation.
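The degeneracy is easy to confirm numerically (frequency, wave speed, and seismometer spacing below are assumed values for illustration):

```python
import numpy as np

f, c = 10.0, 200.0            # assumed wave frequency and speed
lam = c / f                   # wavelength
t = np.linspace(0, 0.5, 500)

# Case 1: plane wave perpendicular to the S1-S2 line. Both seismometers
# see the same phase, and by symmetry there is no NN along the line.
s1_perp = np.sin(2 * np.pi * f * t)
s2_perp = np.sin(2 * np.pi * f * t)

# Case 2: wave along the line, with seismometer spacing d = one
# wavelength. The extra phase is 2*pi*d/lam = 2*pi, so the signals are
# again identical, but this wave does produce NN along the line.
d = lam
s1_par = np.sin(2 * np.pi * f * t)
s2_par = np.sin(2 * np.pi * (f * t - d / lam))

# Any filter acting only on (S1, S2) cannot distinguish the two cases
indistinguishable = (np.allclose(s1_perp, s2_perp)
                     and np.allclose(s1_par, s2_par))
```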

What is the purpose of the FIR anyway? In the case of noise subtraction, it is a clever time-domain representation of transfer functions. Clever means optimal if the FIR is a Wiener filter. So it contains information of the channels between sensors and test mass, but it does not care at all about information content in the sensor data. This information is (intentionally if you want) averaged out when you calculate the FIR filter coefficients.

B) Linear estimation


So how to deal with information content in sensor data from multiple input channels? We will assume that an FIR can be applied to factor out the transfer functions from this problem. In the surface NN case, this would be the 1/f^2 from NN acceleration to test-mass displacement, and the exp(-2*pi*f*h/c) - h being the height of the test mass above ground - which accounts for the frequency-dependent exponential suppression of NN. Since the information content of the seismic field changes continuously, we cannot train a filter that would be able to represent this information for all times. So it is obvious, that this information needs to be updated continuously.

The problem is very similar to GW data analysis. What we are going to do is to construct an NN template that depends on a few template parameters. We estimate these parameters (maximum likelihood) and then we subtract our best estimate of the NN signal from the data. This cannot be implemented as feed forward and relies on chopping the data into stretches of M samples (not necessarily the same value for M as in the FIR case). Now what are the template parameters? These are the coefficients used to combine the data stretches of the N sensors. This is great since the templates depend linearly on these parameters. And it is trivial to calculate the maximum-likelihood estimates of the template parameters. The formula is in fact analogous to calculating the Wiener-filter coefficients (optimal linear estimates). Whether we should use only one parameter per channel (as discussed yesterday) or rather chop the sensor data into even smaller stretches and introduce additional template coefficients will depend on the sensor data and how nature links them to the test mass. Results of my current simulation suggest that only one parameter per channel is required.
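With one template coefficient per channel, the maximum-likelihood estimate is just a linear least-squares fit, which is indeed the same normal-equations formula as for the single-tap Wiener filter. A toy sketch (the sensor model, coefficient values, and noise level are all made up):

```python
import numpy as np

rng = np.random.default_rng(3)
T, N = 2048, 4

S = rng.normal(size=(T, N))            # data stretches of N sensors
true_a = np.array([0.7, -0.3, 0.1, 0.5])
nn = S @ true_a                        # toy: NN is linear in the sensor data
tm = nn + rng.normal(0, 0.1, T)        # TM data = NN + other displacement noise

# Maximum-likelihood (least-squares) template coefficients, one per
# channel; these are the optimal linear estimates
a_hat, *_ = np.linalg.lstsq(S, tm, rcond=None)

residual = tm - S @ a_hat              # best-estimate NN subtracted from data
```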

When I realized that NN subtraction is a linear estimation problem with templates etc., it immediately became clear that one could do higher-order noise subtraction, so that we would never be limited by other contributions to the test-mass displacement (and here I essentially mean GWs, since you don't need to subtract NN below other GWD noise, but maybe below the GW spectrum if other instrumental noise is also weaker). Something to look at in the future (whether this scenario, i.e. NN > GW > other noise, is likely or not).

  179   Thu Jan 27 14:51:41 2011 JanComputingSeismometryapproaching the real world / transfer functions

The simulation is not a good representation of a real detector. The first step to make it a little more realistic is to simulate variables that are actually measured. So for example, instead of using TM acceleration in my simulation, I need to simulate TM displacement. This is not a big change in terms of simulating the problem, but it forces me to program filters that correct the seismometer data for any transfer functions between seismometers and GWD data before the linear estimation is calculated. This has now been programmed. Just to mention, the last, more important step to make the simulation more realistic is to simulate seismic and thermal noise as additional TM displacement. Currently, I am only adding white noise to the TM displacement. If the TM displacement noise is not white, then you would have to modify the optimal linear estimator in the usual way (correlations substituted by integrals in the frequency domain using frequency-dependent noise weights).
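The two corrections discussed in the previous post, 1/(2*pi*f)^2 from NN acceleration to TM displacement and the exponential suppression exp(-2*pi*f*h/c), can be applied to the seismometer data in the frequency domain. A minimal sketch (the TM height h, the wave speed c, the 5 Hz cut used to tame the diverging 1/f^2, and the sampling rate are all assumed values):

```python
import numpy as np

def correction_filter(seis, fs, h=1.0, c=200.0, f_cut=5.0):
    """Apply the assumed frequency-domain corrections to seismometer
    data before the linear estimation: 1/(2*pi*f)^2 (acceleration ->
    displacement) and exp(-2*pi*f*h/c) (NN suppression for a test mass
    suspended a height h above ground)."""
    f = np.fft.rfftfreq(len(seis), 1 / fs)
    H = np.zeros_like(f)
    band = f >= f_cut                  # crude cut to avoid the 1/f^2 divergence
    H[band] = np.exp(-2 * np.pi * f[band] * h / c) / (2 * np.pi * f[band]) ** 2
    return np.fft.irfft(np.fft.rfft(seis) * H, n=len(seis))

fs = 256.0
x = np.random.default_rng(4).normal(size=1024)
y = correction_filter(x, fs)
```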

I am now also applying 5Hz high-pass filters here and there to reduce numerical errors accumulating in time-series integrations. The next three plots are just a check that the results still make sense after all these changes. The first plot shows the subtraction residuals without correcting for any frequency dependence in the transfer functions between TM displacement and seismometer data:


The dashed line indicates the expected minimum of the NN subtraction residuals, which is determined by the TM-displacement noise (which in reality would be seismic noise, thermal noise and GWs). The next plot shows the residuals if one applies filters to take the conversion from TM acceleration into displacement into account:


This is already sufficient for the spiral array to perform more or less optimally. In all simulations, I am injecting a merry mix of wavelets and spherical waves at different frequencies. So the displacement field is as complex as it can get. Last but not least, I modified the filters such that they also take the frequency-dependent exponential suppression of NN into account (because of TM being suspended some distance above ground):


The spiral array was already close to optimal, but the performance of the circular array did improve quite a bit (although 10 simulation runs may not be enough to compare this convincingly with the previous case).

  180   Fri Jan 28 11:22:34 2011 JanComputingSeismometryrealistic noise model -> many problems

So far, the test mass noise was white noise such that SNR = NN/noise was about 10. Now the simulation generates more realistic TM noise with the following spectrum:


The time series look like:


So the TM displacement is completely dominated by the low-frequency noise (which I cut off below 3Hz to avoid divergent noise). None of the TM noise is correlated with NN. Now this should be true for aLIGO since it is suspension-thermal and radiation-pressure noise limited at lowest frequencies, but who knows. If it was really limited by seismic noise, then we would also deal with the problem that NN and TM noise are correlated.

Anyway, changing to this more realistic TM noise means that nothing works anymore. The linear estimator tries to subtract the dominant low-frequency noise instead of NN. You cannot solve this problem simply by high-pass filtering the data. The NN subtraction problem becomes genuinely frequency-dependent. So what I will start to do now is to program a frequency-dependent linear estimator. I am really curious how well this is going to work. I also need to change my figures of merit. A simple plot of standard-deviation subtraction residuals will always look bad. This is because you cannot subtract any of the NN at lowest frequencies (since TM noise is so strong there). So I need to plot spectra of subtraction noise and make sure that the residuals lie below or at least close to the TM noise spectrum.

  181   Fri Jan 28 14:50:27 2011 JanComputingSeismometryColored noise subtraction, a first shot

Just briefly, my first subtraction spectra:


Much better than I expected, but also not good enough. All spectra in this plot (except for the constant noise model) are averages over 10 simulation runs. The NN is the average NN, and the two "res." curves show the residual after subtraction. It seems that the frequency-dependent linear estimator is working since subtraction performance is consistent with the (frequency-dependent) SNR. Remember that the total integrated SNR=NN/noise is much smaller than 1 due to the low-frequency noise, and therefore you don't achieve any subtraction using the simple time-domain linear estimators. Now the final step is to improve the subtraction performance a little more. I don't have clever ideas how to do this, but there will be a way.

  182   Fri Jan 28 15:28:52 2011 JanComputingSeismometryHaha, success!

Instead of estimating in the frequency domain, I now have a filter that is defined in the frequency domain, but transformed into the time domain and then applied to the seismometer data. The filtered seismometer data can then be used for the usual time-domain linear estimators. The result is perfect:
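The define-in-frequency, apply-in-time trick can be sketched like this (the zero-phase response, the tap count, the windowing, and the toy noise-weighting response suppressing the noisy band below 5 Hz are all assumptions of this sketch, not the actual filter used):

```python
import numpy as np

def fir_from_response(H, n_taps):
    """Turn a desired real, zero-phase frequency response H (sampled on
    rfft frequencies) into a short time-domain FIR via the inverse FFT.
    Zero phase and the tap count are assumptions of this sketch."""
    h = np.fft.irfft(H)
    h = np.roll(h, n_taps // 2)[:n_taps]   # center the impulse response, truncate
    h *= np.hanning(n_taps)                # window to reduce truncation ringing
    return h

fs = 256.0
n = 1024
f = np.fft.rfftfreq(n, 1 / fs)
# Toy weighting: de-emphasize the band below ~5 Hz where TM noise dominates
H = f / (f + 5.0)

h = fir_from_response(H, n_taps=65)
x = np.random.default_rng(5).normal(size=n)
x_filt = np.convolve(x, h, mode='same')    # filtered data for the time-domain estimator
```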


So what's left on the list? Although it is not required anymore, I "historically" had an interest in PCA, and analyzing the eigenvalues of the linear estimators may still tell us something about the number of seismometers that we need. And it is simply cool to understand the estimation of information in seismic noise fields.

  183   Fri Jan 28 21:09:56 2011 JanSummarySeismometryNN subtraction diagram

This is how Newtonian-noise subtraction works:


  184   Wed Mar 9 18:32:45 2011 JanDailyProgressNoise BudgetLimits to NN subtraction

I wanted to push the limits and see when NN subtraction performance starts to break by changing the number of seismometers and the size of the array. For aLIGO, 10 seismometers in a doubly-wound spiral around the test mass with an outer radius of 8m is definitely ok. Only if I simulate a seismic field that is stronger by a factor of 20 than the 90th-percentile curve observed at LHO does it start to get problematic. The subtraction residuals in this case look like


The 20 seismometer spiral is still good, but the 10 seismometer spiral does not work anymore. It gets even worse when you consider arrays with circular shape (and one seismometer at the center near the test mass):


This result is in agreement with previous results that circular arrays have trouble in general to subtract NN from locally generated seismic waves or seismic transients (wavelets).

I should emphasize that the basic assumption is that I know what the minimum seismic wavelength is. Currently I associate the minimum wavelength with a Rayleigh overtone, but scattering could make a difference. It is possible that there are scattered waves with significantly smaller wavelength.

Attachment 1: Residuals_Spirals.pdf
  185   Thu Mar 10 14:59:54 2011 JanDailyProgressSeismometryThoughts about how to optimize feed-forward for NN

If the plan is to use feed-forward cancellation instead of noise templates, then the way to optimize the array design is to understand where gravity perturbations are generated. The following plot shows a typical gravity-perturbation field as seen by the test mass. It is a snapshot at a specific moment in time. The gravity-perturbation force is projected onto the line along the arm (Y=0). Green means no gravity perturbation along the arm generated at this point.


The plot shows that the gravity perturbations along the direction of the arm seen by the test mass are generated very close to the test mass (most of it within a radius of 10m), and that they are generated "behind" and "in front of" the mirror. This follows directly from projecting onto the arm direction. As we already know, for feed-forward we can completely neglect the existence of seismic waves and focus on actual gravity perturbations. In short, for feed-forward you would place the seismometers inside the blue-red region and not worry about any locations in the green. The distance between seismometers should be equal to or less than the distance between red and blue extrema. So even though I haven't simulated feed-forward cancellation yet, I already know how to make it work. Obviously, if the subtraction goals are more ambitious than what we need for aLIGO, then feed-forward cancellation of NN would completely fail, generating more problems than it solves. Unless someone wants to deploy hundreds to a few thousand seismometers around each test mass.

  203   Fri May 13 10:52:27 2011 JanMiscSeismometrySTS-2 guts

I was flabbergasted when I saw this. There are many really good seismometers with very simple mechanical design and electronics. This is a nice one with complicated mechanics and electronics.

RA: Awesome.

Attachment 1: STS-2_dissassembly.pdf
  309   Sun Aug 14 22:47:26 2011 JanDailyProgressCrackleMeasurement of seismic noise & mass dumping


I'm really surprised if that is the real noise spectrum. Seems too low. I suggest discussing it with Jan to see if he knows about a different calibration factor.

The gain should be correct (it is the standard GS-13 gain without amplification). I am not sure what's wrong. Yes, the spectrum is weaker by about a factor 10 compared to spectra that we measured a while ago with the T240. Ignoring the T240 results, we don't really know what to expect at frequencies above a few Hz, but the 0.3Hz value from the GS-13 is certainly way too weak (even when you include a response correction from the 1Hz pole).

  312   Mon Aug 15 19:55:10 2011 JanDailyProgressCrackleSeismic noise


We measured the seismic noise again.

The result is the attached file.

Difference from yesterday (yesterday -> today):
    1. Input voltage (5V -> +-12V)
    2. Adjustment of the spring in the seismometer
    3. Consideration of preamp factor in the seismometer (1 -> 100)
The calibration factor of the coil is still 2150 V/(m/s).

From the result, it looks like there is no change at high frequencies, but more noise at low frequencies.


 I think this agrees pretty well with the T240 spectra if I am not mistaken. Notice that the f<1Hz spectrum is too small since calibration has been done with a constant factor.

  356   Sat Sep 17 15:07:49 2011 JanComputingCrackleSimple model

I wanted to find out how difficult it is to program crackling-noise simulations. The good news is that it is not difficult at all. I spent 30min to generate this video. I am sure that a cool student can make a good blade-simulation in one month that can run on the cluster.

How to do these simulations is explained in PRB 66, 024430. The basic idea is to calculate an effective field strength at each grid point (any type of field can be considered, not only magnetic fields) and then to let spins (or whatever can be effectively described as a spin) flip when the effective field changes sign. Flips happen because the effective field at each grid location always includes an external field, which for example oscillates with large amplitude.

The video shows the spin orientation and effective fields at all grid points. I assumed a zero-range spin interaction with a random field (the nominal Ising model has nearest-neighbor interaction). So there are no avalanches, but it would be very easy to program a proper (random-field) Ising model that has spin interaction and shows avalanches. I could easily go to 100^3 grid points and still run it on my laptop. But that's about the limit using Matlab. More realistic spin interactions with more grid points need to be simulated on our cluster.
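The zero-range version fits in a few lines (field amplitude, grid size, and the random-field distribution below are assumed toy parameters; with no interaction term, each grid point flips independently and there are indeed no avalanches):

```python
import numpy as np

rng = np.random.default_rng(6)
n = 20
h_local = rng.normal(0, 1, (n, n, n))   # quenched random local fields

spins = np.sign(h_local)                # start aligned with the local field
spins[spins == 0] = 1

flips_per_step = []
for phase in np.linspace(0, 2 * np.pi, 200):
    h_ext = 3.0 * np.sin(phase)         # large-amplitude oscillating external field
    h_eff = h_local + h_ext             # zero-range model: no spin interaction
    # each spin follows the sign of its effective field; a sign change flips it
    new_spins = np.where(np.sign(h_eff) != 0, np.sign(h_eff), spins)
    flips_per_step.append(int(np.sum(new_spins != spins)))
    spins = new_spins
```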

For blades, a spin flip would probably lead to a localized point-force excitation and associated displacement response propagating through the blade. Programming this would probably take more than 30min, but it should be a nice project and can certainly be simplified if done cleverly. But this together with a good model for the spin interaction is all you need to calculate the crackling noise spectra in blades.

Attachment 1: Ising_n0.mp4
  357   Sat Sep 17 21:46:28 2011 JanComputingCrackleFull Ising

Here is a full Ising model with avalanches. As expected, edge defects lead to avalanches that creep from the corners and edges into the grid, since edge spins have fewer neighbors. The 2D video is just a plane intersection through the middle of the 3D grid. Now, the next step would be to simulate a periodic variation of the external field; the number of spin flips per time step then gives you the amplitude of the crackling noise (a very simplified model). So one could simulate a very long time series and then calculate the spectrum of the noise. It should be something like 1/f^(1.77/2), but certainly the small dimensions of the simulated grid will lead to a low-frequency cut-off. Also, the grid density will lead to a high-frequency cut-off. This is why we need the cluster. The geometry of the blade as well as a better model of the spin interaction is required to make a more accurate prediction of crackling noise. Unfortunately, we don't know the spin interaction, but we could try a few models and compare with our measurements (if we ever measure crackling noise).
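A minimal sketch of the random-field Ising dynamics with nearest-neighbor interaction and avalanches (coupling strength, grid size, and field range are toy assumptions; note this sketch uses periodic boundaries via np.roll, so it deliberately suppresses the edge effects discussed above):

```python
import numpy as np

rng = np.random.default_rng(7)
n = 16
J = 0.5                                  # assumed nearest-neighbor coupling
h_local = rng.normal(0, 1.5, (n, n, n))  # quenched random local fields
spins = -np.ones((n, n, n))              # start with all spins down

def neighbor_sum(s):
    # sum over the 6 nearest neighbors; np.roll gives periodic boundaries
    return sum(np.roll(s, d, axis=a) for a in range(3) for d in (-1, 1))

flips = []
for h_ext in np.linspace(-6, 6, 120):    # slowly ramp the external field up
    n_flips = 0
    while True:                          # relax until stable: avalanche dynamics
        h_eff = h_local + h_ext + J * neighbor_sum(spins)
        unstable = (spins * h_eff) < 0   # spins anti-aligned with their field
        if not unstable.any():
            break
        spins[unstable] *= -1            # flipping spins can destabilize neighbors
        n_flips += int(unstable.sum())
    flips.append(n_flips)                # flips per field step ~ crackle amplitude
```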

Attachment 1: Ising_2D.mp4
Attachment 2: Ising_3D.mp4
  359   Sun Sep 18 15:56:22 2011 JanComputingCrackleFull Ising


I think there are 2 problems with this approach:

1) It doesn't consider the case of finite temperature. I guess that the spin flip probability has a statistical nature which depends on the stress of the domain and the temperature.

2) In the literature for martensitic materials people talk about the distribution of domain sizes. Shouldn't there be an effect due there being lots of small domain flips and not many big ones?

 About 1) Yes, but finite temperature effects can be incorporated into the simulation without much trouble. Finite temperature is more a theory problem.

About 2) Well, I think this would come out naturally from the simulation, first when you tune your parameters and second when you move away from the simplified case of an external field ramping up from -inf to +inf. If you consider a finite-amplitude periodic external field, then I would guess that small domain flips will be observed in the simulation.

If we change the external field to a periodic force and introduce a random element like finite-temperature effects, I am sure that the results will look more interesting. However, I am not sure yet how large the grid needs to be so that at least some region inside the grid is not dominated by surface effects.

  158   Mon Aug 23 22:07:39 2010 JenneThings to BuySeismometryBoxes for Seismometer Breakout Boxes

In an effort to (1) train Jan and Sanjit to use the elog and (2) actually write down some useful info, I'm going to put some highly useful info into the elog.  We'll see what happens after that....

The deal:  we have a Trillium, an STS-2, a GS-13 and the Ranger seismometers, and we want to make nifty breakout boxes for each of them.  These aren't meant to be sophisticated; they'll just be converter boxes from the many-pin milspec connectors that each of the seismometers has to several BNCs so that we can read out the signals.  These will also give us the potential to add active control for things like mass positioning at some later time.  For now however, the basics only.

I suggest buying several boxes which are like Pomona Boxes, but cheaper.  Digi-Key has them.  I don't know how to link to my search results, so I'll list off the filters I applied / searched for in the Digi-Key catalog:

Hammond Manufacturing, Boxes, Series=1550 (we don't have to go for this series of boxes, but it seems sensible and middle-of-the-line), unpainted, watertight.

Then we have a handy-dandy list of possible sizes of nice little boxes. 

The final criterion, which Sanjit is working on, is how big the boxes need to be.  Sanjit is taking a look at the pinouts for each seismometer and determining how many BNC connectors we could possibly need for each breakout box.  Jan's guess is 8, plus power.  So we need a box big enough to comfortably fit that many connectors. 
