SUS Lab eLog
ID   Date   Author   Type   Category   Subject
  110   Wed Dec 19 23:10:27 2007   waldman   Computing   OMC   Framebuilder broken
The framebuilder on OMS isn't doing anything. I can't get data using dataviewer or DTT. Thankfully, I can still use an analog scope to get data.

sam
  111   Thu Dec 20 11:38:46 2007   aivanov   Computing   OMC   frame builder fixed
I put the things back the way they were before we borrowed the ADC board. I can see the TPT data now. Alex
  112   Thu Dec 20 19:11:40 2007   waldman   Computing   OMC   HOWTO build front ends
For instance, to build the TPT front-end code:

  • Save your file /cvs/cds/advLigo/src/epics/simLink/tpt.mdl
  • go to /cvs/cds/advLIGO on the TPT machine
  • do make clean-tpt tpt install-tpt
  • do rm /cvs/cds/caltech/chans/daq/C2TPT.ini (this step is needed because the DAQ install code isn't quite right at the time of this writing)
  • do make install-daq-tpt
  • run starttpt to restart the tpt computer.

Enjoy.
  169   Wed Jan 19 17:13:11 2011   Jan   Computing   Seismometry   First steps towards full NN Wiener-filter simulator

I was putting some things together today to program the first lines of a 2D NN Wiener-filter simulation. 2D is OK since this is for aLIGO, where the focus lies on surface displacement. A Wiener filter is OK (instead of an adaptive filter) since I don't want to get slowed down by the usual finding-the-minimum-of-the-cost-function-and-staying-there problem. We know how to deal with that in principle; one just needs to program a few more things than are needed for a Wiener filter.

My first results are based on a simplified version of surface fields. I assume that all waves have the same seismic speed. It is easy to add more complexity to the simulation, but I want to understand filtering simple fields first. I am using the LMS version of Wiener filtering based on estimated correlations.

SeismicField.jpg

The seismic field is constructed from a number of plane waves. Again, one day I will see what happens in the case of spherical waves, but let's forget about it for now. I calculate the seismic field as a function of time, calculate the NN of a test mass, position a few seismometers on the surface, add Gaussian noise to all seismometer data and test-mass displacement, and then do the Wiener filtering to estimate NN based on the seismic data. A snapshot of the seismic field after one second is shown in the contour plot.
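For illustration, here is a minimal numpy sketch of the correlation-based Wiener filter step (the actual simulation is written in Matlab; all names below are made up for this example): estimate the seismometer covariance matrix and the seismometer/NN cross-correlation from the simulated time series, then solve the normal equations for the static filter weights.

import numpy as np

def wiener_coefficients(seis, nn):
    """Single-tap Wiener filter from estimated correlations.

    seis : (n_samples, n_seismometers) array of seismometer data
    nn   : (n_samples,) array of the (simulated) Newtonian-noise target
    Returns the weights w such that seis @ w approximates nn in the
    least-squares sense.
    """
    R = seis.T @ seis / len(nn)   # seismometer covariance matrix
    p = seis.T @ nn / len(nn)     # cross-correlation with the target
    return np.linalg.solve(R, p)

# toy usage: two noisy sensors seeing the same harmonic signal
t = np.arange(0, 10, 1e-3)
target = np.sin(2 * np.pi * 10 * t)
sensors = np.column_stack([target + 0.1 * np.random.randn(t.size),
                           target + 0.1 * np.random.randn(t.size)])
w = wiener_coefficients(sensors, target)
residual = target - sensors @ w
print("residual std / target std:", residual.std() / target.std())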

Coherence_Array.jpg

Seismometers are placed randomly around the test mass at (0,0) except for one seismometer that is always located at the origin. This seismometer plays a special role since it is in principle sufficient to use data from this seismometer alone to estimate NN (as explained in P0900113). The plot above shows coherence between seismometer data and test-mass displacement estimated from the simulated time series.

TimeSeries_TM.jpg

The seismometers measure displacement with SNR~10. This is why the seismometer data looks almost harmonic in the time series (green curve). The fact that the seismometer signals are harmonic at all is a consequence of the seismic waves all having the same speed. An arbitrary sum of these waves produces harmonic displacement at any point of the surface (although with varying amplitude and phase). The figure shows that the Wiener filter is doing a good job. The question is whether we can do any better. The answer is 'yes', depending on the instrumental noise of the seismometers.
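(To spell out the harmonicity argument with one equation: since all injected waves share a common angular frequency ω, the displacement at a fixed surface point r is a sum of sinusoids at that one frequency,

sum_i A_i cos(ω t − k_i·r + φ_i) = A(r) cos(ω t + φ(r)),

which is again a sinusoid, just with a position-dependent amplitude A(r) and phase φ(r).)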

Coh_Filt.jpg

So what do I mean? Isn't the Wiener filter always the optimal filter? No, it is not. It is the optimal filter only if you have/know nothing else but the seismometer data and the test-mass displacement. The title of the last plot shows two numbers. These are related to coherence via 1/(1/coh^2-1), so the higher the number, the higher the coherence. The first number is calculated from the coherence between the estimated NN displacement of the test mass and the true NN displacement. Since there is other stuff shaking the mirror, I can only know in simulations what the true NN is. The second number is calculated from the coherence between the seismometer at the origin and the true NN. It is higher! This means that the one seismometer at the origin is doing better than the Wiener filter using data from the entire array. How can this be? It can happen because the two numbers are not derived from the coherence between total test-mass displacement and seismometer data, but only between the NN displacement and seismometer data. Even though this can only be done in simulation, it means that even in reality you should use only the one seismometer at the origin. This strategy is based on our a priori knowledge about how NN is generated by seismic fields. Now, I am simulating a rather simple seismic field, so it still needs to be checked whether this conclusion holds for more realistic seismic fields.

But even in this simulated case, the Wiener filter performs better if you simulate a shitty seismometer (e.g. SNR~2 instead of 10). I guess this is the case because averaging over many instrumental noise realizations (from many seismometers) gives you more advantage than the ability to produce the NN signal from seismometer data.

  170   Thu Jan 20 16:05:14 2011   Jan   Computing   Seismometry   Wiener v. seismometer 0

So I was curious about comparing the performance of the array-based NN Wiener filter versus the single seismometer filter (the seismometer that sits at the test mass). I considered two different instrumental scenarios (seismometers have SNR 10 or SNR 1), and two different seismic scenarios (seismic field does or does not contain high-wavenumber waves, i.e. speed = 100m/s). Remember that this is a 2D simulation, so you can only distinguish between the various modes by their speeds. The simulated field always contains Rayleigh waves (i.e. waves with c=200m/s), and body waves (c=600m/s and higher).

There are 4 combinations of instrumental and seismic scenarios. I already found yesterday that the array Wiener filter is better when seismometers are bad. Here are two plots, left figure without high-k waves, right figure with high-k waves, for the SNR 1 case:

Performance_SNR1_A.jpg  Performance_SNR1_B.jpg

'gamma' is the coherence between the NN and either the Wiener-filtered data or data from seismometer 0. There is not much of a difference between the two figures, so mode content does not play a very important role here. Now the same figures for seismometers with SNR 10:

Performance_SNR10_A.jpg  Performance_SNR10_B.jpg

Here, the single seismometer filter is much better. A value of 10 in the plots means that the filter gets about 95% of the NN power correctly. A value of 100 means that it gets about 99.5% correctly. For the high SNR case, the single seismometer filter is not as much better than the Wiener filter when the seismic field contains high-k waves. I am not sure why this is the case.

The next steps are
A) Simulate spherical waves
B) Simulate wavelets with plane wavefronts (requires implementation of FFT and multi-component FIR filter)
C) Simulate wavelets with spherical wavefronts

Other goals of this simulation are
A) Test PCA
B) Compare filter performance with quality of spatial spectra (i.e. we want to know if the array needs to be able to measure good spatial spectra in order to do good NN filtering)

  171   Fri Jan 21 12:43:17 2011   Jan   Computing   Seismometry   cleaned up code and new results

It turns out that I had to do some clean-up of my NN code:

1) The SNRs were wrong. The problem is that after summing all kinds of seismic waves and modes, the total field should have a certain spectral density, which is specified by the user. Now the code works and the seismic field has the correct spectral density no matter how you construct it.

2) I started with a pretty unrealistic scenario. The noise on the test mass, and by this I mean everything but the NN, was too strong. Since this is a simulation of NN subtraction, we should rather assume that NN is much stronger than anything else.

3) I filtered out the wrong kind of NN. I am now projecting NN onto the direction of the arm, and then I let the filter try to subtract it. It turns out, and it is fairly easy to prove this with paper and pencil, that a single seismometer can NEVER be used to subtract NN. This is because of a phase offset between the seismic displacement at the origin and the NN at the origin. It is easy to show that the single-seismometer method only works for vertical NN, or underground for body waves.

 

This plot is just the proof of the phase offset between horizontal NN and ground displacement at the origin. The offset depends on the wave content of the seismic field:

TimeSeries_TM.jpg

The S0 points in the following plot are now obsolete. As you can see, the Wiener filter performs excellently now because of the high NN/rest ratio of the TM displacement. The numbers in the title now tell you how much NN power is subtracted. So a '1' is pretty nice...

Performance_SNR10_B.jpg 

One question is why the filter performance varies from simulation to simulation. Can't we guarantee that the filter always works? Yes we can. One just needs to understand that the plot shows the subtraction efficiency. Now it can happen that a seismic field does not produce much NN, and then we don't need to subtract much. Let's check if the filter performance somehow correlates with NN amplitude:

Performance_cNN_SNR10_B.jpg

As you can see, it seems like most of the performance variation can be explained by a changing amplitude of the NN itself. The filter cannot subtract much only in cases when you don't really need to subtract. And it subtracts nicely when NN is strong.

  172   Sat Jan 22 20:19:51 2011   Jan   Computing   Seismometry   Spatial spectra

All right. The next problem I wanted to look at was whether the ability of the seismic array to produce spatial spectra is somehow correlated with its NN subtraction performance. Whatever the result is, its implications are very important. Array design is usually done to maximize the accuracy of the spatial spectra. So the general question is what our guidelines are going to be: information theory or good spatial spectra? I was always advertising the information-theory approach, but it is scary if you think about it, because the array is then theoretically not good for anything useful to seismology, yet it may still provide the information that we need for our purposes.

Ok, who wins? Again, the current state of the simulation is to produce plane waves all at the same frequency, but with varying speeds. The challenge is really the mode variation (i.e. varying speeds) and not so much varying frequencies. You can always switch to FFT methods as soon as you inject waves at a variety of frequencies. Also, I am simulating arrays of 20 seismometers that are randomly located (within a 30m*30m area), including one seismometer that is always at the origin. One of my next steps will be to study the importance of array design. So here is how well these arrays can do in terms of measuring spatial spectra:

Map_3.jpg  Map_4.jpg

The circles indicate seismic speeds of {100,250,500,1000}m/s and the white dots the injected waves (representing two modes, one at 200m/s, the other at 600m/s). The results are not good at all (as bad as the maps from the geophone array that we had at LLO). It is not really surprising that the results are bad, since seismometer locations are random, but I did not expect that they would be so pathetic. Now, what about NN subtraction performance?

Performance_cNN_SNR10_A.jpg

The numbers index the simulation runs. The two spatial spectra above have indices 3 (left figure) and 4 (right figure). So you see that everything is fine with NN subtraction, and that the spatial spectra can still be really bad. This is great news since we are now deep in information theory. We should not get too excited at this point since we still need to make the simulation more realistic, but I think that we have produced a first powerful clue that the strategy to monitor seismic sources instead of the seismic field may actually work.

  173   Sun Jan 23 09:03:43 2011   Jan   Computing   Seismometry   phase offset NN<->xi

I just want to catch up on my conclusion that a single seismometer cannot be used to do the filtering of horizontal NN at the surface. The reason is that there is a 90° phase delay of NN compared to ground displacement at the test mass. The first reaction to this should be, "Why the heck a phase delay? Wouldn't gravity perturbations become important before the seismic field reaches the TM?". The answer is surprising, but it is "No". The way NN builds up from plane waves does not show anything like a phase advance. Then you may say that whatever is true for plane waves must be true for any other field, since you can always expand your field into plane waves. This however is not true, for reasons I am going to explain in a later post.

All right, but to say that seismic displacement is 90° ahead of NN really depends on which direction of NN you look at. The interferometer arm has a direction e_x. Now the plane seismic wave is propagating along e_k. Depending on e_k, you may get an additional "-" sign between seismic displacement and NN in the direction of e_x. This is the real show killer. If there was a universal 90° between seismic displacement and NN, then we could use a single seismometer to subtract NN. We would just take its data from 90° in the past. But now the problem is that we would need to look either 90° into the past or the future, depending on the propagation direction of the seismic wave. And here are two plots of a single-wave simulation. The first is with -pi/2<angle(e_x,e_k)<pi/2, the second with pi/2<angle(e_x,e_k)<3*pi/2:

TimeSeries_fwd.jpg  TimeSeries_bwd.jpg

 

  174   Sun Jan 23 10:27:07 2011   Jan   Computing   Seismometry   spiral v. random

A spiral shape is a very good choice of array configuration for measuring spatial spectra, since it produces little aliasing. How important is array configuration for NN subtraction? Again: plane waves, wave speeds {100,200,600}m/s, 2D, SNR~10. The array response looks like Stonehenge:

Coherence_spiral.jpg  Spiral_resp.jpg

A spiral array does a fairly good job of measuring spatial spectra:

Map_6.jpg  Map_7.jpg

The injected waves are now represented by dots with radii proportional to the wave amplitudes (there is always a total of 12 waves, so some dots are not large enough to be seen). The spatial spectra are calculated from covariance matrices, so theory goes that spatial spectra get better using matched-filtering methods (another thing to look at next week...).
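For orientation, a conventional covariance-based ("Bartlett"-type) estimate of the wavenumber map can be sketched as follows (a numpy/scipy illustration with made-up names; the actual Matlab implementation may differ):

import numpy as np
from scipy.signal import hilbert

def spatial_spectrum(data, positions, kx, ky):
    """Conventional (Bartlett) wavenumber spectrum from the array covariance.

    data      : (n_samples, n_sensors) narrow-band seismometer time series
    positions : (n_sensors, 2) sensor coordinates in metres
    kx, ky    : 1D arrays of trial wavenumbers in rad/m
    Returns a (len(ky), len(kx)) map of beam power.
    """
    z = hilbert(data, axis=0)             # analytic signal
    R = z.conj().T @ z / z.shape[0]       # complex covariance matrix
    power = np.zeros((len(ky), len(kx)))
    for i, kyi in enumerate(ky):
        for j, kxj in enumerate(kx):
            steering = np.exp(-1j * (positions[:, 0] * kxj + positions[:, 1] * kyi))
            power[i, j] = np.real(steering.conj() @ R @ steering) / len(steering)
    return power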

Now the comparison between NN subtraction using 20 seismometers, 19 of which randomly placed, one at the origin, and NN subtraction using 20 seismometers in a spiral:

Performance_cNN_SNR10_B_random.jpg  Performance_cNN_SNR10_B_spiral.jpg

A little surprising to me is that the NN subtraction performance is not substantially better using a spiral configuration of seismometers. The subtraction results show less variation, but this could simply be because the random configuration changes between simulation runs. So the result is that we don't need to worry much about array configuration, at least when all waves have the same frequency. We need to look at this again when we start injecting wavelets with more complicated spectra. Then it is more challenging to ensure that we obtain information at all wavelengths. The next question is how much NN subtraction depends on the number of seismometers.

  175   Sun Jan 23 12:59:18 2011   Jan   Computing   Seismometry   Less seismometers, less subtraction?

Considering equal areas covered by seismic arrays, the number of seismometers relates to the density of seismometers and therefore to the amount of aliasing when measuring spatial spectra. In the following, I considered four cases:

1) 10 seismometers randomly placed (as usual, one of them always at the origin)
2) 10 seismometers in a spiral winding one time around the origin
3) The same number winding two times around the origin (in which case the array does not really look like a spiral anymore):

Coherence_spiral_A2.jpg
4) And since isotropy issues start to become important, the fourth case is a circular array with one of the 10 seismometers at the origin and the others evenly spaced on the circle.

Just as a reminder, there was not much of a difference in NN subtraction performance when comparing spiral v. random arrays in the case of 20 seismometers. Now we can check if this is still the case for a smaller number of seismometers, and what the difference is between 10 seismometers and 20 seismometers. Initially we were flirting with the idea of using a single seismometer for NN subtraction, which does not work (for horizontal NN from surface fields), but maybe we can do it with a few seismometers around the test mass instead of 20 covering a large area. Let's check.

Here are the four performance graphs for the four cases (in the order given above):

Performance_cNN_SNR10_B_N10_random.jpg  Performance_cNN_SNR10_B_N10_spiral.jpg

Performance_cNN_SNR10_B_N10_A2_spiral.jpg  Performance_cNN_SNR10_B_N10_circ.jpg

All in all, the subtraction still works very well. We only need to subtract say 90% of the NN, but we still see average subtractions of more than 99%. That's great, but I expect these numbers to drop quite a bit once we add spherical waves and wavelets to the field. Although all arrays perform pretty well, the twice-winding spiral seems to be the best choice. Intuitively this makes a lot of sense. NN drops with 1/r^3 as a function of distance r to the TM, and so you want to gather information more accurately from regions very close to the TM, which leads you to the idea to increase seismometer density close to the TM. I am not sure though if this explanation is the correct one.

 

  176   Mon Jan 24 15:50:38 2011   Jan   Computing   Seismometry   Multi-frequency and spherical

I had to rebuild some of the guts of my simulation to prepare it for the big changes that are to come later this week. So I only have two results to report today. The code can now take arbitrary waveforms. I tested it with spherical waves. I injected 12 spherical waves into the field, all originating 50m away from the test mass with arbitrary azimuths. The 12 waves are distributed over 4 frequencies, {10,14,18,22}Hz with equal spectral density (so 3 waves per frequency). The displacement field is far more complex than the plane-wave fields and looks more like a rough endoplasmic reticulum:

Field_SW4f3.jpg

The spatial spectra are not so much different from the plane-wave spectra:

Map_10Hz_SW4f3.jpg

The white dots now indicate the back-azimuth of the injected waves, not their propagation direction. And we can finally compare subtraction performance for plane-wave and spherical-wave fields:

Performance_Spiral20_PW4f3.jpg  Performance_Spiral20_SW4f3.jpg

Here the plane-wave simulation is done with 12 plane waves at the same 4 frequencies as the spherical waves, and in both cases I chose a 20 seismometer 4*pi spiral array. Note that the subtraction performance is pretty much identical since the NN was generally stronger in the spherical-wave simulation (dots 5 and 20 in the right figure lie somewhere in between the upper right group of dots in the left figure). This makes me wonder if I shouldn't switch to some absolute measure for the subtraction performance, so that the absolute value of NN does not matter anymore. In the end, we don't want to achieve a subtraction factor, but a subtraction level (i.e. the target sensitivity of the GW detectors).

Anyway, the result is very interesting. I always thought that spherical waves (i.e. local sources) would make everything far more complicated. In fact, they do not. And the fact that the field consists of waves at 4 different frequencies does not do much harm either (subtraction performance decreased a little). Remember that I am currently using what is effectively a single-tap FIR filter. I thought that you would need more taps once you have more frequencies. I was wrong. The next step is the wavelet simulation. This will eventually lead to a final verdict about single-tap v. multi-tap filtering.

 

  177   Tue Jan 25 11:57:23 2011   Jan   Computing   Seismometry   wavelets

Here is the hour of truth (I think). I ran simulations of wavelets. These are no longer characterized by a specific frequency, but by a corner frequency. The spectra of these wavelets almost look like a pendulum transfer function, where the resonance frequency now has the meaning of a corner frequency. The width of the peak at the corner frequency depends on the width of the wavelets. These wavelets propagate (without dispersion) from somewhere at some time into and out of the grid. There are always 12 wavelets at four different corner frequencies (the same as for the other waves in my previous posts). The NN now has the following time series:

Data_WA4f3.jpg

You can see that from time to time a stronger wavelet passes by and leads to a pulse-like excitation of the NN. Now, the first news is that the achieved subtraction factor drops significantly compared to the stationary cases (plane waves and spherical waves):

Perf_rel_Spiral20_WA4f3.jpg

And the 4*pi, 10 seismometer spiral dropped below an average factor of 0.88. But I promised to introduce an absolute figure to quantify subtraction performance. What I am now doing is to subtract the filtered array NN estimation from the real NN and take its standard deviation. The standard deviation of the residual NN should not be larger than the standard deviation of the other noise that is part of the TM displacement. In addition to NN, I add a 1e-16 stddev noise to the TM motion. Here is the absolute filter performance:

Perf_abs_Spirals_WA4f3.jpg

As you can see, subtraction still works sufficiently well! I am now pretty much puzzled since I did not expect this at all. Ok, subtraction factors decreased a lot, but they are still good enough. REMINDER: I am using a SINGLE-TAP (multi input channel) Wiener filter to do the subtraction. It is amazing. Ideas to make the problem even more complex and to challenge the filter even more are welcome.
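For the record, the absolute figure of merit described above boils down to something like this minimal sketch (made-up names, not the actual Matlab code): subtract the filtered array estimate from the true NN and compare the residual's standard deviation to the standard deviation of the other test-mass noise.

import numpy as np

def absolute_performance(nn_true, nn_estimate, tm_noise_std=1e-16):
    """Residual NN after subtraction, compared to the other TM noise.

    Values <= 1 mean the residual is buried in the non-NN test-mass noise.
    """
    residual_std = np.std(nn_true - nn_estimate)
    return residual_std / tm_noise_std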

 

  179   Thu Jan 27 14:51:41 2011   Jan   Computing   Seismometry   approaching the real world / transfer functions

The simulation is not a good representation of a real detector. The first step to make it a little more realistic is to simulate variables that are actually measured. So, for example, instead of using TM acceleration in my simulation, I need to simulate TM displacement. This is not a big change in terms of simulating the problem, but it forces me to program filters that correct the seismometer data for any transfer functions between seismometers and GWD data before the linear estimation is calculated. This has been programmed now. Just to mention, the last major step to make the simulation more realistic is to simulate seismic and thermal noise as additional TM displacement. Currently, I am only adding white noise to the TM displacement. If the TM displacement noise is not white, then you have to modify the optimal linear estimator in the usual way (correlations substituted by integrals in the frequency domain using frequency-dependent noise weights).
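As an illustration of the kind of correction meant here, applying a frequency-domain transfer function to a seismometer channel before the estimation could look like this (numpy sketch with made-up names; the acceleration-to-displacement example divides by -(2*pi*f)^2):

import numpy as np

def apply_transfer_function(x, fs, tf_of_f):
    """Filter a time series with a frequency-domain transfer function.

    x       : (n_samples,) seismometer channel
    fs      : sampling rate in Hz
    tf_of_f : callable, complex transfer function evaluated at frequency f [Hz]
    """
    spectrum = np.fft.rfft(x)
    f = np.fft.rfftfreq(len(x), d=1.0 / fs)
    gain = np.array([tf_of_f(fi) if fi > 0 else 0.0 for fi in f])
    return np.fft.irfft(spectrum * gain, n=len(x))

# example transfer function: acceleration -> displacement
acc_to_disp = lambda f: -1.0 / (2 * np.pi * f) ** 2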

I am now also applying 5Hz high-pass filters here and there to reduce numerical errors accumulating in time-series integrations. The next three plots are just a check that the results still make sense after all these changes. The first plot shows the subtraction residuals without correcting for any frequency dependence in the transfer functions between TM displacement and seismometer data:

Perf_abs_WA4f2_SW4f1_noTF.jpg

The dashed line indicates the expected minimum of the NN subtraction residuals, which is determined by the TM-displacement noise (which in reality would be seismic noise, thermal noise and GWs). The next plot shows the residuals if one applies filters that take the conversion from TM acceleration into displacement into account:

Perf_abs_WA4f2_SW4f1_acc.jpg

This is already sufficient for the spiral array to perform more or less optimally. In all simulations, I am injecting a merry mix of wavelets and spherical waves at different frequencies. So the displacement field is as complex as it can get. Last but not least, I modified the filters such that they also take the frequency-dependent exponential suppression of NN into account (because of TM being suspended some distance above ground):

Perf_abs_WA4f2_SW4f1_acex.jpg

The spiral array was already close to optimal, but the performance of the circular array did improve quite a bit (although 10 simulation runs may not be enough to compare this convincingly with the previous case).

  180   Fri Jan 28 11:22:34 2011   Jan   Computing   Seismometry   realistic noise model -> many problems

So far, the test mass noise was white noise such that SNR = NN/noise was about 10. Now the simulation generates more realistic TM noise with the following spectrum:

NoiseModel_TM.jpg

The time series look like:

Data_WA4f2_SW4f1.jpg

So the TM displacement is completely dominated by the low-frequency noise (which I cut off below 3Hz to avoid divergent noise). None of the TM noise is correlated with NN. Now this should be true for aLIGO since it is suspension-thermal and radiation-pressure noise limited at lowest frequencies, but who knows. If it was really limited by seismic noise, then we would also deal with the problem that NN and TM noise are correlated.

Anyway, changing to this more realistic TM noise means that nothing works anymore. The linear estimator tries to subtract the dominant low-frequency noise instead of NN. You cannot solve this problem simply by high-pass filtering the data. The NN subtraction problem becomes genuinely frequency-dependent. So what I will start to do now is to program a frequency-dependent linear estimator. I am really curious how well this is going to work. I also need to change my figures of merit. A simple plot of standard-deviation subtraction residuals will always look bad. This is because you cannot subtract any of the NN at lowest frequencies (since TM noise is so strong there). So I need to plot spectra of subtraction noise and make sure that the residuals lie below or at least close to the TM noise spectrum.

  181   Fri Jan 28 14:50:27 2011   Jan   Computing   Seismometry   Colored noise subtraction, a first shot

Just briefly, my first subtraction spectra:

Subtraction_ff.jpg

Much better than I expected, but also not good enough. All spectra in this plot (except for the constant noise model) are averages over 10 simulation runs. The NN is the average NN, and the two "res." curves show the residual after subtraction. It seems that the frequency-dependent linear estimator is working since subtraction performance is consistent with the (frequency-dependent) SNR. Remember that the total integrated SNR=NN/noise is much smaller than 1 due to the low-frequency noise, and therefore you don't achieve any subtraction using the simple time-domain linear estimators. Now the final step is to improve the subtraction performance a little more. I don't have clever ideas how to do this, but there will be a way.

  182   Fri Jan 28 15:28:52 2011   Jan   Computing   Seismometry   Haha, success!

Instead of estimating in the frequency domain, I now have a filter that is defined in the frequency domain, but transformed into the time domain and then applied to the seismometer data. The filtered seismometer data can then be used for the usual time-domain linear estimators. The result is perfect:

Subtraction_ff.jpg
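For the record, the scheme boils down to something like the following sketch (numpy, made-up names): define a frequency-domain weighting, turn it into a time-domain FIR kernel, pre-filter every seismometer channel with it, and then compute the usual time-domain linear estimator on the filtered data.

import numpy as np

def fir_from_frequency_weights(weight_of_f, fs, n_taps=257):
    """Build an FIR kernel from a real frequency-domain weighting function."""
    f = np.fft.rfftfreq(n_taps, d=1.0 / fs)
    W = np.array([weight_of_f(fi) for fi in f])
    kernel = np.fft.irfft(W, n=n_taps)
    return np.roll(kernel, n_taps // 2)   # centre the impulse response

def prefilter(data, kernel):
    """Apply the kernel to every seismometer channel before the Wiener step."""
    return np.column_stack([np.convolve(ch, kernel, mode="same")
                            for ch in data.T])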

So what's left on the list? Although we don't strictly need it, "historically" I had an interest in PCA. Even though it is not required anymore, analyzing the eigenvalues of the linear estimators may tell us something about the number of seismometers that we need. And it is simply cool to understand the estimation of information in seismic noise fields.

  356   Sat Sep 17 15:07:49 2011   Jan   Computing   Crackle   Simple model

I wanted to find out how difficult it is to program crackling-noise simulations. The good news is that it is not difficult at all. I spent 30min to generate this video. I am sure that a cool student can make a good blade-simulation in one month that can run on the cluster.

How to do these simulations is explained in PRB 66, 024430. The basic idea is to calculate an effective field strength at each grid point (any type of field can be considered, not only magnetic fields) and then to let spins (or whatever can effectively be described as a spin) flip when the effective field changes sign. Flips happen because the effective field at each grid location always includes an external field, which is, for example, oscillating with a large amplitude.

The video shows the spin orientation and effective fields at all grid points. I assumed a zero-range spin interaction with a random field (the nominal Ising model has nearest-neighbor interaction). So there are no avalanches, but it would be very easy to program a proper (random-field) Ising model that has spin interaction and shows avalanches. I could easily go to 100^3 grid points and still run it on my laptop. But that's about the limit using Matlab. More realistic spin interactions with more grid points need to be simulated on our cluster.
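For anyone who wants to play with it, here is a minimal Python re-implementation of the zero-range version described above (the video was made with my Matlab code; grid size, disorder and field ramp below are arbitrary choices for this sketch):

import numpy as np

def sweep_zero_range(n=32, disorder=1.0, n_steps=200, seed=0):
    """Count spin flips per step for a zero-range random-field model.

    Each site i has a quenched random field h_i; the effective field is
    h_i + H_ext(t). A spin flips to sign(effective field) whenever that sign
    disagrees with the current spin. No neighbour coupling, so every flip is
    an isolated event (no avalanches).
    """
    rng = np.random.default_rng(seed)
    h = rng.normal(0.0, disorder, size=(n, n, n))   # quenched disorder
    spins = -np.ones_like(h)                        # start fully "down"
    flips_per_step = []
    for H_ext in np.linspace(-3 * disorder, 3 * disorder, n_steps):
        target = np.sign(h + H_ext)
        flips_per_step.append(np.count_nonzero(target != spins))
        spins = target
    return np.array(flips_per_step)

print(sweep_zero_range().sum(), "total flips")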

For blades, a spin flip would probably lead to a localized point-force excitation and associated displacement response propagating through the blade. Programming this would probably take more than 30min, but it should be a nice project and can certainly be simplified if done cleverly. But this together with a good model for the spin interaction is all you need to calculate the crackling noise spectra in blades.

  357   Sat Sep 17 21:46:28 2011   Jan   Computing   Crackle   Full Ising

Here is a full Ising model with avalanches. As expected, edge defects lead to avalanches that creep from the corners and edges into the grid, since edge spins have fewer neighbors. The 2D video is just a plane intersection through the middle of the 3D grid. Now, the next step would be to simulate a periodic variation of the external field; then the number of spin flips per time step gives you the amplitude of the crackling noise (a very simplified model). So one could simulate a very long time series and then calculate the spectrum of the noise. It should be something like 1/f^(1.77/2), but certainly the small dimensions of the simulated grid will lead to a low-frequency cut-off. Also, the grid density will lead to a high-frequency cut-off. This is why we need the cluster. The geometry of the blade as well as a better model of the spin interaction is required to make a more accurate prediction of crackling noise. Unfortunately, we don't know the spin interaction, but we could try a few models and compare with our measurements (if we ever measure crackling noise).

  358   Sun Sep 18 14:03:40 2011   rana   Computing   Crackle   Full Ising

I think there are 2 problems with this approach:

1) It doesn't consider the case of finite temperature. I guess that the spin flip probability has a statistical nature which depends on the stress of the domain and the temperature.

2) In the literature for martensitic materials people talk about the distribution of domain sizes. Shouldn't there be an effect due to there being lots of small domain flips and not many big ones?

  359   Sun Sep 18 15:56:22 2011   Jan   Computing   Crackle   Full Ising

Quote:

I think there are 2 problems with this approach:

1) It doesn't consider the case of finite temperature. I guess that the spin flip probability has a statistical nature which depends on the stress of the domain and the temperature.

2) In the literature for martensitic materials people talk about the distribution of domain sizes. Shouldn't there be an effect due to there being lots of small domain flips and not many big ones?

 About 1) Yes, but finite temperature effects can be incorporated into the simulation without much trouble. Finite temperature is more a theory problem.

About 2) Well, I think this would come out naturally from the simulation, first when you tune your parameters, and second when you don't have this simplified case with an external field ramping up from -inf to +inf. If you consider a finite-amplitude periodic external field, then I would guess that small domain flips will be observed in the simulation.

Changing the external field to a periodic force and introducing a random element like finite temperature effects, I am sure that the results will look more interesting. However, I am not sure yet how large the grid needs to be so that you have at least some region inside the grid that is not dominated by surface effects.

  367   Mon Oct 24 16:17:21 2011   Giordon Stark   Computing   General   Compute eigenfrequencies for multiple time values (COMSOL+Matlab)

 

COMSOL+Matlab

Given all the input parameters, such as the geometry, material properties, and thermal source beam, it loops through and calculates the eigenfrequencies for the heat fluctuation at all times.


  631   Mon Apr 8 13:17:46 2013   ericq   Computing   Crackle   svn directory

A folder on the 40m svn server has been created to store crackle related files

The location is: /trunk/crackle

This currently includes the latest poster, a MATLAB subdirectory which holds all of the code from my last elog, a Lit subdirectory with a couple of papers in it, and an ExpChar directory which houses the bulk of the measurements I've made regarding the experiment.

Specifically, here is what is in each sub-directory:

/trunk/crackle/ExpChar/:

  • Blades: transfer functions and ringdowns of the blades via shadow sensor
  • Laser RIN: measurements of RIN in different configurations (fiber, no fiber, dark noise, etc.)
  • Loop: Measurements of the Loop TF
  • Mich: spectra of the Michelson error signal
  • Servo: LISO files of the servo circuit, and transfer functions (liso and measured)
  • ShadowSensor: Circuit of the shadow sensors and TF of the damping
  • circuit.graffle: OmniGraffle drawing of the crackle NIM box circuitry
  728   Tue Sep 10 11:15:31 2013   nicolas   Computing   DAQ   X1KRK model restarted

I restarted the DAQ and the KRK model today at about 11AM local time to increase the acquire rate of the accelerometer channels.

The autolocker script died when it couldn't access the epics channels, I restarted it.

  729   Thu Sep 12 00:29:23 2013   rana   Computing   DAQ   X1KRK model restarted

Quote:

I restarted the DAQ and the KRK model today at about 11AM local time to increase the acquire rate of the accelerometer channels.

The autolocker script died when it couldn't access the epics channels, I restarted it.


  746   Tue Nov 5 21:37:11 2013   rana   Computing   General   Android app for getting GPIB data?

 http://wiki.processing.org/w/Android

  762   Sat Dec 7 20:48:29 2013   ericq   Computing   Crackle   Cymac Rebooted

Absurd amounts of channel hopping were going on. (The displacement excitation DQ channel wrote error signal data to disk, DTT was getting a filter module's input as its output...)

Restarting X1KRK didn't help. Restarting daqd didn't help. Rebooted the cymac; that seems to have worked.

  775   Mon Feb 24 14:36:43 2014   ericq   Computing   Crackle   X1KRK Model Update

I edited the KRK model to do common-mode excitations without having to go through awggui by adding an oscillator block to the CEXIT input. I exposed the settings on the krkmaster medm screen, and set the oscillator's CLK output to be acquired.

The relevant channels are X1:KRK-EXIT_CDRIVE_FREQ, etc. for the oscillator and X1:KRK-EXIT_CDEMOD_DQ for the CLK output. (This is in phase with the common excitation, but always at amplitude 1, instead of whatever amplitude the SIN output to the CEXIT block is set to, which makes it more useful for demodulation.)
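If one wants to demodulate offline against this reference, the idea is just to multiply the response channel by the unit-amplitude CLK channel and low-pass; a minimal numpy/scipy sketch (the function name is made up and the channel data are assumed to be already fetched as arrays; this recovers only the in-phase part):

import numpy as np
from scipy.signal import butter, filtfilt

def demodulate(signal, clk, fs, lp_corner=1.0):
    """Demodulate `signal` against the unit-amplitude CLK reference.

    signal : response channel time series
    clk    : X1:KRK-EXIT_CDEMOD_DQ time series (amplitude 1, in phase with drive)
    fs     : sampling rate in Hz
    Returns the low-passed in-phase amplitude estimate.
    """
    b, a = butter(4, lp_corner / (fs / 2.0))
    return 2.0 * filtfilt(b, a, signal * clk)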

  871   Fri Dec 19 15:38:43 2014   ericq   Computing   Crackle   Cymac2 Woes

 As Gabriele previously mentioned, cymac2 was having some troubles...

First symptom: the IOP was intermittently taking too long for its cycles, even with no user models running. I attempted a reboot, and was greeted with a host of HDD error messages, and the system wouldn't boot. I flashed an ubuntu rescue USB drive, and followed the instructions here: smartmontools, to find some bad blocks on the hard drive.

It turned out that there was a corrupted file, libm.so, which is the main C math library. Zeroing out the offending blocks let me boot the system normally, though just barely; almost every program imaginable segfaulted all over the place. From cymac1, I scp'd over a fresh version of the library. 

The machine was now running again, but the IOP was still timing out. I compared the BIOS settings to what we had previously determined as working for the cymacs (ATFwiki), and found some of the settings to be different. After resetting them, the IOP seems to be happy. 

I'm running more S.M.A.R.T. tests on the HDD, to see if there are more timebombs; we should set up some sort of backup for the cymac soon, so we don't have to rebuild the system from scratch if the main disk fails. 

  987   Thu Jun 11 17:55:05 2015   Gabriele   Computing   Crackle   New suspension MEDM screen

I created three new MEDM screens to easily access the suspension sensing, control and driving filter banks and matrices.

They can be opened from any terminal with the following commands:

kr2_suspension_sensing
kr2_suspension_control
kr2_suspension_driving

Screenshots attached

  991   Fri Jun 12 17:09:58 2015   Gabriele   Computing   Crackle   Master screen

With the command kr2_master, now you can open a master MEDM screen for crackle2

  1006   Tue Jun 23 08:57:31 2015   Gabriele   Computing   Crackle   Disk cloning concluded, and failed

[Gabriele, EricQ]

After at least 9 hours, the disk cloning finished. The output of the 'dd' command is shown below; there are some 'input/output error' messages, but it looked like all sectors were copied.

I installed the new disk and rebooted, and bummer: it's not working:

So, I installed the old disk back, booted cymac2, and restarted all processes. Again, all gains were screwed up! How can we set a safe snapshot for when the cymac restarts?

  1012   Wed Jun 24 16:34:11 2015   Gabriele   Computing   Crackle   New screens and aliases

I implemented some modifications in the MEDM screens:

  • added buttons to switch on and off all dampings on the main screen
  • created screens with all the sensing, control and coil filter banks, for the block OSEMs. This was already there for the suspension damping

I also created new aliases for the terminal, to quickly open dataviewer:

  • susp_signals: opens a dataviewer with all suspension sensing signals
  • block_signals: opens a dataviewer with all block sensing signals
  • lock_signals: opens a dataviewer with signals relevant for locking

The templates are saved in ~/Templates

 

  1021   Tue Jun 30 11:31:28 2015   Gabriele   Computing   Crackle   Some useful scripts

In ~controls/scripts

  • susp_to_um.py switches on offsets and calibration of all suspension shadow sensors, such that the SUSP_SENS_*_OUT signals are in microns (standard working condition)
  • susp_to_counts.py switches off offsets and calibration of all suspension shadow sensors, such that the SUSP_SENS_*_OUT signals are in counts (useful for centering)
  • blocks_to_um.py switches on offsets and calibration of all block shadow sensors, such that the SUSP_SENS_*_OUT signals are in microns (standard working condition)
  • blocks_to_counts.py switches off offsets and calibration of all block shadow sensors, such that the SUSP_SENS_*_OUT signals are in counts (useful for centering)
  • susp_sensing_matrix_to_unity.py changes the sensing matrix for the suspension shadow sensors to diagonal unity
  • susp_driving_matrix_to_unity.py changes the driving matrix for the suspension coils to diagonal unity

 

  1037   Thu Jul 9 13:13:50 2015   Gabriele   Computing   Crackle   Model change and scripts
  • Fixed the suspension coil sampling frequency; two coils were acquired at 8 kHz, now all are acquired at 2 kHz
  • Added a monitor of the Michelson error signal RMS (squared value of the error signal, low-passed at 1 Hz in the BBOARD_LOCK_ERR_RMS filter bank)
  • Updated the Lock MEDM screen accordingly
  • Created a script 'set_optical_gain.py' to measure and set the optical gain automatically. It can be called from the calibration screen
  1045   Fri Jul 10 18:04:10 2015   Gabriele   Computing   Crackle   Autolocker logic

I added to the model and the master screen some logic for an autolocker script. There are two new EPICS variables: KR2-BBOARD_LOCK_AUTOLOCKER_ENABLED and KR2-BBOARD_LOCK_AUTOLOCKER_STATUS.

There's a toggle button on the master screen to enable the autolocker.

So far, there is no autolocker script...

  1054   Tue Jul 14 18:18:15 2015   Gabriele   Computing   Crackle   Modification to model and screens
  • Added a path to inject a calibration line directly into the SERVO_OUT signal, and added the demodulation paths to extract the I and Q signals in real time. Set a low-pass filter at 10 mHz to track only very low frequency drifts of the optical gain. The amplitude of the I signal is somewhat arbitrary right now
  • Tried to add autoscaling of the calibrated signal based on the line amplitude. The code was running, but the result was somewhat fishy, so I removed it for the time being. More investigation is needed to understand what's wrong
  • Moved the calibrated signal test point to the top level model, so now it's simply KR2-MICH_DZ instead of KR2-BBOARD_LOCK_MICH_DZ
  • Added test filters to send signals to DAC 7 and 8, called TEST_DAC1 and TEST_DAC2
  • Calibrated the TEST ADC input bank in volts
  • Added 5 band-limited RMS monitor filters of the MICH signal. Not set up yet.
  1063   Wed Jul 15 16:32:03 2015   Gabriele   Computing   Crackle   Scripts migration

All scripts have been moved from ~/scripts to /opt/rtcds/userapps/CyMAC/src

In this way they are shared with cymac2

MEDM screens updated

  1089   Fri Jul 24 19:04:29 2015   Gabriele   Computing   Crackle   Started cymac disk cloning

At 7:00pm I started the cloning of the cymac2 disk, using the following command:

sudo pv -tpreb /dev/sda | sudo dd of=/dev/sdc bs=4096 conv=noerror,sync

The pv command shows a real-time display of the amount of data transferred to dd. The noerror,sync options are (according to the web lore) necessary to keep dd from stopping when there is a disk error. The data is copied in chunks of 4096 bytes (if there is an error in one of those blocks, dd will pad it with zeros...).

The actual data rate is 30 MB/s, which means it should take about 10 hours to finish.

  1090   Sat Jul 25 13:11:21 2015   Gabriele   Computing   Crackle   Started cymac disk cloning

Cloning was successful and I could restart the system with the new disk.

I tried to install another new 1TB disk on the second slot, but as before I wasn't able to force the BIOS to see it.

In addition, I enabled the serial ports at the BIOS level. Serial 1 is configured as 3F8/IRQ4 and Serial 2 as 2F8/IRQ3 (COM). However, there seems to be only one serial port on the back. If you're asking yourself why I did this, the idea is to connect the Newport picomotor driver to the serial port and control it remotely. In this way we may be able to have an automatic Michelson re-alignment script.

Quote:

At 7:00pm I started the cloning of the cymac2 disk, using the following command:

sudo pv -tpreb /dev/sda | sudo dd of=/dev/sdc bs=4096 conv=noerror,sync

The pv command shows a real-time display of the amount of data transferred to dd. The noerror,sync options are (according to the web lore) necessary to keep dd from stopping when there is a disk error. The data is copied in chunks of 4096 bytes (if there is an error in one of those blocks, dd will pad it with zeros...).

The actual data rate is 30 MB/s, which means it should take about 10 hours to finish.

 

  1091   Sat Jul 25 13:46:41 2015   Gabriele   Computing   Crackle   Explanation of the KR2 model

The goal of this elog is to explain the logic behind the Crackle2 real time model, as of today

Top level

There are two main blocks at the top level

  • SUSP contains all the sensing, control and driving for the suspension damping
  • BBOARD contains all the sensing and control for the blocks, as well as the photodiode readout and Michelson locking and calibration

As you can see, there are a few test filter banks that can be used to read temporary signals from the ADC and send temporary excitations to some DACs.

Finally, the calibrated Michelson signal (which is generated inside the BBOARD box) is sent to a test point at this top level. The BLRMS block can be used to monitor the band-limited RMS of this signal in different bands.

The box on the left tells the system which signals must be saved to disk and what the sampling rate should be.

SUSP

This is quite straightforward: signals from the OSEMs enter on the left, and then go through three blocks before exiting and being sent to the coils.

  • SENS is simply a collection of filter banks to calibrate the signals from counts to microns and dewhiten them
  • CTRL contains a bit more stuff. First a sensing matrix is used to convert from the OSEM basis to an X,Y,Z,PIT,YAW,ROLL basis. Then six filter banks can be used to host the filters for the damping. Finally a driving matrix converts the signals back to the OSEM basis
  • COIL is again quite simple: it just contains six filter banks to convert the signals from micronewtons to counts and whiten them

BBOARD

This is the most complex subsystem. It contains two main sections: the block OSEM readout and control, and the Michelson lock.

Let's start from the OSEM sensing and control part:

  • SENS simply contains six filter banks to convert the OSEM signals from counts to microns and dewhiten them.
  • CTRL is a bit more complex:

    There are six inputs for the six block OSEM signals, plus one input for the lock control signal. The six OSEMs are mixed using a sensing matrix. Here there are eight outputs: in addition to the single degrees of freedom (X1/2, Y1/2, Z1/2), there are two additional paths called DIFF and COMM: originally they were intended to be used to implement common-mode and differential-mode damping of the Z directions. However, for the moment they are unused, and in principle you could use them to damp any combination of modes.
    The eight signals coming out of the matrix are then sent to eight filter banks that can host the damping filters. The control signals, plus the locking signal, are then combined with the driving matrix and sent to the six single coils
  • COIL is simply a collection of six filter banks to convert the coil signals from micronewtons to counts and whiten them

The LOCK part is fairly complex too. It contains three main parts: the photodiodes readout, the lock filter and the Michelson calibrated signal reconstruction.

At the left side there are four inputs for the photodiodes. Both the symmetric and antisymmetric photodiodes are acquired twice: there is a flat acquisition path (to be used during lock acquisition) and a whitened signal path. These four signals are combined using the PDMIX matrix, which returns two outputs: one is the difference of the two diodes (the locking servo error signal) and the other is the sum (a null signal, insensitive to Michelson displacement).

The locking error signal is then sent to the SERV filter bank, which hosts the control filters for the Michelson lock. The pink box in the middle of the screen generates a sinusoidal signal which is added to the lock control signal: this is used to add a permanent calibration line. The products and filter banks directly to the right of the sine generator are used to demodulate the calibrated Michelson signal in real time and provide a slow monitoring of the Michelson optical gain.

The Michelson calibration signal is reconstructed as a combination of the error and control signals. Basically, the error signal is a good estimate of the Michelson displacement above the loop UGF, while the control signal is good below the UGF. The filter bank CALIB_ERR contains the inverse of the Michelson optical gain, while the filter bank CALIB_CTRL contains a fit to the differential actuator. The combination of the two gives a good estimate of the free Michelson motion. There is an additional filter bank that can be used to implement whitening, if needed (for example to high-pass the signal and avoid bin leakage in FFTs). For the time being there is no filter engaged there.
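Schematically, the reconstruction amounts to something like this (a frequency-domain sketch with made-up names, not the actual RCG implementation; sign conventions depend on the loop, and the actuator fit is assumed to carry the right sign):

import numpy as np

def calibrated_michelson(err_fft, ctrl_fft, optical_gain, actuator_tf):
    """Free Michelson displacement estimate, frequency domain.

    err_fft, ctrl_fft : FFTs of the error and control signals
    optical_gain      : counts per metre (inverted, as in CALIB_ERR)
    actuator_tf       : complex actuator response in metres per count
                        (the CALIB_CTRL fit), same frequency bins
    """
    return err_fft / optical_gain + ctrl_fft * actuator_tf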

Finally, two EPICS signals are used to handle the autolocker logic. Nothing happens here, everything is controlled by an external python script.

BLRMS

This block takes the Michelson calibrated signal and computes the band-limited RMS in five different bands.

Basically, the signal is first band passed, then squared, and then low passed.
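An offline equivalent in Python would look roughly like this (scipy sketch with made-up names; the real-time model implements the same band-pass / square / low-pass chain with its own filter modules, and the final square root is optional depending on whether you want RMS or mean square):

import numpy as np
from scipy.signal import butter, filtfilt

def blrms(x, fs, f_lo, f_hi, lp_corner=0.1):
    """Band-limited RMS: band-pass, square, low-pass, square root."""
    b, a = butter(4, [f_lo / (fs / 2), f_hi / (fs / 2)], btype="band")
    banded = filtfilt(b, a, x)
    b_lp, a_lp = butter(2, lp_corner / (fs / 2))
    return np.sqrt(np.abs(filtfilt(b_lp, a_lp, banded ** 2)))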

  1093   Sat Jul 25 18:27:05 2015   Gabriele   Computing   Crackle   Remote control of picomotors

I finally managed to communicate with the Newfocus Picomotor driver 8732 using the cymac2 serial port. See here for the user manual and the list of commands.

I wrote a python script to allow sending simple commands (basically to move both picomotors in both directions). The script is in /opt/rtcds/userapps/CyMAC/src/picomotor.py

 

  1105   Wed Jul 29 14:37:53 2015   Gabriele   Computing   Crackle   The source of our locking problems...

[EricQ, Xiaoyue, Gabriele]

We finally found the source of all our damping glitches and locking problems. In brief, the IOP process was overloading from time to time, driving the DAC to zero. The reason is tricky, but finally EricQ helped us understand that it was due to the BIOS-enabled serial port (the port was enabled last week to interface with the Newfocus picomotor driver).

We disabled it, and the problem disappeared. Now we can lock.

This morning the excess noise and the larger 60 Hz harmonics were not there anymore, so for the moment we are not hunting for ground loops.
 

  1116   Fri Jul 31 17:06:01 2015   Gabriele   Computing   Crackle   Autolocker operative

Today I finished setting up the autolocker. To run it, just execute the following script on the cymac: /opt/rtcds/userapps/CyMAC/src/autolocker.py. There is no automatic respawn of the script, so you'll have to run it manually.

The autolocker is normally idle until enabled by the X1:KR2-BBOARD_LOCK_AUTOLOCK_ENABLE variable, which can be set to zero or one from the master MEDM screen. The same screen shows a set of green and red lights showing the status of the autolocking procedure. Basically, when everything is green the Michelson interferometer is locked in the low-noise state.

The autolocker is a state machine. Each state is a separate function that, when executed, performs some actions, possibly waits for some thresholds to be met, and then returns, providing the name of the next state to which the autolocker should transition.

Once launched, the initial state is IDLE. In this state the autolocker waits for the enable signal. Once received, it transitions to CHECK_LOCKED. This state checks if the laser is on. If not, it transitions to LASER_OFF and waits there for the laser to come back (the check is performed on the sum of the two non-whitened photodiode signals). If the laser is on, CHECK_LOCKED checks if the Michelson is already locked (small residual RMS of the error signal, small control signal). If so, it transitions to LOCKED. If not locked, the transition happens to RESET, which switches on all dampings, resets the locking filter history and switches the error signal to the non-whitened channels. Then the autolocker transitions to ACQUIRING. Here lives the entire lock acquisition procedure. ACQUIRING checks that we are locked on a Michelson fringe that corresponds to a small control signal. If not, it tries again and again, until the correction is low. Then it switches on boosts and other filters, switches off the damping, and transitions to the whitened signals. If during this process we lose lock, the autolocker goes back to RESET. If everything is successful, we go to LOCKED. In this state the autolocker continuously checks the state. When we lose lock, if still enabled and if the laser is on, it transitions to RESET.
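The pattern is essentially the following skeleton (a simplified Python sketch; the state functions and the EPICS checks in the real autolocker.py are of course more involved):

import time

def idle():
    # wait for the enable flag (in the real script: an EPICS variable)
    return "CHECK_LOCKED"

def check_locked():
    # check laser power and residual RMS, decide where to go next
    return "LOCKED"

def locked():
    # keep monitoring; on lock loss go back to RESET
    return "LOCKED"

STATES = {"IDLE": idle, "CHECK_LOCKED": check_locked, "LOCKED": locked}

def run(start="IDLE"):
    state = start
    while True:
        state = STATES[state]()   # each state returns the name of the next one
        time.sleep(1.0)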

I tested the autolocker a few times, and found no problems so far. There are some missing features: a check of the effectiveness of the damping loop, and a limit on the number of trials. In the future, we could also expand the state machine by adding a re-alignment algorithm.

  1166   Fri Aug 21 11:31:22 2015   Gabriele   Computing   Crackle   x1kr2 restarted

At about 11:25 LT I checked that no measurement was running, disabled the autolocker, broke the lock, and restarted the x1kr2 model to add the EPICS channel X1:KR2-BBOARD_LOCK_AUTOLOCKER_REQUEST that will be used in the next version of the autolocker script.

At 11:30 LT the autolocker was up and running again, and the IFO is locked again.

  1167   Fri Aug 21 15:50:48 2015   Gabriele   Computing   Crackle   New automation

I modified the autolocker script and the master medm screen to implement a new logic for the automation.

Now the autolocker can accept requests for different states, through the KR2-BBOARD_LOCK_AUTOLOCK_REQUEST epics variable.

  • IDLE: do nothing
  • NOT_LOCKED: actively unlock the IFO and switch on damping loops
  • LOCKED: lock the IFO all the way to low noise
  • ALIGN: start the automatic alignment procedure (not implemented yet)
  • CRACKLE: start crackling noise measurements (not implemented yet)

In addition, there is a maximum of 10 locking attempts; after this the autolocker will stop trying. To restart the attempts, one has to request IDLE or NOT_LOCKED and then request LOCKED again.

The master MEDM screen has been modified accordingly:

The middle panel controls the automation system. The buttons on the left can be used to request a new state. The green lights immediately on the right of the buttons show which is the requested state. Finally, the lights on the right side show the actual state of the automation. So for example in the attached figure I requested 'LOCKED' and the Michelson is actually in the 'LOCKED' state.

  1301   Tue Oct 6 11:46:32 2015   Gabriele   Computing   Crackle   Some data archived

Before it gets out of our buffer, I archived some data related to acoustic emission tests and Crackle2 data runs:

  • Acoustic emission tests on maraging, blade loaded with 16 kg. All ACO* channels and KR2-TEST_DAC1_OUT. From GPS 1127343472 to GPS 1127416304
  • Acoustic emission tests on high carbon steel blade, both 12 and 26 kg loads, same channels as above. From GPS 1127774821 to GPS 1128029411
  • Crackle2 data run, all KR2* channels. From GPS 1127888764 to GPS 1127983134

This data is written in 1-hour-long frame files in /frames/archive

Just for future reference, here are the commands I ran to make the copy (I had to install the Fr command line utils):

FrCopy -i 11278/* 11279/* 11280/* -o /frames/archive/AcousticEmissions/HighCarbonSteel24kg/hcs -t *ACO* *TEST_DAC1* -max 3600
FrCopy -i 11273/* 11274/* -o /frames/archive/AcousticEmissions/Maraging/mar -t *ACO* *TEST_DAC1* -max 3600
FrCopy -i 11278/* 11279/* -o /frames/archive/Crackle2/crackle -t *KR2* -max 3600

 

  1334   Wed Oct 28 16:13:27 2015   gabriele   Computing   Crackle   Last two runs archived

Data from the last two crackling noise runs have been archived in /frames/archive/crackle2

FrCopy -i 11294/* 11295/* 11296/* 11297/* 11298/* 11299/* 11300/* -o /frames/archive/Crackle2/crackle -t *KR2* -max 3600

With this last one, four runs are archived: 2015-10-08, 2015-10-13, 2015-10-20, 2015-10-24

  1572   Thu Jul 14 08:38:30 2016   Gabriele   Computing   Crackle   cymac2 restarted

At about 8:40am I restarted cymac2 to check the BIOS configuration. Hopefully this will help me solve the issues I'm having with cymac3.

I restarted all processes.

  1584   Thu Jul 21 14:07:02 2016   Gabriele   Computing   Crackle   Cymac2 temporarily offline

At 1:50pm LT, I disconnected the cymac2 ADC and started a test for the cymac3 setup. Things will be back to normal as soon as I'm done with the test.

The crackle2 autolocker was already in failed state.

I put everything back in the normal state.

  1630   Wed Dec 21 09:04:47 2016   Gabriele   Computing   Crackle   Fixed EPICS channel issue

I fixed a long standing issue with EPICS channels that were not correctly acquired by DAQD.

The fix consisted in properly setting the IP address of the cymac system in the /opt/rtcds/tst/x1/target/fb/start_daqd.inittab file

 
