SUS Lab eLog
659 | Mon Jul 1 17:17:35 2013 | nicolas | Lab Infrastructure | Vacuum | Some vacuum pumps that we saw in the Ogin lab

We went into the Ogin lab (the future Crackle lab) and found a few things that looked like vacuum pumps. There are two roughing pumps (one has the word LEAKS written on it), and most of a turbo pump. They all look like replacement parts for the presumably working pump setup that is hooked up to the thermal noise chamber in that room.



733 | Wed Oct 9 18:40:35 2013 | nicolas | Misc | Vacuum | cantilever dewar leak rate data

To measure the leak rate, I valved off the pump, let the pressure rise, and wrote down some numbers:


t (m:ss) | P (torr)
0:00     | 4.5e-4
0:30     | 2.6e-3
1:00     | 3.6e-3
2:30     | 6.5e-3
5:00     | 1.1e-2
734 | Fri Oct 11 13:31:56 2013 | nicolas | Misc | Vacuum | cantilever dewar leak rate data


Quoting #733:

> To measure the leak rate, I valved off the pump, let the pressure rise, and wrote down some numbers:
>
> t (m:ss) | P (torr)
> 0:00     | 4.5e-4
> 0:30     | 2.6e-3
> 1:00     | 3.6e-3
> 2:30     | 6.5e-3
> 5:00     | 1.1e-2

 2 days later I got this:

t (m:ss) | P (torr)
0:00     | 5.5e-4
0:30     | 1.4e-3
1:00     | 1.9e-3
2:30     | 3.2e-3
5:00     | 5.3e-3
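A quick way to compare the two runs is a least-squares fit of dP/dt to the tabulated points (values transcribed from the tables above; without knowing the chamber volume this gives only a pressure-rise rate, not a true leak rate in torr*L/s):

```python
import numpy as np

# Pressure-rise data transcribed from the two tables above,
# with times converted to seconds.
t = np.array([0.0, 30.0, 60.0, 150.0, 300.0])                 # s
p_first = np.array([4.5e-4, 2.6e-3, 3.6e-3, 6.5e-3, 1.1e-2])  # torr
p_later = np.array([5.5e-4, 1.4e-3, 1.9e-3, 3.2e-3, 5.3e-3])  # torr

def rise_rate(t, p):
    """Least-squares slope dP/dt in torr/s."""
    slope, _ = np.polyfit(t, p, 1)
    return slope

r_first = rise_rate(t, p_first)
r_later = rise_rate(t, p_later)
```

The second measurement rises roughly half as fast as the first, consistent with the chamber cleaning up over the two days.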
735 | Fri Oct 11 13:33:22 2013 | nicolas | Lab Infrastructure | Vacuum | HV feedthrough installed, vacuum baking

I put the new HV feedthrough on the cantilever dewar. I borrowed the heat tape from the CTN lab and am going to bake the chamber over the weekend at 80 °C; it's currently pumping and heating up.

736 | Fri Oct 11 16:57:23 2013 | nicolas | Lab Infrastructure | Vacuum | HV feedthrough installed, vacuum baking


Quoting #735:

> I put the new HV feedthrough on the cantilever dewar. I borrowed the heat tape from the CTN lab and am going to bake the chamber over the weekend at 80 °C; it's currently pumping and heating up.

The cantilever dewar is baking at 80 °C and the pressure is being recorded to C5:VAC-P2_PRESSURE. Data logging started at GPS time 1065570956 (4:55 pm local time, Friday).

737 | Mon Oct 14 18:05:35 2013 | nicolas | Lab Infrastructure | Vacuum | bake ended at 1065834142


pressure trend during bake

214 | Mon May 30 02:09:53 2011 | Vladimir Dergachev | DailyProgress | Tiltmeter | Noise spectrum after cleaning
First useful spectrum after cleaning. It appears to be at least as good as before. This plot uses the pre-cleaning calibration; it should not have changed too much, and I'll try another calibration after collecting more data.
Attachment 1: fine_combined_spectrum_zoomed_presentation.png
328 | Tue Aug 23 22:38:10 2011 | Vladimir Dergachev | Noise Hunting | Tiltmeter | 0.1-1 Hz noise
Turns out the noise in the 0.1-1 Hz region was due to both seismic and DAC noise. After using a low-pass filter on the output I was able to capture the spectrum in the plot below. The dip at 0.6 Hz goes below 1e-9 rad/sqrt(Hz) and shows up in both seismic and tilt. Now to find a way to make a better coil hookup...
Attachment 1: tilt_combined_spectrum.png
330 | Wed Aug 24 07:37:33 2011 | Vladimir Dergachev | Noise Hunting | Tiltmeter | 0.1-1 Hz noise

Quoting #328:

> Turns out the noise in the 0.1-1 Hz region was due to both seismic and DAC noise. After using a low-pass filter on the output I was able to capture the spectrum in the plot below. The dip at 0.6 Hz goes below 1e-9 rad/sqrt(Hz) and shows up in both seismic and tilt. Now to find a way to make a better coil hookup...

I am suspicious that this is not the true tilt noise, but instead the in-loop spectrum as before. Is this corrected for the loop shape and the tiltmeter's mechanical response?

738 | Tue Oct 15 17:56:39 2013 | nicolas | Lab Infrastructure | Si Cantilever | bake successful, going to bake more

Cryo cantilever dewar.

Here is the pressure trend (30 hours) after turning off my heat tape.


It quickly drops to ~1 x 10^-5 torr. Before the bake, it was never getting below 4-5 x 10^-5 torr.

Because of this success, and because no work will be done on this chamber for the next week and a half, I will keep baking to further remove contamination.

New bake started at 1065921266.

740 | Tue Oct 15 18:05:32 2013 | nicolas | Lab Infrastructure | Si Cantilever | probably a leak in my pump line

This is the pressure trend when I close off the valve between the pump line and the cryostat. I am not totally sure how to interpret this, but the fact that the pressure didn't go much lower (like into the 10^-6 range) when the pump is only pumping the line makes me think there may be a leak in the line. Something to investigate.


Attachment 1: pumplineleak.png
742 | Wed Oct 23 10:02:26 2013 | Evan | DailyProgress | Si Cantilever | Cryo cantilever dewar: heat tape turned off

I turned off the heat tape on the cryo cantilever dewar at GPS time 1066582781. The pressure was 1.673×10^-4 torr and the temperature was 76.8 °C.

749 | Mon Nov 18 19:18:36 2013 | nicolas | DailyProgress | Si Cantilever | Cantilever is back in the dewar

I've glued the heater and temperature sensor to the SS block and wired them to the outside through the feedthrough.

I set up the laser/split diode readout. The cantilever is flapping and being recorded. GPS of start of excitation is 1068866140.

The vacuum gauge reads 4e-7 torr. I am not convinced this is possible, but I did bake the chamber over the weekend.

767 | Wed Dec 18 18:04:02 2013 | nicolas | DailyProgress | Si Cantilever | Temperature sensing and control of silicon cantilever cryostat

So I have an RTD inside the cryostat, glued to the steel clamping block. Its resistance is 100 Ohm at 273 K and grows by 0.385 Ohm/K from there.
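This linear RTD model can be inverted directly to get temperature from the measured resistance; a minimal sketch (this is the standard Pt100-style linearization, function names mine):

```python
R0 = 100.0      # ohm at T0
T0 = 273.0      # K
ALPHA = 0.385   # ohm/K

def rtd_resistance(temp_k):
    """Linear RTD model: R(T) = R0 + ALPHA * (T - T0)."""
    return R0 + ALPHA * (temp_k - T0)

def rtd_temperature(resistance_ohm):
    """Invert the linear model to get temperature from resistance."""
    return T0 + (resistance_ohm - R0) / ALPHA
```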

I have a protoboard which takes a signal from the cymac and converts it into a proportional current, then I measure the voltage across the resistor in the cymac. This is done with 4 leads to remove effects of the cabling in the cryostat.

I use the cymac to drive a 1 mA current at ~210 Hz, and do a digital lock-in demodulation of the returned voltage to estimate the resistance, and hence the temperature.
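The digital lock-in step can be sketched as follows. The log only states a ~210 Hz, 1 mA drive, so the sampling rate, noise level, and all names here are my own assumptions, not the actual cymac code:

```python
import numpy as np

FS = 16384.0      # sampling rate in Hz (assumed)
F_DRIVE = 210.0   # excitation frequency in Hz
I_DRIVE = 1e-3    # drive current amplitude in A

def lockin_resistance(v_meas, fs=FS, f0=F_DRIVE, i0=I_DRIVE):
    """Estimate resistance by demodulating the returned voltage at f0.

    Multiply by quadrature references and average; the magnitude of
    the resulting phasor is half the amplitude of the f0 component."""
    t = np.arange(len(v_meas)) / fs
    x = np.mean(v_meas * np.cos(2 * np.pi * f0 * t))
    y = np.mean(v_meas * np.sin(2 * np.pi * f0 * t))
    v_amp = 2.0 * np.hypot(x, y)  # voltage amplitude at f0
    return v_amp / i0             # ohms, since V = I * R

# Synthetic check: a 108.5 ohm resistor driven at 1 mA, plus readout noise
rng = np.random.default_rng(0)
t = np.arange(int(FS)) / FS   # one second of data
v = 108.5 * I_DRIVE * np.sin(2 * np.pi * F_DRIVE * t) \
    + 1e-5 * rng.standard_normal(t.size)
```

Averaging over an integer number of drive periods rejects the out-of-band noise, which is the point of doing the readout at ~210 Hz instead of DC.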

I have a second cymac output which drives a power amplifier that is hooked up to a second resistor which acts as a heater, also glued to the steel block.

This picture shows the resistance (blue) as a function of time when I apply about 1 Watt of heating (green) for a few minutes. The blue curve is not correctly calibrated, but I will calibrate it to Kelvin when I put in some LN2.


769 | Wed Jan 8 13:59:28 2014 | nicolas | Misc | Si Cantilever | Temperature

The room temperature in the lab at GPS time 1073253434 is 22.8 °C.

The raw counts out of the RTD in channel X1:SCQ-TEMPERATURE_CONTROL_INMON is 1044.

770 | Wed Jan 15 18:47:57 2014 | nicolas | DailyProgress | Si Cantilever | First temperature cycle results

I put LN2 in the small dewar until it was at equilibrium.

This was done without the cold shield of the upper reservoir, so the hold time should increase when I start using it.

Some results:

- The hold time is about 48 hours.
- The cooling time constant is about 10 hours.
- I was only able to reach about 100 K (with a rough calibration). This may be due to thermal conduction through my sensor wires; I will tie these to the cold plate for the next go-around. Also, there may have been thermal gradients on the radiation shield since I was only using one shield.
- My little mirrors that are supposed to reflect the laser beam out of the cryostat shattered into a million pieces. I guess stuff I get from the craft supply store isn't cryogenically rated? Alastair had previously suggested getting a mirror polish directly on the aluminum piece that was holding the mirrors. I will look into that.

Afterward I did an honest dunk test with the clamping block in a little plastic bucket. I was able to see that the sensor got to LN2 temperature (80K with the rough cal). I also dunked it in ice water and used these two points to make a more precise calibration.

I must say I'm pretty pleased with my temperature readout; it seems pretty robust and low noise.

153 | Thu Jul 8 16:41:29 2010 | Jan | Misc | Seismometry | Ranger Pics

This Ranger is now lying in pieces in West Bridge:



First we removed the lid. You can see some isolation cylinder around the central metal 
part. This cylinder can be taken out of the dark metal enclosure together with the interior 
of the Ranger.


Magnets are fastened all around the isolation cylinder. One of the magnets was missing 
(purposefully?). The magnets are oriented such that a repelling force is formed between 
these magnets and the suspended mass. The purpose of the magnets is to decrease the 
resonance frequency of the suspended mass.


Next, we opened the bottom of the cylinder. You can now see the suspended mass. 
On some of the following pictures you can also find a copper ring (flexure) that was 
partially screwed to the mass and partially to the cylinder. Another flexure ring is 
screwed to the top of the test mass. I guess that the rings are used to fix the horizontal 
position of the mass without producing a significant force in vertical direction. The 
bottom part also has the calibration coil.


Desoldering the wires from the calibration coil, we could completely remove the mass 
from the isolation cylinder. We then found how the mass is suspended, the readout 
coil, etc.:

Attachments: DSC02509.JPG, DSC02513.JPG

154 | Sat Jul 17 13:41:45 2010 | Jan | Misc | Seismometry | Ranger

Just wanted to mention that the Ranger is reassembled. It was straightforward except that the Ranger did not work the first time we put the pieces together. The last (important) screws that you turn fasten the copper rings to the mass (at the bottom and top). We observed a nice oscillation of the mass around equilibrium, but only before the very last screw was fixed. Since the copper rings are for horizontal alignment of the mass, I guess what happens is that the mass drifts a little towards the walls of the Ranger while the screws are being turned, and eventually the mass touches the walls. You can fix this problem because the two copper rings are not perfectly identical in shape, and/or not perfectly circular. So I just changed the orientation of one copper ring, and the mass kept oscillating nicely when all screws were fastened.

157 | Tue Jul 27 00:22:04 2010 | Jan | Misc | Seismometry | Ranger

The Ranger is in West Bridge again. This time we will keep it as long as it takes to add capacitive sensors to it.

158 | Mon Aug 23 22:07:39 2010 | Jenne | Things to Buy | Seismometry | Boxes for Seismometer Breakout Boxes

In an effort to (1) train Jan and Sanjit to use the elog and (2) actually write down some useful info, I'm going to put some highly useful info into the elog.  We'll see what happens after that....

The deal: we have a Trillium, an STS-2, a GS-13, and the Ranger seismometers, and we want to make nifty breakout boxes for each of them. These aren't meant to be sophisticated; they'll just be converter boxes from the many-pin milspec connectors on each seismometer to several BNCs so that we can read out the signals. These will also give us the potential to add active control for things like mass positioning at some later time. For now, however, the basics only.

I suggest buying several boxes which are like Pomona Boxes, but cheaper.  Digi-Key has them.  I don't know how to link to my search results, so I'll list off the filters I applied / searched for in the Digi-Key catalog:

Hammond Manufacturing, Boxes, Series=1550 (we don't have to go for this series of boxes, but it seems sensible and middle-of-the-line), unpainted, watertight.

Then we have a handy-dandy list of possible sizes of nice little boxes. 

The final criterion, which Sanjit is working on, is how big the boxes need to be. Sanjit is taking a look at the pinouts for each seismometer and determining how many BNC connectors we could possibly need for each breakout box. Jan's guess is 8, plus power. So we need a box big enough to comfortably fit that many connectors.

163 | Tue Dec 21 06:58:09 2010 | rana | Things to Buy | Seismometry | Trillium Noise Plot

Nanometrics has a couple of seismometers which are cheaper than the T240 which may be of interest to us: better than the Guralp CMG-40T, but cheaper and easier to use than the STS-2.


169 | Wed Jan 19 17:13:11 2011 | Jan | Computing | Seismometry | First steps towards full NN Wiener-filter simulator

I was putting some things together today to program the first lines of a 2D-NN-Wiener-filter simulation. 2D is ok since it is for aLIGO where the focus lies on surface displacement. Wiener filter is ok (instead of adaptive) since I don't want to get slowed down by the usual finding-the-minimum-of-cost-function-and-staying-there problem. We know how to deal with it in principle, one just needs to program a few more things than I need for a Wiener filter.

My first results are based on a simplified version of surface fields. I assume that all waves have the same seismic speed. It is easy to add more complexity to the simulation, but I want to understand filtering simple fields first. I am using the LMS version of Wiener filtering based on estimated correlations.
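A single-tap, multi-channel Wiener filter "based on estimated correlations" reduces to solving the normal equations. Here is a minimal sketch with synthetic data (the toy mixture and all names are mine, not the simulation's):

```python
import numpy as np

def wiener_coeffs(witnesses, target):
    """Single-tap, multi-channel Wiener filter from estimated correlations.

    witnesses: (N, M) array, M seismometer channels of N samples each
    target:    (N,) test-mass time series
    Solves the normal equations C w = p for the coefficients w that
    minimize |target - witnesses @ w|^2."""
    n = len(target)
    C = witnesses.T @ witnesses / n   # estimated channel covariance
    p = witnesses.T @ target / n      # estimated cross-correlation
    return np.linalg.solve(C, p)

# Toy example: the 'NN' is a mixture of two witness channels plus noise
rng = np.random.default_rng(1)
s = rng.standard_normal((10000, 2))
nn = s @ np.array([0.7, -1.2]) + 0.01 * rng.standard_normal(10000)
w = wiener_coeffs(s, nn)
residual = nn - s @ w
```

With witness SNR this high the recovered coefficients are essentially the injected mixture, and the residual sits at the sensor-noise floor.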


The seismic field is constructed from a number of plane waves. Again, one day I will see what happens in the case of spherical waves, but let's forget about it for now. I calculate the seismic field as a function of time, calculate the NN of a test mass, position a few seismometers on the surface, add Gaussian noise to all seismometer data and to the test-mass displacement, and then do the Wiener filtering to estimate NN from the seismic data. A snapshot of the seismic field after one second is shown in the contour plot.


Seismometers are placed randomly around the test mass at (0,0) except for one seismometer that is always located at the origin. This seismometer plays a special role since it is in principle sufficient to use data from this seismometer alone to estimate NN (as explained in P0900113). The plot above shows coherence between seismometer data and test-mass displacement estimated from the simulated time series.


The seismometers measure displacement with SNR~10. This is why the seismometer data look almost harmonic in the time series (green curve). The fact that every seismometer signal is harmonic is a consequence of the seismic waves all having the same speed. An arbitrary sum of these waves produces harmonic displacement at any point on the surface (although with varying amplitude and phase). The figure shows that the Wiener filter is doing a good job. The question is whether we can do any better. The answer is 'yes', depending on the instrumental noise of the seismometers.


So what do I mean? Isn't the Wiener filter always the optimal filter? No, it is not. It is the optimal filter only if you have/know nothing else but the seismometer data and the test-mass displacement. The title of the last plot shows two numbers. These are related to coherence via 1/(1/coh^2 - 1), so the higher the number, the higher the coherence. The first number is calculated from the coherence between the estimated NN displacement of the test mass and the true NN displacement. Since there is other stuff shaking the mirror, I can only know in simulation what the true NN is. The second number is calculated from the coherence between the seismometer at the origin and the true NN. It is higher! This means that the one seismometer at the origin is doing better than the Wiener filter using data from the entire array.

How can this be? It can be because the two numbers are not derived from coherence between the total test-mass displacement and seismometer data, but only between the NN displacement and seismometer data. Even though this can only be done in simulation, it means that even in reality you should only use the one seismometer at the origin. This strategy is based on our a priori knowledge of how NN is generated by seismic fields. Now, I am simulating a rather simple seismic field, so it still needs to be checked whether this conclusion holds for more realistic seismic fields.
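For reference, the mapping from coherence to the plotted number is just the following (assuming 'coh' denotes the magnitude coherence, which is my reading of the entry):

```python
def coherence_metric(coh):
    """Figure of merit 1/(1/coh**2 - 1): monotonically increasing in
    coherence, diverging as coh -> 1."""
    return 1.0 / (1.0 / coh**2 - 1.0)
```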

But even in this simulated case, the Wiener filter performs better if you simulate a shitty seismometer (e.g. SNR~2 instead of 10). I guess this is the case because averaging over many instrumental noise realizations (from many seismometers) gives you more advantage than the ability to produce the NN signal from seismometer data.

170 | Thu Jan 20 16:05:14 2011 | Jan | Computing | Seismometry | Wiener v. seismometer 0

So I was curious about comparing the performance of the array-based NN Wiener filter versus the single seismometer filter (the seismometer that sits at the test mass). I considered two different instrumental scenarios (seismometers have SNR 10 or SNR 1), and two different seismic scenarios (seismic field does or does not contain high-wavenumber waves, i.e. speed = 100m/s). Remember that this is a 2D simulation, so you can only distinguish between the various modes by their speeds. The simulated field always contains Rayleigh waves (i.e. waves with c=200m/s), and body waves (c=600m/s and higher).

There are 4 combinations of instrumental and seismic scenarios. I already found yesterday that the array Wiener filter is better when seismometers are bad. Here are two plots, left figure without high-k waves, right figure with high-k waves, for the SNR 1 case:


'gamma' is the coherence between the NN and either the Wiener-filtered data or data from seismometer 0. There is not much of a difference between the two figures, so mode content does not play a very important role here. Now the same figures for seismometers with SNR 10:


Here, the single-seismometer filter is much better. A value of 10 in the plots means that the filter gets about 95% of the NN power correctly. A value of 100 means that it gets about 99.5% correctly. For the high-SNR case, the single-seismometer filter is not as much better than the Wiener filter when the seismic field contains high-k waves. I am not sure why this is the case.

The next steps are
A) Simulate spherical waves
B) Simulate wavelets with plane wavefronts (requires implementation of FFT and multi-component FIR filter)
C) Simulate wavelets with spherical wavefronts

Other goals of this simulation are
A) Test PCA
B) Compare filter performance with quality of spatial spectra (i.e. we want to know if the array needs to be able to measure good spatial spectra in order to do good NN filtering)

171 | Fri Jan 21 12:43:17 2011 | Jan | Computing | Seismometry | cleaned up code and new results

It turns out that I had to do some clean-up of my NN code:

1) The SNRs were wrong. The problem is that after summing all kinds of seismic waves and modes, the total field should have a certain spectral density, which is specified by the user. Now the code works and the seismic field has the correct spectral density no matter how you construct it.

2) I started with a pretty unrealistic scenario. The noise on the test mass, and by this I mean everything but the NN, was too strong. Since this is a simulation of NN subtraction, we should rather assume that NN is much stronger than anything else.

3) I filtered out the wrong kind of NN. I am now projecting NN onto the direction of the arm, and then I let the filter try to subtract it. It turns out, and it is fairly easy to prove with paper and pencil, that a single seismometer can NEVER be used to subtract this NN. This is because of a phase offset between the seismic displacement at the origin and the NN at the origin. It is easy to show that the single-seismometer method only works for vertical NN, or underground for body waves.


This plot is just the proof of the phase offset between horizontal NN and ground displacement at the origin. The offset depends on the wave content of the seismic field:


The S0 points in the following plot are now obsolete. As you can see, the Wiener filter performs excellently now because of the high NN/rest ratio of the TM displacement. The numbers in the title now tell you how much NN power is subtracted. So a '1' is pretty nice...


One question is why the filter performance varies from simulation to simulation. Can't we guarantee that the filter always works? Yes we can. One just needs to understand that the plot shows the subtraction efficiency. Now it can happen that a seismic field does not produce much NN, and then we don't need to subtract much. Let's check if the filter performance somehow correlates with NN amplitude:


As you can see, it seems like most of the performance variation can be explained by a changing amplitude of the NN itself. The filter cannot subtract much only in cases when you don't really need to subtract. And it subtracts nicely when NN is strong.

172 | Sat Jan 22 20:19:51 2011 | Jan | Computing | Seismometry | Spatial spectra

All right. The next problem I wanted to look at was whether the ability of the seismic array to produce spatial spectra is somehow correlated with its NN subtraction performance. Whatever the result is, its implications are very important. Array design is usually done to maximize the accuracy of the spatial spectra. So the general question is what our guidelines are going to be: information theory or good spatial spectra? I was always advertising the information-theory approach, but it is scary if you think about it, because the array may theoretically be good for nothing useful to seismology and yet still provide the information that we need for our purposes.

Ok, who wins? Again, the current state of the simulation is to produce plane waves all at the same frequency, but with varying speeds. The challenge is really the mode variation (i.e. varying speeds) and not so much varying frequencies. You can always switch to fft methods as soon as you inject waves at a variety of frequencies. Also, I am simulating arrays of 20 seismometers that are randomly located (within a 30m*30m area) including one seismometer that is always at the origin. One of my next steps will be to study the importance of array design. So here is how well these arrays can do in terms of measuring spatial spectra:
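One standard way to turn an array covariance matrix into a spatial spectrum is the conventional (Bartlett) beamformer; a sketch under the assumption that this is the kind of covariance-based map meant here (all parameters illustrative):

```python
import numpy as np

def bartlett_spatial_spectrum(C, positions, kx, ky):
    """Conventional beamformer map P(k) = |e(k)^H a|^2 / M^2 via e^H C e.

    C:         (M, M) cross-spectral (covariance) matrix at one frequency
    positions: (M, 2) seismometer coordinates in meters
    kx, ky:    trial wavevector components in rad/m"""
    M = len(positions)
    phase = positions[:, 0] * kx + positions[:, 1] * ky
    e = np.exp(1j * phase)                       # steering vector
    return np.real(e.conj() @ C @ e) / M**2

# Toy check: a single plane wave should peak at its own wavevector
rng = np.random.default_rng(2)
pos = rng.uniform(-15, 15, size=(20, 2))     # 20 sensors in a 30 m x 30 m area
k0 = np.array([2 * np.pi * 10 / 200.0, 0])   # 10 Hz wave at 200 m/s
a = np.exp(1j * pos @ k0)                    # complex amplitudes at the sensors
C = np.outer(a, a.conj())                    # rank-1 covariance of that wave
p_true = bartlett_spatial_spectrum(C, pos, *k0)
p_off = bartlett_spatial_spectrum(C, pos, -k0[0], 0.0)
```

The sidelobe level away from the true wavevector is exactly the aliasing problem that makes random arrays produce such poor maps.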


The circles indicate seismic speeds of {100,250,500,1000}m/s and the white dots the injected waves (representing two modes, one at 200m/s, the other at 600m/s). The results are not good at all (as bad as the maps from the geophone array that we had at LLO). It is not really surprising that the results are bad, since seismometer locations are random, but I did not expect that they would be so pathetic. Now, what about NN subtraction performance?


The numbers indicate the count of simulation runs. The two spatial spectra above have indices 3 (left figure) and 4 (right figure). So you see that everything is fine with NN subtraction even though the spatial spectra can still be really bad. This is great news since we are now deep in information-theory territory. We should not get too excited at this point since we still need to make the simulation more realistic, but I think we have produced a first powerful clue that the strategy of monitoring seismic sources instead of the seismic field may actually work.

173 | Sun Jan 23 09:03:43 2011 | Jan | Computing | Seismometry | phase offset NN<->xi

I just want to follow up on my conclusion that a single seismometer cannot be used to filter horizontal NN at the surface. The reason is that there is a 90° phase delay of NN compared to ground displacement at the test mass. The first reaction to this should be: "Why the heck a phase delay? Wouldn't gravity perturbations become important before the seismic field reaches the TM?" The answer is surprising, but it is "No". The way NN builds up from plane waves does not show anything like a phase advance. Then you may say that whatever is true for plane waves must be true for any other field, since you can always expand your field into plane waves. This, however, is not true, for reasons I am going to explain in a later post.

All right, but to say that seismic displacement is 90° ahead of NN really depends on which direction of NN you look at. The interferometer arm has a direction e_x, and the plane seismic wave propagates along e_k. Depending on e_k, you may get an additional "-" sign between seismic displacement and NN in the direction of e_x. This is the real show killer. If there were a universal 90° between seismic displacement and NN, then we could use a single seismometer to subtract NN: we would just take its data from 90° in the past. But now the problem is that we would need to look either 90° into the past or the future depending on the propagation direction of the seismic wave. Here are two plots of a single-wave simulation, the first with -pi/2<angle(e_x,e_k)<pi/2, the second with pi/2<angle(e_x,e_k)<3*pi/2:
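A back-of-the-envelope version of this parity argument for a single plane surface wave (my own sketch of the calculation, not taken from this entry or from P0900113): the vertical surface displacement, and the horizontal NN acceleration it produces on a test mass at height h above the origin, are

```latex
\xi_z(x,t) = A\cos(kx - \omega t)
           = A\left[\cos(kx)\cos(\omega t) + \sin(kx)\sin(\omega t)\right],
\qquad
a_x(t) \propto \int \mathrm{d}x\,\mathrm{d}y\;
   \rho\,\xi_z(x,t)\,\frac{x}{\left(x^2 + y^2 + h^2\right)^{3/2}} .
```

The kernel is odd in x, so the cos(kx) term integrates to zero and only the sin(kx) term survives: a_x ∝ sin(ωt), while the ground displacement at the origin is ξ_z(0,t) = A cos(ωt). That is the 90° offset. Flipping the propagation direction (k → -k) flips sin(kx) and hence the sign of a_x, which is the direction-dependent "-" sign described above.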



174 | Sun Jan 23 10:27:07 2011 | Jan | Computing | Seismometry | spiral v. random

A spiral shape is a very good choice for array configurations to measure spatial spectra. It produces small aliasing. How important is array configuration for NN subtraction? Again: plane waves, wave speeds {100,200,600}m/s, 2D, SNR~10. The array response looks like Stonehenge:


A spiral array is doing a fairly good job to measure spatial spectra:


The injected waves are now represented by dots with radii proportional to the wave amplitudes (there is always a total of 12 waves, so some dots are not large enough to be seen). The spatial spectra are calculated from covariance matrices, so theory goes that spatial spectra get better using matched-filtering methods (another thing to look at next week...).

Now the comparison between NN subtraction using 20 seismometers, 19 of which randomly placed, one at the origin, and NN subtraction using 20 seismometers in a spiral:


A little surprising to me is that the NN subtraction performance is not substantially better using a spiral configuration of seismometers. The subtraction results show less variation, but this could simply be because the random configuration is changing between simulation runs. So the result is that we don't need to worry much about array configuration. At least when all waves have the same frequency. We need to look at this again when we start injecting wavelets with more complicated spectra. Then it is more challenging to ensure that we obtain information at all wavelengths. The next question is how much NN subtracion depends on the number of seismometers.

175 | Sun Jan 23 12:59:18 2011 | Jan | Computing | Seismometry | Less seismometers, less subtraction?

Considering equal areas covered by seismic arrays, the number of seismometers relates to the density of seismometers and therefore to the amount of aliasing when measuring spatial spectra. In the following, I considered four cases:

1) 10 seismometers randomly placed (as usual, one of them always at the origin)
2) 10 seismometers in a spiral winding one time around the origin
3) The same number winding two times around the origin (in which case the array does not really look like a spiral anymore):

4) And since isotropy issues start to get important, the forth case is a circular array with one of the 10 seismometers at the origin, the others evenly spaced on the circle. 

Just as a reminder, there was not much of a difference in NN subtraction performance when comparing spiral v. random array in case of 20 seismometers. Now we can check if this is still the case for a smaller number of seismometers, and what the difference is between 10 seismometers and 20 seismometers. Initially we were flirting with the idea to use a single seismometer for NN subtraction, which does not work (for horiz NN from surface fields), but maybe we can do it with a few seismometers around the test mass instead of 20 covering a large area. Let's check.
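For concreteness, the spiral and circular layouts above can be generated with a few lines (this is my parameterization of an Archimedean spiral; the one actually used in the simulation may differ):

```python
import numpy as np

def spiral_array(n, r_max=15.0, windings=1, origin_sensor=True):
    """n sensors on an Archimedean spiral with the given outer radius
    and number of windings; optionally keep one sensor at the origin."""
    m = n - 1 if origin_sensor else n
    s = np.arange(1, m + 1) / m            # parameter along the spiral
    theta = 2 * np.pi * windings * s
    r = r_max * s
    pts = np.column_stack([r * np.cos(theta), r * np.sin(theta)])
    if origin_sensor:
        pts = np.vstack([[0.0, 0.0], pts])
    return pts

def circular_array(n, radius=15.0):
    """One sensor at the origin, n-1 evenly spaced on a circle."""
    theta = 2 * np.pi * np.arange(n - 1) / (n - 1)
    ring = np.column_stack([radius * np.cos(theta), radius * np.sin(theta)])
    return np.vstack([[0.0, 0.0], ring])
```

Increasing `windings` while keeping `r_max` fixed concentrates sensors near the origin, which is the twice-winding case above.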

Here are the four performance graphs for the four cases (in the order given above):



All in all, the subtraction still works very well. We only need to subtract, say, 90% of the NN, but we still see average subtractions of more than 99%. That's great, but I expect these numbers to drop quite a bit once we add spherical waves and wavelets to the field. Although all arrays perform pretty well, the twice-winding spiral seems to be the best choice. Intuitively this makes a lot of sense: NN drops as 1/r^3 with distance r from the TM, so you want to gather information more accurately from regions very close to the TM, which leads to the idea of increasing seismometer density close to the TM. I am not sure though if this explanation is the correct one.


176 | Mon Jan 24 15:50:38 2011 | Jan | Computing | Seismometry | Multi-frequency and spherical

I had to rebuild some of the guts of my simulation to prepare it for the big changes that are to come later this week. So I only have two results to report today. The code can now take arbitrary waveforms. I tested it with spherical waves. I injected 12 spherical waves into the field, all originating 50m away from the test mass with arbitrary azimuths. The 12 waves are distributed over 4 frequencies, {10,14,18,22}Hz with equal spectral density (so 3 waves per frequency). The displacement field is far more complex than the plane-wave fields and looks more like a rough endoplasmic reticulum:


The spatial spectra are not so much different from the plane-wave spectra:


The white dots now indicate the back-azimuth of the injected waves, not their propagation direction. And we can finally compare subtraction performance for plane-wave and spherical-wave fields:


Here the plane-wave simulation is done with 12 plane waves at the same 4 frequencies as the spherical waves, and in both cases I chose a 20 seismometer 4*pi spiral array. Note that the subtraction performance is pretty much identical since the NN was generally stronger in the spherical-wave simulation (dots 5 and 20 in the right figure lie somewhere in between the upper right group of dots in the left figure). This makes me wonder if I shouldn't switch to some absolute measure for the subtraction performance, so that the absolute value of NN does not matter anymore. In the end, we don't want to achieve a subtraction factor, but a subtraction level (i.e. the target sensitivity of the GW detectors).

Anyway, the result is very interesting. I always thought that spherical waves (i.e. local sources) would make everything far more complicated. In fact, they do not. And the fact that the field consists of waves at 4 different frequencies does not do much harm either (subtraction performance decreased a little). Remember that I am currently using a single-tap FIR filter, if you want. I thought that you would need more taps once you have more frequencies. I was wrong. The next step is the wavelet simulation. This will eventually lead to a final verdict about single-tap v. multi-tap filtering.


177 | Tue Jan 25 11:57:23 2011 | Jan | Computing | Seismometry | wavelets

Here is the hour of truth (I think). I ran simulations of wavelets. These are not anymore characterized by a specific frequency, but by a corner frequency. The spectra of these wavelets almost look like a pendulum transfer function, where the resonance frequency now has the meaning of a corner frequency. The width of the peak at the corner frequency depends on the width of the wavelets. These wavelets propagate (without dispersion) from somewhere at some time into and out of the grid. There are always 12 wavelets at four different corner frequencies (same as for the other waves in my previous posts). The NN now has the following time series:


You can see that from time to time a stronger wavelet passes by and leads to a pulse-like excitation of the NN. Now, the first news is that the achieved subtraction factor drops significantly compared to the stationary cases (plane waves and spherical waves):


And the 4*pi, 10-seismometer spiral dropped below an average factor of 0.88. But I promised to introduce an absolute figure to quantify subtraction performance. What I am now doing is subtracting the filtered array NN estimate from the real NN and taking the standard deviation of the residual. The standard deviation of the residual NN should not be larger than the standard deviation of the other noise that is part of the TM displacement. In addition to NN, I add noise with 1e-16 stddev to the TM motion. Here is the absolute filter performance:
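The absolute criterion described here (residual standard deviation compared against the other test-mass noise) is simply the following, with the 1e-16 level taken from the text and the function name mine:

```python
import numpy as np

def residual_ok(nn_true, nn_estimate, other_noise_std=1e-16):
    """Subtraction is good enough if the residual NN fluctuates less
    than the rest of the test-mass noise. Returns (residual std, pass?)."""
    residual_std = float(np.std(nn_true - nn_estimate))
    return residual_std, residual_std <= other_noise_std
```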


As you can see, subtraction still works sufficiently well! I am now pretty much puzzled, since I did not expect this at all. Ok, subtraction factors decreased a lot, but they are still good enough. REMINDER: I am using a SINGLE-TAP (multi-input-channel) Wiener filter to do the subtraction. It is amazing. Ideas to make the problem even more complex and to challenge the filter even more are welcome.
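In case it helps to see the mechanics spelled out, here is a minimal numpy sketch of a single-tap, multi-input-channel Wiener filter together with the residual-standard-deviation figure of merit. The data here are a toy stand-in, not the actual simulation output.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins (hypothetical data): N_ch seismometer channels s,
# and a NN time series that is a linear mix of them plus sensor noise.
N_ch, N_t = 10, 10000
s = rng.standard_normal((N_ch, N_t))
nn = s.sum(axis=0) + 0.1 * rng.standard_normal(N_t)

# Single-tap (zero-lag) multi-channel Wiener filter: one coefficient
# per channel, w = C^-1 p, with C the channel covariance matrix and
# p the cross-correlation between channels and the target.
C = s @ s.T / N_t
p = s @ nn / N_t
w = np.linalg.solve(C, p)

# Figure of merit: subtract the filtered array estimate from the
# "real" NN and take the standard deviation of the residual.
residual = nn - w @ s
print(np.std(residual) / np.std(nn))  # subtraction factor << 1
```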


  178   Wed Jan 26 10:34:53 2011 JanSummarySeismometryFIR filters and linear estimation

I wanted to write down what I learned from our filter discussion yesterday. There seem to be two different approaches, but the subject is sufficiently complex to be wrong about details. Anyway, I currently believe that one can distinguish between real filters that operate during run time, and estimation algorithms that cannot be implemented that way since they are acausal. For simplicity, let's focus on FIR filters and linear estimation to represent the two cases.

A) FIR filters


An FIR filter has M tap coefficients per channel. If the data is sampled, then you would take the past M samples (including the sample at the present time t) of each channel, run them through the FIR and subtract the FIR output from the test-mass sample at time t. This can also be implemented in a feed-forward system so that the test-mass data is not sampled. Test-mass data is only used initially to calculate the FIR coefficients, unless the FIR is part of an adaptive algorithm. For adaptive filters, you would factor out anything from the FIR that you know already (e.g. your best estimates of transfer functions) and only let it do the optimization around this starting value.
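A sketch of what such a run-time stage could look like (hypothetical function, just to illustrate the "past M samples per channel" bookkeeping):

```python
import numpy as np

def fir_feedforward(sensors, coeffs):
    """Run-time multi-channel FIR prediction.

    sensors: (N_ch, N_t) array of sensor samples.
    coeffs:  (N_ch, M) FIR taps per channel; tap 0 acts on the present
             sample, tap M-1 on the oldest of the past M samples.
    Returns the FIR output at every time t, built only from the past
    M samples of each channel, as a feed-forward system would.
    """
    N_ch, N_t = sensors.shape
    M = coeffs.shape[1]
    pred = np.zeros(N_t)
    for t in range(M - 1, N_t):
        window = sensors[:, t - M + 1:t + 1][:, ::-1]  # newest sample first
        pred[t] = np.sum(coeffs * window)
    return pred
```

Subtracting `pred` from the test-mass channel then gives the residual without ever feeding back test-mass data at run time.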

The FIR filter can only work if transfer functions do not change much over time. This is not the case for Newtonian noise, though. Imagine the following case:


where you have two seismometers around a test mass along a line; one of them can be closer to the test mass than the other. We need to monitor the vertical displacement to estimate NN parallel to the line (at least when surface fields are dominant). If a plane wave propagates upwards, perpendicular to the line, then there will be no NN parallel to this line (because of symmetry). The seismic signals at S1 and S2 are identical. Now a plane wave propagating parallel to the line will produce NN. If the distance between the seismometers happens to be the wavelength of the plane wave, then again the seismometers will show identical seismic signals, but this time there is NN. An FIR filter would give the same NN prediction in these two cases, but the NN is actually different (being absent in the first case). So it is pretty obvious that an FIR alone cannot handle this situation.

What is the purpose of the FIR anyway? In the case of noise subtraction, it is a clever time-domain representation of transfer functions. Clever means optimal if the FIR is a Wiener filter. So it contains information about the channels between sensors and test mass, but it does not care at all about the information content of the sensor data. This information is (intentionally, if you want) averaged out when you calculate the FIR filter coefficients.

B) Linear estimation


So how do we deal with the information content of sensor data from multiple input channels? We will assume that an FIR can be applied to factor out the transfer functions from this problem. In the surface-NN case, this would be the 1/f^2 from NN acceleration to test-mass displacement, and the exp(-2*pi*f*h/c) - h being the height of the test mass above ground - which accounts for the frequency-dependent exponential suppression of NN. Since the information content of the seismic field changes continuously, we cannot train a filter that would be able to represent this information for all times. So it is obvious that this information needs to be updated continuously.

The problem is very similar to GW data analysis. What we are going to do is construct a NN template that depends on a few template parameters. We estimate these parameters (maximum likelihood) and then subtract our best estimate of the NN signal from the data. This cannot be implemented as feed-forward and relies on chopping the data into stretches of M samples (not necessarily the same value of M as in the FIR case). Now what are the template parameters? They are the coefficients used to combine the data stretches of the N sensors. This is great since the templates depend linearly on these parameters, and it is trivial to calculate the maximum-likelihood estimates of the template parameters. The formula is in fact analogous to calculating the Wiener-filter coefficients (optimal linear estimates). Whether we only use one parameter per channel (as discussed yesterday) or whether one should rather chop the sensor data into even smaller stretches and introduce additional template coefficients will depend on the sensor data and how nature links them to the test mass. Results of my current simulation suggest that only one parameter per channel is required.
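Spelled out for one data stretch with one parameter per channel, the maximum-likelihood estimate is just the normal-equation solution (a sketch with assumed array shapes; for white noise this is formally the same calculation as for Wiener-filter coefficients):

```python
import numpy as np

def estimate_template_params(stretch, target):
    """Maximum-likelihood (least-squares) estimate of one linear
    template coefficient per sensor channel, for a single stretch.

    stretch: (N_ch, M) sensor data chopped into one stretch of M samples.
    target:  (M,) test-mass data for the same stretch.
    """
    C = stretch @ stretch.T  # channel-channel correlations
    p = stretch @ target     # channel-target correlations
    return np.linalg.solve(C, p)
```

The best-estimate NN for the stretch is then `a @ stretch` with `a = estimate_template_params(...)`, and the subtraction residual is `target - a @ stretch`.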

When I realized that the NN subtraction is a linear estimation problem with templates etc., I immediately saw that one could do higher-order noise subtraction so that we would never be limited by other contributions to the test-mass displacement (and here I essentially mean GWs, since you don't need to subtract NN below other GWD noise, but maybe below the GW spectrum if other instrumental noise is also weaker). Something to look at in the future (whether this scenario, i.e. NN > GW > other noise, is likely or not).

  179   Thu Jan 27 14:51:41 2011 JanComputingSeismometryapproaching the real world / transfer functions

The simulation is not a good representation of a real detector. The first step to make it a little more realistic is to simulate variables that are actually measured. So for example, instead of using TM acceleration in my simulation, I need to simulate TM displacement. This is not a big change in terms of simulating the problem, but it forces me to program filters that correct the seismometer data for any transfer functions between seismometers and GWD data before the linear estimation is calculated. This has been programmed now. Just to mention, the last more important step to make the simulation more realistic is to simulate seismic and thermal noise as additional TM displacement. Currently, I am only adding white noise to the TM displacement. If the TM displacement noise is not white, then you would have to modify the optimal linear estimator in the usual way (correlations substituted by integrals in the frequency domain using frequency-dependent noise weights).
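A sketch of the kind of correction filter this requires (hypothetical helper, applying the two transfer functions named in the previous entry: the 1/(2*pi*f)^2 conversion from NN acceleration to TM displacement, and the exp(-2*pi*f*h/c) suppression with test-mass height h):

```python
import numpy as np

def correct_channels(seis, fs, h, c):
    """Apply frequency-dependent corrections to seismometer data
    before the linear estimation is calculated.

    seis: seismometer time series (last axis is time), fs: sample rate,
    h: test-mass height above ground, c: surface-wave speed.
    """
    n = seis.shape[-1]
    f = np.fft.rfftfreq(n, d=1.0 / fs)
    gain = np.zeros_like(f)
    # Skip DC; 1/f^2 acceleration->displacement times exponential suppression.
    gain[1:] = np.exp(-2 * np.pi * f[1:] * h / c) / (2 * np.pi * f[1:]) ** 2
    return np.fft.irfft(np.fft.rfft(seis) * gain, n=n)
```

In the real code one would additionally weight by the TM-displacement noise spectrum if it is not white, as noted above.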

I am now also applying 5 Hz high-pass filters here and there to reduce numerical errors accumulating in time-series integrations. The next three plots are just a check that the results still make sense after all these changes. The first plot shows the subtraction residuals without correcting for any frequency dependence in the transfer functions between TM displacement and seismometer data:


The dashed line indicates the expected minimum of the NN subtraction residuals, which is determined by the TM-displacement noise (which in reality would be seismic noise, thermal noise and GWs). The next plot shows the residuals if one applies filters to take the conversion from TM acceleration into displacement into account:


This is already sufficient for the spiral array to perform more or less optimally. In all simulations, I am injecting a merry mix of wavelets and spherical waves at different frequencies, so the displacement field is as complex as it can get. Last but not least, I modified the filters so that they also take the frequency-dependent exponential suppression of NN into account (because the TM is suspended some distance above ground):


The spiral array was already close to optimal, but the performance of the circular array did improve quite a bit (although 10 simulation runs may not be enough to compare this convincingly with the previous case).

  180   Fri Jan 28 11:22:34 2011 JanComputingSeismometryrealistic noise model -> many problems

So far, the test mass noise was white noise such that SNR = NN/noise was about 10. Now the simulation generates more realistic TM noise with the following spectrum:


The time series look like:


So the TM displacement is completely dominated by the low-frequency noise (which I cut off below 3 Hz to avoid divergent noise). None of the TM noise is correlated with NN. This should be true for aLIGO since it is suspension-thermal and radiation-pressure noise limited at the lowest frequencies, but who knows. If it were really limited by seismic noise, then we would also have to deal with the problem that NN and TM noise are correlated.

Anyway, changing to this more realistic TM noise means that nothing works anymore. The linear estimator tries to subtract the dominant low-frequency noise instead of NN. You cannot solve this problem simply by high-pass filtering the data; the NN subtraction problem becomes genuinely frequency-dependent. So what I will start to do now is to program a frequency-dependent linear estimator. I am really curious how well this is going to work. I also need to change my figures of merit. A simple plot of standard-deviation subtraction residuals will always look bad, because you cannot subtract any of the NN at the lowest frequencies (since the TM noise is so strong there). So I need to plot spectra of the subtraction residuals and make sure that they lie below or at least close to the TM noise spectrum.

  181   Fri Jan 28 14:50:27 2011 JanComputingSeismometryColored noise subtraction, a first shot

Just briefly, my first subtraction spectra:


Much better than I expected, but also not good enough. All spectra in this plot (except for the constant noise model) are averages over 10 simulation runs. The NN is the average NN, and the two "res." curves show the residual after subtraction. It seems that the frequency-dependent linear estimator is working, since the subtraction performance is consistent with the (frequency-dependent) SNR. Remember that the total integrated SNR = NN/noise is much smaller than 1 due to the low-frequency noise, and therefore you don't achieve any subtraction using the simple time-domain linear estimators. Now the final step is to improve the subtraction performance a little more. I don't have clever ideas how to do this, but there will be a way.

  182   Fri Jan 28 15:28:52 2011 JanComputingSeismometryHaha, success!

Instead of estimating in the frequency domain, I now have a filter that is defined in the frequency domain, but transformed into the time domain and then applied to the seismometer data. The filtered seismometer data can then be used for the usual time-domain linear estimators. The result is perfect:


So what's left on the list? Although we don't need this, "historically" I had interest in PCA. Although it is not required anymore, analyzing the eigenvalues of the linear estimators may tell us something about the number of seismometers that we need. And it is simply cool to understand estimation of information in seismic noise fields.
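For reference, a minimal sketch of the "define in frequency domain, transform to time domain" filter construction. This is a standard frequency-sampling design; the details here are my assumption, not the exact code used:

```python
import numpy as np

def fir_from_frequency_response(target_gain, n_taps):
    """Build FIR taps from a filter defined in the frequency domain.

    target_gain: desired (real, zero-phase) gain evaluated on the
                 rfft frequency grid of length n_taps//2 + 1.
    Returns n_taps time-domain coefficients that can be applied to
    the seismometer data before the time-domain linear estimation.
    """
    impulse = np.fft.irfft(target_gain, n=n_taps)
    # Center the impulse response to make it causal (a pure delay),
    # and window it to reduce ringing from the sharp sampling.
    impulse = np.roll(impulse, n_taps // 2)
    return impulse * np.hanning(n_taps)
```

Applying the taps with e.g. `np.convolve(data, taps, mode='same')` then produces the filtered channels that feed the usual time-domain estimator.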

  183   Fri Jan 28 21:09:56 2011 JanSummarySeismometryNN subtraction diagram

This is how Newtonian-noise subtraction works:


  185   Thu Mar 10 14:59:54 2011 JanDailyProgressSeismometryThoughts about how to optimize feed-forward for NN

If the plan is to use feed-forward cancellation instead of noise templates, then the way to optimize the array design is to understand where gravity perturbations are generated. The following plot shows a typical gravity-perturbation field as seen by the test mass. It is a snapshot at a specific moment in time. The gravity-perturbation force is projected onto the line along the arm (Y=0). Green means no gravity perturbation along the arm generated at this point.


The plot shows that the gravity perturbations along the direction of the arm seen by the test mass are generated very close to the test mass (most of it within a radius of 10 m), and that they are generated "behind" and "in front of" the mirror. This follows directly from projecting onto the arm direction. As we already know, for feed-forward we can completely neglect the existence of seismic waves and focus on the actual gravity perturbations. In short, for feed-forward you would place the seismometers inside the blue-red region and not worry about any locations in the green. The distance between seismometers should be equal to or less than the distance between the red and blue extrema. So even though I haven't simulated feed-forward cancellation yet, I already know how to make it work. Obviously, if subtraction goals are more ambitious than what we need for aLIGO, then feed-forward cancellation of NN would completely fail, generating more problems than it solves. Unless someone wants to deploy hundreds to a few thousand seismometers around each test mass.

  203   Fri May 13 10:52:27 2011 JanMiscSeismometrySTS-2 guts

I was flabbergasted when I saw this. There are many really good seismometers with very simple mechanical design and electronics. This is a nice one with complicated mechanics and electronics.

RA: Awesome.

Attachment 1: STS-2_dissassembly.pdf
STS-2_dissassembly.pdf
  291   Wed Aug 10 10:11:14 2011 Daniel, JanDailyProgressSeismometryHealing the GS-13

Sometimes people don't know how to handle seismometers like the STS-2 and GS-13. We got a GS-13 from LLO (broken in several ways). The first thing that got fixed was a broken leg (this had happened a while ago). The next problem was to replace the cheap flexures that you get when you buy the GS-13. They can bend when people forget to lock the mass, as you can see in the next picture:


Luckily, Brian Lantz et al. have designed a more robust flexure that can be used to replace the standard GS-13 flexures. There are two types of LIGO flexures: one for the three top flexures, one for the three bottom flexures. Unfortunately, Celine Ramet mailed us plenty of bottom flexures and only one top flexure. So we could make all substitutions at the bottom, but only one at the top (luckily there was only one bent flexure at the top). Eventually, we should get more top flexures from Celine.


Attachment 1: RIMG0048.JPG
  292   Wed Aug 10 10:37:41 2011 Daniel, JanDailyProgressSeismometryProtoype

The SS-1 Ranger has a new frame now that allows us to access the parts "during runtime" and operate it as a prototype.


For more details about the original design, search the e-log for entries from about a year ago. The original SS-1 design has a coil readout, which we want to replace with capacitive readouts. In any case, it turns out that the tiny coil wire was damaged, so the coil readout does not work anymore anyway.


Now we thought about how to implement the capacitive readout. We certainly won't start with the electronically most sophisticated way, but it is already a mechanical design problem. What seemed most clever to us is to use the conducting flexure rings as part of the capacitors. Their location also makes it easier to build additional capacitor plates around them. So we will probably start with a simple differential capacitive readout using three plates, like the STS-2 or T240 have. Once the mechanical design is good, we can copy it as many times as we like to other positions around the two flexures at the top and bottom of the mass. Then it won't be difficult to implement a more sophisticated readout. Luckily the calibration coil is still good, and we will eventually (try to) use it for force-feedback operation.

  345   Wed Sep 7 18:38:32 2011 Daniel, JanDailyProgressSeismometryGS-13 instrumental noise

We tried to measure the instrumental noise of the GS-13, amplified with our low-noise preamp and an SR560 in unity-gain mode. For this we locked the test mass. As it turned out, environmental influences still coupled into the signal.


Not only did jumping cause peaks in the signal, but magnetic fields did as well (e.g. from typing on a keyboard), which we verified by moving a magnet near the GS-13. To check that the signal was not induced merely by the motions we made while moving the magnet, we made the same motions without the magnet and could not see that signal again.

Having realised that, it somewhat explains the odd spectrum we got from the GS-13 with locked mass even without any bigger disturbances from jumps or magnets, as there are probably still vibrations coupling into the signal.



  106   Mon Nov 26 16:05:43 2007 Norna RobertsonMiscSUSMass of OMC silca bench (measured 16 Nov 2007)
For the record, here is a note sent round to OMC-SUS colleagues on 16th November.


A team of Chris, Chub, Helena and me, ably led by Sam (who did the scary lifting), got the OMC bench dismounted from its suspension and weighed today. It is now sitting on its custom plate with the dark side down (minus the preamp boxes) in the centre of the optics table in room 56. It will remain there for the foreseeable future. The metal parts of the suspension will be disassembled by Chris with Ken Mailand's help starting Monday.

Results of weighing

1) Optical bench including preamps and the wire bundle to the heater, PZT etc. = 6.514 kg
2) Preamp cable (mock-up) = 24.4 g (without the connectors to the preamp)
3) The two leaning towers of counterweights (which were situated close to the far corners of the mass - NOT above the holes where they need to be for fixing in place) = 200.2 g and 187.7 g

I remind you all that there are 2 more tombstones (fused silica, 8 mm thick x 30 mm tall x 32? mm wide, with an 18 mm diameter hole - size to be checked from the CAD drawing) and two pieces of black glass, 1 inch by 1 inch by 3 mm (I noted at our telecon that these weigh 8.5 g total), to be added to the bench - situated close to the DC photodiodes which sit on the bright side under the preamps.
  119   Wed Jan 23 22:52:21 2008 Norna RobertsonMiscSUSOMC SUS assembly adjustments made and lessons learnt
Calum and I have been working in the 40 m annex re-assembling and testing the OMC SUS after the parts were cleaned and baked and various parts modified. We had several problems which needed to be addressed, and made several observations of items which need attention for the next OMC. We also note several points which need to be followed when the OMC is being prepared at LLO for input into the vacuum chamber. The following are in no particular order.

1) Problem with one of the blades
When we got the double pendulum hanging today we noticed that one of the lower blades was interfering with part of the top mass - hitting a block which holds one of the screws for adjusting the position of the clamp for the upper wire (for adjusting pitch). The blade was 13S - one of those which had an angled clamp of 3.5 degrees (the larger of the angles - the blades at the other end of the bench have 3 degree angles). These large angles are being used to counteract the fact that the bench weighs more than these blades were designed for. The blades thus have a convex curvature (looking from above) and the crown of the bend was where the interference was occurring. The puzzling bit is why only one blade showed this, and not its partner. We tried another 3.5 degree clamp (in case it was a fault with the clamp) but saw the same effect. We then tried reducing to 3 degrees for that blade - still almost touching. We then reduced the angle for that blade to 2.5 degrees. This worked OK - and when the masses were hanging again all the blade tips appeared to be at approximately the same height (measured using a steel rule). So the dynamics should be OK. The puzzle remains why only one blade showed this behaviour. Note that for the LHO OMC we will be getting new blades designed for the mass they will be taking, so we should not need to use angled clamps as large as for the LLO suspension. Hence this interference problem should not arise again.
Lesson for the future - need to characterise all blades with the full range of angled clamps to be used. (We did not have time to do this for this suspension).

2) Test of new method of hanging silica bench (using metal bench)
Yesterday, with Ken Mailand's help, we tried out the new way of suspending the bench using Ken's bench holder with one of the metal benches and the two lab jacks. We learnt that to get the right range of height adjustment - to lift the bench to put the discs on the lower wire clamps and then lower the bench again - the mating pieces on the lab jacks had to mate with the central portion of the handles at each end rather than the top part of the handles. It also looks like the lower EQ stop holder (which sits under the bench) should be put in place before the bench is brought in - otherwise it is difficult to get into place. Finally, we note that this is a three-person job - one person on each end to raise and lower the lab jacks, and one to fit the discs and then guide the discs and clamps into the holes and check they are seated correctly as the bench is lowered.

3) Height of bench.
We measured the height of the lower surface of the metal bench above the optics table to be (160.5 - 38.75) = 121.75 mm. The requirement is 101.6 + 20 = 121.6 mm (+/- 2 mm), so we are OK. This was without any extra mass added to the top mass. We note that the tablecloth is close to the lower end of its range, but should be OK. However, the final alignment needs to be done when the OSEMs are in place.

4) EQ stops and parts which should be removed before putting into the vacuum chamber.
For sending the OMC to LLO we put double nuts on all the EQ stops. This limits the range of the stops, and in particular the ones under the bench were not long enough to reach the bench when double nuts are used. So we had to improvise the position of the EQ holder by putting in ~1/2 inch "shims" to raise the holder (the shims were three of the masses made for attaching to the metal bench to increase its mass). We also removed all "loose" parts - the adjustment mechanisms for the top blade yaw positions, the magnets and flags, the set screws for locking the larger vertical stops used to hold the top mass, and the screws used to alter the pitch adjustment clamp positions.
When the OMC is reassembled at LLO and made ready for installation, all the second nuts on the EQ stops should be removed. Also, once the pitch adjustment using the moveable clamps is set correctly, the screws for doing those adjustments should be removed again. Basically all loose screws should be removed.

5) Parts needing modifications
i) The EQ cross pieces holding the plate which goes over the bench need to be reduced in length to fit between the structure legs.
ii) Blind holes in the EQ corner brackets.

6) Points to note for the next OMC.
i) The slots in the structure which take the dowel pins for alignment need to be lengthened to allow the tablecloth the full range of movement which the slots for the attachment screws would allow.
ii) The targets for aligning OSEMs need a hole in the centre for the flag.

More to follow tomorrow.
We also took lots of pictures today; will put the relevant ones into the installation document.

OMC is due to be crated tomorrow for pick-up on Friday am.
  122   Tue Nov 17 10:33:37 2009 ZachMiscSUSAOSEM test progress

 It's dusty in here...



I was recently commissioned to do some noise measurements on the new AOSEMs. I set up a humble experiment in the LIGO e-lab to do some preliminary measurements:


I made a simple current-to-voltage converter out of an OP27E (using a 100-kohm feedback resistor) to use as the transimpedance amplifier for the readout. This results in a transimpedance of 0.1 V / uA. A simple schematic of the important elements is attached below.


DC power was provided, without regulators, directly from the laboratory DC supply. The 1.7 V across the LED was set such that the current through it was ~35 mA.


Rana and I took a few important PSDs (one of the DC supply, one of the OP27E with no supplied current, and two with the setup fully connected--one each with and without the PD covered), all from 250 mHz - 200 Hz, AC coupled. Using a sophisticated estimation method (called, by some, the "pick two points and approximate with a power law" method for lack of something fittingly elegant), we obtained a rough estimate of these spectral densities in order to compare them.


These were all converted into equivalent PD current noise. For all but the "supply" noise, this was done trivially by dividing by the transimpedance of the OP27E. For "supply", LED voltage noise had to be converted to PD current noise in the following way:


Z_LED = 1.7 V / 35 mA ~ 50 ohm


equivalent PD current noise = (I_PD / I_LED) * (measured supply voltage noise / 50 ohm)


where the PD-LED current ratio was found empirically to be (I_PD / I_LED) ~ 1 / 1000 by measuring the voltage out of the amp with full brightness (i.e. I_LED = 35 mA, no obstruction) and dividing by the transimpedance (see 2nd figure).
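A quick back-of-the-envelope check of this conversion, with the numbers quoted in the entry:

```python
# Values copied from this entry.
I_LED = 35e-3          # A, LED drive current
V_LED = 1.7            # V, measured across the LED
Z_LED = V_LED / I_LED  # ~48.6 ohm, quoted as ~50 ohm
ratio = 1.0 / 1000.0   # I_PD / I_LED, found empirically

def pd_current_noise(v_supply_asd):
    """Supply voltage noise (V/rt(Hz)) -> equivalent PD current noise (A/rt(Hz))."""
    return ratio * v_supply_asd / Z_LED

print(Z_LED)  # ~48.6 ohm
```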


The third figure below is a plot of these spectral densities in common units. Somewhat expectedly, the noise of the "dark" configuration seems limited by the supply noise. However, the "bright" line seems to be dominated by something else. I'm not sure I see how it could be anything but the LED itself, but it is worthwhile to repeat this "test" with a better setup.


On the to-do list:


1. Voltage regulator/reference

Rana thinks that the AD587LN is a good choice of reference given its performance on some LISA tests. I am in contact with AD, and there is no longer a 'LN' package, but I am trying to get samples of the currently manufactured one that is most similar (AD587KNZ).

In the meantime, I am going to find some simple regulators downstairs or at the 40m.


2. Bandpass filter

I was advised that it is a good idea to build your own high/bandpass filter instead of relying on the spectrum analyzer's AC coupling function. I will be doing just this.


3. Switch to a better op amp

        Like the AD743


4. Calibration

I need to find a good way to hold the OSEM in place while I stick something in there with a micron drive without it being unreliably shaky.







Attachment 1: schematic.jpg
Attachment 2: 2009-11-12_17.34.58.jpg
Attachment 3: noise_comparison.png
  123   Tue Nov 17 21:23:08 2009 KojiMiscSUSAOSEM test progress

We have LT1021-7 at the 40m, next to the Alberto's desk. This is the VREF for 7V.


1. Voltage regulator/reference

Rana thinks that the AD587LN is a good choice of reference given its performance on some LISA tests. I am in contact with AD, and there is no longer a 'LN' package, but I am trying to get samples of the currently manufactured one that is most similar (AD587KNZ).

In the meantime, I am going to find some simple regulators downstairs or at the 40m. 


  124   Thu Nov 19 03:41:12 2009 ZachMiscSUSAOSEM calibration

 Tonight, I calibrated the AOSEM's response in [A/m]. I used a sophisticated rig consisting of:


1. One of those anodized Faraday isolator mounts to hold the OSEM


2. A translation stage with a screw gauge to jam something into it (gracefully)


3. Some DC power supplies and multimeters


4. My simple transimpedance amplifier sketched in the previous post (I did not bother upgrading the readout circuit for this measurement since I was just taking a relatively rough DC measurement)




I used a 9/64 hex key to simulate the shadowmaking magnet. A picture of the setup is attached below.


Some pertinent info:


- The current through the LED was maintained at 35 mA throughout the measurement. The measured voltage across it was 1.62 V, giving Z_LED = 46.3 ohm.


- The op amp supply voltage was +/- 10 V, and the PD bias was +10 V.


- The output voltage of the amplifier with the PD fully lit was 3.04 V (measured before and after the test). Note that this voltage increases slightly as the key is inserted due to reflections. 



The second attachment is a plot of the photocurrent versus the position of the key (the x axis is shifted such that the key is roughly centered at x = 0, and x < 0 corresponds to the key being further inside). The response of the OSEM in the linear region is roughly 0.05 A/m.
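A sketch of how the linear-region response can be extracted from such a sweep (hypothetical data standing in for the measured photocurrent-vs-position curve; the real numbers come from the translation-stage measurement):

```python
import numpy as np

# Hypothetical sweep around the shadow center, assuming the
# 0.05 A/m linear region quoted in this entry.
x = np.linspace(-0.5e-3, 0.5e-3, 21)  # m, key positions
I_pd = 15e-6 + 0.05 * x               # A, photocurrent

# The OSEM response is the slope of photocurrent vs. position (A/m).
slope, offset = np.polyfit(x, I_pd, 1)
print(slope)  # ~0.05 A/m
```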

Attachment 1: 100_0362.JPG
Attachment 2: calibration_plot_11_18_09.png
  125   Thu Nov 19 12:00:04 2009 ZachMiscSUSbandpass filter

Attached is the transfer function for the bandpass filter I built for the AOSEM readout. The schematic is primitively outlined below. The corner frequencies are what they should be (HP: 100 mHz, LP: 1 kHz).

The Johnson noise for the 1 k resistor is roughly 4 nV/rt(Hz). Frank says that it doesn't make sense to use much lower than 1 k if I'm putting it into an SR785.



            10 uF              1 k
    in ----| |-------+-------/\/\/\/\------+------ out
                     |                     |
                     >                     |
              160 k  <                    === 160 nF
                     >                     |
                     |                     |
                    GND                   GND
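A quick sanity check of the corner frequencies and the Johnson noise quoted above, assuming the topology in the schematic (series 10 uF with 160 k shunt, then series 1 k with 160 nF shunt):

```python
import numpy as np

# RC corners of the passive bandpass, values from the schematic.
R_hp, C_hp = 160e3, 10e-6   # shunt R, series C -> high-pass corner
R_lp, C_lp = 1e3, 160e-9    # series R, shunt C -> low-pass corner

f_hp = 1 / (2 * np.pi * R_hp * C_hp)  # ~0.10 Hz
f_lp = 1 / (2 * np.pi * R_lp * C_lp)  # ~995 Hz

# Johnson noise of the 1 k resistor at room temperature.
k_B, T = 1.38e-23, 300.0
v_johnson = np.sqrt(4 * k_B * T * R_lp)  # ~4.1e-9 V/rt(Hz)
print(f_hp, f_lp, v_johnson)
```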


Attachment 1: BPF_bode_11_18_09.png
  126   Fri Nov 20 06:18:51 2009 ranaMiscSUSbandpass filter

Seems OK, as long as the current noise doesn't get you. To make sure, you have to terminate the input to this filter and then look at the resulting noise on the SR785.

  127   Sun Nov 22 16:02:09 2009 ZachMiscSUSAOSEM noise measurement

On Friday, Rana and I discovered that my transimpedance amp was oscillating like whoa at about 100 kHz. A little research showed this to be due to the input capacitance of the AD743 (~20 pF). To fix this, I put a 20-pF cap in parallel with the 100k feedback resistor, and that seemed to do the trick.
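A one-line check that this fix makes sense: the pole formed by the 20 pF across the 100 k rolls the transimpedance off just below the observed oscillation frequency.

```python
import math

# Values from this entry.
R_f = 100e3   # ohm, feedback resistor
C_f = 20e-12  # F, compensation capacitor

f_c = 1 / (2 * math.pi * R_f * C_f)
print(f_c)  # ~80 kHz, below the ~100 kHz oscillation
```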


The relevant circuitry is shown in attachment 1. +/- 12 V DC was provided by voltage regulators (7912, 78M12). The voltage across the LED was measured to be V_LED = 1.61 V, and the current through it was I_LED = 31.6 mA, giving Z_LED = 50.9 ohm. The voltage out of the amp with a fully lit PD was V_out = -2.83 V, giving a photocurrent of I_ph = 28.3 uA. 


I was concerned about noise that might be imposed by the bandpass filter, so I compared spectra I took with and without it (that is, AC coupled, no BPF, and DC coupled, with BPF). This comparison is shown in attachment 2. There appears to be no difference apart from the aliasing effects at low frequency.


After this, I took the real measurement, extending the range to 800 Hz, averaging 100x and with a linewidth of 1 Hz (I realize now that I should probably have done this with a smaller linewidth, so that I could see below ~1 Hz. I will repeat the measurement this week with better low-frequency resolution). The result can be seen in attachment 3, calibrated to displacement noise in m/rt(Hz) using the measured 0.05-A/m response of the OSEM in the linear region. The four lines are:


- Bright: noise in the OSEM with a fully lit PD

- Dark: noise in the OSEM with the LED off

- Amp: noise in the transimpedance amp with the input terminated

- V_LED: noise in the LED voltage


The first three spectra were taken at the output of the amplifier and calibrated back to meters using the transimpedance gain and OSEM response in A/m. The last was taken across the LED, and calibrated into meters using the values given in paragraph 2. All measurements were taken with the OSEM under a box and with the lights out.
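For reference, the calibration chain from measured voltage ASD back to displacement noise, using the numbers quoted in this entry:

```python
# Values copied from this entry.
Z_amp = 0.1e6   # V/A transimpedance (100 kohm feedback resistor)
resp = 0.05     # A/m OSEM response in the linear region

def volts_to_meters(v_asd):
    """V/rt(Hz) at the amp output -> m/rt(Hz) of equivalent motion."""
    return v_asd / Z_amp / resp

print(volts_to_meters(1e-6))  # 1 uV/rt(Hz) -> 2e-10 m/rt(Hz)
```

The LED-voltage spectrum instead goes through the supply-noise-to-PD-current conversion of entry 122 before dividing by the OSEM response.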


It appears that we are still limited by our setup. The "Dark" noise is coincident with the amplifier noise, while the "Bright" noise is coincident with the LED noise. That said, it is fairly comforting that all this noise is at the level of around 10^-10 m or less, as we can probably expect the true noise of the OSEM to be lower than this. We will know this for sure once we have a truly quiet setup (starting with ultra-low-noise voltage references).

Attachment 1: schematic.png
Attachment 2: BPF_noise_comparison.png
Attachment 3: 0-800.png
  128   Tue Nov 24 12:28:52 2009 ZachMiscSUSAOSEM measurement update

I was able to take some better measurements last night. I took data in two bands, 0-100 Hz and 0-1.6 kHz, each with 800 lines. This gives us a decent idea of what's going on at low and high frequency. Attached are four plots, two from each band. All measurements were taken with a box over both the OSEM and the readout circuit, and with the lights out.

The first two are low- and high- frequency comparisons of the noise in the full (bright) configuration as measured with no BPF and AC coupling vs with the BPF and DC coupling. There appears to be no difference apart from the expected effect above the pole at 1 kHz.

The next two are plots of the noise in various components and the full scheme, calibrated into equivalent displacement noise. Everything is below ~10^-10 m/rt(Hz) with the exception of line peaks, and again it would appear that we are limited by our measurement equipment.

Some notes:

- The "dark" noise seems to be coincident with the "amp" noise with the exception of some extra pickup that increases at high frequency (seems to be line-related).

- The "LED" noise is coincident with the "supply" noise up until its 8-Hz corner frequency, after which it falls off as expected until it hits an apparent floor around 100 Hz.

- The "bright" noise seems to be coincident with the "supply" noise, while the "dark" and "amp" are much lower. This could be because the supply noise only shows up when there is an appreciable voltage at the output of the amp.


I have to think about this for a bit, but the next logical step is to turn the measurement setup into something solid (i.e. soldering, enclosure, etc.).

Attachment 1: BPF_low.png
Attachment 2: BPF_high.png
Attachment 3: OSEM_low.png
Attachment 4: OSEM_high.png