SUS Lab eLog, Page 37 of 37
ID | Date | Author | Type | Category | Subject
172 | Sat Jan 22 20:19:51 2011 | Jan | Computing | Seismometry | Spatial spectra

All right. The next problem I wanted to look at was whether the ability of the seismic array to produce spatial spectra is somehow correlated with its NN subtraction performance. Whatever the result is, its implications are very important. Array design is usually done to maximize the accuracy of the spatial spectra it produces. So the general question is: what are our guidelines going to be, information theory or good spatial spectra? I have always been advertising the information-theory approach, but it is scary if you think about it, because the array may theoretically be good for nothing useful to seismology and yet still provide the information that we need for our purposes.

Ok, who wins? Again, the current state of the simulation is to produce plane waves all at the same frequency, but with varying speeds. The challenge is really the mode variation (i.e. varying speeds) and not so much varying frequencies; you can always switch to FFT methods as soon as you inject waves at a variety of frequencies. Also, I am simulating arrays of 20 seismometers that are randomly located within a 30m*30m area, including one seismometer that is always at the origin. One of my next steps will be to study the importance of array design. So here is how well these arrays can do in terms of measuring spatial spectra:

Map_3.jpg  Map_4.jpg

The circles indicate seismic speeds of {100,250,500,1000}m/s, and the white dots indicate the injected waves (representing two modes, one at 200m/s, the other at 600m/s). The results are not good at all (as bad as the maps from the geophone array that we had at LLO). It is not really surprising that the results are bad, since the seismometer locations are random, but I did not expect them to be so pathetic. Now, what about NN subtraction performance?

Performance_cNN_SNR10_A.jpg

The numbers indicate the count of simulation runs. The two spatial spectra above have indices 3 (left figure) and 4 (right figure). So you see that everything is fine with NN subtraction even when the spatial spectra are really bad. This is great news, since we are now deep in information-theory territory. We should not get too excited at this point, since we still need to make the simulation more realistic, but I think that we have produced a first powerful clue that the strategy to monitor seismic sources instead of the seismic field may actually work.

173 | Sun Jan 23 09:03:43 2011 | Jan | Computing | Seismometry | phase offset NN<->xi

I just want to follow up on my conclusion that a single seismometer cannot be used to do the filtering of horizontal NN at the surface. The reason is that there is a 90° phase delay of NN compared to ground displacement at the test mass. The first reaction to this should be: "Why the heck a phase delay? Wouldn't gravity perturbations become important before the seismic field reaches the TM?" The answer is surprising, but it is "No". The way NN builds up from plane waves does not show anything like a phase advance. Then you may say that whatever is true for plane waves must be true for any other field, since you can always expand your field into plane waves. This, however, is not true, for reasons I am going to explain in a later post. All right, but to say that seismic displacement is 90° ahead of NN really depends on which direction of NN you look at. The interferometer arm has a direction e_x. Now the plane seismic wave is propagating along e_k. Depending on e_k, you may get an additional "-" sign between seismic displacement and NN in the direction of e_x. This is the real show killer. If there were a universal 90° between seismic displacement and NN, then we could use a single seismometer to subtract NN: we would just take its data from 90° in the past. But now the problem is that we would need to look either 90° into the past or the future depending on the propagation direction of the seismic wave. Here are two plots of a single-wave simulation, the first with -pi/2<angle(e_x,e_k)<pi/2, the second with pi/2<angle(e_x,e_k)<3*pi/2:

TimeSeries_fwd.jpg  TimeSeries_bwd.jpg
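Purely as an illustration of the argument above (this toy just encodes the stated relation, it does not derive it): let the seismometer at the TM measure the vertical ground displacement, and let the NN along e_x be in quadrature with it, with a sign set by cos(angle(e_x,e_k)). A single-seismometer "filter" that simply delays the data by a quarter period then works for one half-space of propagation directions and has the wrong sign for the other.

```python
import numpy as np

f = 10.0                         # wave frequency [Hz], arbitrary toy value
w = 2 * np.pi * f
t = np.linspace(0, 0.3, 3000)

def toy(theta):
    """theta = angle(e_x, e_k). Returns (seismometer signal, NN along e_x)."""
    xi_z = np.cos(w * t)                               # vertical displacement at the TM
    nn_x = np.sign(np.cos(theta)) * np.sin(w * t)      # quadrature, sign flips with direction
    return xi_z, nn_x

xi_f, nn_f = toy(np.pi / 4)      # "forward" wave: NN lags the seismometer by 90 deg
xi_b, nn_b = toy(3 * np.pi / 4)  # "backward" wave: same magnitude, opposite sign
delayed = np.cos(w * (t - 0.25 / f))   # seismometer data taken a quarter period in the past
# `delayed` matches nn_f but is exactly -nn_b, which is the show killer described above
```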

 

174 | Sun Jan 23 10:27:07 2011 | Jan | Computing | Seismometry | spiral v. random

A spiral shape is a very good choice of array configuration for measuring spatial spectra; it produces little aliasing. How important is the array configuration for NN subtraction? Again: plane waves, wave speeds {100,200,600}m/s, 2D, SNR~10. The array response looks like Stonehenge:

Coherence_spiral.jpg  Spiral_resp.jpg

A spiral array does a fairly good job of measuring spatial spectra:

Map_6.jpg  Map_7.jpg

The injected waves are now represented by dots with radii proportional to the wave amplitudes (there is always a total of 12 waves, so some dots are too small to be seen). The spatial spectra are calculated from covariance matrices, so the theory is that the spatial spectra would get better with matched-filtering methods (another thing to look at next week...).
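For reference, a minimal sketch (not the actual simulation code) of how a spatial spectrum can be computed from the array covariance matrix with a classical Bartlett beamformer; the coordinates, analysis frequency and trial grid below are made-up example values.

```python
import numpy as np

def bartlett_spatial_spectrum(x, y, data, f0, fs, v_trial, az_trial):
    """Bartlett (classical) beamformer map from array data at a single frequency.

    x, y     : seismometer coordinates [m], shape (N,)
    data     : time series, shape (N, T)
    f0       : analysis frequency [Hz]
    fs       : sampling rate [Hz]
    v_trial  : trial phase speeds [m/s]
    az_trial : trial azimuths [rad]
    """
    N, T = data.shape
    t = np.arange(T) / fs
    z = data @ np.exp(-2j * np.pi * f0 * t)          # single-bin DFT per channel
    C = np.outer(z, z.conj())                        # sample covariance (one snapshot;
                                                     # in practice average over segments)
    P = np.zeros((len(v_trial), len(az_trial)))
    for i, v in enumerate(v_trial):
        for j, az in enumerate(az_trial):
            kx = 2 * np.pi * f0 / v * np.cos(az)
            ky = 2 * np.pi * f0 / v * np.sin(az)
            a = np.exp(-1j * (kx * x + ky * y)) / np.sqrt(N)   # steering vector
            P[i, j] = np.real(a.conj() @ C @ a)
    return P

# example coordinates: 20 sensors at random in a 30 m x 30 m area, one at the origin
rng = np.random.default_rng(0)
x = np.r_[0.0, rng.uniform(-15, 15, 19)]
y = np.r_[0.0, rng.uniform(-15, 15, 19)]
# P would then be plotted over (speed, azimuth) to produce maps like the ones above
```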

Now the comparison between NN subtraction using 20 seismometers, 19 of them randomly placed and one at the origin, and NN subtraction using 20 seismometers in a spiral:

Performance_cNN_SNR10_B_random.jpg  Performance_cNN_SNR10_B_spiral.jpg

A little surprising to me is that the NN subtraction performance is not substantially better with a spiral configuration of seismometers. The subtraction results show less variation, but this could simply be because the random configuration changes between simulation runs. So the result is that we don't need to worry much about array configuration, at least when all waves have the same frequency. We need to look at this again when we start injecting wavelets with more complicated spectra; then it is more challenging to ensure that we obtain information at all wavelengths. The next question is how much NN subtraction depends on the number of seismometers.

175 | Sun Jan 23 12:59:18 2011 | Jan | Computing | Seismometry | Less seismometers, less subtraction?

Considering equal areas covered by seismic arrays, the number of seismometers relates to the density of seismometers and therefore to the amount of aliasing when measuring spatial spectra. In the following, I considered four cases:

1) 10 seismometers randomly placed (as usual, one of them always at the origin)
2) 10 seismometers in a spiral winding one time around the origin
3) The same number winding two times around the origin (in which case the array does not really look like a spiral anymore):

Coherence_spiral_A2.jpg
4) And since isotropy issues start to become important, the fourth case is a circular array with one of the 10 seismometers at the origin and the others evenly spaced on the circle (a sketch of all four configurations follows below).
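A minimal sketch of how these four configurations might be generated (the 15 m scale is a placeholder, not necessarily the value used in the simulation):

```python
import numpy as np

def random_array(n, half_width=15.0, seed=0):
    """n sensors placed uniformly in a square, one pinned to the origin."""
    rng = np.random.default_rng(seed)
    xy = rng.uniform(-half_width, half_width, size=(n - 1, 2))
    return np.vstack([[0.0, 0.0], xy])

def spiral_array(n, r_max=15.0, windings=1):
    """n sensors on an Archimedean spiral winding `windings` times around the origin."""
    phi = np.linspace(0, 2 * np.pi * windings, n)
    r = r_max * phi / phi[-1]
    return np.column_stack([r * np.cos(phi), r * np.sin(phi)])

def circle_array(n, radius=15.0):
    """One sensor at the origin, the rest evenly spaced on a circle."""
    phi = 2 * np.pi * np.arange(n - 1) / (n - 1)
    return np.vstack([[0.0, 0.0],
                      np.column_stack([radius * np.cos(phi), radius * np.sin(phi)])])

configs = {
    "random":        random_array(10),
    "spiral_1turn":  spiral_array(10, windings=1),
    "spiral_2turns": spiral_array(10, windings=2),
    "circle":        circle_array(10),
}
```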

Just as a reminder, there was not much of a difference in NN subtraction performance when comparing spiral vs. random arrays in the case of 20 seismometers. Now we can check whether this is still the case for a smaller number of seismometers, and what the difference is between 10 seismometers and 20 seismometers. Initially we were flirting with the idea of using a single seismometer for NN subtraction, which does not work (for horizontal NN from surface fields), but maybe we can do it with a few seismometers around the test mass instead of 20 covering a large area. Let's check.

Here are the four performance graphs for the four cases (in the order given above):

Performance_cNN_SNR10_B_N10_random.jpg  Performance_cNN_SNR10_B_N10_spiral.jpg

Performance_cNN_SNR10_B_N10_A2_spiral.jpg  Performance_cNN_SNR10_B_N10_circ.jpg

All in all, the subtraction still works very well. We only need to subtract, say, 90% of the NN, but we still see average subtractions of more than 99%. That's great, but I expect these numbers to drop quite a bit once we add spherical waves and wavelets to the field. Although all arrays perform pretty well, the twice-winding spiral seems to be the best choice. Intuitively this makes a lot of sense: NN drops as 1/r^3 with the distance r to the TM, so you want to gather information more accurately from regions very close to the TM, which leads to the idea of increasing the seismometer density close to the TM. I am not sure though if this explanation is the correct one.

 

176 | Mon Jan 24 15:50:38 2011 | Jan | Computing | Seismometry | Multi-frequency and spherical

I had to rebuild some of the guts of my simulation to prepare it for the big changes that are to come later this week, so I only have two results to report today. The code can now take arbitrary waveforms. I tested it with spherical waves: I injected 12 spherical waves into the field, all originating 50m away from the test mass with arbitrary azimuths. The 12 waves are distributed over 4 frequencies, {10,14,18,22}Hz, with equal spectral density (so 3 waves per frequency). The displacement field is far more complex than the plane-wave fields and looks more like a rough endoplasmic reticulum:

Field_SW4f3.jpg

The spatial spectra are not so different from the plane-wave spectra:

Map_10Hz_SW4f3.jpg

The white dots now indicate the back-azimuth of the injected waves, not their propagation direction. And we can finally compare subtraction performance for plane-wave and spherical-wave fields:

Performance_Spiral20_PW4f3.jpg  Performance_Spiral20_SW4f3.jpg

Here the plane-wave simulation is done with 12 plane waves at the same 4 frequencies as the spherical waves, and in both cases I chose a 20-seismometer 4*pi spiral array. Note that the subtraction performance is pretty much identical even though the NN was generally stronger in the spherical-wave simulation (dots 5 and 20 in the right figure lie somewhere in between the upper right group of dots in the left figure). This makes me wonder if I shouldn't switch to some absolute measure of subtraction performance, so that the absolute value of the NN does not matter anymore. In the end, we don't want to achieve a subtraction factor, but a subtraction level (i.e. the target sensitivity of the GW detectors).

Anyway, the result is very interesting. I always thought that spherical waves (i.e. local sources) would make everything far more complicated; in fact, they do not. And the fact that the field consists of waves at 4 different frequencies also does not do much harm (subtraction performance decreased a little). Remember that I am currently using a single-tap FIR filter, if you want. I thought that you would need more taps once you have more frequencies. I was wrong. The next step is the wavelet simulation. This will eventually lead to a final verdict about single-tap vs. multi-tap filtering.
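For what it's worth, a minimal sketch of one way to realize such a field numerically, assuming 1/sqrt(r) geometric spreading for surface waves from a local source (the speed, amplitudes and grid are placeholders; this is not the actual simulation code):

```python
import numpy as np

def spherical_wave_field(X, Y, t, sources, v=200.0):
    """Vertical surface displacement from point sources on the surface.

    X, Y    : 2D coordinate grids [m]
    t       : time [s]
    sources : list of (x0, y0, amplitude, frequency) tuples
    v       : propagation speed [m/s] (placeholder)
    """
    xi = np.zeros_like(X)
    for x0, y0, a, f in sources:
        r = np.hypot(X - x0, Y - y0) + 1e-3            # avoid r = 0 at the source
        k = 2 * np.pi * f / v
        xi += a / np.sqrt(r) * np.cos(2 * np.pi * f * t - k * r)
    return xi

# 12 sources, 50 m from the origin at random azimuths, 3 per frequency
rng = np.random.default_rng(1)
az = rng.uniform(0, 2 * np.pi, 12)
freqs = np.repeat([10.0, 14.0, 18.0, 22.0], 3)
sources = [(50 * np.cos(a), 50 * np.sin(a), 1.0, f) for a, f in zip(az, freqs)]
```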

 

177 | Tue Jan 25 11:57:23 2011 | Jan | Computing | Seismometry | wavelets

Here is the hour of truth (I think). I ran simulations of wavelets. These are no longer characterized by a specific frequency, but by a corner frequency. The spectra of these wavelets almost look like a pendulum transfer function, where the resonance frequency now has the meaning of a corner frequency. The width of the peak at the corner frequency depends on the width of the wavelets. These wavelets propagate (without dispersion) from somewhere at some time into and out of the grid. There are always 12 wavelets at four different corner frequencies (the same as for the other waves in my previous posts). The NN now has the following time series:

Data_WA4f3.jpg

You can see that from time to time a stronger wavelet passes by and leads to a pulse-like excitation of the NN. Now, the first news is that the achieved subtraction factor drops significantly compared to the stationary cases (plane waves and spherical waves):

Perf_rel_Spiral20_WA4f3.jpg

And the 4*pi, 10-seismometer spiral dropped below an average factor of 0.88. But I promised to introduce an absolute figure of merit to quantify subtraction performance. What I am now doing is to subtract the filtered array NN estimate from the real NN and take the standard deviation of the residual. The standard deviation of the residual NN should not be larger than the standard deviation of the other noise that is part of the TM displacement. In addition to NN, I add noise with a standard deviation of 1e-16 to the TM motion. Here is the absolute filter performance:

Perf_abs_Spirals_WA4f3.jpg

As you can see, subtraction still works sufficiently well! I am pretty much puzzled, since I did not expect this at all. Ok, the subtraction factors decreased a lot, but they are still good enough. REMINDER: I am using a SINGLE-TAP (multi-input-channel) Wiener filter to do the subtraction. It is amazing. Ideas to make the problem even more complex and to challenge the filter even more are welcome.
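As a reference for what "single-tap, multi-channel Wiener filter" and the absolute figure of merit mean here, a minimal sketch with placeholder data shapes (not the simulation's actual variables):

```python
import numpy as np

def single_tap_wiener(seis, target):
    """Optimal static linear combination of N seismometer channels.

    seis   : array (N, T) of seismometer time series
    target : array (T,) of NN time series
    Returns the per-channel coefficients w minimizing <(target - w.seis)^2>.
    """
    C = seis @ seis.T / seis.shape[1]       # N x N channel covariance
    c = seis @ target / seis.shape[1]       # cross-covariance with the target
    return np.linalg.solve(C, c)

def subtraction_figures_of_merit(seis, nn, other_noise_std=1e-16):
    """Relative subtraction factor and the absolute residual criterion."""
    w = single_tap_wiener(seis, nn)
    residual = nn - w @ seis
    rel = np.std(residual) / np.std(nn)          # relative subtraction factor
    ok = np.std(residual) <= other_noise_std     # residual below the other TM noise?
    return rel, np.std(residual), ok
```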

 

178 | Wed Jan 26 10:34:53 2011 | Jan | Summary | Seismometry | FIR filters and linear estimation

I wanted to write down what I learned from our filter discussion yesterday. There seem to be two different approaches, but the subject is sufficiently complex to be wrong about details. Anyway, I currently believe that one can distinguish between real filters that operate during run time, and estimation algorithms that cannot be implemented in this way since they are acausal. For simplicity, let's focus on FIR filters and linear estimation to represent the two cases.

A) FIR filters

FIR.jpg

An FIR filter has M tap coefficients per channel. If the data is sampled, then you would take the past M samples (including the sample at the present time t) of each channel, run them through the FIR and subtract the FIR output from the test-mass sample at time t. This can also be implemented in a feed-forward system so that the test-mass data is not sampled at all; test-mass data is only used initially to calculate the FIR coefficients, unless the FIR is part of an adaptive algorithm. For adaptive filters, you would factor out anything from the FIR that you know already (e.g. your best estimates of transfer functions) and only let it do the optimization around this starting value.
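A minimal sketch of what running such an M-tap-per-channel FIR looks like at run time (channel count, tap number and data are placeholders, not values from the text):

```python
import numpy as np

def apply_multichannel_fir(seis, taps):
    """Causal feed-forward prediction from N channels with M taps each.

    seis : array (N, T), sampled seismometer data
    taps : array (N, M), FIR coefficients per channel (taps[:, 0] acts on the
           present sample, taps[:, m] on the sample m steps in the past)
    Returns the predicted test-mass contribution at each time t >= M-1.
    """
    N, T = seis.shape
    M = taps.shape[1]
    pred = np.zeros(T)
    for tt in range(M - 1, T):
        window = seis[:, tt - M + 1:tt + 1][:, ::-1]   # past M samples, newest first
        pred[tt] = np.sum(taps * window)
    return pred

# at run time one would subtract `pred` from the test-mass channel, or feed it
# forward to an actuator so the test-mass data itself is never touched
```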

The FIR filter can only work if transfer functions do not change much over time. This is not the case though for Newtonian noise. Imagine the following case:

(S1)-----(TM)----------(S2)

where you have two seismometers around a test mass along a line; one of them can be closer to the test mass than the other. We need to monitor the vertical displacement to estimate the NN parallel to the line (at least when surface fields are dominant). If a plane wave propagates upwards, perpendicular to the line, then there will be no NN parallel to this line (because of symmetry), and the seismic signals at S1 and S2 are identical. Now a plane wave propagating parallel to the line will produce NN. If the distance between the seismometers happens to equal the wavelength of the plane wave, then again the seismometers will show identical seismic signals, but this time there is NN. An FIR filter would give the same NN prediction in these two cases, but the NN is actually different (being absent in the first case). So it is pretty obvious that an FIR alone cannot handle this situation.

What is the purpose of the FIR anyway? In the case of noise subtraction, it is a clever time-domain representation of transfer functions; clever means optimal if the FIR is a Wiener filter. So it contains information about the channels between sensors and test mass, but it does not care at all about the information content of the sensor data. This information is (intentionally, if you want) averaged out when you calculate the FIR filter coefficients.

B) Linear estimation

Wiener.jpg

So how do we deal with the information content in sensor data from multiple input channels? We will assume that an FIR can be applied to factor the transfer functions out of this problem. In the surface-NN case, these would be the 1/f^2 from NN acceleration to test-mass displacement, and the factor exp(-2*pi*f*h/c), h being the height of the test mass above ground, which accounts for the frequency-dependent exponential suppression of NN. Since the information content of the seismic field changes continuously, we cannot train a filter that would be able to represent this information for all times. So it is obvious that this information needs to be updated continuously.

The problem is very similar to GW data analysis. What we are going to do is construct an NN template that depends on a few template parameters. We estimate these parameters (maximum likelihood) and then subtract our best estimate of the NN signal from the data. This cannot be implemented as feed-forward and relies on chopping the data into stretches of M samples (not necessarily the same value of M as in the FIR case). Now what are the template parameters? They are the coefficients used to combine the data stretches of the N sensors. This is great, since the templates depend linearly on these parameters, and it is trivial to calculate the maximum-likelihood estimates of the template parameters. The formula is in fact analogous to calculating the Wiener-filter coefficients (optimal linear estimates). Whether we only use one parameter per channel (as discussed yesterday) or whether one should rather chop the sensor data into even smaller stretches and introduce additional template coefficients will depend on the sensor data and how nature links them to the test mass. Results of my current simulation suggest that only one parameter per channel is required.
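A minimal sketch of the per-stretch estimate with one template coefficient per channel (ordinary least squares, which is the maximum-likelihood estimate for white noise); the stretch length and data are placeholders:

```python
import numpy as np

def subtract_per_stretch(seis, tm, M):
    """For each stretch of M samples, fit one coefficient per channel and
    subtract the best-fit NN template from the test-mass data.

    seis : array (N, T) seismometer channels (already corrected for the known
           transfer functions by an FIR, as described above)
    tm   : array (T,) test-mass data containing NN plus other noise
    """
    N, T = seis.shape
    residual = tm.copy()
    for start in range(0, T - M + 1, M):
        S = seis[:, start:start + M]          # N x M data stretch
        d = tm[start:start + M]
        # maximum-likelihood (least-squares) template coefficients for this stretch
        a, *_ = np.linalg.lstsq(S.T, d, rcond=None)
        residual[start:start + M] = d - a @ S
    return residual
```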

When I realized that the NN subtraction is a linear estimation problem with templates etc., it also became clear that one could do higher-order noise subtraction, so that we would never be limited by other contributions to the test-mass displacement (and here I essentially mean GWs, since you don't need to subtract NN below other GWD noise, but maybe below the GW spectrum if other instrumental noise is also weaker). Something to look at in the future (whether this scenario, i.e. NN > GW > other noise, is likely or not).

179 | Thu Jan 27 14:51:41 2011 | Jan | Computing | Seismometry | approaching the real world / transfer functions

The simulation is not a good representation of a real detector. The first step to make it a little more realistic is to simulate variables that are actually measured. For example, instead of using TM acceleration in my simulation, I need to simulate TM displacement. This is not a big change in terms of simulating the problem, but it forces me to program filters that correct the seismometer data for any transfer functions between seismometers and GWD data before the linear estimation is calculated. This has been programmed now. Just to mention, the last more important step to make the simulation more realistic is to simulate seismic and thermal noise as additional TM displacement; currently, I am only adding white noise to the TM displacement. If the TM displacement noise is not white, then one has to modify the optimal linear estimator in the usual way (correlations substituted by integrals in the frequency domain using frequency-dependent noise weights).
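A minimal sketch of the kind of frequency-domain correction meant here, using the two factors named in the previous entry, 1/f^2 (acceleration to displacement) and exp(-2*pi*f*h/c) (height suppression); the height, wave speed and high-pass corner are placeholder numbers, and overall constants and signs are omitted:

```python
import numpy as np

def corrected_seismometer_data(seis, fs, h=1.0, c=200.0, f_hp=5.0):
    """Apply the frequency-dependent factors relating seismometer data to the
    NN contribution in the TM displacement channel (constants omitted).

    seis : array (N, T) seismometer time series
    fs   : sampling rate [Hz]
    h    : TM height above ground [m]   (placeholder value)
    c    : seismic wave speed [m/s]     (placeholder value)
    f_hp : high-pass corner [Hz] to tame low-frequency numerical blow-up
    """
    N, T = seis.shape
    f = np.fft.rfftfreq(T, d=1.0 / fs)
    f_safe = np.where(f > 0, f, np.inf)               # avoid division by zero at DC
    tf = np.exp(-2 * np.pi * f * h / c) / (2 * np.pi * f_safe) ** 2
    hp = (f / f_hp) ** 4 / (1 + (f / f_hp) ** 4)      # crude high-pass weight
    S = np.fft.rfft(seis, axis=1) * (tf * hp)
    return np.fft.irfft(S, n=T, axis=1)
```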

I am now also applying 5Hz high-pass filters here and there to reduce numerical errors accumulating in the time-series integrations. The next three plots are just a check that the results still make sense after all these changes. The first plot shows the subtraction residuals without correcting for any frequency dependence in the transfer functions between TM displacement and seismometer data:

Perf_abs_WA4f2_SW4f1_noTF.jpg

The dashed line indicates the expected minimum of the NN subtraction residuals, which is determined by the TM-displacement noise (which in reality would be seismic noise, thermal noise and GWs). The next plot shows the residuals if one applies filters that take the conversion from TM acceleration into displacement into account:

Perf_abs_WA4f2_SW4f1_acc.jpg

This is already sufficient for the spiral array to perform more or less optimally. In all simulations, I am injecting a merry mix of wavelets and spherical waves at different frequencies, so the displacement field is as complex as it can get. Last but not least, I modified the filters so that they also take into account the frequency-dependent exponential suppression of NN (because the TM is suspended some distance above ground):

Perf_abs_WA4f2_SW4f1_acex.jpg

The spiral array was already close to optimal, but the performance of the circular array did improve quite a bit (although 10 simulation runs may not be enough to compare this convincingly with the previous case).

180 | Fri Jan 28 11:22:34 2011 | Jan | Computing | Seismometry | realistic noise model -> many problems

So far, the test mass noise was white noise such that SNR = NN/noise was about 10. Now the simulation generates more realistic TM noise with the following spectrum:

NoiseModel_TM.jpg

The time series look like:

Data_WA4f2_SW4f1.jpg

So the TM displacement is completely dominated by the low-frequency noise (which I cut off below 3Hz to avoid divergent noise). None of the TM noise is correlated with NN. This should be true for aLIGO, since it is limited by suspension-thermal and radiation-pressure noise at the lowest frequencies, but who knows. If it was really limited by seismic noise, then we would also have to deal with the problem that NN and TM noise are correlated.

Anyway, changing to this more realistic TM noise means that nothing works anymore. The linear estimator tries to subtract the dominant low-frequency noise instead of the NN. You cannot solve this problem simply by high-pass filtering the data; the NN subtraction problem becomes genuinely frequency-dependent. So what I will start to do now is to program a frequency-dependent linear estimator. I am really curious how well this is going to work. I also need to change my figures of merit. A simple plot of standard-deviation subtraction residuals will always look bad, because you cannot subtract any of the NN at the lowest frequencies (where the TM noise is so strong). So I need to plot spectra of the subtraction residuals and make sure that they lie below or at least close to the TM noise spectrum.

181 | Fri Jan 28 14:50:27 2011 | Jan | Computing | Seismometry | Colored noise subtraction, a first shot

Just briefly, my first subtraction spectra:

Subtraction_ff.jpg

Much better than I expected, but also not good enough. All spectra in this plot (except for the constant noise model) are averages over 10 simulation runs. The NN is the average NN, and the two "res." curves show the residuals after subtraction. It seems that the frequency-dependent linear estimator is working, since the subtraction performance is consistent with the (frequency-dependent) SNR. Remember that the total integrated SNR = NN/noise is much smaller than 1 due to the low-frequency noise, and therefore you don't achieve any subtraction using the simple time-domain linear estimators. Now the final step is to improve the subtraction performance a little more. I don't have clever ideas how to do this, but there will be a way.

182 | Fri Jan 28 15:28:52 2011 | Jan | Computing | Seismometry | Haha, success!

Instead of estimating in the frequency domain, I now have a filter that is defined in the frequency domain, but transformed into the time domain and then applied to the seismometer data. The filtered seismometer data can then be used for the usual time-domain linear estimators. The result is perfect:

Subtraction_ff.jpg
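A minimal sketch of the scheme described above, assuming the frequency-domain weight is something like the inverse amplitude spectral density of the TM noise (the actual weighting used is not specified here; filter length and taper are placeholders):

```python
import numpy as np

def prefilter_channels(seis, fs, weight_of_f, n_fir=512):
    """Design a filter in the frequency domain, transform it to a time-domain
    impulse response, and apply it to each seismometer channel.

    seis        : array (N, T) seismometer time series
    weight_of_f : callable, desired amplitude response W(f), e.g.
                  lambda f: 1.0 / tm_noise_asd(f)  (hypothetical helper)
    """
    f = np.fft.rfftfreq(n_fir, d=1.0 / fs)
    W = weight_of_f(f)                                # zero-phase target response
    h = np.fft.fftshift(np.fft.irfft(W, n=n_fir))     # centered (acausal) impulse response
    h *= np.hanning(n_fir)                            # taper to limit ringing
    return np.array([np.convolve(ch, h, mode="same") for ch in seis])

# the filtered channels then go into the usual time-domain linear estimator
```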

So what's left on the list? "Historically" I had an interest in PCA; although it is not required anymore, analyzing the eigenvalues of the linear estimators may tell us something about the number of seismometers that we need. And it is simply cool to understand the estimation of information in seismic noise fields.
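For the PCA idea, a minimal sketch: the eigenvalue spectrum of the seismometer covariance matrix indicates how many independent degrees of freedom the array actually senses, which is one way to bound the number of useful seismometers (this is an illustration, not the analysis referred to above).

```python
import numpy as np

def array_principal_components(seis):
    """Eigenvalues of the N x N seismometer covariance matrix, largest first."""
    C = np.cov(seis)                       # seis has shape (N, T)
    evals = np.linalg.eigvalsh(C)[::-1]
    frac = np.cumsum(evals) / np.sum(evals)   # variance captured by leading k components
    return evals, frac
```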

183 | Fri Jan 28 21:09:56 2011 | Jan | Summary | Seismometry | NN subtraction diagram

This is how Newtonian-noise subtraction works:

NN_Filter.jpg

185 | Thu Mar 10 14:59:54 2011 | Jan | DailyProgress | Seismometry | Thoughts about how to optimize feed-forward for NN

If the plan is to use feed-forward cancellation instead of noise templates, then the way to optimize the array design is to understand where the gravity perturbations are generated. The following plot shows a typical gravity-perturbation field as seen by the test mass. It is a snapshot at a specific moment in time, with the gravity-perturbation force projected onto the line along the arm (Y=0). Green means that no gravity perturbation along the arm is generated at this point.

Snap_41.jpg

The plot shows that the gravity perturbations along the direction of the arm seen by the test mass are generated very close to the test mass (most of them within a radius of 10m), and that they are generated "behind" and "in front of" the mirror. This follows directly from projecting onto the arm direction. As we already know, for feed-forward we can completely neglect the existence of seismic waves and focus on the actual gravity perturbations. In short, for feed-forward you would place the seismometers inside the blue-red region and not worry about any locations in the green. The distance between seismometers should be equal to or less than the distance between the red and blue extrema. So even though I haven't simulated feed-forward cancellation yet, I already know how to make it work. Obviously, if the subtraction goals are more ambitious than what we need for aLIGO, then feed-forward cancellation of NN would completely fail, generating more problems than it solves, unless someone wants to deploy hundreds to a few thousand seismometers around each test mass.
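As a sanity check of where the contribution comes from, one can evaluate the usual thin-sheet approximation, in which vertical surface displacement xi_z produces a surface mass density rho*xi_z and the along-arm gravity perturbation at the TM is a weighted integral over the surface; the sketch below only illustrates that bookkeeping, with placeholder numbers.

```python
import numpy as np

G = 6.674e-11      # m^3 kg^-1 s^-2
rho = 1800.0       # ground density [kg/m^3], placeholder
h = 1.5            # TM height above the surface [m], placeholder

def nn_ax_density(x, y, xi_z):
    """Per-area contribution to the along-arm gravity acceleration at the TM
    (TM above the origin) from vertical surface displacement xi_z(x, y)."""
    r3 = (x**2 + y**2 + h**2) ** 1.5
    return G * rho * xi_z * x / r3      # the x/r^3 projection makes regions "behind"
                                        # and "in front of" the TM dominate

# integrating nn_ax_density over a grid (times the cell area) gives a_x;
# plotting it as a map reproduces the blue-red pattern discussed above
```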

203 | Fri May 13 10:52:27 2011 | Jan | Misc | Seismometry | STS-2 guts

I was flabbergasted when I saw this. There are many really good seismometers with very simple mechanical design and electronics. This is a nice one with complicated mechanics and electronics.

RA: Awesome.

291 | Wed Aug 10 10:11:14 2011 | Daniel, Jan | DailyProgress | Seismometry | Healing the GS-13

Sometimes people don't know how to handle seismometers like the STS-2 and GS-13. We got a GS-13 from LLO (broken in several ways). The first thing that got fixed was a broken leg (this had happened already a while ago). The next problem was to replace the cheap flexures that you get when you buy the GS-13; they can bend when people forget to lock the mass, as you can see in the next picture:

RIMG0049.JPG

Luckily, Brian Lantz et al. have designed a more robust flexure that can be used to replace the standard GS-13 flexures. There are two types of LIGO flexures, one type for the three top flexures and one for the three bottom flexures. Unfortunately, Celine Ramet mailed us plenty of bottom flexures and only one top flexure, so we could make all substitutions at the bottom, but only one at the top (luckily there was only one bent flexure at the top). Eventually, we should get more top flexures from Celine.

RIMG0053.JPG  RIMG0055.JPG


292 | Wed Aug 10 10:37:41 2011 | Daniel, Jan | DailyProgress | Seismometry | Prototype

The SS-1 Ranger has a new frame now that allows us to access the parts "during runtime" and operate it as a prototype.

RIMG0050.JPG  RIMG0132.JPG

For more details about the original design, search the e-log for entries from about one year ago. The original SS-1 design has a coil readout; we want to replace it with a capacitive readout. In any case, it turns out that the tiny coil wire was damaged, so the coil readout does not work anymore anyway.

RIMG0133.JPG  RIMG0134.JPG

Now we thought about how to implement the capacitive readout. We certainly won't start with the most sophisticated electronics, but it is already a mechanical design problem. What seemed most clever to us is to use the conducting flexure rings as part of the capacitors; their location also makes it easier to build additional capacitor plates around them. So we will probably start with a simple differential capacitive readout using three plates, like the STS-2 or T240 have. Once the mechanical design is good, we can copy it as many times as we like to other positions around the two flexures at the top and bottom of the mass. Then it won't be difficult to implement a more sophisticated readout. Luckily the calibration coil is still good, and we will eventually (try to) use it for force-feedback operation.

345 | Wed Sep 7 18:38:32 2011 | Daniel, Jan | DailyProgress | Seismometry | GS-13 instrumental noise

We tried to measure the instrumental noise of the GS-13, amplified with our low-noise preamp and an SR560 in unity-gain mode. To do so, we locked the test mass. As it turned out, environmental influences still coupled into the signal.

 Magnet.png

Not only does jumping cause peaks in the signal; magnetic fields do as well (e.g. typing on a keyboard), which we confirmed by moving a magnet near the GS-13. To verify that the signal was not simply induced by the motion involved in moving the magnet, we repeated the same movements without the magnet and did not see the signal again.

Having realised that, it may explain the odd spectrum we got from the GS-13 with locked mass even without bigger disturbances from jumps or magnets, as there are probably still vibrations coupling into the signal.

 GS13_locked.png

 

738 | Tue Oct 15 17:56:39 2013 | nicolas | Lab Infrastructure | Si Cantilever | bake successful, going to bake more

Cryo cantilever dewar.

Here is the pressure trend (30 hours) after turning off my heat tape.

afterbake.png

It quickly drops to ~ 1 x 10^-5 torr. Before the bake, it was never getting below 4 or 5 x10^-5 torr.

Because of this success, and because no work will be done on this chamber for the next week and a half, I will keep baking to further remove contamination.

New bake started at 1065921266.

740 | Tue Oct 15 18:05:32 2013 | nicolas | Lab Infrastructure | Si Cantilever | probably a leak in my pump line

This is the pressure trend when I close off the valve between the pump line and the cryostat. I am not totally sure how to interpret this, but the fact that the pressure didn't go much lower (like the 10^-6 region) when the pump is only pumping the line makes me think that there may be a leak in my line. Something to investigate.

lineleak.png

742 | Wed Oct 23 10:02:26 2013 | Evan | DailyProgress | Si Cantilever | Cryo cantilever dewar: heat tape turned off

I turned off the heat tape on the cryo cantilever dewar at GPS time 1066582781. The pressure was 1.673 x 10^-4 torr and the temperature was 76.8 °C.

749 | Mon Nov 18 19:18:36 2013 | nicolas | DailyProgress | Si Cantilever | Cantilever is back in the dewar

I've glued the heater and temperature sensor to the SS block and wired them to the outside through the feedthrough.

I set up the laser/split diode readout. The cantilever is flapping and being recorded. GPS of start of excitation is 1068866140.

The vacuum gauge reads 4e-7 torr. I am not convinced this is possible, but I did bake the chamber over the weekend.

767 | Wed Dec 18 18:04:02 2013 | nicolas | DailyProgress | Si Cantilever | Temperature sensing and control of silicon cantilever cryostat

So I have an RTD inside the cryostat which is glued to the steel clamping block. Its resistance is 100 Ohm at 273 K and grows by 0.385 Ohm/K from there.

I have a protoboard which takes a signal from the cymac and converts it into a proportional current, then I measure the voltage across the resistor in the cymac. This is done with 4 leads to remove effects of the cabling in the cryostat.

I use the cymac to drive a 1mA current at ~210Hz, and do a digital lockin demodulation of the returned voltage to estimate the resistance, and hence temperature.

I have a second cymac output which drives a power amplifier that is hooked up to a second resistor which acts as a heater, also glued to the steel block.
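For reference, a minimal sketch of the digital lock-in demodulation described above (the drive amplitude, frequency and RTD constants come from the text; the sampling rate and the crude low-pass by averaging are placeholders):

```python
import numpy as np

fs = 16384.0        # sampling rate [Hz], placeholder
f_mod = 210.0       # excitation frequency [Hz]
i_drive = 1e-3      # drive current amplitude [A]

def rtd_temperature(v_meas, t):
    """Lock-in estimate of the RTD resistance and temperature.

    v_meas : measured 4-wire voltage time series [V]
    t      : time vector [s] matching v_meas
    """
    i_ref = np.sin(2 * np.pi * f_mod * t)     # in-phase reference
    q_ref = np.cos(2 * np.pi * f_mod * t)     # quadrature reference
    vi = 2 * np.mean(v_meas * i_ref)          # crude low-pass: average over the record
    vq = 2 * np.mean(v_meas * q_ref)
    v_amp = np.hypot(vi, vq)                  # voltage amplitude at f_mod
    r = v_amp / i_drive                       # RTD resistance [Ohm]
    temp = 273.0 + (r - 100.0) / 0.385        # 100 Ohm at 273 K, 0.385 Ohm/K
    return r, temp
```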

This picture shows the resistance (blue) as a function of time when I apply about 1 Watt of heating (green) for a few minutes. The blue curve is not correctly calibrated, but I will calibrate it to Kelvin when I put in some LN2.

tempcontrol.png

769 | Wed Jan 8 13:59:28 2014 | nicolas | Misc | Si Cantilever | Temperature

The room temperature in the lab at 1073253434 is 22.8degC.

The raw counts out of the RTD in channel X1:SCQ-TEMPERATURE_CONTROL_INMON is 1044.

770 | Wed Jan 15 18:47:57 2014 | nicolas | DailyProgress | Si Cantilever | First temperature cycle results

I put LN2 in the small dewar until it was at equilibrium.

This was done without the cold shield of the upper reservoir, so the hold time should increase when I start using it.

Some results

The hold time is about 48 hours.
The cooling time constant is about 10 hours.
I was only able to reach about 100K (with a rough calibration). This may be due to thermal conduction through my sensor wires. I will tie these to the cold plate for the next go-around. Also, there may have been thermal gradients on the radiation shield due to the fact that I was only using one shield.
My little mirrors that are supposed to reflect the laser beam out of the cryostat shattered into a million pieces. I guess stuff I get from the craft supply store isn't cryogenically rated? Alastair had previously suggested getting a mirror polish directly on the aluminum piece that was holding the mirrors. I will look into that.

Afterward I did an honest dunk test with the clamping block in a little plastic bucket. I was able to see that the sensor got to LN2 temperature (80K with the rough cal). I also dunked it in ice water and used these two points to make a more precise calibration.

I must say I'm pretty pleased with my temperature readout, it seems pretty robust and low noise.

214 | Mon May 30 02:09:53 2011 | Vladimir Dergachev | DailyProgress | Tiltmeter | Noise spectrum after cleaning
First useful spectrum after cleaning. It appears to be at least as good as before. This plot uses the pre-cleaning calibration; it should not have changed too much, but I'll try doing another calibration after collecting more data.
328 | Tue Aug 23 22:38:10 2011 | Vladimir Dergachev | Noise Hunting | Tiltmeter | 0.1-1 Hz noise
It turns out the noise in the 0.1-1 Hz region was due to both seismic and DAC noise. After using a low-pass filter on the output I was able to capture the spectrum in the plot below. The dip at 0.6 Hz goes below 1e-9 rad/sqrt(Hz) and shows up in both the seismic and tilt channels. Now to find a way to make a better coil hookup...
330 | Wed Aug 24 07:37:33 2011 | Vladimir Dergachev | Noise Hunting | Tiltmeter | 0.1-1 Hz noise

Quote:
It turns out the noise in the 0.1-1 Hz region was due to both seismic and DAC noise. After using a low-pass filter on the output I was able to capture the spectrum in the plot below. The dip at 0.6 Hz goes below 1e-9 rad/sqrt(Hz) and shows up in both the seismic and tilt channels. Now to find a way to make a better coil hookup...

 I am suspicious that this is not the true tilt noise, but instead the in-loop spectrum as before. Is this corrected for the loop shape and the tiltmeter's mechanical response?

659 | Mon Jul 1 17:17:35 2013 | nicolas | Lab Infrastructure | Vacuum | Some vacuum pumps that we saw in the Ogin lab

We went into the Ogin lab (the future Crackle Lab) and found a few things that looked like vacuum pumps. There are two roughing pumps (one has the word LEAKS written on it), and most of a turbo pump. They all look like replacement parts for the presumably working pump setup that is hooked up to the thermal noise chamber in that room.

 

link


733 | Wed Oct 9 18:40:35 2013 | nicolas | Misc | Vacuum | cantilever dewar leak rate data

To measure the leak rate, I valved off the pump, let the pressure rise, and wrote down some numbers:

 

t (m:ss) | P (torr)
0:00       4.5e-4
0:30       2.6e-3
1:00       3.6e-3
2:30       6.5e-3
5:00       1.1e-2
734 | Fri Oct 11 13:31:56 2013 | nicolas | Misc | Vacuum | cantilever dewar leak rate data

Quote:

To measure the leak rate, I valved off the pump, let the pressure rise, and wrote down some numbers:

 

t (m:ss) | P (torr)
0:00       4.5e-4
0:30       2.6e-3
1:00       3.6e-3
2:30       6.5e-3
5:00       1.1e-2

 2 days later I got this:

t (m:ss) | P (torr)
0:00       5.5e-4
0:30       1.4e-3
1:00       1.9e-3
2:30       3.2e-3
5:00       5.3e-3
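A quick way to turn these numbers into a pressure rise rate is a linear fit; converting that into an actual leak rate in torr*L/s would also need the chamber volume, which isn't given here.

```python
import numpy as np

# (time [s], pressure [torr]) from the two valve-off tests above
day1 = [(0, 4.5e-4), (30, 2.6e-3), (60, 3.6e-3), (150, 6.5e-3), (300, 1.1e-2)]
day2 = [(0, 5.5e-4), (30, 1.4e-3), (60, 1.9e-3), (150, 3.2e-3), (300, 5.3e-3)]

for label, data in [("day 1", day1), ("day 2", day2)]:
    t, p = np.array(data).T
    slope = np.polyfit(t, p, 1)[0]          # torr per second, rough linear fit
    print(f"{label}: dP/dt ~ {slope:.2e} torr/s ({slope * 60:.2e} torr/min)")
```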
735 | Fri Oct 11 13:33:22 2013 | nicolas | Lab Infrastructure | Vacuum | HV feedthrough installed, vacuum baking

I put the new HV feedthrough on the cantilever dewar. I borrowed the heat tape from the CTN lab and I am going to bake the chamber over the weekend at 80 deg C, it's currently pumping and heating up.

736 | Fri Oct 11 16:57:23 2013 | nicolas | Lab Infrastructure | Vacuum | HV feedthrough installed, vacuum baking

Quote:

I put the new HV feedthrough on the cantilever dewar. I borrowed the heat tape from the CTN lab and I am going to bake the chamber over the weekend at 80 deg C, it's currently pumping and heating up.

 The cantilever dewar is baking at 80 C and the pressure is being recorded to C5:VAC-P2_PRESSURE. Data logging started at GPS time 1065570956 (4:55 pm local time Friday).

737 | Mon Oct 14 18:05:35 2013 | nicolas | Lab Infrastructure | Vacuum | bake ended at 1065834142

bakepressure.png

pressure trend during bake
