I wanted to see what the noise of the Ranger seismometer should be. I used LISO and file ranger.fil (in our LISO SVN) to calculate the voltage noise referred to the input. In this model, we represent the EMF from the moving magnet in the coil as a voltage source at 'nin' which drives the coil impedance. This is the same approach that Brian Lantz uses in his noise modeling of the GS-13 (PDF is on our Ranger wiki page).
In the simulation, I used the OP27 as a placeholder for the SR560 that we use (I don't know the current noise of the SR560). To do this, I used the new 'inputnoise' feature in LISO (it's in the README, but not in the manual).
You can see that we would be limited by the input current noise of the OP27. So we would do a lot better if we used an FET-based readout amp like the AD743 (or equivalent), or better yet the new multi-FET readout circuit that Rich Abbott has developed. Clearly, it's also silly to have a load resistance in there - I put it in because the manual says to, but all it does is damp the mass and reduce the size of the signal.
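As a sanity check outside LISO, the input-referred noise can be sketched by quadrature-summing the amplifier voltage noise, the current noise flowing through the coil impedance, and the coil's Johnson noise. This is only a sketch: the coil R and L values and the OP27-like noise numbers below are assumptions for illustration, not the values in ranger.fil.

```python
import math

# Quadrature sum of readout-amp noises referred to the input, driven by the
# coil impedance Z = R + j*2*pi*f*L. Coil values and OP27 noise numbers are
# assumed for illustration, not taken from ranger.fil.
k_B, T = 1.380649e-23, 300.0
R_coil, L_coil = 5.5e3, 5.0        # Ohm, H (assumed coil parameters)
v_amp = 3e-9                       # V/rtHz, OP27-like voltage noise
i_amp = 0.4e-12                    # A/rtHz, OP27-like current noise

def input_noise(f):
    """Input-referred voltage noise density in V/rtHz at frequency f (Hz)."""
    Z = complex(R_coil, 2 * math.pi * f * L_coil)
    return math.sqrt(v_amp**2 + (i_amp * abs(Z))**2 + 4 * k_B * T * Z.real)

print(f"at 1 Hz: {input_noise(1.0):.2e} V/rtHz")
```

With numbers like these, the current-noise term grows with the coil's inductive impedance at high frequency, which is why an FET-input amp (tiny current noise) helps.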
# Noise sim for the Ranger SS-1 seismometer
#                       |\
#  n2 -----------------+| \
#   |      |            |   op1 >--- n4 --- r4 --- no
#   Rg     RL    n3 ---+| /           |
#   |      |      |     |/            |
#   n1     |      +------- R2 --------+
#   |      |      |
#   Lg     |      R1
#   |      |      |
#  nin    gnd    gnd
We previously measured the Ranger's self noise by locking it down.
The 1/f^3 noise that we see below 1 Hz is roughly consistent with the noise model: to get from my plot into meters you have to multiply by (1 + f)^2 / (340 f^2).
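In code, the plot-to-meters conversion is just this factor (restating the expression above):

```python
def volts_to_meters_factor(f):
    """Multiply the plotted spectrum by this factor to get meters (f in Hz)."""
    return (1 + f) ** 2 / (340 * f ** 2)

# Below 1 Hz the factor goes roughly as 1/(340 f^2).
print(volts_to_meters_factor(1.0))  # 4/340
```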
P.S. Secret PDF handshake: You can make your non-compliant applications like LISO or DTT produce a thumbnailing PDF by using Acrobat to open the file and export it as PDF/A.
In the second attachment, I have used an OPA827 (new low-noise FET input amp from TI) as the readout amplifier. This seems like a good choice - the main drawback is that Digikey has backordered my OPA827s by 19 weeks!
I'd like to ask someone to review the calculation on the wiki.
Last week we vented and we cleaned the main optics of the arm cavities.
I measured the frequency of the cavity poles for both arm cavities to see how they changed (see previous elog entry 2226). These are the results:
fp_X = 1616 +/- 14 Hz
fp_Y = 1590 +/- 4 Hz
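For reference, a cavity pole maps to a finesse via F = FSR / (2 fp), with FSR = c / (2 L). A sketch of the arithmetic, assuming a nominal arm length of about 38.55 m (an assumption here, not a measured value):

```python
c = 299792458.0
L_arm = 38.55                  # m, assumed arm length
fsr = c / (2 * L_arm)          # free spectral range, ~3.89 MHz
for name, fp in [("X", 1616.0), ("Y", 1590.0)]:
    # cavity-pole frequency fp -> finesse; comes out around 1200 here
    print(f"finesse_{name} ~ {fsr / (2 * fp):.0f}")
```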
I ran the armLoss script for both Xarm and Yarm. The results are confidential, pending the completion of Alberto's cavity pole/finesse measurement due to the 'bet' as to what the new losses are after the drag wiping.
If you're the kind of person who likes to peek at their Christmas presents, the log files with the results are in the usual place for this script: /scripts/LSC/loss-ARM-GPStime.log (loss-Y-944865071.log and loss-X-944865946.log)
[Jenne, Kiwamu, Koji]
We got the IFO back up and running! After all of our aligning, we even managed to get both arms locked simultaneously.
I'm going to do it right now.
Basically, we are awesome.
This morning, we did the following:
* Turned on the PZT High voltages for both the steering mirrors and the OMC. (For the steering mirrors, turn on the power, then hit "close loop" on each. For the OMC, hit Output ON/OFF).
* Looked at the PZT strain gauges, to confirm that the PZTs came back to where they had been. (Look at the snapshot of C1ASC_PZT_Al)
* Locked all components of the PSL (This had already been done.)
* Removed beam dump which was blocking the PSL, and opened the PSL mechanical shutter. Light into the IFO!
* Locked the Mode Cleaner. The auto-locker handled this with no problem.
* Confirmed that light is going through the Faraday. (Look at the TV sitting on top of the MC13 tank...it shows the Faraday, and we're hitting the input of the Faraday pretty much dead-on.)
* Looked at IP_ANG and IP_POS. Adjusted the steering mirrors slightly to zero the X & Y readings on IP_ANG. This did not change the PZTs by very much, so that's good.
* Aligned all of the Core Optics to their OpLev positions.
* On the IFO_Align screen, saved these positions.
* Ran the IFO_Configure scripts in the usual order (Xarm, Yarm, PRM, DRM). Saved the appropriate optics' positions after running the alignment scripts. We ended up running each alignment script twice, because there was some residual misalignment after the first iteration, which we could see in the signal as viewed on DataViewer (either TRX, TRY, or SPOB, for those respective DoFs).
* Restored Full IFO.
* Watched the beauty of both arms and the central cavity snapping together all by themselves! In the attached screenshot, notice that TRX and TRY are both ~0.5, and SPOB and AS166Q are high. Yay!
* The wiping may have helped. While aligning X and Y separately, TRX got as high as ~1.08, and TRY got as high as ~0.98. This seems to be a little bit higher than it was previously.
* Since everything locked up in pretty short order, and the free swinging spectra (as measured by Kiwamu in elog 2405) look good, it seems we didn't break anything while we were in the chambers last week. Excellent.
* We are now ready for a finesse measurement to tell us more quantitatively how we did with the wiping last week.
We still had some ants visiting the sink area this morning. These ants seem to be addicted to our Peet's coffee.
Spectracide: Bug Stop insect killer was sprayed. Please wash your eating dishes well ! and keep area clean.
I made a short stop at the 40m on Sunday night and found hundreds of ants in the coffee maker.
I removed the ants around the sink and washed the coffee maker.
It looked like the ants were everywhere in the lab tonight. They seemed to prefer warm places, like inside the coffee maker and below the coffee mill.
So, I recommend that Steve confirm there are no ants in the coffee maker again before the first coffee of the week is made.
Otherwise they will add some more acidity to your cup.
For the Laser Gyro, I wondered how much mechanical noise we might get with a non-suspended cavity. My guess is that the PMC is better than we could do with a large ring and that the MZ is much worse than we could do.
Below 5 Hz, I think the MZ is "wind noise" limited. Above 10 Hz, it's just ADC noise in the readout of the PZT voltage.
I ramped the MZ PZT (with the loop disabled on the input switch) to calibrate it. Since the transmission has been blocked, I used the so-called "REFL" port of the MZ to do this.
The dark-to-dark distance for the MZ corresponds to 2 consecutive destructive interferences. Therefore, that's 2 pi in phase or 1 full wavelength of length change in the arm with the moving mirror.
Eyeballing it on the DTT plot (after lowpassing at 0.1 Hz) and using its cursors, I find that the dark-to-dark distance corresponds to 47.4 +/- 5 seconds.
So the calibration of the MZ PZT is 88 +/- 8 Volts/micron.
Inversely, that's a mean of 12 nm / V.
Why am I calibrating the MZ? Maybe because Rob may want it later, but mainly because Koji won't let me lock the IFO.
Apparently, we haven't had a fast channel for any of the MZ board signals. So I have temporarily hooked it up to MC_DRUM at 21:13 and also turned down the HEPA. Now, let's see how stable the MZ and PMC really are overnight.
EDIT: it railed the +/- 2V ADC we have, so I put in a 1:4 attenuator via a Pomona box. The calibration of MC_DRUM in terms of MZ_PZT volts is 31.8 cts/V.
So the calibration of MC_DRUM1 in meters is: 0.38 nm / count
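Chaining the numbers above into the final calibration (just arithmetic on the values quoted in this entry):

```python
pzt_cal_nm_per_V = 12.0   # nm of MZ arm length per volt on the MZ PZT (from above)
adc_cal_cts_per_V = 31.8  # MC_DRUM counts per volt of MZ_PZT (attenuator included)
nm_per_count = pzt_cal_nm_per_V / adc_cal_cts_per_V
print(f"{nm_per_count:.2f} nm/count")  # ~0.38 nm/count, as quoted
```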
The free swinging spectra of the ITMs, ETMs, BS, PRM and SRM were measured under vacuum. The attachments are the measured spectra.
It looks like nothing is wrong, since no significant difference appears between the past data (taken under atmospheric pressure) and the current data.
So everything is going well.
Need to measure the length of the cable, but too lazy to use a measuring tape?
Then you too can become an expert cable length measurer by just using an RF signal generator and a scope:
The T is kind of acting like a beamsplitter in an asymmetric-length Michelson in this case. Just as we can use the RF phase shift between the arms to measure the Schnupp asymmetry, we can also use a T to measure the cable length. The speed of light in the cable is documented in the cable catalog, but in most cases it's just 66% of the speed of light in vacuum.
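One frequency-domain way to read out the length: with the cable hanging off the T as an open stub, transmission nulls appear when the stub is an odd number of quarter wavelengths, so the first null gives L = v / (4 f_null). A sketch (the 11.4 MHz null frequency below is hypothetical, chosen only to illustrate a ~4.3 m cable):

```python
c = 299792458.0
v = 0.66 * c               # propagation speed from the catalog rule of thumb
f_null = 11.4e6            # Hz, assumed first transmission null (illustrative)
length = v / (4 * f_null)  # open stub shorts the T at odd quarter-wave frequencies
print(f"cable length ~ {length:.2f} m")  # ~4.34 m
```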
These seem like very strange cable loss numbers. The Heliax is lossier than the RG-174? I wonder how these compare with the specs in the cable catalog.
I did the measurement on a 4.33 meter long cable with SMA connectors at the ends.
Alberto, Jenne, Kiwamu
Together we will lead the IFO restoration; the following is our plan.
---------|----------------------------------------------------------------
  #0     | measure the free swinging spectra (weekend, by Kiwamu)      DONE
  #1     | turn ON the PZTs for the steering mirrors and so on (Dec. 14 Mon.)  DONE
  #2     | lock around the PSL                                         DONE
  #3     | deal with the mechanical shutter (Dec. 14 Mon.)             DONE
  #4     | lock the MCs (Dec. 14 Mon.)                                 DONE
  #5     | align the IFO (Dec. 15 Tue.)                                DONE
  #6     | lock the full IFO (Dec. 15 Tue.)                            DONE
---------|----------------------------------------------------------------
Oplev positions before and after drag wiping the arm test masses, as of yesterday. The slow-mode pumpdown started at 8am today with the RV1 valve opened 3/4 turn.
The pump down is complete. The valve configuration is VACUUM NORMAL. The CC1 pressure is ~8e-5 torr. The PSL output shutter is open.
Wait, Wait, Wait. You are moving too fast. Go one by one.
Check the PZTs, the MC, initial pointings, IFO mirrors, some of the partial locks, and maybe some momentary full locks?
Once the recovery of the IFO is declared, you can proceed to the measurements.
I'm leaving the lab now for less than 2 hours. I should be back in time for when the pumping is finished so that I can measure the finesse again.
I temporarily disconnected the Heliax cable that brings the 166 MHz LO to the LSC rack.
I'm doing a couple of measurements and I'll put it back in as soon as I'm done.
These are the losses I measured on a RG-174 cable for the two frequencies that we're planning to use in the Upgrade:
(The cable was 2.07 m long. The input signal was +10 dBm and the output voltages at the oscilloscope were: Vpk-pk(11MHz)=1.90V, Vpk-pk(11MHz)=1.82V)
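For reference, the scope reading converts to a loss in dB like this (a sketch assuming a 50 ohm system; the function name is mine, not from any script we use):

```python
import math

def loss_db(p_in_dbm, v_pp_out, r=50.0):
    """Cable loss in dB from source power and scope peak-to-peak voltage into r."""
    p_in = 1e-3 * 10 ** (p_in_dbm / 10.0)      # input power in watts
    v_pp_in = 2.0 * math.sqrt(2.0 * p_in * r)  # expected Vpp with a lossless cable
    return 20.0 * math.log10(v_pp_out / v_pp_in)

print(f"{loss_db(10, 1.90):.2f} dB")  # ~ -0.45 dB
print(f"{loss_db(10, 1.82):.2f} dB")  # ~ -0.82 dB
```

Note that +10 dBm into 50 ohms corresponds to exactly 2.0 Vpp, which is why the 50 ohm scope termination matters.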
I apologize for the incorrect units in yesterday's elog entry; I wasn't very sharp last night.
I repeated the measurement today, this time also making sure that I had a 50 ohm input impedance set on the scope. These are the results for the losses.
I also measured the losses in the Heliax cable going from the 166 MHz LO to the LSC rack:
Attached are updated plots of the T&R Measurements for a variety of mirrors, and diagrams for the setup used to make the measurements.
T is plotted for the 1064 nm measurement, since these mirrors are highly reflective at 1064 nm, and either R or R&T is plotted for the 532 nm measurement, depending on how large the R signal is.
As with the previous set of plots, the error bars here are purely statistical, and there are certainly other sources of error not accounted for. In general, the T measurement was quite stable, and the additional errors are probably not enormous, perhaps a few percent.
The mirrors are:
The free swinging spectra of the ITMs, ETMs, BS, PRM and SRM were measured last night in order to make sure that nothing went wrong during the wiping.
I think there is nothing wrong with the ITMs, ETMs, BS, PRM and SRM.
For comparison, Yoichi's figure in his elog entry of Aug. 7, 2008 is good, though in his figure the PRM spectrum somehow doesn't look correct.
Anyway, compared with his past data, there are no significant changes in the spectra. For the PRM, which has no counterpart to compare with, the shape of its spectrum looks similar to all the others, so I think the PRM is also OK. The measured spectra are attached below.
I replaced the SMA end connector for the 166 MHz local oscillator signal that goes to the back of the flange in the 1Y2 rack. The connector got damaged when it twisted while I was tightening the N connector of the Heliax cable on the front panel.
The vacuum system is at 760 torr. All chambers have their doors on and their annuli are pumped down.
The PSL output shutter is still closed. We are fully prepared to start the slow pump down tomorrow.
The plan is to reach ~1 torr in 6 hrs without creating a sand storm.
last 20 days - including the pounding from next door
[Everybody: Alberto, Kiwamu, Joe, Koji, Steve, Bob, Jenne]
The last heavy door was put on after lunch. We're now ready to pump.
The frame builder was power cycled during the morning bootfest. I have restarted the backup script once more.
All the front ends are back up.
I found all the front-ends except for C1SUSVME1 and C0DCU1 down this morning. DAQAWG shows up green on the C0DAQ_DETAIL screen, but it is in a "bad" status.
I'll go for a big boot fest.
I burtrestored all the snapshots to Dec 9 2009 at 18:00.
Note: The set point C1:PSL-FSS_RCPID_SETPOINT is 37.0 on C1PSL_FSS_RCPID.adl.
Now the temperature is recovering at full speed. At some point we will have to restore the value of the FSS SLOW DC, as the temperature change drags it up.
Koji, Jenne, Rob
We found that the RCPID servo "setpoint" was not in the relevant saverestore.req file, and so when c1psl got rebooted earlier this week, this setting was left at zero. Thus, the RC got a bit chilly over the last few days. This channel has been added.
Also, RCPID channels have been added (manually) to conlog_channels.
Instead of doing RCG stuff, I went to Millikan to work on data analysis as I couldn't stand the fumes from the construction. (this morning, 8am)
Diesel fumes are pumped away from control room AC intakes with the help of newly installed reflector boxes on the CES wall fans.........see # 2272
Well, I get the point now. It could be either seismic or a change in the suspension Q.
The pendulum memorizes its own state for a period of ~ Q T_pend (T_pend is the period of the pendulum).
If the pendulum Q is very high (>10^4), once the pendulum is excited, the effect of the excitation can last many hours.
On the other hand, in our current case, we turned on the damping once, and then turned off the damping.
It again takes ~ Q T_pend for the motion to build back up.
In such cases, the peak height is not yet at its equilibrium value, and can be higher or lower than expected.
So, my suggestion is:
Track the peak height along the long time scale (~10hrs) and compare between the previous one and the current one.
This may indicate whether it is equilibrium or not, and where the equilibrium is.
If such a variation of the peak heights is caused by seismic activity, it would mean the seismic level changed by a factor of several tens. That sounds large to me.
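As a rough scale for how long the pendulum remembers its state (the Q and f0 below are illustrative assumptions, not measured values):

```python
import math

Q = 1e4          # assumed pendulum quality factor (illustrative)
f0 = 0.5         # Hz, pendulum mode frequency from the spectra
T_pend = 1.0 / f0
tau = Q * T_pend / math.pi   # 1/e amplitude decay time, tau = 2*Q/omega0
print(f"1/e decay time ~ {tau / 3600:.1f} hours")  # ~1.8 hours
```

So with Q ~ 10^4 the mode indeed takes hours to ring up or ring down, which is why comparing spectra taken soon after switching the damping off can be misleading.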
Okay, now the data are attached. At that time I just wanted to say something like the following.
- - -
In the free-swinging spectra around ~0.5 Hz, you can see two resonances, which come from the pitch and yaw modes of the pendulum.
Note that the vertical and horizontal axes are adjusted to be the same for the two plots in the figure.
And I found that
* the floor levels are almost the same (within a factor of about 1.5 or so) compared to the past.
* however, the peak heights for the two resonances are several tens of times smaller than in the past.
* this tendency shows up in all of the data (ITMX, ETMX, ETMY).
By the way I found a trend, which can be seen in all of the data taken today and yesterday.
The pitch and yaw resonances around 0.5 Hz look like they are being damped, because their heights above the floor have become lower than in the past.
I don't know what is going on, but it is interesting because you can see the trend in all of the data.
[Kiwamu, Jenne, Alberto, Steve, Bob, Koji]
We finished wiping the four test masses without any trouble. ITMY looked a little bit dusty, but not as much as ITMX did.
We inspected the surface of ITMX again, since we worked at the vertex a lot today. It still looked clean.
We closed the light doors. The suspensions are left free tonight in order to check their behavior.
Tomorrow morning from 9AM, we will replace the doors with the heavy ones.
Please do not touch the watchdogs for any of the SUSs except for the MCs,
because I am going to measure the free swinging spectra for the ITMs, ETMs, BS, PRM, and SRM tonight.
Today is a good chance to summarize those data under atmospheric pressure.
I finally got around to taking a look at the digital camera setup today. Rob had complained the client had stopped working on Rosalba.
After watching the code start up without complaint, yet produce no window output, it looked like a network problem. I tried rebooting Rosalba, but that didn't fix anything.
Using netstat -an, I looked for port 5010 on both rosalba and ottavia, since that is the port being used by the camera. Ottavia reported 6 established connections after Rosalba had rebooted (rosalba is 220.127.116.11). I can only presume that 6 instances of the camera code had somehow shut down in such a way that they had not closed the connection.
[root@ottavia controls]#netstat -an | grep 5010
tcp 0 0 0.0.0.0:5010 0.0.0.0:* LISTEN
tcp 0 0 18.104.22.168:5010 22.214.171.124:57366 ESTABLISHED
tcp 0 0 126.96.36.199:5010 188.8.131.52:58417 ESTABLISHED
tcp 1 0 184.108.40.206:46459 220.127.116.11:5010 CLOSE_WAIT
tcp 0 0 18.104.22.168:5010 22.214.171.124:57211 ESTABLISHED
tcp 0 0 126.96.36.199:5010 188.8.131.52:57300 ESTABLISHED
tcp 0 0 184.108.40.206:5010 220.127.116.11:57299 ESTABLISHED
tcp 0 0 18.104.22.168:5010 22.214.171.124:57315 ESTABLISHED
I switched the code to use port 5022, which worked fine. However, I'm not sure what caused the original connections to fail to close: I tested several close methods (including the kill command on the server end used by the MEDM screen), and none seemed to generate this broken connection state. I rebooted Ottavia, which fixed the connections and allowed port 5010 to work. I also tried creating 10 connections, which all seemed to run fine simultaneously. So it's not someone overloading that port with too many connections that caused the problem. It's like the port stopped working somehow, which froze the connection status, but how or why I don't know at this point.
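The by-hand netstat check can be scripted; a sketch (the helper below is hypothetical, not code that exists in the lab) that counts established connections whose local side is a given port:

```python
def count_established(netstat_output: str, port: int) -> int:
    """Count ESTABLISHED lines whose local address ends in :port."""
    count = 0
    for line in netstat_output.splitlines():
        fields = line.split()
        # netstat -an TCP lines: proto recv-q send-q local foreign state
        if len(fields) >= 6 and fields[5] == "ESTABLISHED" and fields[3].endswith(f":{port}"):
            count += 1
    return count

example = "tcp 0 0 10.0.0.1:5010 10.0.0.2:57366 ESTABLISHED"
print(count_established(example, 5010))  # 1
```

Run against the listing above, this returns 6, matching the count of stuck connections.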
The construction activity is shaking the tables in the control room. The compactor (a large remote-controlled jackhammer) is at the bottom of the 16-17 ft deep hole 15 ft east of ITMY in the CES bay. The suspensions are holding OK. PRM, MC1 and MC3 are affected the most.
We'd like to see it if you think this is interesting.
... Just a naive guess: Is it just because the seismic level got quiet in the night?
You look consistently confused about some words like damping, Q, and peak height.
dy = H dx: the measured peak height dy is the disturbance dx multiplied by the pendulum transfer function H.
Since the damping makes the Q lower, the peak height also gets lowered by the damping.
But if the disturbance gets smaller, the peak height can become small even without any change in the damping or the Q.
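The distinction can be made concrete with the pendulum transfer function magnitude (a sketch; the f0 and Q values are illustrative):

```python
import math

def pendulum_tf_mag(f, f0=0.5, Q=1e4):
    """|H(f)| of ground-to-mass motion for a simple pendulum (f0, Q assumed)."""
    return f0**2 / math.sqrt((f0**2 - f**2)**2 + (f0 * f / Q)**2)

# On resonance the peak height is Q times the drive: dy = H dx, |H(f0)| = Q.
# Halving Q halves the peak; so does halving the seismic drive dx.
print(f"{pendulum_tf_mag(0.5):.0f}")  # 10000
```

So a lower peak alone cannot distinguish "Q dropped" from "the drive dropped"; that is why tracking the peak over ~10 hrs is the suggested test.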
The free swinging spectra of ETMY and ITMX were taken after today's wiping, in order to check the test masses.
These data were taken under atmospheric pressure, as were the ETMX spectra taken yesterday.
Compared with the past (see Yoichi's good summary of Aug. 7, 2008), there are no significant differences.
There is nothing wrong with ETMY and ITMX.
Jenne, Kiwamu, Koji, Alberto, Steve, Bob
ITMX was wiped without having to move it.
After 'practice' this morning on ETMY, Kiwamu and I successfully wiped ITMX by leaning into the chamber to get at the front face.
Most notable (other than not moving it) was that inspection with the fiber light before touching showed many very small particles on the coated part of the optic (versus ETMY, where we saw very few, but larger, particles). The after-wiping fiber-light inspection showed many, many fewer particles on the optical surface. I have high hopes for lower optical loss here!
Jenne, Kiwamu, Alberto, Steve, Bob, Koji
We wiped ETMY after recovering the computer system. We are taking lunch and will resume at 14:00 for ITMX.
Detailed reports will follow.
Alberto, Kiwamu, Koji,
this morning we found the RFM network and all the front-ends down.
To fix the problem, we first tried a soft strategy, that is, we tried to restart CODAQCTRL and C1DCUEPICS alone, but it didn't work.
We then went for a big bootfest. We first powered off fb40m, C1DCUEPICS and CODAQCTRL, and reset the RFM network switch. Then we rebooted them in the same order in which we turned them off.
Then we power cycled and restarted all the front-ends.
Finally we restored all the burt snapshots to Monday Dec 7th at 20:00.