I used aluminum tape to attach the sensor and heater to the 40m's EOM, and we plugged in the controller. It seems to be kind of working. Zach figured out the GPIB output stuff, so we can talk to it remotely.
I stole the Prologix wireless GPIB interface from the SR785 that's down the Y-Arm temporarily. The address is 192.168.113.108. (Incidentally, I think some network settings have been changed since the GPIB stuff was initially configured. All the Prologix boxes have 131.215.X.X written on them, while they are only accessible via the 192.168.X.X addresses. Also, the 40MARS wireless router is only accessible from Martian computers at 192.168.113.226---not 126.96.36.199).
In any case, the Newport 6000 is controllable via telnet. I went through the remote RTD calibration process in the manual, measuring the exact RTD resistance with an ohmmeter and entering it in. Despite this, when the TEC output is turned on, the temperature badly overshoots the entered setpoint. This is probably because the controller parameters (gain, etc.) are not set correctly. We have left it off for the moment.
Here are a couple command examples:
1. Turning on the TEC output
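Since the command text itself didn't survive in this entry, here is a minimal sketch of talking to the controller through the Prologix box from Python. The GPIB address and the exact TEC command strings are my assumptions, so check the unit and the Newport 6000 manual before trusting them:

    import socket

    # Minimal sketch: drive the Newport 6000 through the Prologix GPIB-Ethernet
    # adapter. The Prologix listens on TCP port 1234; lines starting with '++'
    # configure the adapter, everything else is forwarded to the instrument.
    # The GPIB address (5) and the command strings are assumptions.
    PROLOGIX_IP = "192.168.113.108"

    def send(sock, line):
        sock.sendall((line + "\n").encode())

    with socket.create_connection((PROLOGIX_IP, 1234), timeout=5) as s:
        send(s, "++mode 1")    # adapter acts as GPIB controller
        send(s, "++addr 5")    # instrument GPIB address (assumed)
        send(s, "++auto 1")    # auto-read after each write
        send(s, "TEC:OUT 1")   # turn on the TEC output (assumed syntax)
        send(s, "TEC:SET:T?")  # query the temperature setpoint (assumed syntax)
        print(s.recv(1024).decode().strip())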
Foam house installed on EOM a few min ago. We'll leave it until ~tomorrow, then try out the heater loop.
We have placed a foil aperture in front of ETMY, to aid in aligning the Y-arm, and then the PRC. It obviously needs to be removed before we close up.
Attachment #1 shows the spectra of our three available seismometers over a period of ~10ksec.
Attachment #2 shows the result of applying frequency domain Wiener filter subtraction to the POP QPD (target) with the vertex seismometer signals as witness channels.
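For reference, a minimal sketch of the single-witness version of this subtraction, with synthetic data standing in for the POP QPD and seismometer channels (the real multi-witness case replaces the coherence with a cross-spectral-density matrix inversion):

    import numpy as np
    from scipy import signal

    # With one witness, the best-case residual PSD after Wiener subtraction
    # is P_tt * (1 - coherence). Synthetic data stands in for the channels.
    fs, T = 256.0, 1000.0
    rng = np.random.default_rng(0)
    t = np.arange(0, T, 1 / fs)
    seis = rng.standard_normal(t.size)                    # witness channel
    qpd = 0.7 * seis + 0.3 * rng.standard_normal(t.size)  # target = coupling + sensor noise

    f, P_tt = signal.welch(qpd, fs, nperseg=4096)
    f, coh = signal.coherence(qpd, seis, fs, nperseg=4096)
    P_res = P_tt * (1.0 - coh)   # residual PSD after ideal subtraction

    print(f"mean coherence: {coh.mean():.2f}; "
          f"mean PSD reduction: {(P_res / P_tt).mean():.2f}")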
This is due to the Equivalence Principle: local accelerations are indistinguishable from spacetime curvature. On a spherical Earth, the local gradient of the metric points toward the center of the Earth, which is colloquially known as "down".
I don't understand why the z-axis motion reported by the T240 is ~10x lower at 10 mHz compared to the X and Y motions. Is this some electronics noise artefact?
Here is some disturbance in the spacetime curvature, where the local gradient of the metric seems to have been modulated (in the "downward" as well as in the other two orthogonal Cartesian directions) at ~1 Hz. It seems real as far as I can tell: all the suspensions were being shaken about and all the seismometers witnessed it, though the peak is pretty narrow. A broader, less prominent peak also shows up around 0.5 Hz. We couldn't identify any clear source (no LN2 fill-up / obvious CES activity). This event lasted for ~45 mins and stopped around 2315 local time. Shortly (~5 min) after the ~1 Hz peak died down, however, the 3-10 Hz BLRMS channel reported an increase by a factor of ~2.
On to trying some locking now that the suspensions have settled down somewhat.
At 1 Hz this effect is not large, so that's real translation. At lower frequencies a ground tilt couples to the horizontal sensors at first order, and the apparent signal is amplified by the double integral. Drawing a free-body diagram, you can see that
x_apparent = (g / s^2) * theta
but for the vertical axis this is not true, because the vertical sensor already measures along the direction of free fall, and the tilt only shows up at second order.
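Plugging numbers into this formula shows the size of the effect at the frequencies in question:

    import numpy as np

    # Tilt-to-horizontal coupling magnitude |x_apparent / theta| = g / omega^2
    g = 9.8  # m/s^2
    for f_Hz in (1.0, 0.01):
        w = 2 * np.pi * f_Hz
        print(f"{f_Hz:5.2f} Hz : {g / w**2:8.1f} m apparent displacement per rad of tilt")
    # ~0.25 m/rad at 1 Hz vs ~2.5e3 m/rad at 10 mHz, so the horizontal
    # channels are tilt-dominated at low frequency while the vertical is not.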
The large ground motion at 1 Hz started up again tonight at around 23:30. I walked around the lab and nearby buildings with a flashlight and couldn't find anything whumping. The noise is very sinusoidal and seems like it must be a 1 Hz motor rather than any natural disturbance or traffic, etc. Suspect that it is a pump in the nearby CES building which is waking up and running to fill up some liquid level. Will check out in the morning.
Estimate of displacement noise based on the observed MC_F channel showing a 25 MHz peak-peak excursion for the laser:
dL = 25 MHz * 13 m / (c / lambda)
   ≈ 1 micron
So this is a lot. Probably our pendulum is amplifying the ground motion by 10x, so I suspect a ground noise of ~0.1 micron peak-peak.
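A quick check of the arithmetic:

    # dL / L = df / f  =>  dL = df * L / (c / lambda)
    c, lam = 299792458.0, 1064e-9   # m/s, m
    df, L = 25e6, 13.0              # Hz pk-pk from MC_F, MC length in m
    print(df * L / (c / lam))       # ~1.15e-6 m, i.e. ~1 micron pk-pk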
(This is a native PDF export using qtgrace rather than XMGrace. Uninstall xmgrace and symlink to qtgrace.)
Attachment #1 is a spectrogram of the BS seismometer signals for a ~24 hour period (from Wednesday night to Thursday night local time, zipped because it's a large file). I've marked the nearly pure tones that show up for some time and then turn off. We need to get to the bottom of this, and ideally stop it from happening at night, because it is eating ~1 hour of lockable time.
We considered if we could look at the phasing between the vertex and end seismometers to localize the source of the disturbance.
The nightly seismic activity enhancement continued over the weekend. It always shows up around 10 pm local time, persists for ~1 hour, and then goes away. This isn't a showstopper as long as it stops at some point, but it is annoying that it eats up >1 hour of possible locking time. I walked over to CES; no one there admitted to anything. There is an "Earth Surface Dynamics Laboratory" there that runs some heavy equipment right next to us, but they claim they aren't running anything after ~5:30 pm. Rick (building manager?) also doesn't know of anything that turns on with the periodicity we see. He suggested contacting Watson, but I have no idea who to talk to there who has an overview of what goes on in the building. 😢
The shaking started earlier today than yesterday, at ~9pm local time.
While the IFO is shaking, I thought (as Jan Harms suggested) I'd take a look at the cross-spectra between our seismometer channels at the dominant excitation frequency, which is ~1.135 Hz. Attachment #1 shows the phase of the cross spectrum taken for 10 averages (with 30mHz resolution) during the time period when the shaking was strong yesterday (~1500 seconds with 50% overlap). The logic is that we can use the relative phasing between the seismometer channels to estimate the direction of arrival and hence, the source location. However, I already see some inconsistencies - for example, the relative phase between BS_Z and EX_Z suggests that the signal arrives at the EX seismometer first. But the phasing between EX_Y and BS_Y suggests the opposite. So maybe my thinking about the problem as 3 co-located sensors measuring plane-wave disturbances originating from the same place is too simplistic? Moreover, Koji points out that for two sensors separated by ~40m, for a ground wave velocity of 1.5 km/s, the maximum phase delay we should see between sensors is 30 msec, which corresponds to ~10 degrees of phase. I guess we have to undo the effects of the phasing in the electronics chain.
Does anyone have some code that's already attempted something similar that I can put the data through? I'd like to not get sucked into writing fresh code.
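In the meantime, here is a minimal sketch of the phase estimate itself, with synthetic data standing in for the BS/EX channels (the real version would pull data from the frames and undo the instrument responses):

    import numpy as np
    from scipy import signal

    # A plane wave reaching the second sensor dt seconds later shows up as a
    # phase of -2*pi*f*dt in csd(bs, ex) (sign set by scipy's conj(X)*Y
    # convention).
    fs, T, dt = 64.0, 1500.0, 0.027        # 27 ms ~ 40 m / 1.5 km/s
    rng = np.random.default_rng(0)
    t = np.arange(0, T, 1 / fs)
    bs_z = np.sin(2 * np.pi * 1.135 * t) + 0.1 * rng.standard_normal(t.size)
    ex_z = np.sin(2 * np.pi * 1.135 * (t - dt)) + 0.1 * rng.standard_normal(t.size)

    f, Pxy = signal.csd(bs_z, ex_z, fs, nperseg=2048)  # ~31 mHz resolution
    k = np.argmin(np.abs(f - 1.135))
    print(f"phase at {f[k]:.3f} Hz: {np.degrees(np.angle(Pxy[k])):.1f} deg")
    # expect ~ -360 * 1.135 * 0.027 ~ -11 deg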
🤞 this means that the shaking is over for today and I get a few hours of locking time this evening.
Another observation is that even after the main 1.14 Hz peak dies out, there is elevated seismic activity reported by the 1-3 Hz BLRMS band. This unfortunately coincides with some stack resonance, and so the arm cavity transmission reports greater RIN even after the main peak dies out. Today, it seems that all the BLRMS returned to their "nominal" nighttime levels ~10 mins after the main 1.14 Hz peak died out.
Tip Seals were replaced on the forepumps for TP2 and TP3, and both are ready to be installed back onto the forelines.
TP2 Forepump Ultimate Pressure: 180 mtorr
TP3 Forepump Ultimate Pressure: 120 mtorr
I couldn't understand the Y-End green setup, as the PD was turned off and the sign of the servo was flipped. Once these were fixed, I could lock the cavity with the green beams.
[EricQ, Jenne, brains of other people]
Get green spots co-located with IR spots on ETMs, ITMs, check path of leakage through the arms, make sure both greens get out to PSL table
I had turned the green refl PD off on Tuesday while we were doing the IPANG alignment, since the beam was not so bright, and the LED on top of the PD was very annoyingly bright. I forgot to turn it back on. The sign flip on the servo, I can't explain.
After lots of trial and error, and a little inspiration from Koji, I have written a new script that will run when you select "update snapshot" in the yellow ! button on any MEDM screen.
Right now, it's only live for the OAF_OVERVIEW screen. View snapshot and view prev snapshot also work.
Next on the list is to make a script that will create the yellow buttons for each screen, so I don't have to type millions of things in by hand.
The script lives in: /cvs/cds/rtcds/caltech/c1/scripts/MEDMsnapshots, and it's called....wait for it....... "updatesnap".
Currently the update snapshot script looks at the 3 letters after "C1" to determine what folder to put the snapshots in. (It can also handle the case when there is no C1, ex. OAF_OVERVIEW.adl still goes to the c1oaf folder). If the 3 letters after C1 are SYS, then it puts the snapshot into /opt/rtcds/caltech/c1/medm/c1sys/snap/MEDM_SCREEN_NAME.adl
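The logic is roughly this (a sketch, not the actual updatesnap code; the example screen names below are made up for illustration):

    import os.path

    # Map an MEDM screen filename to its snapshot folder: the 3 letters
    # after "C1" pick the folder, falling back to the first 3 letters when
    # there is no C1 prefix. The MEDM root path is from the example above.
    MEDM_ROOT = "/opt/rtcds/caltech/c1/medm"

    def snap_path(screen):
        name = os.path.basename(screen).replace(".adl", "")
        sub = name[2:5] if name.startswith("C1") else name[:3]
        return os.path.join(MEDM_ROOT, "c1" + sub.lower(), "snap",
                            os.path.basename(screen))

    print(snap_path("C1SYS_EXAMPLE.adl"))  # .../medm/c1sys/snap/C1SYS_EXAMPLE.adl
    print(snap_path("OAF_OVERVIEW.adl"))   # .../medm/c1oaf/snap/OAF_OVERVIEW.adl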
Mostly this is totally okay, but a few subsystems seem to have incongruous names. For example, there are screens called "C1ALS...." in the c1gcv folder. Is it okay if these snapshots go into a /c1als/snap folder, or do I need to figure out how to put them in the exact same folder they currently exist in? Or, perhaps, why aren't they just in a c1als folder to begin with? It seems like we just weren't careful when organizing these screens.
Another problem one is the C1_FE_STATUS.adl screen. Can I create a c1gds folder, and rename that screen to C1GDS_FE_STATUS.adl? Objections?
In the previous elog we compared the Matlab and Foton SOS representations using a low-order filter. Now we move on to high-order filters and see that Foton looks pretty bad there.
We consider a type I Chebyshev filter with 12 Hz cutoff frequency and 1 dB ripple. In the table below we summarize the GAINS obtained from Matlab and Foton for different digital filter orders.
We can see that for high orders the gains are completely different (by ~2 orders of magnitude!). Interestingly, despite the very bad gain, Foton calculates the SOS matrix quite well - I checked up to 5 digits and found full agreement. Only the gain is very bad.
The filter considered is cheby1("LowPass",6,1,12) and is part of the bad Cheby filter where we lose coherence and see some other strange things.
We investigated the discrepancy between the Matlab and Foton numbers some more. The comparison of cheby1(k, 1, 2*12/16384) was done between the versions implemented in Matlab, R and Octave. The filters created by R and Octave agree with Foton.
Also, we found that Matlab has gross precision errors for cutoff frequencies just smaller than the one used in our filter; for example cheby1(6, 2*3/16384) fails miserably.
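For an independent check of this kind of design, here is a sketch using scipy (not any of the implementations compared above):

    import numpy as np
    from scipy.signal import cheby1, sosfreqz

    # 6th-order, 1 dB ripple, type-I Chebyshev lowpass at 12 Hz, fs = 16384 Hz.
    # Tiny normalized cutoffs like this are exactly where design routines
    # tend to lose precision, and the gain is the easiest thing to compare.
    fs = 16384.0
    sos = cheby1(6, 1, 2 * 12 / fs, btype="low", output="sos")

    w, h = sosfreqz(sos, worN=[1.0], fs=fs)          # response at 1 Hz (passband)
    print(f"passband gain at 1 Hz: {abs(h[0]):.6f}")  # should sit within the 1 dB ripple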
It would be useful to see some plots so we could figure out exactly what magnitude and phase error correspond to "gross" and "miserable".
I'd like to re-measure the transfer function from driving MC2 position to the MC_L_DQ channel (for feedforward purposes). Swept sine would be one option, but I can't get the "Envelope" feature of DTT to work: the excitation amplitude isn't getting scaled as specified in the envelope, so I'm unable to make the measurement near 1 Hz (which is where the FF is effective). I see some scattered mentions of such an issue in past elogs but no mention of a fix (I also feel like I have gotten the envelope function to work for some other loop measurement templates). So then I thought I'd try broadband noise injection, since that seems to have been the approach followed in the past. Again, the noise injection needs to be shaped around ~1 Hz to avoid knocking the IMC out of lock, but I can't get Foton to do shaped noise injections because it doesn't inherit the sample rate when launched from inside DTT/awggui. This is not a new issue - does anyone know the fix?
Note that we are using the gds2.15 install of foton, but the pre-packaged foton that comes with the SL7 installation doesn't work either.
The envelope feature for swept sine wasn't working because I apparently specified the frequency grid in the wrong order. Eric von Reis has been notified to include a sorting algorithm in a future DTT so that this can be in arbitrary order. Fixing that allows me to run a swept sine with enveloped excitation amplitude and hence get the TF I want, but still no shaped noise injections via Foton 😢
Do you really mean awggui cannot make shaped noise injections via its Foton text box? That has always worked for me in the past.
If this is broken, I suspect there have been some package installs to the shared dirs by someone.
The problem is that foton does not inherit the model sample rate when launched from DTT/awggui. This is likely some shared/linked/dynamic library issue, the binaries we are running are precompiled presumably for some other OS. I've never gotten this to work since we changed to SL7 (but I did use it successfully in 2017 with the Ubuntu12 install).
While we were getting familiar with dtt and going over the basics of its operation, we discovered that the filter sample rates for the suspensions were still set to 2048 Hz, rather than the 16384 Hz of the new front ends. This caused the filters loaded into the front ends to not behave as expected.
After correcting the sample rate, the transfer functions obtained from dtt now look like the Bode plots from Foton.
We fixed the C1SUS.txt and C1MCS.txt files in the /opt/rtcds/caltech/c1/chans/ directory, by changing the SAMPLING lines to have 16384 rather than 2048.
I found the current bias output channels, C1:SUS-<OPTIC>_<DOF>BiasAdj, were all pointed at C1:SUS-<OPTIC>_ULBiasSet for every degree of freedom. This same issue appeared in all eight database files (one per optic), so it looks like a copy-and-paste error. I fixed them to all reference the correct degree of freedom.
X green beat note found!
1. Near-field and far-field alignment on the PSL table. The near-field alignment was checked by looking at the camera, and the far-field alignment was checked by letting the beams propagate after removing the DC PD.
2. Check laser temperature and get a sense of how the offset translates to the actual laser temperature.
3. Get an idea of the expected laser temperature using the plot in the elog.
PSL laser temperature = 31.45 deg C
X end laser temperature = 39.24 deg C
C1:ALS-X_SLOW_SERVO2_OFFSET = 4810
Amplitude of beat note = -40dBm
I do not understand why
1. The amplitude of the beat note falls linearly with frequency (peak traced using the 'hold' option of the spectrum analyzer).
2. I found the beat note at the RF output of the PD. Earlier, while I was trying to search for the beat note from the RFmon output of the beatbox, there was a strong peak at 29.6 MHz that existed even when the green shutters were closed. Its source has to be traced.
Solve beatbox puzzle and lock arm using ALS.
It had been unlocked since ~4:30 am. No idea why. It's relocked, so I can try round N of measuring the PRC length.
We found the beat at 1064nm. T(PSL)=26.59deg, T(X-end)=31.15deg.
The X-end laser was moved to the PSL table.
The beating setup was quickly constructed with mode matching based on beam profile measurements by the IR cards.
We used the 1GHz BW PD, Newfocus #1611-FS-AC.
As soon as we swept the Xtal temp of the X-end laser, we found the strong beating.
[Koji / Suresh]
We worked on the 1064 beating a bit more.
- First of all, the FSS and FSS SLOW servos were disabled so that the PSL Xtal temp would not vary.
- The PSL Xtal temp (T_PSL) was scanned from 22 deg to 45 deg while we searched for the Xtal temp of the X-end laser (T_Xend) that brought the beat frequency well down (f < 30 MHz).
The pumping current for each laser was I_PSL = 2.101 A and I_Xend = 2.000 A.
For a given T_PSL, we found multiple values of T_Xend, because the laser frequency is not a monotonic function of the Xtal temperature (see the Innolight manual).
The values of T_Xend that gave us the beat fell into the three sets shown in the figure. The set on "curve2" is the steadiest one. (Use this!)
The trends were quite linear, but the slope was slightly off from unity.
- T_PSL was scanned to see the trend of the PMC output.
The PMC was sometimes locked on a mode with lower transmission (V_PMCT ~ 3.0 V).
When T_PSL ~ 31 deg, we consistently locked the PMC on the higher-transmission mode (V_PMCT ~ 5.3 V).
For now we chose the operating point T_PSL = 32.25 deg, V_PMCT = 5.34 V, where we found the beat at T_Xend = 38.28 deg.
- We cleaned up the PSL table, leaving it tidier than it was, and returned the tools to their original places.
The X-end laser was shut down and was left on the PSL table.
Kiwamu can move the X-end laser to the Xend and realign it.
Then we should be able to see the green beating quite easily.
These days we have been continually annoyed by unELOGGED activities on the interferometer.
The MC2 LOCKIN was left on and has continuously injected frequency noise and beam pointing modulation during all of the commissioning / vent preparation.
C1:SUS-MC2_LOCKIN2_OSC_FREQ was 0.075
C1:SUS-MC2_LOCKIN2_OSC_CLKGAIN was 99
More than a week ago, we noticed that the curve on the MC WFS stripchart had suddenly become THICKER.
MC WFS, arm transmission, beam pointing... everything was modulated.
It was not WFS instability, and it was not the cavity mirrors.
Today I investigated and finally tracked down the cause of this issue to the MC2 suspension.
Then it was found that this LOCKIN was ON.
There is no direct record of this lockin in the frame files.
From the recorded channel "C1:IOO-WFS2-YAW_OUT16" (which is the trace on the StripTool chart on the wall), it can be seen that it was turned on on July 10 at 2:00 UTC (July 9, 7 PM PDT).
Yes, this was not ELOG'd by me, unfortunately. This was the MC tickler which I described to some people in the control room when I turned it on.
As Koji points out, with the MCL path turned off this injects frequency noise and pointing fluctuations into the MC. With the MCL path back on it would have very small effect. After the pumpdown we can turn it back on and have it disabled after lock is acquired. Unfortunately, our LOCKIN modules don't have a ramp available for the excitation and so this will produce some transients (or perhaps we can ezcastep it for now). Eventually, we will modify this CDS part so that we can ramp the sine wave.
I've written a new TICKLE script using the newly found 'cavget' and 'cavput' programs. They are in the standard epics distribution as extension binaries. They allow multichannel read/write as well as ramping, delays, incremental steps, etc. http://www.aps.anl.gov/epics/tech-talk/2012/msg01465.php.
Running from the command line, they seem to work fine, but I've left it OFF for now. I'll switch it into the MC autolocker at some point soon.
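For illustration, a minimal sketch of the ramping idea using pyepics (not the actual TICKLE script; the channel and target gain are the values quoted earlier in this thread, and the step timing is arbitrary):

    import time
    import numpy as np
    from epics import caput  # pyepics

    # Ramp the tickle amplitude in small steps (the "ezcastep it" idea)
    # rather than switching it on abruptly, to avoid transients.
    CHAN = "C1:SUS-MC2_LOCKIN2_OSC_CLKGAIN"
    TARGET, NSTEP, DT = 99.0, 50, 0.2   # ramp to 99 over ~10 s

    for g in np.linspace(0.0, TARGET, NSTEP + 1):
        caput(CHAN, g)
        time.sleep(DT)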
I found the PSL laser had been off for four hours. Nobody seemed to know why.
I just turned it on, and it is now providing about 10% less power than before the shutdown.
Let's keep an eye on the power to see if it recovers as the housing warms up.
The frame builder is down, and PRM has tripped its watchdogs. I have reset the watchdog on PRM and turned on the OPLEV; it has damped down. I'm unable to check what happened, since the FB is not responding.
There was a minor earthquake yesterday morning, which people could feel a few blocks away. It could have caused the PRM to trip.
Jamie, Rolf: is it okay for us to restart the FB?
If it's down, it's always OK to restart it. If it doesn't respond or immediately crashes again after a restart, then it might require some investigation, but it should always be OK to restart it.
I tried restarting the fb in two different ways. Neither of them re-established the connection to dtt or epics.
1) I restarted the fb from the control room console with the 'shutdown' command. No change.
2) I halted the machine with 'shutdown -h now' and restarted it with the hardware reset button on its front-panel. No change.
The console connected to the fb showed that the network file systems did not load. Could this have resulted in failure to start several services since it could not find the files which are stored on the network file system?
The fb is otherwise healthy since I am able to ssh into it and browse the directory structure.
The fb is okay. Rana found that it works on Pianosa, but not on Allegra or Rossa. It also works on Rosalba, on which Jamie recently installed Ubuntu.
The white fields on the medm 'Status' screen for fb are an unrelated problem.
Please be conscious of what components are doing what. The problem you were experiencing was not "frame builder down". It was "dtt not able to connect to frame builder". Those are potentially completely different things. If the front end status screens show that the frame builder is fine, then it's probably not the frame builder.
Also "epics" has nothing whatsoever to do with any of this. That's a completely different set of stuff, unrelated to DTT or the frame builder.
I think the daqd process isn't running on the frame builder.
I tried telnetting to fb's port 8087 (telnet fb 8087) and typing "shutdown", but so far that is hanging and hasn't returned a prompt in the last few minutes. Also, if I do a "ps -ef | grep daqd" in another terminal, it hangs.
I wasn't sure if this was an ntp problem (although that has been indicated in the past by 1 red block, not 2 red blocks and a white one), so I did "sudo /etc/init.d/ntp-client restart", but that didn't make any change. I also did an mxstream restart just in case, but that didn't help either.
I can ssh to the frame builder, but I can't do another telnet (the first one is still hung). I get an error "telnet: Unable to connect to remote host: Invalid argument"
Thoughts and suggestions are welcome!
CPU load seems extremely high. You need to reboot it, I think
controls@fb /proc 0$ cat loadavg
36.85 30.52 22.66 1/163 19295
This CPU load may have been me deleting some old frame files, to see if that would allow daqd to come back to life.
Daqd was segfaulting, and behaving in a manner similar to what is described here: (stack exchange link). However, I couldn't kill or revive daqd, so I rebooted the FB.
Things seem ok for now...
[Joe, Jamie, Alex]
I asked Alex which cron to use (dcron? frcron?). He promptly did the following:
rc-update add dcron default
Copied the wiper.pl script from LLO to /opt/rtcds/caltech/c1/target/fb/
At that point, I modified the wiper.pl script to reduce disk usage to 95% instead of 99.7%.
I added controls to the cron group on fb:
sudo gpasswd -a controls cron
I then added the wiper.pl to the crontab as the following line using crontab -e.
0 6 * * * /opt/rtcds/caltech/c1/target/fb/wiper.pl --delete &> /opt/rtcds/caltech/c1/target/fb/wiper.log
Note: placing backups on the /frames RAID array will break this script, because it compares the amount of data in /frames/full, /frames/trends/minutes, and /frames/trends/seconds to the total capacity.
Apparently, we had backups from September 27th, 2010 and March 22nd, 2011. These would have broken the script in any case.
We are currently removing these backups, as they are redundant data, and we have rsync'd backups of the frames and trends. We should now have approximately twice the lookback of full frames.
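For reference, a sketch of the check that wiper.pl performs as described above (not a port of the actual Perl script):

    import shutil
    import subprocess

    # Compare what the frame directories hold against the capacity of the
    # /frames array. Anything else parked on the array (e.g. backups) makes
    # the two numbers disagree and breaks the deletion logic.
    total, used, free = shutil.disk_usage("/frames")

    frames_bytes = 0
    for d in ("/frames/full", "/frames/trends/minutes", "/frames/trends/seconds"):
        frames_bytes += int(subprocess.check_output(["du", "-sb", d]).split()[0])

    print(f"frame data: {100 * frames_bytes / total:.1f}% of capacity; "
          f"filesystem reports {100 * used / total:.1f}% used")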
While Leo was trying to demo his LIGO Data Listener code, he noticed that there was an NDS2 issue. The NDS2 guy (JZ) noticed that the frame builder had an issue.
We investigated. At 4 PM on Dec 31, the GPS timestamps in the frame file names started to be recorded wrong: files began getting names matching the corresponding time one year in the past.
So that's our version of the Y2011 bug. Here's the 'ls' of /frames/full:
drwxr-xr-x 2 controls controls 252K Dec 26 03:59 9773
drwxr-xr-x 2 controls controls 260K Dec 27 07:46 9774
drwxr-xr-x 2 controls controls 256K Dec 28 11:33 9775
drwxr-xr-x 2 controls controls 252K Dec 29 15:19 9776
drwxr-xr-x 2 controls controls 244K Dec 30 19:06 9777
drwxr-xr-x 2 controls controls 188K Dec 31 16:00 9778
drwxr-xr-x 2 controls controls 148K Jan 1 08:53 9463
drwxr-xr-x 2 controls controls 260K Jan 2 12:39 9464
drwxr-xr-x 2 controls controls 252K Jan 3 16:26 9465
drwxr-xr-x 2 controls controls 248K Jan 4 20:13 9466
drwxr-xr-x 2 controls controls 36K Jan 5 00:22 9467
controls@fb /frames/full $
The culprit is the directory whose name starts with 9463, whereas it should be 9779.
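As a sanity check, the directory names are GPS seconds / 1e5, and the offset between the two series is one year's worth of seconds:

    # Directory names are GPS seconds / 1e5:
    print((9778 - 9463) * 1e5)   # 3.15e7 s between the good and bad series
    print(365.25 * 86400)        # ~3.156e7 s in a year -- consistent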
Email from Alex:
It turned out that the lack of current-year information in the IRIG-B signal received by the Symmetricom GPS card in the frame builder machine caused this. I have added a constant in daqdrc to bring the seconds forward:
controls@fb /opt/rtcds/caltech/c1/target/fb $ grep symm daqdrc
Hopefully we will be upgrading to the newer timing system at the 40M this
year, so this will not happen again next year.
Doing an 'ls -lrt' in /frames/full/ now shows that the names are correct:
drwxr-xr-x 2 controls controls 249856 Dec 30 19:06 9777
drwxr-xr-x 2 controls controls 192512 Dec 31 16:00 9778
drwxr-xr-x 2 controls controls 151552 Jan 1 08:53 9463
drwxr-xr-x 2 controls controls 266240 Jan 2 12:39 9464
drwxr-xr-x 2 controls controls 258048 Jan 3 16:26 9465
drwxr-xr-x 2 controls controls 253952 Jan 4 20:13 9466
drwxr-xr-x 2 controls controls 151552 Jan 5 13:54 9467
drwxr-xr-x 2 controls controls 12288 Jan 5 15:57 9783
Just as proof that the DAQ is working: DTT run on nodus, looking at data from 3 hours ago.
We looked into the /frames situation a bit tonight. Here is a summary:
Plan of action:
BTW - the last chiara (shared drive) backup was October 16, 6 am. dmesg showed a bunch of errors; Koji is now running fsck in a tmux session on chiara, so let's see if that repairs the errors. We missed the opportunity to swap in the 4TB backup disk, so we will do this at the next opportunity.
DTT stopped working for recent data. An 'ls' in the frames/full/ directory reveals the frame-naming bug again (note the jump from 9820 to 6663 below, an offset of roughly 10 years' worth of GPS seconds this time):
drwxr-xr-x 2 controls controls 258048 Feb 3 12:26 9807
drwxr-xr-x 2 controls controls 258048 Feb 4 16:13 9808
drwxr-xr-x 2 controls controls 262144 Feb 5 19:59 9809
drwxr-xr-x 2 controls controls 258048 Feb 6 23:46 9810
drwxr-xr-x 2 controls controls 258048 Feb 8 03:33 9811
drwxr-xr-x 2 controls controls 262144 Feb 9 07:19 9812
drwxr-xr-x 2 controls controls 253952 Feb 10 11:06 9813
drwxr-xr-x 2 controls controls 266240 Feb 11 14:53 9814
drwxr-xr-x 2 controls controls 266240 Feb 12 18:39 9815
drwxr-xr-x 2 controls controls 266240 Feb 13 22:26 9816
drwxr-xr-x 2 controls controls 262144 Feb 15 02:13 9817
drwxr-xr-x 2 controls controls 253952 Feb 16 05:59 9818
drwxr-xr-x 2 controls controls 241664 Feb 17 09:46 9819
drwxr-xr-x 2 controls controls 28672 Feb 17 12:22 9820
drwxr-xr-x 2 controls controls 32768 Feb 17 15:06 6663
drwxr-xr-x 2 controls controls 73728 Feb 17 23:39 6664
controls@fb /frames/full $ date
Thu Feb 17 23:39:27 PST 2011
There are at least 5 free DAC channels (4 if you discount the one channel from these that I am hijacking) available in the 1Y2 electronics rack.
Jamie's nice wiring diagram shows the topology - the actual DAC card sits in 1Y3 inside the c1lsc expansion chassis (while the c1lsc frontend itself is in 1X4). The output of the DAC goes via SCSI to an interface box (D080303) and then to some dewhitening/AI boards (D000316). There are a total of 16 DAC channels available, of which 8 are used for the TTs, 2 are used for the DAFI model, and one is labelled "From c1ioo 1X2" (I don't know what this one is for). So I'm going to use some of these channels for measuring the coupling of oscillator noise and intensity noise to MICH in the DRMI lock.
The de-whitening/AI board seems to be old - it has 2x 800Hz Butterworth LPFs and no notch for the clock frequency, but maybe this doesn't matter for the tests I have in mind. The AI board available on 1X2 is more modern but routing the DAC channels from 1Y2 to it is going to be some work.
I'm going to add my testpoint to c1daf given that it seems to be the least critical model on c1lsc.
EDIT: testpoints added to c1daf don't show up in the list of available channels - there was some issue with this model while we were getting the new RTCDS going. So I'm moving my temporary testpoint to c1cal instead.
It's an acquired taste, but it's a must since we're sending an interferometer to India.
Free swing of ITMY started at
Tue Sep 6 17:41:43 PDT 2011
I think Kiwamu accidentally restarted this kick at 17:48:02 PDT.
The free-swinging spectra of the ITMs, ETMs, BS, PRM and SRM were measured last night, in order to make sure that nothing went wrong during the wiping.
I think there is nothing wrong with the ITMs, ETMs, BS, PRM or SRM.
For comparison, Yoichi's figure in his elog entry of Aug. 7, 2008 is good, but in his figure the PRM spectrum somehow doesn't look correct.
Anyway, compared with his past data, there are no significant changes in the spectra. For the PRM, which has no counterpart to compare with, the shape of its spectrum looks similar to all the others, so I think the PRM is also OK. The measured spectra are attached below.