All fine, except ITMX_sensor_UL's 60-count-deep hop for an hour.
Steve and Jamie: After Jamie checks the ITM free-swing data, please put on the ITM heavy doors and start the pump! For real this time!!! Yeah!
I just started a freeswing all, as a final check before we pump:
Wed Sep 7 00:43:21 PDT 2011
Wed Sep 7 00:43:32 PDT 2011
WATCHDOGS WILL BE RESET 5 HOURS AFTER THIS TIME
sleeping for 5 hours...
Jamie: Please do a quickie analysis (at least for the ITMs) before helping Steve with the heavy doors.
I closed the PSL shutter.
Both ITM chambers were checked for tools, so there should be nothing left to do but put the heavy doors on, and begin pumping.
(What we did)
* Moved SUS to edge of table for OSEM adjustment.
* Leveled the table in this temporary tower position.
* Rotated all OSEMs to give some separation between magnets and LED/PD packages.
* Moved the upper OSEM bracket a little bit upward.
* All the OSEM holding set screws were short with flat heads; this is annoying since we would like to use them more like thumbscrews. Steve took the long set screws out of the old ITMX cage and we swapped them. Need to order ~100 silver-plated socket-head spares/replacements.
* Took pictures of OSEMs.
* Moved tower back to old position.
* Releveled the table (added one rectangular weight in the NW corner of the table).
* Found that the ITMX OSEMs were a couple hundred microns out of position; we adjusted them in situ, in the final position of the tower, trying not to rotate them. All mean voltages are now within 100 mV of ideal half-light.
* Back/front EQ stop positions adjusted by the screw method. Bottom/top stops were adjusted earlier.
* OSEM cables tied down with copper wire.
* Increased the incident power up to 91 mW going into MC to temporarily make the POX beam more visible.
* The POX beam was checked. It was exiting from the chamber and going through about the center of the viewport.
We did the following things in the ITMY chamber today:
1) We tried to get ITMY stuck again by adjusting the coil gains so that it goes into the orientation in which it used to get stuck. We (reassuringly) failed to get it stuck. As we learned later, this is because Kiwamu had rotated the side OSEM such that the optic does not get stuck. However, the OSEM beam is now at about 30 deg to the vertical, and the SD sensor is sensitive to POS motion, resulting in the poorer separation of modes noted by Jenne earlier (5439)
2) We checked the earthquake stops and repositioned two at the bottom (towards the AR side of the optic) which we had backed out earlier.
3) We took pics of all the OSEMS.
4) Checked to see if there are any stray beams with an IR card. There were none.
5) I obtained the max values of the OSEMs by misaligning the optic with the coil offsets. These values are in good agreement with those on the wiki.
OSEM      UL     UR     LR     LL     SD
Max      1.80   1.53   1.68   1.96   2.10
Current  0.97   0.79   0.83   0.97   1.02
We can close the heavy doors tomorrow morning.
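As a sanity check, the table above reduces to a quick half-light ratio calculation; a minimal Python sketch (the 40-60% acceptance window here is an arbitrary illustration, not an official tolerance):

```python
# OSEM half-light check: with the optic misaligned, each OSEM reads its
# open-light maximum; a well-centered magnet should sit near 50% of that.
MAX     = {"UL": 1.80, "UR": 1.53, "LR": 1.68, "LL": 1.96, "SD": 2.10}
CURRENT = {"UL": 0.97, "UR": 0.79, "LR": 0.83, "LL": 0.97, "SD": 1.02}

ratios = {k: CURRENT[k] / MAX[k] for k in MAX}
for k, r in ratios.items():
    print(f"{k}: {r:.2f}")               # ~0.50 means near half-light

all_centered = all(0.4 < r < 0.6 for r in ratios.values())
print(all_centered)
```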
I just tried to adjust the ETMY camera and it's not very user-friendly = NEEDS FIXING.
* Camera view is upside down.
* Camera lens is contacting the lexan viewport cover; this means the focus cannot be adjusted without misaligning the camera.
* There's no strain relief of the camera cables at the can. Needs a rubber cable grommet too.
* There's a BNC "T" in the cable line.
Probably similar issues with some of the other setups; they've had aluminum foil covers for too long. We'll have a camera committee meeting tomorrow to see how to proceed.
[Rana and Kiwamu on ITMX, Jenne and Suresh on ITMY, Zombie/brains meeting on accepting the matrices]
         pit      yaw      pos      side     butt
UL      0.584    0.641    1.396   -0.578    0.558
UR      0.755   -1.359    0.120   -0.286    0.262
LR     -1.245   -0.139    0.604   -0.388    0.511
LL     -1.416    1.861    1.880   -0.681   -2.669
SD     -0.753    0.492    3.263    1.000   -1.523

         pit      yaw      pos      side     butt
UL      1.000    0.572    1.134   -0.059    0.951
UR      0.578   -1.428    0.916   -0.032   -1.024
LR     -1.422   -0.531    0.866   -0.009    1.086
LL     -1.000    1.469    1.084   -0.036   -0.939
SD     -0.662    0.822    1.498    1.000    0.265
OSEMs were tweaked. We have decided that both ITMs are okay in terms of their diagonalization. ITMY isn't stellar when you look at the spectra, but it's kind of close enough. Certainly the matrix looks fine.
Aside from checking on POX, I think we're now ready to close up. Check back later tonight for a final decision announced on the elog.
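As a rough numerical companion to the matrices above, one can at least verify that the face-OSEM columns have the expected sign pattern; a minimal sketch (the ideal-sign convention assumed here is the usual UL/UR/LR/LL one and may differ from the actual screen convention):

```python
# Sign-pattern sanity check for an input matrix: the face OSEMs
# (UL, UR, LR, LL) should combine with these signs for each DOF.
# The ideal-sign convention below is an assumption, not from the elog.
IDEAL_SIGNS = {
    "pit": [+1, +1, -1, -1],
    "yaw": [+1, -1, -1, +1],
    "pos": [+1, +1, +1, +1],
}

def signs_ok(matrix):
    """matrix: dict dof -> list of 4 face-OSEM coefficients (UL, UR, LR, LL)."""
    return {dof: all((c > 0) == (s > 0)
                     for c, s in zip(matrix[dof], IDEAL_SIGNS[dof]))
            for dof in IDEAL_SIGNS}

# Second (post-tweak) matrix from the entry above:
itm = {"pit": [1.000, 0.578, -1.422, -1.000],
       "yaw": [0.572, -1.428, -0.531, 1.469],
       "pos": [1.134, 0.916, 0.866, 1.084]}
print(signs_ok(itm))   # all True if the sign structure is right
```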
I restarted the elog with the script as it was not up when I tried to make a post. It was again unresponsive when I went to submit, but this time the script couldn't restart it. The log said it couldn't bind to 8080, which usually happens if the daemon is still running. I pkilled it, then reran the script, and it appears to be working.
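For what it's worth, the "couldn't bind to 8080" symptom can be checked before restarting; a small sketch (a hypothetical helper, not part of the actual restart script):

```python
import socket

def port_free(port, host="127.0.0.1"):
    """True if we can bind the port, i.e. no stale daemon is holding it."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        try:
            s.bind((host, port))
            return True
        except OSError:
            return False

# If this prints False, pkill the old daemon before rerunning the script.
print(port_free(8080))
```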
Free swing of ITMY started at
Tue Sep 6 17:41:43 PDT 2011
I think Kiwamu accidentally restarted this kick at 17:48:02 PDT.
Tue Sep 6 17:48:02 PDT 2011
Kiwamu excited ITMY (which Suresh had already started). I just kicked ITMX:
Tue Sep 6 17:55:21 PDT 2011
These are the things that I can think of that we need to do before we can close up:
* Take a close look at both ITMs' OSEMs, and ensure that the magnets aren't too close to either plate in the OSEMs. Both have had funny business over the past week.
* Do a free swinging test on both ITMs. (ITMY may not need it, if we haven't touched it since the last free swinging test, but it can't hurt to take the data)
* Confirm that POX is exiting the chamber.
* Is there anything else???
Our goal is to finish this work by tonight, so that we can close doors and start pumping tomorrow.
[Jenne, Katrin, Jamie]
I'm a bad kid, and forgot to elog my Friday morning work...
Bob gave me back ITMX after a 48-hour bake at 80 C plus a clean RGA scan, Friday morning after coffee and doughnuts. Katrin helped me re-hang it in the suspension wire.
While I was leveling the optic (making sure the scribe lines on each side of the optic are at the same height off the table), Katrin cut some new viton for replacement EQ stops. The optic was missing one lower earthquake stop (the one that Jamie noticed last week), and somehow one other rubber piece came out of the EQ stop on another lower screw while we were re-suspending the optic. We put the new stops in, and then checked the balance of the test mass.
The oplev is still the HeNe laser that is leveled relative to the leveled optical table in the cleanroom. The lever arm is ~1.5 meters, and over that distance the reflected beam pointed "up" in pitch by ~1.5 mm, which is less than one beam diameter of the HeNe. This is well within our ability to correct using the OSEMs.
We then locked the test mass, and installed it in the chamber. I approximately did the half-voltage centering of the OSEMs, leaving the fine-tuning to Kiwamu for after lunch.
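For reference, the pointing implied by the ~1.5 mm offset over the ~1.5 m lever arm quoted above works out to about half a milliradian; a quick sketch:

```python
# Oplev pointing estimate: reflected-beam displacement over the lever arm.
# Reflection doubles the optic's tilt angle, hence the factor of 2.
arm = 1.5          # m, oplev lever arm
offset = 1.5e-3    # m, "up" displacement of the reflected beam in pitch
tilt = offset / (2 * arm)
print(f"optic pitch offset ~ {tilt * 1e3:.2f} mrad")   # ~0.50 mrad
```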
The new ITMX was aligned by changing the DC biases.
The resultant DC biases are reasonably small.
C1:SUS-ITMX_PIT_COMM = -0.2909
C1:SUS-ITMX_YAW_COMM = -0.0617
The alignment was done by trying to resonate the green light in the X arm cavity.
The spot position of the green light on the ITMX mirror looked good. This was confirmed by inserting a sensor card.
I did the OSEM mid-range adjustment and the rotation adjustment, but in the end the OSEM DC voltages changed due to the DC bias operation.
The OSEM rotation was approximately optimized so that all the face shadow sensors are sensitive to the POS motion but the SIDE shadow sensor is insensitive to the POS motion.
It needs a free swinging diagnosis.
ITMX OSEMs: UL 1.8 V, UR 1.7 V, LR 0 V, LL 0 V, SD 1.3 V at the same bias setting shown above. Maybe a loose earthquake-stop tip? Or is a magnet touching?
Restarted elog 9:11PM 9/5/11
The reflected RF power going back to the RF generation box will be :
Power at 11MHz = 2 dBm
Power at 29.5 MHz = 3 dBm
Power at 55 MHz = 9dBm
Assuming the input power at 11 and 55 MHz are at 27 dBm (40m wiki page). And 15 dBm for 29.5 MHz.
Since there is an RF combiner in between the generation box and the resonant box, it reduces the reflections by an additional factor of 10 dB (#4517)
In the estimation above, the reduction due to the RF combiner was taken into account.
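The estimate above is just bookkeeping in dB; a minimal sketch reproducing the three quoted numbers (input powers and the -10 dB combiner figure as quoted above):

```python
# Reflected power at the generation box = input power (dBm)
# + reflection coefficient of the resonant box (dB)
# + reduction from the RF combiner (dB, per #4517).
inputs_dBm  = {"11 MHz": 27, "29.5 MHz": 15, "55 MHz": 27}
gamma_dB    = {"11 MHz": -15, "29.5 MHz": -2, "55 MHz": -8}
combiner_dB = -10

reflected = {f: inputs_dBm[f] + gamma_dB[f] + combiner_dB for f in inputs_dBm}
print(reflected)   # {'11 MHz': 2, '29.5 MHz': 3, '55 MHz': 9}
```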
Besides the reflection issue, the circuit meets a rough requirement of 200 mrad at 11 and 55 MHz.
For the 29.5 MHz modulation, the depth will be reduced approximately by a factor of 2, which I don't think is a significant issue.
So the modulation depths should be okay.
Assuming the performance of the resonant circuit remains the same (#2586), the modulation depths will be :
Mod. depth at 11 MHz = 280 mrad
Mod. depth at 29.5 MHz = 4 mrad (This is about half of the current modulation depth)
Mod. depth at 55 MHz = 250 mrad
What are the reflected RF powers for those frequencies?
Is the 29.5 MHz more of a problem than the 55 MHz, considering the required modulation depth?
It got stuck again. We should take a closer look at it.
The ITMY mirror was released. The OSEM readouts became healthy.
The new ITMX was aligned by changing the DC biases.
What are the reflected RF powers for those frequencies?
Is the 29.5 MHz more of a problem than the 55 MHz, considering the required modulation depth?
The triple resonant box was checked again. Each resonant frequency was tuned and the box is ready to go.
Before the actual installation I want to hear opinions about RF reflections because the RF reflection at 29 MHz isn't negligible.
It might be a problem since the reflection will go back to the RF generation box and would damage the amplifiers.
(Frequency adjustment and resultant reflection coefficient)
In order to tune the resonant frequencies the RF reflection was continuously monitored while the variable inductors were tweaked.
The plot below shows the reflection coefficient of the box after the frequency adjustment.
In the upper plot, where the amplitude of the reflection coefficient of the box is plotted, there are three notches at 11, 29.5 and 55 MHz.
A notch means that RF power applied to the resonant box is successfully absorbed, and consequently the EOM obtains some voltage at that frequency.
These power absorptions take place at the resonant frequencies, as designed.
A nice feature of monitoring the reflection coefficient is that one can easily tune the resonant frequencies by looking at the positions of the notches.
Note that :
If the amplitude is 0 dB (= 1), it means all of the signal is reflected.
If the circuit under test is impedance-matched to 50 Ohm, the amplitude will ideally be zero (= -infinity dB).
at 11 MHz = -15 dB (3% of RF power is reflected)
at 29.5 MHz = -2 dB (63% of RF power is reflected)
at 55 MHz = -8 dB (~16% of RF power is reflected)
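Those percentages follow directly from the dB values; a small conversion sketch:

```python
# Convert a power reflection coefficient in dB to the fraction of
# RF power reflected back toward the source.
def refl_fraction(gamma_dB):
    return 10 ** (gamma_dB / 10)

for f, g in [("11 MHz", -15), ("29.5 MHz", -2), ("55 MHz", -8)]:
    print(f"{f}: {100 * refl_fraction(g):.0f}% of power reflected")
```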
Length tolerance of the vertex part is about 5 mm.
Sorry for the belated update on this topic. In my last post, I reported that the length tolerance of the vertex IFO would be 2 mm, based on Kiwamu's code on CVS. Then we noticed that the MICH degree of freedom was wrong in the code. I modified the code and ran it again. You can find the modified code on CVS (40m folder, analyzeDRMITolerance3f.m and DRMITolerance.m)
In this code, the arm lengths were kept ideal while random Gaussian length offsets were added to the PRCL, SRCL and MICH lengths. There were 1000 iterations for each sigma of the Gaussian distribution. The resulting sensing matrix is shown as a histogram. A histogram of the demodulation-phase separation between MICH and SRCL is also plotted, since these two length degrees of freedom will be obtained from one channel, separated by demodulation phase. We check this separation to make sure that the random length offsets do not bring these two signals too close together.
The result is a bit different from the previous post, in a good way! The length tolerance is about 5 mm for the vertex IFO. Fig. 1 shows the sensing matrix. Although the signal levels are changed by the random offsets, only a few change by orders of magnitude in any degree of freedom. Fig. 2 shows that the signal separation between MICH and SRCL at POP55 varies from 55 to 120 degrees, which may be OK. With a 1 cm sigma, it varies from 50 to 150 degrees.
Fig. 1: Histogram of the sensing matrix including 3f channels, when sigma is 5 mm. Please note that the x-axis is in log10.
Fig. 2: Histogram of the demodulation phase difference between MICH and SRCL, when sigma is 5 mm. To obtain the two signals independently, 90 degrees is ideal. With the random offsets, the demodulation phase difference varies from 55 to 120 degrees.
My next step is to run a similar analysis for LLO.
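The Monte Carlo described above can be skeletonized as follows; this shows only the random-offset bookkeeping (the sensing-matrix computation itself lives in analyzeDRMITolerance3f.m / DRMITolerance.m and is left as a placeholder comment):

```python
import numpy as np

# Skeleton of the tolerance Monte Carlo: draw Gaussian length offsets
# for PRCL, SRCL and MICH, 1000 trials per sigma, then histogram a
# derived quantity (sensing matrix, demod-phase separation, ...).
rng = np.random.default_rng(0)
sigma = 5e-3          # 5 mm, the reported vertex tolerance
n_trials = 1000

offsets = rng.normal(0.0, sigma, size=(n_trials, 3))   # PRCL, SRCL, MICH
# for row in offsets:
#     sensing = compute_sensing_matrix(*row)   # placeholder for the CVS code

print(offsets.std(axis=0))    # each column should come out near 5e-3
```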
Suresh, Kiwamu and Steve
Heavy chamber doors replaced by light ones at ITMX-west and ITMY-north locations.
Atm1, ITMY and the SRM are on the same isolation stack. So why does the SRM move twice as much?
Atm2, We should check the ITMY SIDE_OSEM before pump down. Anatomically correct, beautiful picture taken by Kiwamu on August 22
The RGA back ground at day 29 of this vent.
[Kiwamu, Manuel, Jenne]
The new optics storage drawers have been populated with optics. Each drawer is labelled. Harsh punishments will be inflicted on anyone found disobeying the new scheme.
The pictures that we took are now on the Picasa web site. Check it out.
Also, we took photos (to be posted on Picasa in a day or two) of all the main IFO magnet-in-OSEM centering, as best we could. SRM, BS, PRM all caused trouble, due to their tight optical layouts. We got what we could.
Uniblitz mechanical shutter installed in the green beam path at ETMY-ISCT The remote control cable has not been connected.
I reverted the C1IOO model to the last working version and restarted the fb at this time: Tue Aug 30 17:28:38 PDT 2011
To see what is going on, I changed the PIT DC bias slider on ITMY from 0.8 to -1 or so, and then the optic started showing a free swinging behavior.
If there had been no response to the DC bias, I was going to ask people to open the chamber and look at it more closely, but fortunately this released the optic.
Then I brought the slider back to 0.8, and it still looked to be swinging freely. Possibly the optic had been stuck on some of the OSEMs, as Jamie expected.
ITMY, which is supposed to be fully free-swinging at the moment, is displaying the tell-tale signs of being stuck to one of its OSEMs.
Do we have a procedure for remotely getting it unstuck? If not, we need to open up ITMYC and unstick it before we pump.
All the front-ends are now running. Many of them came back on their own after the testpoint.par was fixed and the framebuilder was restarted. Those that didn't just needed to be restarted manually.
The c1ioo model is currently in a broken state: it won't compile. I assume that this was what Suresh was working on when the framebuilder crash happened. This model needs to be fixed.
The testpoint.par file, located at /opt/rtcds/caltech/c1/target/gds/param/testpoint.par, which tells GDS processes where to find the various awgtpman processes, was completely empty. The file was there but was just 0 bytes. Apparently the awgtpman processes themselves also consult this file when starting, which means that none of the awgtpman processes would start.
This file is manipulated in the "install-daq-%" target in the RCG Makefile, ultimately being written with output from the src/epics/util/updateTestpointPar.pl script, which creates a stanza for each front-end model. Rebuilding and installing all of the models properly regenerated this file.
I have no idea what would cause this file to get truncated, but apparently this is not the first time: elog #3999. I'm submitting a bug report with CDS.
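Until the root cause is found, a trivial guard could catch the truncated file early; a shell sketch (check only; the recovery is the rebuild/install described above):

```shell
# Warn if testpoint.par is empty or missing -- the symptom seen above.
check_tp() {
    if [ ! -s "$1" ]; then
        echo "WARNING: $1 is empty or missing; rebuild and install the models to regenerate it"
    fi
}

check_tp /opt/rtcds/caltech/c1/target/gds/param/testpoint.par
```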
The fsck on the framebuilder (fb) raid array (/dev/sda1) completed overnight without issue. I rebooted the framebuilder and it came up without problem.
I'm now working on getting all of the front-end computers and models restarted and talking to the framebuilder now.
I have restored the damping of BS and PRM. Today is janitor day. He is shaking things around the lab.
1) To see if there are significant dark-offsets on the WFS sensors we closed the PSL shutter and found that the offsets are in the 1% range. We decided to ignore them for now.
2) To center the MC_REFL beam on the WFS we opened the PSL shutter, unlocked the MC and then centered the DC_PIT and DC_YAW signals in the C1IOO_WFS_QPD screen.
3) We then looked at the power spectrum of the I and Q signals from WFS1 to see if the spectrum looked okay, and found that some of the quadrants looked very different from others. The reason was traced to incorrect Comb60 filters. After correcting these filters we adjusted the R phase angle in the WFS1_SETTINGS screen to suppress the 1 Hz natural oscillation signal in the Q channels of all four quadrants. We repeated this process for WFS2.
4) To see if the relative phase of all four quadrants was correct we first drove the MC_length and tried to check the phase of the response on each quadrant. However the response was very weak as the signal was suppressed by the MC servo. Increasing the drive made the PMC lock unstable. So we introduced a 6Hz, 50mVpp signal from an SR785 into the MC_servo (Input2) and with this we were able to excite a significant response in the WFS without affecting the PMC servo. By looking at the time series of the signals from the quadrants we set the R phase angle in WFS_Settings such that all the quadrants showed the same phase response to the MC_length modulation.
Using the larger response we were able to further tweak the R angle to suppress the Q channels to about 1% of the I-phase signals.
5) I then edited c1ioo.mdl so that we can use the six lockins just as they are used in MC_ASS. However, we can now set elements of the SEN_DMD_MATRX (sensor demod matrix) to select any of the MCL, WFS PIT and YAW channels (or a linear combination of them) for demodulation. The change is shown below. While compiling the model on the C1IOO FE machine there were problems which eventually led to the FB crash.
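The R-phase tuning in step 4 amounts to rotating the (I, Q) pair until the drive response lands entirely in I; a minimal sketch with a made-up electronics phase:

```python
import numpy as np

# Demod-phase tuning sketch: given a known excitation (6 Hz, as in the
# entry above), estimate the phase R and rotate (I, Q) so the response
# appears only in I, nulling Q.
t = np.linspace(0, 1, 1000, endpoint=False)
drive = np.sin(2 * np.pi * 6 * t)       # 6 Hz excitation
phi = np.deg2rad(37.0)                   # unknown electronics phase (made up)
I, Q = drive * np.cos(phi), drive * np.sin(phi)

R = np.arctan2(np.sum(Q * drive), np.sum(I * drive))   # estimate the phase
I2 = I * np.cos(R) + Q * np.sin(R)
Q2 = -I * np.sin(R) + Q * np.cos(R)
print(np.abs(Q2).max())    # ~0 after the rotation
```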
ITMY, which is supposed to be fully free-swinging at the moment, is displaying the tell-tale signs of being stuck to one of its OSEMs. This is indicated by the PDMon values, one of which is zero while the others are at max:
fb is now up and running, although the /frames raid is still undergoing an fsck which is likely to take another day. Consequently there is no daqd and no frames are being written to disk. fb is running and providing the diskless root to the rest of the front-end systems, so the rest of the IFO should be operational.
I burt restored the following (which I believe is everything that was rebooted), from Saturday night:
I edited the C1SUS_SUMMARY.adl file and set the channels in alarm mode to show the values in green, yellow and red according to the values of the thresholds (LOLO, LOW, HIGH, HIHI)
I wrote a script in python, which call the command ezcawrite and ezcaread, to change the thresholds one by one.
You can call this program with a button named "Change Thresholds one by one" in the drop-down menu that appears when you click the ! button.
I'm going to write another program to change the thresholds all together.
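The one-by-one threshold writer can be sketched as follows; this just builds the ezcawrite command strings (the channel name and values are illustrative, and the commands are printed rather than executed):

```python
# Sketch of the threshold-setting helper: one ezcawrite per alarm field
# (LOLO, LOW, HIGH, HIHI) of each channel.  The channel and values below
# are made-up examples.
FIELDS = ("LOLO", "LOW", "HIGH", "HIHI")

def threshold_cmds(channel, values):
    """Return one 'ezcawrite <chan>.<FIELD> <value>' command per field."""
    return [f"ezcawrite {channel}.{f} {v}" for f, v in zip(FIELDS, values)]

cmds = threshold_cmds("C1:SUS-ITMX_ULPD_VAR", (0.1, 0.2, 1.8, 2.0))
for c in cmds:
    print(c)    # pass each to subprocess.call / os.system to actually write
```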
fb was requiring a manual fsck on its disks because it was sensing filesystem errors. The errors had to do with filesystem timestamps being in the future. It turned out that fb's system date was set to something in 2005. I'm not sure what caused the date to be so far off (motherboard battery problem?), but I did determine, after I got the system booting, that the NTP client on fb was misconfigured and was therefore incapable of setting the system date. It seems that it was configured to query a non-existent NTP server. Why the hell it would have been set like this I have no idea.
In any event, I did a manual check on /dev/sdb1, which is the root disk, and postponed a check on /dev/sda1 (the RAID mounted at /frames) until I had the system booting. /dev/sda1 is being checked now, since there are filesystems errors that need to be corrected, but it will probably take a couple of hours to complete. Once the filesystems are clean I'll reboot fb and try to get everything up and running again.
Fb is in a bad situation. It needs a MANUAL fsck to fix the file system.
HELP US, Jamieeeeeeeeeeee !!!
When Suresh and I connected a display and tried to see what was going on, the fb computer was in a file system check.
This was because Suresh did a hardware reboot by pressing a power button on the front panel.
Since the file check was taking such a long time and didn't seem to be making progress, we pressed the reset button and then the power button again.
Actually the reset button didn't seem to work (maybe?); it just made some indicator lights flash.
After the second reboot, the boot message said that it needs a manual fsck to fix the file system. This may be because we interrupted the file check.
We are leaving it to Jamie, because the fsck command could do something bad if run by people unfamiliar with it, like us.
In addition, the boot message was saying that line 37 in /etc/fstab was bad.
We logged into the machine in safe mode, then found there was an empty line at line 37 of fstab.
We tried erasing this empty line, but failed for some reason: we were able to edit it with vi, but weren't able to save it (presumably because the filesystem was mounted read-only).
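For the record, the blank line could be dropped with sed once the filesystem is writable; a hedged sketch operating on a copy:

```shell
# Drop blank lines from fstab (shown here on a copy; on the real machine
# you would first 'mount -o remount,rw /' from the rescue shell).
cp /etc/fstab /tmp/fstab.fixed
sed -i '/^[[:space:]]*$/d' /tmp/fstab.fixed
# diff /etc/fstab /tmp/fstab.fixed    # review, then copy back into place
```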
I recompiled c1ioo after making some changes and restarted fb. (about 9:45 - 10PM PDT) But it failed to restart. It responds to ping, but does not allow a ssh or telnet. The screen output is:
ssh: connect to host fb port 22: Connection refused
allegra:~>telnet fb 8087
telnet: connect to address 192.168.113.202: Connection refused
telnet: Unable to connect to remote host: Connection refused
I am not able to connect to c1ioo either....
Tomorrow I will come in and glue the magnet dumbbell assembly to the ITM.
Tomorrow afternoon I'll remove the optic from the fixture, and put it in the oven.
We wanted to continue the work with WFS servo loops. As the current optical paths on the AP table do not send any light to the WFS, I changed a mirror to a 98% window and a window to a mirror to send about 0.25mW of light towards the WFS. The MC locking is unaffected by this change. The autolocker works fine.
When the power to the MC is increased, these will have to be replaced or else the WFS will burn.
In the previous elog of mine, I looked at the nullstream (aka butterfly mode) to find out if the intrinsic OSEM noise is limiting the displacement noise of the interferometer or possibly the Wiener FF performance.
The conclusion was that it's not, above ~0.2 Hz. Due to the fortuitous breaking of the ITMX magnet, we also have a chance to check the 'bright noise': what the noise is with no magnet to occlude the LED beam.
As expected, the noise spectra with no magnets are less than the calculated nullstream. The attached plot shows the comparison of the LL OSEM (all the bright spectra look basically alike) with the damped optic spectra from one month ago.
From 0.1 - 10 Hz, the motion is cleanly larger than the noise. Below ~0.2 Hz, it's possible that the common-mode rejection of the short cavity lengths is ruined by this. We should try to see if the low-frequency noise in the PRC/SRC is explainable with our current knowledge of seismicity and the 2-dimensional two-point correlation functions of the ground.
As I feared, since I couldn't see the magnet-to-dumbbell joint from all angles, they ended up being off by ~1/3 of a magnet diameter.
Because I don't want to deal with finding another failed glue joint tomorrow, I removed the magnet and dumbbell from the optic, and broke the magnet off of the dumbbell. As with yesterday, I kept track of which end of the magnet had been glued to the dumbbell.
I got a new dumbbell, removed all the glue from the magnet, and reglued them together, in the fixture that ensures they are well aligned.
Jamie and Suresh moved in Jenne's 11-drawer cabinet and relocated old notebook boxes to the inside of the vac tube.
Barring other chores for next Wednesday, we're going to spend Wednesday afternoon populating the new cabinet with all of the optics hardware: posts, forks, dogs, everything! It's going to be so organized and awesome!!
The horizontal trolley drive stopped working at the east end this morning. It is working intermittently. In the worst case we can take the door off with the manual Genie lift.
I'm working with Konecrane to solve the wormgear drive problem.
New gear box installed and tested by Fred of KoneCranes.
Dmass just reminded me that the usual procedure is to bake the optics after the last gluing, before putting them into the chambers. Does anyone have opinions on this?
On the one hand, it's probably safer to do a vacuum bake, just to be sure. On the other hand, even if we could use one of the ovens immediately, it's a 48 hour bake, plus cool down time. But they're working on aLIGO cables, and might not have an oven for us for a while. Thoughts?
Follow full procedure for full strength, minimum risk