excited all the optics. (with ITMY WTF OFF)
Tue Aug 23 11:52:52 PDT 2011
Before lunch we took a closer look at two of the suspensions that were most problematic: ITMY and ETMY. Over lunch we took new free swinging data. Results below:
pit yaw pos side butt
UL 0.157 1.311 1.213 -0.090 0.956
UR 1.749 -0.490 0.886 -0.038 -1.042
LR -0.251 -2.000 0.787 -0.007 1.066
LL -1.843 -0.199 1.114 -0.059 -0.936
SD -0.973 -0.205 1.428 1.000 0.239
pit yaw pos side butt
UL -0.138 1.224 1.463 -0.086 0.944
UR 0.867 -0.776 1.501 -0.072 -1.051
LR -0.995 -0.896 0.537 -0.045 0.754
LL -2.000 1.104 0.499 -0.059 -1.251
SD 0.011 0.220 1.917 1.000 0.224
[Steve, Bob, Jamie, Kiwamu, Valera, Jenne]
The access connector is now in place, in preparation for pump-down. Tomorrow (hopefully) we will do all the other doors.
Tue Aug 23 17:20:45 PDT 2011
In preparation for tomorrow's drag wiping and door closing, I have clamped ITMX, ITMY, and ETMX with their earthquake stops and moved the suspension cages to the door-edge of their respective tables. They will remain clamped through drag wiping.
ETMY was left free-swinging, so we will clamp and move it directly prior to drag wiping tomorrow morning.
We noticed that we used the wrong code for the MICH degree of freedom in both of the ELOG entries on this topic (cavity length tolerance search). It will be corrected and reposted soon.
By looking at a longer data stretch for the SRM (6 hours instead of just one), we were able to get enough extra resolution to make fits to the very close POS and SIDE peaks. This allowed us to do the matrix inversion. The result is that SRM looks pretty good, and agrees with what was measured previously:
pit yaw pos side butt
UL 0.869 0.975 1.140 -0.253 1.085
UR 1.028 -1.025 1.083 -0.128 -1.063
LR -0.972 -0.993 0.860 -0.080 0.834
LL -1.131 1.007 0.917 -0.205 -1.018
SD 0.106 0.064 3.188 1.000 -0.011
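The extra resolution from the longer stretch is just the Fourier frequency resolution, df = 1/T; a quick check of the numbers:

```python
# Frequency resolution of a Fourier transform is df = 1/T, so a longer
# free-swing record separates nearby resonance peaks (here, POS and SIDE).

def freq_resolution_hz(duration_s):
    """Smallest resolvable frequency spacing for a record of given length."""
    return 1.0 / duration_s

one_hour = freq_resolution_hz(3600)        # ~2.8e-4 Hz
six_hours = freq_resolution_hz(6 * 3600)   # ~4.6e-5 Hz, six times finer
```

Two peaks closer than ~1/3600 Hz blend together in a one-hour record but are cleanly resolved with six hours of data.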
We ran one more free swing test on ETMY last night, after the last bit of tweaking on the SIDE OSEM. It now looks pretty good:
pit yaw pos side butt
UL -0.323 1.274 1.459 -0.019 0.932
UR 1.013 -0.726 1.410 -0.050 -1.099
LR -0.664 -1.353 0.541 -0.036 0.750
LL -2.000 0.647 0.590 -0.004 -1.219
SD 0.021 -0.035 1.174 1.000 0.137
So I declare: WE'RE NOW READY TO CLOSE UP.
We've closed up ETMX:
ITMX was drag wiped, and the suspension was put back into place. However, after removing all of the earthquake stops we found that the suspension was hanging in a very strange way.
The optic appears to be heavily pitched forward in the suspension. All of the rear face magnets are high in their OSEMs, while the SIDE OSEM appears fine. When first inspected, some of the magnets appeared to be stuck to their top OSEM plates, which was definitely causing the severe forward pitch. After gently touching the top of the optic I could get the magnets to sit in a more reasonable position in the OSEMs. However, they still seem to be sitting a little high. All of the PDMon values are also too low:
Taking a free swing measurement now.
ETMX, ETMY: 998248032
I broke the UL magnet on ITMX
The ITMX tower was moved into Bob's clean room to put the magnet back on.
Since we found that all the magnets were relatively high (#5296) in the shadow sensors, we decided to slide the OSEM holder bar upward.
During the work, I didn't back the OSEMs far enough away from the magnets, so the magnets and OSEMs touched as I moved the holder.
The UL magnet was broken off and fell into the UL coil.
Repair work is delayed. I need the "pickle pickers" that hold the magnet+dumbbell in the gluing fixture, for gluing them to the optic. Here at the 40m we have a full set of SOS gluing supplies, except for pickle pickers. We had borrowed Betsy's from Hanford for about a year, but a few months ago I returned all of the supplies we had borrowed. Betsy said she would find them in her lab, and overnight them to us. Since the problem occurred so late in the day, they won't get shipped until tomorrow (Thursday), and won't arrive until Friday.
I also can't find our magnet-to-dumbbell gluing fixture, so I asked her to send us one of those as well.
I have 2 options for fixing ITMX. I'll write down the pros and cons for each, and we can make a decision over the next ~36 hours.
(#1) Remove dumbbell from optic. Reglue magnet to dumbbell. Reglue magnet+dumbbell to optic.
(#2) Carefully clean dumbbell and magnet, without breaking dumbbell off of optic. Glue magnet to dumbbell.
Pros:
(#1) Guarantees that magnet and dumbbell are axially aligned.
(#2) Takes only 1 day of glue curing time.
Cons:
(#1) Takes 2 days of glue curing time (one for magnet to dumbbell, one for the set to the optic).
(#2) Could have a slight mismatch in the axes of dumbbell and magnet. Could accidentally drop a bit of acetone onto the dumbbell-to-optic glue, which forces us into option 1, since this might destroy the integrity of the glue joint (this would take only the 2 days already required for option 1; it wouldn't force us to take 2+1=3 days).
jamie, jenne, kiwamu, suresh, steve
ETMY and ITMY were treated the same way as ETMX. The BS chamber was also closed with its heavy vacuum door yesterday. The IOO access connector's inner jam nuts are torqued to 45 ft-lbs, as are all vacuum door bolts.
The vacuum envelope is ready for pumpdown, except for the ITMX chamber, which still has its light atmospheric door cover.
Jenne will summarize the condition of dust on the TMs before and after the drag wipes.
As we have seen in the past, both of the ITMs were more dusty than the ETMs, presumably because we have the vertex open much more often than the ends. Kiwamu and I wiped all of the optics until we could no longer see any dust particles within a ~1.5 inch diameter area around the center.
Since we have ITMX out for magnet gluing, I'll probably drag wipe both front and back surfaces before putting it back in the suspension cage. All of the optics have clear dust on the AR surfaces, but we can't get to that surface while the optics are suspended. For the ETMs this isn't too big of a deal, but it does concern me a bit for the ITMs and other transmissive optics we have. I don't think it's bad enough yet though to warrant removing optics from suspensions just to wipe them.
Dmass just reminded me that the usual procedure is to bake the optics after the last gluing, before putting them into the chambers. Does anyone have opinions on this?
On the one hand, it's probably safer to do a vacuum bake, just to be sure. On the other hand, even if we could use one of the ovens immediately, it's a 48 hour bake, plus cool down time. But they're working on aLIGO cables, and might not have an oven for us for a while. Thoughts?
Jamie is modeling our next-generation in-vac & clean room bunny suit, which Jenne and I have already tested.
It is quite bearable with our traditional cleanroom beret-bouffant cap. Please use these in the future.
This will help avoid further degradation of the somewhat dusty 40m vacuum envelope.
It is the required dress code to enter the clean assembly room in the 40m.
We have small, med, large and x-large in stock. I'm getting larger sizes.
It will not allow certain people to climb inside the vacuum chamber in dirty pants.
elog died b/c someone somewhere did something which may or may not have been innocuous. I ran the script in /cvs/cds/caltech/elog to restart the elog (thrice).
I have now banned Warren from clicking on the elog from home
I think we should follow the established procedure in full, even though it will cost us a few more days. I don't think we should consider the vacuum bake as "optional". If the glue has any volatile components, they could be deposited on the optic, resulting in a change in the coating and consequently optical loss in the arm cavity.
Follow full procedure for full strength, minimum risk
The horizontal trolley drive stopped working at the east end this morning. It is working intermittently. In the worst case we can take the door off with the manual Genie lift.
I'm working with Konecrane to solve the wormgear drive problem.
New gear box installed and tested by Fred of KoneCranes.
Jamie and Suresh moved in Jenne's 11-drawer cabinet and relocated the old notebook boxes alongside the vac tube.
Barring other chores for next Wednesday, we're going to spend Wednesday afternoon populating the new cabinet with all of the optics hardware: posts, forks, dogs, everything! It's going to be so organized and awesome!!
As I feared, since I couldn't see the magnet-to-dumbbell joint from all angles, they ended up being off by ~1/3 of a magnet diameter.
Because I don't want to deal with finding another failed glue joint tomorrow, I removed the magnet and dumbbell from the optic, and broke the magnet off of the dumbbell. As with yesterday, I kept track of which end of the magnet had been glued to the dumbbell.
I got a new dumbbell, removed all the glue from the magnet, and reglued them together, in the fixture that ensures they are well aligned.
Tomorrow I will come in and glue the magnet dumbbell assembly to the ITM.
In the previous elog of mine, I looked at the nullstream (aka butterfly mode) to find out if the intrinsic OSEM noise is limiting the displacement noise of the interferometer or possibly the Wiener FF performance.
The conclusion was that it's not, above ~0.2 Hz. Due to the fortuitous breaking of the ITMX magnet, we also have a chance to check the 'bright noise': the noise with no magnet occluding the LED beam.
As expected, the noise spectra with no magnets are lower than the calculated nullstream. The attached plot shows the comparison of the LL OSEM (all the bright spectra look basically alike) with the damped optic spectra from about a month ago.
From 0.1 - 10 Hz, the motion is cleanly larger than the noise. Below ~0.2 Hz, it's possible that the common-mode rejection of the short cavity lengths is ruined by this. We should try to see whether the low-frequency noise in the PRC/SRC is explainable with our current knowledge of seismicity and the 2-dimensional 2-point correlation functions of the ground.
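The nullstream itself is just the butterfly combination of the four face OSEMs; a minimal sketch, assuming the (+, -, +, -) sign convention of the BUTT column in the input matrices above:

```python
import numpy as np

def butterfly_nullstream(ul, ur, lr, ll):
    """Butterfly combination of the four face-OSEM signals.

    The (+, -, +, -) pattern cancels pit, yaw, and pos motion of a rigid
    optic, leaving (ideally) only sensor noise, hence "nullstream".
    Signs follow the BUTT column convention in the matrices above.
    """
    return (ul - ur + lr - ll) / 4.0

# A pure pitch signal (top sensors together, bottom sensors opposite) cancels:
t = np.linspace(0.0, 1.0, 100)
pitch = np.sin(2 * np.pi * t)
null = butterfly_nullstream(pitch, pitch, -pitch, -pitch)   # ~0 everywhere
```

A genuine butterfly-mode signal (alternating signs around the face) survives the combination, which is why it doubles as a butterfly-mode monitor.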
We wanted to continue the work with WFS servo loops. As the current optical paths on the AP table do not send any light to the WFS, I changed a mirror to a 98% window and a window to a mirror to send about 0.25mW of light towards the WFS. The MC locking is unaffected by this change. The autolocker works fine.
When the power to the MC is increased, these will have to be replaced or else the WFS will burn.
Tomorrow afternoon I'll remove the optic from the fixture, and put it in the oven.
I recompiled c1ioo after making some changes and restarted fb (about 9:45 - 10 PM PDT). But fb failed to restart. It responds to ping, but does not allow ssh or telnet. The screen output is:
ssh: connect to host fb port 22: Connection refused
allegra:~>telnet fb 8087
telnet: connect to address 192.168.113.202: Connection refused
telnet: Unable to connect to remote host: Connection refused
Nor am I able to connect to c1ioo.
Fb is in a bad situation. It needs a MANUAL fsck to fix the file system.
HELP US, Jamieeeeeeeeeeee !!!
When Suresh and I connected a display and tried to see what was going on, the fb computer was in a file system check.
This was because Suresh had done a hardware reboot by pressing the power button on the front panel.
Since the file check was taking so long, we pressed the reset button and then the power button again.
The reset button didn't actually seem to work (maybe?); it just made some indicator lights flash.
After the second reboot, the boot message said that a manual fsck is needed to fix the file system. This may be because we interrupted the file check.
We are leaving it to Jamie, because the fsck command could do something bad if run by people unfamiliar with it, like us.
In addition, the boot message said that line 37 in /etc/fstab was bad.
We logged into the machine in safe mode and found an empty line at line 37 of fstab.
We tried erasing this empty line, but failed for some reason. We were able to edit it with vi, but weren't able to save it.
fb was requiring a manual fsck on its disks because it was sensing filesystem errors. The errors had to do with filesystem timestamps being in the future. It turned out that fb's system date was set to something in 2005. I'm not sure what caused the date to be so off (motherboard battery problem?), but I did determine, after I got the system booting, that the NTP client on fb was misconfigured and was therefore incapable of setting the system date. It seems that it was configured to query a non-existent NTP server. Why the hell it would have been set like this I have no idea.
In any event, I did a manual check on /dev/sdb1, which is the root disk, and postponed a check on /dev/sda1 (the RAID mounted at /frames) until I had the system booting. /dev/sda1 is being checked now, since there are filesystems errors that need to be corrected, but it will probably take a couple of hours to complete. Once the filesystems are clean I'll reboot fb and try to get everything up and running again.
I edited the C1SUS_SUMMARY.adl file and set the channels in alarm mode to show the values in green, yellow, and red according to the threshold values (LOLO, LOW, HIGH, HIHI).
I wrote a script in Python, which calls the commands ezcawrite and ezcaread, to change the thresholds one by one.
You can call this program with a button named "Change Thresholds one by one" in the menu that comes down when you click the ! button.
I'm going to write another program to change the thresholds all together.
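A minimal sketch of the one-by-one idea, shelling out to ezcawrite for each alarm field; the channel name and values below are illustrative, not the ones the actual script uses:

```python
# Sketch: set the four EPICS alarm thresholds of one channel by shelling
# out to ezcawrite, one field at a time. Channel/value names are examples.
import subprocess

ALARM_FIELDS = ["LOLO", "LOW", "HIGH", "HIHI"]

def set_thresholds(channel, values, dry_run=True):
    """Build (and optionally run) one ezcawrite command per alarm field.

    With dry_run=True the commands are only returned, not executed.
    """
    cmds = []
    for field, value in zip(ALARM_FIELDS, values):
        cmd = ["ezcawrite", "%s.%s" % (channel, field), str(value)]
        cmds.append(" ".join(cmd))
        if not dry_run:
            subprocess.call(cmd)
    return cmds
```

Wrapping this in a loop over every channel in the summary screen gives the "change them all together" version.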
fb is now up and running, although the /frames raid is still undergoing an fsck, which is likely to take another day. Consequently there is no daqd and no frames are being written to disk. It's running and providing the diskless root to the rest of the front end systems, so the rest of the IFO should be operational.
I burt restored the following (which I believe is everything that was rebooted), from Saturday night:
ITMY, which is supposed to be fully free-swinging at the moment, is displaying the tell-tale signs of being stuck to one of its OSEMs. This is indicated by the PDMon values, one of which is zero while the others are at max:
Do we have a procedure for remotely getting it unstuck? If not, we need to open up ITMYC and unstick it before we pump.
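The signature described above (one sensor at zero, the rest pegged) is easy to check programmatically; a minimal sketch, with illustrative thresholds and full-scale value rather than calibrated numbers:

```python
def stuck_osem(pdmon_values, low_frac=0.05, high_frac=0.95, full_scale=2.0):
    """Flag the stuck-to-an-OSEM signature: exactly one shadow sensor
    reading ~zero while all the others sit near their maximum.

    low_frac / high_frac / full_scale are illustrative, not calibrated.
    """
    near_zero = [v < low_frac * full_scale for v in pdmon_values]
    near_max = [v > high_frac * full_scale for v in pdmon_values]
    return sum(near_zero) == 1 and sum(near_max) == len(pdmon_values) - 1
```

A free optic has all five PDMon values hovering around mid-range, so this check returns False for it.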
1) To see if there are significant dark-offsets on the WFS sensors we closed the PSL shutter and found that the offsets are in the 1% range. We decided to ignore them for now.
2) To center the MC_REFL beam on the WFS we opened the PSL shutter, unlocked the MC and then centered the DC_PIT and DC_YAW signals in the C1IOO_WFS_QPD screen.
3) We then looked at the power spectrum of the I and Q signals from WFS1 to see if the spectrum looked okay and found that some of the quadrants looked very different from others. The reason was traced to incorrect Comb60 filters. After correcting these filters we adjusted the R phase angle in the WFS1_SETTINGS screen to suppress the 1Hz natural oscillation signal in the Q channels of all the four quadrants. We repeated this process for WFS2
4) To see if the relative phase of all four quadrants was correct we first drove the MC_length and tried to check the phase of the response on each quadrant. However the response was very weak as the signal was suppressed by the MC servo. Increasing the drive made the PMC lock unstable. So we introduced a 6Hz, 50mVpp signal from an SR785 into the MC_servo (Input2) and with this we were able to excite a significant response in the WFS without affecting the PMC servo. By looking at the time series of the signals from the quadrants we set the R phase angle in WFS_Settings such that all the quadrants showed the same phase response to the MC_length modulation.
Using the larger response we were able to further tweak the R angle to suppress the Q channels to about 1% of the I phase signals.
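The R-angle tuning step amounts to an I/Q rotation; a synthetic sketch (the real tuning used the 6 Hz line injected from the SR785, and the phase here is made up):

```python
import numpy as np

# A length drive appearing in both I and Q at some unknown demod phase
# is rotated by the R angle so that it lands entirely in I.

def rotate_iq(i_sig, q_sig, angle_deg):
    """Apply the demodulation phase rotation R to an I/Q pair."""
    th = np.deg2rad(angle_deg)
    return (np.cos(th) * i_sig + np.sin(th) * q_sig,
            -np.sin(th) * i_sig + np.cos(th) * q_sig)

t = np.linspace(0.0, 1.0, 1000)
drive = np.sin(2 * np.pi * 6 * t)      # 6 Hz excitation
phi = 37.0                             # unknown demod phase (illustrative)
i_raw = np.cos(np.deg2rad(phi)) * drive
q_raw = np.sin(np.deg2rad(phi)) * drive

i_rot, q_rot = rotate_iq(i_raw, q_raw, phi)   # Q nulled at the right angle
```

In practice one sweeps the R angle until the 6 Hz line vanishes from Q, which is exactly what nulls q_rot here.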
5) I then edited the c1ioo.mdl so that we can use the six lockins just as they are used in MC_ASS. However, we can now set elements of the SEN_DMD_MATRX (sensor demod matrix) to select any of the MCL, WFS PIT and YAW channels (or a linear combination of them) for demodulation. The change is shown below. While compiling and installing the model on the c1ioo FE machine there were problems, which eventually led to the FB crash.
I have restored the damping of BS and PRM. Today is janitor day. He is shaking things around the lab.
The fsck on the framebuilder (fb) raid array (/dev/sda1) completed overnight without issue. I rebooted the framebuilder and it came up without problem.
I'm now working on getting all of the front-end computers and models restarted and talking to the framebuilder.
The testpoint.par file, located at /opt/rtcds/caltech/c1/target/gds/param/testpoint.par, which tells GDS processes where to find the various awgtpman processes, was completely empty. The file was there but was just 0 bytes. Apparently the awgtpman processes themselves also consult this file when starting, which means that none of the awgtpman processes would start.
This file is manipulated in the "install-daq-%" target in the RCG Makefile, ultimately being written with output from the src/epics/util/updateTestpointPar.pl script, which creates a stanza for each front-end model. Rebuilding and installing all of the models properly regenerated this file.
I have no idea what would cause this file to get truncated, but apparently this is not the first time: elog #3999. I'm submitting a bug report with CDS.
All the front-ends are now running. Many of them came back on their own after the testpoint.par was fixed and the framebuilder was restarted. Those that didn't just needed to be restarted manually.
The c1ioo model is currently in a broken state: it won't compile. I assume that this was what Suresh was working on when the framebuilder crash happened. This model needs to be fixed.
The ITMY mirror was released. The OSEM readouts became healthy.
To see what was going on, I changed the PIT DC bias slider on ITMY from 0.8 to about -1, and then the optic started showing free-swinging behavior.
If there had been no response to the DC bias, I was going to ask people to open the chamber to look at it more closely, but fortunately this released the optic.
Then I brought the slider back to 0.8, and it still looked free-swinging. Possibly the optic had been stuck on some of the OSEMs, as Jamie suspected.
I reverted the c1ioo model to the last working version and restarted the fb at this time: Tue Aug 30 17:28:38 PDT 2011.
Uniblitz mechanical shutter installed in the green beam path at ETMY-ISCT. The remote control cable has not been connected yet.
The pictures that we took are now on the Picasa web site. Check it out.
Also, we took photos (to be posted on Picasa in a day or two) of all the main IFO magnet-in-OSEM centering, as best we could. SRM, BS, PRM all caused trouble, due to their tight optical layouts. We got what we could.
[Kiwamu, Manuel, Jenne]
The new optics storage drawers have been populated with optics. Each drawer is labelled. Harsh punishments will be inflicted on anyone found disobeying the new scheme.
The RGA background at day 29 of this vent.
Atm1: ITMY and the SRM are on the same isolation stack, so why does the SRM move twice as much?
Atm2: We should check the ITMY SIDE OSEM before pumpdown. Anatomically correct, beautiful picture taken by Kiwamu on August 22.
Suresh, Kiwamu and Steve
Heavy chamber doors replaced by light ones at ITMX-west and ITMY-north locations.
Length tolerance of the vertex part is about 5 mm.
Sorry for the long-procrastinated update on this topic. In my last post, I reported that the length tolerance of the vertex IFO would be 2 mm, based on Kiwamu's code on CVS. We then noticed that the MICH degree of freedom was wrong in the code. I modified the code and ran it again. You can find the modified code on CVS (40m folder: analyzeDRMITolerance3f.m and DRMITolerance.m).
In this code, the arm lengths are kept ideal while random Gaussian length offsets are added to the PRCL, SRCL, and MICH lengths. The iteration ran 1000 times for each sigma of the Gaussian distribution. The resulting sensing matrix is shown as a histogram. A histogram of the demodulation phase separation between MICH and SRCL is also plotted, since these two length degrees of freedom will be obtained from one channel, separated by demodulation phase. We check this separation to make sure the random length offsets do not bring the two signals close together.
The result is a bit different from the previous post, in a good way! The length tolerance is about 5 mm for the vertex IFO. Fig. 1 shows the sensing matrix. Although the signal levels are changed by the random offsets, the change in each degree of freedom is at most a few orders of magnitude. Fig. 2 shows that the signal separation between MICH and SRCL at POP55 varies from 55 to 120 degrees, which may be OK. With a 1 cm sigma, it varies from 50 to 150 degrees.
Fig. 1: Histogram of the sensing matrix including 3f channels, with sigma = 5 mm. Please note that the x-axis is in log10.
Fig. 2: Histogram of the demodulation phase difference between MICH and SRCL, with sigma = 5 mm. To obtain the two signals independently, 90 degrees is ideal. With the random offsets, the demodulation phase difference varies from 55 to 120 degrees.
My next step is to run similar code for LLO.
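The real calculation lives in analyzeDRMITolerance3f.m; a toy Python version of the Monte Carlo loop, with a made-up linear stand-in for the phase-separation calculation (the sensitivity constant is purely illustrative), looks like:

```python
import numpy as np

# Toy tolerance search: draw Gaussian length offsets for (PRCL, SRCL, MICH)
# and histogram a figure of merit over many trials, as in the post above.

def demod_phase_separation(prcl, srcl, mich):
    """Placeholder for the MICH/SRCL demodulation-phase separation at POP55.
    Ideal value is 90 degrees; offsets perturb it. The linear model and the
    sensitivity constant here are illustrative, not the real optics."""
    return 90.0 + 2.0e3 * (srcl - mich)

def run_trials(sigma_m, n_trials=1000, seed=0):
    """Monte Carlo over random Gaussian length offsets (meters)."""
    rng = np.random.default_rng(seed)
    offsets = rng.normal(0.0, sigma_m, size=(n_trials, 3))
    return np.array([demod_phase_separation(*row) for row in offsets])

seps = run_trials(sigma_m=5e-3)   # sigma = 5 mm, 1000 trials as in the post
```

Histogramming seps gives the Fig. 2-style plot; increasing sigma_m widens the distribution of phase separations, which is the tolerance statement.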
The triple resonant box was checked again. Each resonant frequency was tuned and the box is ready to go.
Before the actual installation I want to hear opinions about the RF reflections, because the reflection at 29 MHz isn't negligible.
It might be a problem because the reflection will go back to the RF generation box and could damage the amplifiers.
(Frequency adjustment and resultant reflection coefficient)
In order to tune the resonant frequencies the RF reflection was continuously monitored while the variable inductors were tweaked.
The plot below shows the reflection coefficient of the box after the frequency adjustment.
In the upper plot, which shows the amplitude of the box's reflection coefficient, there are three notches, at 11, 29.5 and 55 MHz.
A notch means that the RF power applied to the resonant box is successfully absorbed, and consequently the EOM sees some voltage at that frequency.
These power absorptions take place at the resonant frequencies, as designed.
A nice feature of monitoring the reflection coefficient is that one can easily tune the resonant frequencies by looking at the positions of the notches.
Note that :
If the amplitude is 0 dB (= 1), all of the signal is reflected.
If a circuit under test is impedance-matched to 50 Ohm, the amplitude will ideally be zero (= -infinity dB).
at 11 MHz: -15 dB (3% of the RF power is reflected)
at 29.5 MHz: -2 dB (63% of the RF power is reflected)
at 55 MHz: -8 dB (16% of the RF power is reflected)
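For reference, the conversion from reflection coefficient in dB to reflected power fraction, reproducing the percentages quoted above:

```python
# |Gamma|^2 in dB is a power ratio, so the reflected power fraction is
# simply 10^(dB/10).

def reflected_power_fraction(gamma_db):
    """Fraction of incident RF power reflected, from |Gamma|^2 in dB."""
    return 10.0 ** (gamma_db / 10.0)

# The three notch depths measured on the triple resonant box:
fractions = {f_mhz: reflected_power_fraction(db)
             for f_mhz, db in [(11, -15), (29.5, -2), (55, -8)]}
# 11 MHz -> ~3%, 29.5 MHz -> ~63%, 55 MHz -> ~16%
```

So of the three resonances, 29.5 MHz sends by far the largest fraction of the drive back toward the RF generation box.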
What are the reflected RF powers for those frequencies?
Is the 29.5 MHz reflection more of a problem than the 55 MHz one, considering the required modulation depth?