CPU load seems extremely high. You need to reboot it, I think
controls@fb /proc 0$ cat loadavg
36.85 30.52 22.66 1/163 19295
I think the daqd process isn't running on the frame builder.
I tried telnetting to fb's port 8087 (telnet fb 8087) and typing "shutdown", but so far that is hanging and hasn't returned a prompt in the last few minutes. Also, if I do a "ps -ef | grep daqd" in another terminal, it hangs.
I wasn't sure if this was an ntp problem (although that has been indicated in the past by 1 red block, not 2 red blocks and a white one), so I did "sudo /etc/init.d/ntp-client restart", but that didn't make any change. I also did an mxstream restart just in case, but that didn't help either.
I can ssh to the frame builder, but I can't do another telnet (the first one is still hung). I get an error "telnet: Unable to connect to remote host: Invalid argument"
Thoughts and suggestions are welcome!
Our first RGA scan since May 27, 2014 (elog 10585).
The RGA is still warming up. It was turned on 3 days ago as we recovered from the second power outage.
I went back into the DQ channels to look at the TF from AO injection to REFLDC (which is easy to do with this kind of noise injection TF).
I fear that REFL does not have as much phase under the resonance as we have modeled, lacking about 10-20 degrees. This could mean that the zero in the REFL DC response, which we've modeled at roughly 200 Hz, is actually at a higher frequency. I'll look into what affects the frequency of that feature.
It is, of course, possible that this measurement doesn't properly cancel out the various digital effects, but the REFLDC phase curves do seem to settle to ±90 degrees after the pole, as expected.
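As a toy illustration of that suspicion (all numbers below are made up, not from the model), the phase contributed by a simple zero/pole response below the resonance depends strongly on where the REFL DC zero sits, so a zero higher than ~200 Hz would show up as 10-20 degrees less phase "under the resonance":

import numpy as np

def phase_deg(f, f_zero, f_pole):
    # Phase of a single zero/pole pair at frequency f [Hz]
    H = (1 + 1j * f / f_zero) / (1 + 1j * f / f_pole)
    return np.degrees(np.angle(H))

f_meas = 120.0                         # some frequency below the resonance [Hz]
for f_zero in (200.0, 400.0):          # modeled zero vs. a zero that is actually higher
    print(f_zero, phase_deg(f_meas, f_zero, 1000.0))   # ~24 deg vs ~10 deg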
DTT XML file is attached.
After this morning's unexpected 30-40 minute power outage, Steve checked the status of the vacuum and I powered up Chiara.
I brought back the FE machines and keyed all the crates to bring back the slow machines, except for the vacuum computers.
c1vac1 is not responding as of now. All other computers have come back and are alive.
The IFO vacuum, the air conditioning, and the PMC HV are still down. The PSL output beam is blocked on the table.
We are pumping again. This is a temporary configuration. The annuli are at atmosphere. The reset/reboot of c1Vac1 and c1Vac2 opened everything except the valves that were disconnected.
TP2 lost its vent solenoid power supply and dry pump during the power outage.
They were replaced, but the new small turbo controller is not set up the way the old TP2 controller was, so it does not allow V4 to open.
Tomorrow I will swap back the old controller, pump down the annuli, and close off the ion pumps.
I removed the beam block from the PSL table and opened the shutter. CC4 shows the real pressure, 2e-5 Torr; the CC1 reading is not real.
TP2 is controlled by the old controller. The annuli are pumped down. Valve configuration: "vacuum normal".
Ion pumps closed off at <1e-4 mT.
I have brought back c1auxex and c1auxey. Hopefully this elog will have some more details to add to Rana's elog 10015, so that in the end, we have the whole process documented.
The old Dell computer was already in a Minicom session, so I didn't have to start that up - hopefully it's just as easy as opening the program.
I plugged the DB9-to-RJ45 cable into the top of the RJ45 jacks on the computers. Since the aux end-station computers hadn't had their bootChange done yet, the prompt was "VxWorks Boot" (or something like that). For a computer that was already configured, for example the PSL machine, the prompt was "c1psl", the name of the machine. So, the indication that work needs to be done is either that you get the Boot prompt, or that the computer hangs while it's trying to load the operating system (since it's not where the computer expects it to be). If the computer is hanging, key the crate again to power cycle it. When it gets to the countdown that says "press any key to enter manual boot" or something like that, press a key. This will get you to the "VxWorks Boot" prompt.
Once you have this prompt, press "?" to get the boot help menu. Press "p" to print the current boot parameters (the same list of things that you see with the bootChange command when you telnet in). Press "c" to go line-by-line through the parameters with the option to change them. I discovered that you can just type what you want the parameter to be next to the old value, and that will change it (e.g., typing "host name : linux1 chiara" changes the host name from the old value, linux1, to the new value, chiara).
After changing the appropriate parameters (as with all the other slow computers, just the [host name] and the [host inet] parameters needed changing), key the crate one more time and let it boot. It should boot successfully, and when it has finished and given you the name for the prompt (ex. c1auxex), you can just pull out the RJ45 end of the cable from the computer, and move on to the next one.
Koji, Jenne and Steve
Preparation to reboot:
1. Closed VA6 and V5, and disconnected the cables to the valves (closed all annuli).
2. Closed V1, disconnected it, and stopped the Maglev rotation.
3. Closed V4 and disconnected its cable.
See Atm1: this setup ensured that there could be no accidental valve switching to vent the vacuum envelope if reboot chaos struck (disconnected = cannot move).
4. RESET c1Vac1 and c1Vac2, one by one and together. They both went at once. We did NOT power cycle them.
Jenne entered the new "carma" words on the old Dell laptop and checked for the good answers. The reboot was done.
Note: the c1Vac1 green RUN indicator LED is yellow. It is fine as yellow.
5. Checked and TOGGLED valve positions to the correct values. (We did not correct the small turbo pumps' monitor positions, but they were alive.)
6. V4 was reconnected and opened. The Maglev was started.
7. The V1 cable was reconnected and V1 was opened at the full rotation speed of 560 Hz.
8. The V5 cable was reconnected and the valve opened; the VA6 cable was connected and VA6 opened.
9. The "Vacuum Normal" valve configuration was reached.
Yesterday's reboot was prepared as stated above with one difference.
c1Vac1 and c1Vac2 were DOWN before the reset. The disconnected valves stayed closed (plus VC1). This saved us, so the main volume was not vented.
All others OPENED. The PR1 and PR2 roughing pumps turned ON. The ion pump gate valve opened too. The ion pumps did not matter either, because they were pumped down recently.
We'll have to rewrite the procedure for rebooting the vacuum system.
I have a simulated version of the differences that we expect to see between the 2 different sides of the CARM resonance. The point is that we can try to compare these results with Q's measured results (elog 10594) to see whether we can tell if we are on the spring or anti-spring side.
I calculated the same transfer functions vs CARM offset again, although tonight I do it in steps of 20pm because I was getting bored of waiting forever. Anyhow, this is important because my previous post (elog 10591) didn't have spring side calculations all the way down to 1pm.
The same is true for elog 10591, but here are some notes on how I am currently getting the W/N units out of Optickle. First of all, I am still using the old Optickle1. I don't know if there are significant units ramifications from that, but just in case, I'll write it down. Nic tells me that to get [W/N] out of Optickle1, I need to multiply sigAC (units of [W/m]) by my simple pendulum (units of [m/N]). Both of the "meters" in the last sentence are "mevans meters", which are the meters you would get per actuation if radiation pressure didn't exist. So, I guess they're supposed to cancel out? I need to camp out in Nic's office until I figure this out and get it untangled in my head.
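For what it's worth, here is a minimal sketch of that bookkeeping in Python (the mass, pendulum frequency, and Q below are placeholders, not the real suspension parameters, and the flat sigAC stands in for the real Optickle output):

import numpy as np

f = np.logspace(0, 4, 1000)              # frequency vector [Hz]
m, f0, Q = 0.25, 1.0, 5.0                # assumed pendulum mass [kg], frequency [Hz], Q
w, w0 = 2 * np.pi * f, 2 * np.pi * f0
pend_m_per_N = 1.0 / (m * (w0**2 - w**2 + 1j * w * w0 / Q))   # simple pendulum x/F [m/N]

sigAC_W_per_m = np.ones_like(f, dtype=complex)   # stand-in for Optickle's sigAC [W/m]
tf_W_per_N = sigAC_W_per_m * pend_m_per_N        # the [W/N] quantity plotted here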
Plots of transfer functions for both sides of CARM resonance (same as prev. elog), as well as the ratio between the spring and antispring transfer functions at each CARM offset:
The take-away message from the 3rd column is that, other than a sign flip, we don't expect to see very much difference between the 2 sides of the CARM resonance, particularly above a few hundred Hz. (Note that we do not see the sign flip in Q's measurements because he is looking at CARM_IN1, which is after the input matrix, and the input matrix elements have opposite signs for the two signs of the CARM offset. So, the sign flip between spring and anti-spring around the UGF is implied in the measurements, just not explicit.)
Also, something that Rana pointed out to me, and I still don't know why it's true: The antispring transfer functions (at least for the transmission) don't have all the phase features that we expect to see based on their magnitudes. If you look at the TRX antispring plot, blue trace (which is about 500pm from resonance), you'll see that the magnitude starts flat at DC, has some slope in an intermediate region, and then at high frequencies has 1/f^2. However, the phase seems to not know about this intermediate region, and magically waits until the 1kHz resonance to flip the full 180 degrees.
I made some measurements to try and see if any difference could be seen with different CARM offset signs.
Specifically, at various offsets, I used a spare DAC channel to drive IN1 of the CM board, as an "AO Exciter." I used CM_SLOW to monitor the signal that was actually on the board. I used the CARM_IN1 error signal to see how the optical plant responded to the AO excitation. Rather than a swept sine, I used a noise injection kind of TF measurement.
Here are plots of CARM_IN1 / CM_SLOW at different CARM FM offsets (I chose to plot this ratio in an attempt to divide out some of the common things like AA and delays, and to make the detuned CARM pole more evident). The offsets chosen correspond roughly to arm powers of 2, 2.5, and 3. I tried to go higher than that, but didn't remain locked for long enough to measure the TF.
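For reference, a rough sketch of this kind of noise-injection TF estimate (this is the general idea, not the exact DTT settings; the data arrays and sample rate below are placeholders):

import numpy as np
from scipy import signal

fs = 16384.0                                       # assumed sample rate [Hz]
exc = np.random.randn(16 * int(fs))                # stand-in for the CM_SLOW record
resp = np.convolve(exc, np.ones(8) / 8.0, 'same')  # stand-in for the CARM_IN1 record

f, Pxx = signal.welch(exc, fs, nperseg=4096)       # excitation power spectrum
_, Pxy = signal.csd(exc, resp, fs, nperseg=4096)   # cross spectrum excitation -> response
H = Pxy / Pxx                                      # estimate of CARM_IN1 / CM_SLOW
_, Pyy = signal.welch(resp, fs, nperseg=4096)
coherence = np.abs(Pxy)**2 / (Pxx * Pyy)           # sanity check on measurement SNR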
By eye, I don't see much of a difference. We can zpk fit the data, and see what happens.
Assuming that these Watts/Newtons TFs are correct, I've modeled the resulting open loop gain for CARM. The goal is to design a loop that is stable under a wide range of offsets and also has enough low frequency gain.
The attached PDF shows this. I used a CARM OLG Simulink model:
I've replaced the 'armTF' block with a digital gain of zero. After measuring the open loop gain of all but this piece, I multiply that 'OLG' by the W/N that Jenne extracted from Optickle for CARM->TR (not sqrtInv).
I plot the resulting estimate of the actual OLG in the following plot. Since the CARM-RSE peak is moving down, we use the LP filter that Den installed for us several months ago. To account for the radiation pressure spring, we use some low frequency boosts but not the crazy FM4 filter.
As you can see, the loop is stable from 500 to 200 pm, but then goes unstable around 110 pm. I expect that we will want to do some fancy shaping there or switch from TRX+TRY into something else.
This assumes we have filters 0, 1, 3, 5, and 7 on in the CARM filter bank. I still need to add the digital AA/AI to make the loop phase lag a little more accurate, but I think this is looking promising.
I touched up the PMC alignment.
While bringing back the MC, I realized that IOO got a really old BURT restore again... I restored from midnight last night. The WFS are still working.
Now aligning IFO for tonight's work
Okay, here (finally) is the optickle version.
I have the anti-spring case, starting at 501 pm and going down roughly every 10 pm to 1 pm. I also have the spring case, starting at -501 pm and going down every 10 pm to roughly -113 pm. Rossa crashed partway through the calculation, which is why the spring case doesn't go all the way down.
In the .zip is a .mat file called PDs_vs_CARMoffset_WattsPerNewton.mat, which has (a) a list of the 50 CARM offsets, (b) a frequency vector, and (c) several transfer function arrays. The transfer function arrays are supposed to be intuitively named, e.g. REFLDC_antispring.
The .zip also has a .m file for loading the data and making the plots, etc. For anyone who is trying to re-create the transfer function variables: I saved the variable called PD_WperN by hand to names like REFLDC_antispring. I had meant to include the original .mat files from the tickle calculations as well, but those are over 100 MB each, and that's just crazy. Anyhow, I think the .zip has everything needed to use the data from these plots.
Anyhow. Here are plots of what are in the various transfer function arrays:
In my previous simulation results, I've always plotted W/m, which isn't exactly straightforward. We often think about the displacement that a given mirror actuator output will induce, but when we're locking the full IFO, radiation pressure effects modify the mechanical response depending on the current detuning, making the meaning of W/m transfer functions a little fuzzy.
So, I've redone my MIST simulations to report Watts of signal response per Newton of actuator force, which is what we actually control with the digital system. Note, however, that these Watts are those that would be sensed by a detector directly at the given port, and don't take into account the power reduction from in-air beamsplitters, etc.
As an example, here are the SqrtInv and REFLDC CARM TFs for the anti-spring case:
The units of the SqrtInv plot are maybe a little weird; these TFs are the exact shape of the TRX W/N TFs, with the DC value adjusted by the ratio of the DC sweep derivatives of TRX and SqrtInv.
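A minimal sketch of that rescaling (array and variable names here are just illustrative, not from the actual analysis code):

import numpy as np

def sqrtinv_tf_from_trx(tf_trx_WperN, x_sweep, trx_sweep, sqrtinv_sweep, i_op):
    # Reuse the TRX [W/N] TF shape; set the DC level by the ratio of the
    # DC sweep slopes of SqrtInv and TRX at the operating point index i_op.
    dTRX_dx = np.gradient(trx_sweep, x_sweep)[i_op]
    dSqrtInv_dx = np.gradient(sqrtinv_sweep, x_sweep)[i_op]
    return tf_trx_WperN * (dSqrtInv_dx / dTRX_dx)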
All of the results live in /svn/trunk/modeling/PRFPMI_radpressure/
PMC is fine. There are sliders in the Phase Shifter screen (accessible from the PMC screen) that also needed touching.
PSL shutter is still closed until Steve is happy with the vacuum system - I guess we don't want to let high power in, in case we come all the way up to atmosphere and particulates somehow get in and get fried on the mirrors.
After the Great Computer Meltdown of 2014, we forgot about poor c0rga, which is why the RGA hasn't been recording scans for the past several months (as Steve noted in elog 10548).
Q helped me remember how to fix it. We added 3 lines to its /etc/fstab file, so that it knows to mount from Chiara and not Linux1. We changed the resolv.conf file, and Q made some symlinks.
Steve and I ran ..../scripts/RGA/RGAset.py on c0rga to set up the RGA's settings after the power outage. We're checking to make sure that the RGA will run right now, and then we'll set it back to the usual daily 4 am run via cron.
EDIT, JCD: Ran ..../scripts/RGA/RGAlogger.py, saw that it works and logs data again. Also, c0rga had a slightly off time, so I ran sudo ntpdate -b -s -u pool.ntp.org, and that fixed it.
In all of the fstabs, we're using chiara's IP address instead of its name, so that if the nameserver isn't working, we can still get the NFS mounts.
On control room computers, we mount the NFS through /etc/fstab having lines like:
192.168.113.104:/home/cds /cvs/cds nfs rw,bg 0 0
fb:/frames /frames nfs ro,bg 0 0
Then, things like /cvs/cds/foo are locally symlinked to /opt/foo
For the diskless machines, we edited the files in /diskless/root. On FB, /diskless/root/etc/fstab becomes
master:/diskless/root / nfs sync,hard,intr,rw,nolock,rsize=8192,wsize=8192 0 0
master:/usr /usr nfs sync,hard,intr,ro,nolock,rsize=8192,wsize=8192 0 0
master:/home /home nfs sync,hard,intr,rw,nolock,rsize=8192,wsize=8192 0 0
none /proc proc defaults 0 0
none /var/log tmpfs size=100m,rw 0 0
none /var/lib/init.d tmpfs size=100m,rw 0 0
none /dev/pts devpts rw,nosuid,noexec,relatime,gid=5,mode=620 0 0
none /sys sysfs defaults 0 0
master:/opt /opt nfs async,hard,intr,rw,nolock 0 0
192.168.113.104:/home/cds/rtcds /opt/rtcds nfs nolock 0 0
192.168.113.104:/home/cds/rtapps /opt/rtapps nfs nolock 0 0
("master" is defined in /diskless/root/etc/hosts to be 192.168.113.202, which is fb's IP)
and /diskless/root/etc/resolv.conf becomes:
nameserver 192.168.113.104 #Chiara
Pump spool valves V5, V4, V3 sweating a lot. VM3 and VC2 not so much.
They are VAT valves, F28-62887-03, -11, -14 and so on, ~15-16 years old.
I'm speculating that some plastic is aging and breaking down on the atmospheric/pneumatic side of the valves.
The vacuum side is not affected, according to the vacuum pressure readings.
Maybe some condensation from the small turbos? No.
I'm looking for an identical valve to examine, but I cannot find one.
We are using industrial grade 99.96% Nitrogen to actuate these valves.
The valves that are not affected are dry: VA6, V6, V7, and all the annuli.
Yes, our engineers are aware of this issue. They say:
The pneumatic actuator needs lubricant as the O-ring (Viton) slides in the cylinder. Without grease the O-ring would be abraded and leaking after only a relatively few cycles. The lubricant used in our pneumatic actuators is an emulsion of oil and Teflon flakes. Vibration, many cycles and sometimes high temperature lead to the separation of the oil and Teflon. That is apparently the issue you are seeing.
VAT is and has been testing and qualifying new lubricants, and this is one of the factors we are always looking to improve. The formula we used 15 years ago in these valves seems to have performed reasonably well. Our formula today should perform even better.
We realize this explanation does not help you with these existing valves, but 15 years of service is not too bad is it?
Steve - NOTE: the bonnet seal is metal, so there is no way this oil can get into our vacuum (only if the bellows leaks).
Other thoughts from talking with Rana earlier:
Also, Q and I squished the suspension connectors earlier tonight. MC2 was going wonky, which we feared might be because we had been working in that area on Chiara earlier. Then, after squishing the MC connectors, the PRM started misbehaving, so we went and gave all the corner suspension connectors another squish. No suspension glitching problems since then.
We attempted some of the same old CARM offset reduction tonight, but from the other direction. (We have no direct knowledge of which is the spring and which is the anti-spring side)
We were able to get to, and sit at, arm powers on the order of 5. Really, we kind of wanted just to push things to try to inform our current ideas of what our limiting factor is, so as to appropriately expend our efforts.
We took many digital CARM OLTFs at different offsets; it never really looked like a burgeoning pole was about to make things unstable. The low frequency OLTF data had bad SNR, so it wasn't clear whether we were losing gain there. From simulations, we weren't yet at arm powers where we would expect the DC transmission curve to flatten out (which happens above a few tens).
My impression from at least our last lock loss was that it was a DARM excursion. However, using the DRMI won't get rid of the second two points.
As part of trying to determine whether we require the AO path for lock acquisition, or if we can survive on just digital loops, I looked at the noise suppression that we can get with a digital loop.
I took a spectrum of POX, and calibrated it using a line driving ETMX to match the ALSX_FINE_PHASE_OUT_HZ channel, and then I converted green Hz to meters.
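For the last step, here is a minimal sketch of the green-Hz-to-meters conversion, assuming the usual resonance condition dL/L = dnu/nu for the green beat (the arm length and wavelength below are nominal values, not taken from the actual calibration):

# Green beat frequency shift -> arm length change, assuming dL/L = dnu/nu.
c = 299792458.0                      # speed of light [m/s]
lam_green = 532e-9                   # green wavelength [m]
L_arm = 37.8                         # approximate 40m arm length [m]
m_per_Hz = L_arm * lam_green / c     # ~6.7e-14 m of arm motion per Hz of green beat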
I then undid the LSC loop that was engaged at the time (XARM FMs 1,2,3,4,5,8 and the pendulum plant), to infer the free running arm motion.
I also applied the ALS filters (CARM FMs 1,2,3,5,6) and the pendulum plant to the free running noise to infer what we expect we could do with the current digital CARM filters assuming we were not sensor noise limited.
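The loop algebra behind those two steps, as a hedged sketch (assumed form, not the actual script): multiply the in-loop error-point spectrum by |1 + G| to infer the free-running motion, and divide by |1 + G_als| to predict the residual with the ALS filters engaged.

import numpy as np

def free_running(asd_inloop, G_lsc):
    # In-loop displacement ASD -> inferred free-running ASD
    return asd_inloop * np.abs(1.0 + G_lsc)

def predicted_residual(asd_free, G_als):
    # Free-running ASD -> residual with the digital CARM filters, ignoring sensor noise
    return asd_free / np.abs(1.0 + G_als)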
In the figure, we see that the free running arm displacement is inferred to be about 0.4 micrometers RMS. The in-loop POX signal is 0.4 picometers RMS, which (although it's in-loop, so we're not really that quiet) is already better than 1/10th the coupled cavity linewidth. Also, the CARM filters that we use for the ALS lock, and also the sqrtInvTrans lock are able to get us down to about 1 pm RMS, although that is not including sensor noise issues.
For reference, here are the open loop gains for the LSC filters+pendulum and the ALS filters+pendulum that we're currently using. The overall gain of these loops has been set so that the UGF is 150 Hz.
It seems to me that as long as our sensors are good enough, we should be able to keep the arm motion down to less than 1/10th or 1/20th the coupled cavity linewidth with only the digital system. So, we should think about working on that rather than focusing on engaging the AO path for a while.
I've changed the LSC rack wiring a little bit, to give us some flexibility when it comes to REFL11.
Previously, the REFL11 demod I output was fed straight to the CM servo board, and the slow CM board output was hooked up to the REFL11 I ADC channel. Thus, it wasn't really practical to ever even look at sensing angles in REFL11, since the I and Q inputs were subject to different signal paths/gains. (Also, doing LSC offsets would do wonky things to REFL11 depending on the state of the switches on the CM board screen.)
Thus, I've hooked up the CM board slow output to the previously existing, aptly named, CM_SLOW channel. The REFL11 demod board I output is split between IN1 of the CM board and the REFL11 I ADC channel.
So, there is no longer hidden behavior behind the REFL11 input filters, channels are what they claim to be, and the CM board output is just as easily accessible to the LSC filters as before.
I put a little script into ...../scripts/Admin that will check the fullness of Chiara's disk. We only have the mailx program installed on Nodus, so for now it runs on Nodus and sends an email when the chiara disk that nodus mounts is more than 97% full.
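Something like the following, as a rough sketch (the mount point, threshold, and recipient below are placeholders, not the actual script contents):

#!/usr/bin/env python
import os, subprocess

MOUNT = "/home/cds"            # chiara disk as mounted on nodus (assumed path)
THRESHOLD = 97                 # warn above this percent full

st = os.statvfs(MOUNT)
used_pct = 100.0 * (1.0 - float(st.f_bavail) / st.f_blocks)
if used_pct > THRESHOLD:
    msg = "Chiara disk is %.1f%% full\n" % used_pct
    p = subprocess.Popen(["mailx", "-s", "Chiara disk warning", "somebody@example.com"],
                         stdin=subprocess.PIPE)
    p.communicate(msg.encode())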
We're back! It was entirely my fault.
Some months ago I wrote a script that chiara calls every night, which rsyncs its hard drive to an external drive. With the power outage yesterday, the external drive didn't automatically mount, so chiara tried to rsync its disk to the mount point, which at the time was just a local folder, and that made it go splat.
I'm fixing the backup script to only run if the destination of the rsync job is not a local volume.
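A minimal sketch of that guard (the source and destination paths are assumptions, not the real ones): check that the rsync destination is an actual mount point before running.

#!/usr/bin/env python
import os, subprocess, sys

SRC = "/home/cds/"             # assumed source directory
DEST = "/media/40mBackup"      # assumed external-drive mount point

if not os.path.ismount(DEST):
    sys.exit("Backup drive not mounted at %s; skipping rsync." % DEST)

subprocess.call(["rsync", "-a", "--delete", SRC, DEST])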
We had an unexpected power shutdown for 5 sec at ~9:15 AM.
Chiara had to be powered up, and I am in the process of getting everything else back up again.
Steve checked the vacuum and everything looks fine with the vacuum system.
There was an equipment malfunction in one of Pasadena's substations that caused the outage. After about an 8 second delay, backup circuits restored power. This affected about half of the campus.
From: Steve Vass [mailto:firstname.lastname@example.org]
Sent: Tuesday, October 07, 2014 2:18 PM
To: Anchondo, Michael
Can you tell me about yesterday's power outage?
Chiara doesn't seem to be responding and I guess something happened 7 hrs ago.
I tried to hook up chiara to a monitor to reboot it or at least look for error messages, but it is not even detecting the external monitor (tried changing monitors and VGA cables; still nothing).
I tried to ssh into it and only received errors:
NFS lookup failed for server XXX.XXX.XXX.XXX : error 5 (RPC: Timed out)
ssh: chiara: host/servname not known
Steve had the vacuum checked, and everything seems fine with the status of the vacuum system, at least.
No exciting progress today. I did PSL green alignment for the Yarm, although I now think that the Xarm green needs realigning too.
Also, I was foiled for a while by ETMX jumping around. I think it's because the adapter board on the Xend rack didn't have any strain relief. So, I zip tied the heavy cable in a few places so that it's no longer pulling on the connector. Hopefully we won't see ETMX misbehaving as often now, so we won't have to go squish cables as often.
After Q brought back the IR, I went to check the green situation.
1. The end lasers had to be turned ON.
2. The heaters for the doubler crystals had to be enabled. The heaters are at the set values.
3. The X arm PZTs for the steering mirrors had to be powered up (Set voltage 100V and current 6.7mA)
4. I aligned the green to the already IR-aligned arms.
Green PSL alignment has to be done after Q finishes his work on the MC WFS.
The autolocker is now working, but I didn't change anything to make it so. I was just putting in some echo statements, to see where it was getting hung up, and it started working... This isn't the first time I've had this experience.
It turns out IOO had a bad BURT restore. I restored from 5 AM this morning, and the WFS are OK now.
I brought back the PMC, MC and Arms.
As with the other slow computers (which Chris figured out in elog 10189), I added all the rest of the slow computers to Chiara's /etc/hosts file, so that they would come up when Manasa went and keyed the crates.
Computers that were already there:
Computers that I added today:
Manasa keyed all of these crates *except* for the vac computer, since Steve said that the vacuum system is up and running fine.
The last time we had a power failure: IFO recovery elog.
We had an unexpected power shutdown for 5 sec at ~ 9:15 AM.
The PSL Innolight laser and the 3 units of IFO air conditioning were turned on.
The vacuum system's reaction to losing power: V1 closed and the Maglev shut down. The Maglev runs on 220 VAC, so it is not connected to the VAC-UPS. The V1 interlock was triggered by the Maglev "failure" message.
The Maglev was reset and started. After Chiara was turned on manually, I could bring up the vac control screen through Nodus and open V1.
"Vacuum Normal" valve configuration was recovered instantly.
It is arriving Thursday
There are several non-scientific reasons.
It seems clever, but I wonder why we use DTT and command-line perl, instead of using the FE lockins, or just demodulating the offline data, or any of the other sensing matrix scripts made for the LSC (at the 40m) or the ASC (at LLO)?
Yesterday, Koji and I measured the transfer function of pitch and yaw excitations of each MC mirror, directly to each quadrant of each WFS QPD.
When I last touched the WFS settings, I only used MC2 excitations to set the individual quadrant demodulation phases, but Koji pointed out that this could be incomplete, since motion of the curved MC2 mirror is qualitatively different from motion of the flat MC1 and MC3.
We set up a DTT file with twenty TFs (the excitation to I & Q of each WFS quadrant, and the MC2 trans quadrants), and then used some perl find and replace magic to create an xml file for each excitation. These are the files called by the measurement script Koji wrote.
I then wrote a MATLAB script that uses the magical new dttData function Koji and Nic have created, to extract the TF data at the excitation frequency, and build up the sensing elements. I broke the measurements down by detector and excitation coordinate (pitch or yaw).
The amplitudes of the sensing elements in the following plots are normalized to the single largest response of any of the QPD's quadrants to an excitation in the given coordinate, the angles are unchanged. From this, we should be able to read off the proper digital demodulation angles for each segment, confirm the signs of their combinations for pitch and yaw, and construct the sensing matrix elements of the properly rotated signals.
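As an illustration of that post-processing (this is a hedged sketch with made-up variable names, not the actual MATLAB script): given the complex TFs from an excitation to the I and Q outputs of one quadrant at the line frequency, the two share the mechanical transfer phase, so their ratio gives the digital demod angle, and the magnitudes can be normalized to the largest quadrant response as in the plots.

import numpy as np

def demod_angle_deg(H_I, H_Q):
    # H_I, H_Q: complex TF values at the excitation frequency for one quadrant.
    # Their ratio should be ~real; its arctangent is the rotation that puts
    # the response entirely into the I quadrature.
    return np.degrees(np.arctan((H_Q / H_I).real))

def normalized_amplitudes(H_list):
    # Normalize each quadrant's response magnitude to the single largest one.
    mags = np.abs(np.asarray(H_list))
    return mags / mags.max()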
The axes of each quadrant look consistent across mirrors, which is good, as it nails down the proper demod angle.
The xml files and MATLAB script used to generate these plots are attached. (They require the dttData functions, however, which are in the svn; the dttData functions in turn require a MATLAB newer than 2012b.)
We have two cold cathode gauges at the pump spool and one signal cable to the controller: CC1 in the horizontal position (CC1h) and CC1 in the vertical position (CC1v).
CC1h stopped reading, so I moved the cable over to CC1v.
I haven't been able to lock the DRMI tonight, neither with 1F and no arms nor 3F and arms held off with ALS... I tried previous recipes, and new combinations informed by simulations I've run, to no avail.
I touched the alignment of the green beat PD on the PSL table, since the X beatnote was rather low, but I wasn't able to improve it by much. I never took a spectrum, since it wasn't my main focus tonight, but the low frequency motion of both arms on ALS, as observed by RIN, was as good as I've ever seen it.
In our WFS work earlier today, Koji and I reset the WFS offsets, and it actually seems to have helped a good deal, in terms of the "fuzz" of MC REFL on the wall striptool. I had previously presumed this to be due to excess angular motion, but perhaps it is more accurately described as an alignment offset that let the nominal angular motion couple into the RIN more.
We made sensing matrix measurements for the IMC WFS and the MC2 QPD.
The data is under further analysis, but here is some record of the current state, showing:
IMC Trans RIN and the ASC error signals with/without IMC ASC loops
The measurements were done by automatically running DTT. This can be done by
The analysis is in preparation so that it provides us a diagnostic report in a PDF file.
KroneCrane Fred inspected and certified the three 40m cranes for 2014. The vertex crane was load tested at the fully extended position.
Small oil drops were found during the preventive inspection of the vertex crane. They were wiped off. It took 231 days for the drops to grow to this size.
Pump down reached the "vacuum normal" state. IFO P1 pressure is 1e-4 Torr.
PSL shutter is opened.
IFO P1 pressure is 1.6e-5 Torr after 6 days at atmosphere.
PS: PSL sliding door 11 was left open overnight. The PSL particle count will reach room counts in 20 seconds at the low speed of the HEPA.
The IFO is ready for 3F DRMI commissioning.
Pump down reached the "vacuum normal" state: IFO P1 pressure of 1e-4 Torr in 8 hrs of actual pumping time.
PSL shutter is opened.
We stopped pumping just short of 3 hours, at 320 Torr. The pumping speed was 2.7 Torr/min with RV1 and the butterfly valve partially closed.
RP1&3 roughing pump hose is disconnected. Butterfly valve removed. The vac envelope is closed.
This is our second stop. I will be back this afternoon. IFO P1 3.5 Torr
I've installed a new 2-pin LEMO cable going from the CM servo output to IN2 of the MC servo board, and removed the temporary BNC. I used some electrical tape to give the cable some thickness where the LEMO head screws on, to try to strain-relieve the solder joints; hopefully this cable is more robust than the last.
I put an excitation into the CM board, and saw it come out of MC_F, so I think we're set.
Q checked the earthquake stops of the SRM, and we put the ITMY and BS doors on.
Photos have been taken of the ITMY chamber and uploaded to Picasa. Here's a slideshow: