I don't buy this story - P2 only briefly burped around GPS time 1291608000, which is around 8 pm local time, when I was recovering the system.
Today, Jordan talked to Jon Feicht - apparently there is some kind of valve in the TP2 forepump which only opens ~15-20 seconds after turning the pump on. So the loud sound I was hearing yesterday was just a transient phenomenon. This morning at ~9am, we turned on TP2. Once again, PTP2 pressure hovered around 500 torr for about 15-20 seconds. Then it started to drop, although both Jordan and I felt that the time it took for the pressure to drop from 5 mtorr to 1 mtorr was unusually long. Jordan suspects some "soft-start" feature of the turbo pumps, which maybe spins up the pump in a more controlled way than usual after an event like a power failure. Maybe that explains why the pressure dropped so slowly? One thing is for sure - the TP2 controller displayed "TOO HIGH LOAD" yesterday when I tried the first restart (before migrating everything to the older UPS unit). This is what led me to interpret the loud sound on startup of TP2 as indicating some issue with the forepump - as it turns out, it was just the internal valve not yet being open.
Anyway, we left TP2 on for a few hours, pumping only on the little volume between it and V4, and PTP2 remained stable at 20 mtorr. So we judged it's okay to open V4. For today, we will leave the system with both TP2 and TP3 backing TP1. Given the lack of any real evidence of a failure from TP2, I have no reason to believe there is elevated risk.
As for prioritising the UPS swap - my opinion is that it's better to just replace the batteries in the UPS that has worked for years. We can run a parallel reliability test of the new UPS, and once it has demonstrated stability for some reasonable time (>4 months), we can do the swap.
I was able to clear the FAULT indicator on the new UPS by running a "self-test": pressing and holding the "mute" button on the front panel initiates this test according to the manual, and if all is well, it will clear the FAULT indicator, which it did. I still don't trust this unit and have left all the equipment powered by the old UPS.
Update 1100 Dec 11: The config remained stable overnight so today I reverted to the nominal config of TP3 pumping the annuli and TP2 backing TP1 which pumps the main volume (through the partially open RV2).
According to the Tripp Lite manual, the FAULT icon indicates "the battery-supported outlets are overloaded." The failure of the TP2 dry pump appears to have caused this. After the dry pump failure, the rising pressure in the TP2 foreline caused TP2's current draw to increase way above its normal operating range. Attachment 1 shows anomalously high TP2 current and foreline pressure in the minutes just before the failure. The critical system-wide failure is that this overloaded the UPS before overloading TP2's internal protection circuitry, which would have shut down the pump, triggering interlocks and auto-notifications.
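For context, the interlock condition that was missing would look something like the sketch below (channel names and thresholds are placeholders I made up, not the actual c1vac database entries):

# Hedged sketch: trip TP2 (and notify) when its current draw or foreline
# pressure goes anomalously high, before the UPS is driven into overload.
import epics   # pyepics, assuming EPICS access from the vac machine

TP2_CURRENT_MAX = 1.5   # [A] placeholder trip threshold
PTP2_MAX = 1.0          # [torr] placeholder foreline trip threshold

def tp2_overload_tripped():
    current = epics.caget('C1:Vac-TP2_current')     # hypothetical channel
    pressure = epics.caget('C1:Vac-PTP2_pressure')  # hypothetical channel
    if current > TP2_CURRENT_MAX or pressure > PTP2_MAX:
        epics.caput('C1:Vac-TP2_enable', 0)         # hypothetical: stop the pump
        return True   # caller should close valves and send notifications
    return False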
Is that a fault code that you can decipher in the manual, or just a light telling you nothing but your UPS is dead?
I can't find anything in the manual that describes the nature of the FAULT message. In fact, it's not mentioned at all. If the unit detects a fault at its output, I would expect a bit more information. This unit also has a programmable level of input error protection, usually set at 100%. Still, there is no indication in the manual whether an input issue would be reported as a fault; a fault usually means a short or lifted ground at the output.
Now that the old APC Smart-UPS 2200 is no longer in use by the vacuum system, I looked into whether it can be repurposed for the framebuilder machine. Yes, it can. The max power consumption of the framebuilder (a SunFire X4600) is 1.137 kW. With fresh batteries, I estimate this UPS can power the framebuilder for >10 min, and possibly as long as 30 min, depending on the exact load.
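For the record, the back-of-the-envelope runtime estimate (the 8 x 12 V / 5 Ah pack capacity for a fresh RBC43, the inverter efficiency, and the usable fraction are my assumptions, not datasheet values):

n_batt, v_batt, ah_batt = 8, 12.0, 5.0    # assumed RBC43 pack configuration
energy_wh = n_batt * v_batt * ah_batt     # ~480 Wh nominal
eta_inv = 0.90                            # assumed inverter efficiency
usable = 0.80                             # assumed usable fraction at this rate
load_w = 1137.0                           # SunFire X4600 max power draw

runtime_min = energy_wh * eta_inv * usable / load_w * 60
print(f"estimated runtime at full load: {runtime_min:.0f} min")   # ~18 min

At a more typical (sub-maximum) load this stretches toward the 30 min end of the range.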
@Chub/Jordan, this UPS is ready to be moved to rack 1X6/1X7. It just has to be disconnected from the wall outlet. All of the equipment it was previously powering has been moved to the new UPS. I have ordered a replacement battery (APC #RBC43) which is scheduled to arrive 9/09-11.
I'm in the lab this morning to interface the two new UPS units with the digital controls system. Will be out by lunchtime. The disruptions to the vac system should be very brief this time.
I'm leaving the lab shortly. We're not ready to switch over the vac equipment to the new UPS units yet.
The 120V UPS is now running and interfaced to c1vac via a USB cable. The unofficial tripplite python package is able to detect and connect to the unit, but then read queries fail with "OS Error: No data received." The firmware has a different version number from what the developers say is known to be supported.
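For reference, this is roughly how the unit is being queried (a minimal sketch of the unofficial tripplite API as I understand it; the exact fields returned depend on the firmware, which appears to be our problem):

# Read the Tripp Lite UPS over USB HID with the unofficial "tripplite" package.
# On our unit the device enumerates and opens fine, but the read fails with
# "OS Error: No data received" - presumably the firmware mismatch noted above.
from tripplite import Battery

with Battery() as battery:   # opens the first Tripp Lite HID device found
    state = battery.get()    # this is the call that fails on our unit
    print(state['status'])   # on a supported unit: power/battery status flags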
The 230V UPS is actually not correctly installed. For input power, it has a standard C14 inlet, which is currently plugged into a 120V power strip. However, this unit has to be powered from a 230V outlet. We'll have to identify and buy the correct adapter cable.
With the 120V unit now connected, I can continue to work on interfacing it with python remotely. The next implementation I'm going to try is item #2 of this plan [ELOG 15446].
I looked into how the new UPS devices suggested by Chub would communicate with the vac interlocks. There are several possible ways, listed in order of preference:
I recommend we proceed with ordering the Tripp Lite 36HW20 for TP1 and Tripp Lite 1AYA6 for TP2 and TP3 (and other 120V electronics). As far as I can tell, the only difference between the two 120V options is that the 6FXN4 model is TAA-compliant.
Chub has placed the order for two new UPS units (115V for TP2/3 and a 220V version for TP1).
They will arrive within the next two weeks.
To use the Sensoray 2250 USB frame grabber:
Ensure you have the following packages installed: build-essential, libusb-dev
Download the Linux manual and Linux SDK from the Sensoray website at:
Go to the Software and Manual tab near the bottom to find the links. The software can also be found on the 40m computers at /cvs/cds/caltech/users/josephb/sensoray/
The files are Manual2250LinuxV120.pdf and s2250_v120.tar.gz
Run the following commands in the directory where you have the files.
tar -xvf s2250_v120.tar.gz
cd s2250          # cd into the directory extracted by tar (exact name may differ)
make              # build the kernel modules before installing them
sudo make modules_install
At this point plug in the 2250 frame grabber.
sudo modprobe s2250_ezloader
Now you can run the demo with
./sraydemo or ./sraydemo64
Options will show up on screen. A simple set of commands to start with is "encode 0", which sets the recording type, "recvid test.mpg", which starts recording to the file test.mpg, and "stop", which stops recording. Note there is no on-screen playback. One needs an installed mpeg player to view the saved file, such as Totem (which can screen cap to .png format) or mplayer.
All these instructions are on the first few pages of the Manual2250LinuxV120 pdf.
I have moved the USB flash drives from the electronics bench back into the middle drawer of the cabinet next to the AC unit west of the fridge. Drawer re-labeled.
After some system updates this evening, firefox can no longer handle the html input encoding for the elog. I'm not sure what happened. You can still use the "ELCode" or "plain" input encodings, but "HTML" won't work. The problem seems to be firefox 17. ottavia and rosalba were upgraded, while rossa and pianosa have not yet been.
I've installed chromium-browser (debranded chrome) on all the machines as a backup. Hopefully the problem will clear itself up with the next update. In the meantime I'll try to figure out what happened.
To use chromium: Applications -> Internet -> Chromium
I can't create a new page on the 40m wiki. The page that I was trying to create is
I get this message when I try to save the new page:
Page could not get locked. Unexpected error (errno=13).
This wiki address is obsolete. It was recently switched to https://wiki-40m.ligo.caltech.edu/
Jamie is working on automatic redirection from the old wiki to the new place.
The new one uses albert.einstein authentication.
We were not able to fix the excess frequency noise of the AUX X laser by the usual laser diode current song and dance. Unfortunately, this level of noise is much too high to have any realistic chance of locking.
We're leaving things back in the IR beat -> phase tracker state with free-running AUX lasers, on the off chance that there's something interesting to see in the overnight data. This may be limited by our lack of automatic beatnote frequency control. (Gautam will soon implement this via a digital frequency counter.) I've upped the FINE_PHASE_OUT_HZ_DQ frame rate to 16k from 2k, so we can see more of the spectrum.
For the Y beat, there is the additional weird phenomenon that the beat amplitude slowly oscillates to zero over ~10 minutes, and then back up to its maximum. This makes it hard for the phase tracker servo to stay stable... I don't have a good explanation for this.
Here's how we should diagnose the EX laser:
EDIT: Sleepy Eric doesn't understand loops. The conditions for this observation included active oplev loops. Thus, obviously, looking at the in-loop signal after the ASC signal joins the oplev signal will produce this kind of behavior.
After some talking with Rana, I set out on making an even better-er QPD loop. I made some progress on this, but a new mystery halted my progress.
I sought a more physical understanding of the plant TF I had measured. Earlier, I had assumed that the 4Hz plant features I had measured for the QPD loops were coming from the oplev-modified pendulum response, but this isn't actually consistent with the loop algebra of the oplev servos. I had seen this feature in both the oplev and QPD error signals when pushing an excitation from the ASC-XARM_PIT (and so forth) FMs.
However, when exciting via the SUS-ETMX-OLPIT FMs (and so forth), this feature would not appear in either the QPD or oplev error signals. That's weird. The outputs of these two FMs should just be summed, right before the coil matrix.
I started looking at the TF from ASC-YARM_PIT_OUT to SUS-ETMY_TO_COIL_1_2, which should be a purely digital signal routing of unity, and saw it exhibit the phase shape at 4Hz that I had seen in earlier measurements. Here it is:
I am very puzzled by all of this. Needs more investigation.
The measured transimpedance of the latest POY11 PD matches my model very well up to 100 MHz. But at ~216 MHz there is a resonance that I can't really explain.
The following is a simplified illustration of the resonant circuit:
Perhaps my model misses that resonance because it doesn't include stray capacitances.
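As a plausibility check (a rough estimate, not part of the model): the 55 nH inductor at the end of the chain, resonating against ~10 pF of assumed stray capacitance, lands essentially on the observed frequency.

# f = 1 / (2*pi*sqrt(L*C)); the 55 nH is from the schematic, the ~10 pF
# stray capacitance is a guess to see if the numbers are in the right ballpark.
import math

L = 55e-9          # notch inductor [H]
C_stray = 10e-12   # assumed stray capacitance [F]
f_res = 1 / (2 * math.pi * math.sqrt(L * C_stray))
print(f"{f_res / 1e6:.0f} MHz")   # ~215 MHz, close to the observed ~216 MHz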
While I was tinkering with it, I noticed a couple of things:
- the frequency of that oscillation changes when I grasp the last inductor of the circuit (the 55 nH above) with a finger; that adds inductance
- the scope's RF probe clearly shows the oscillation only after the 0.1 uF series capacitor
- adding a small capacitor in parallel with the feedback resistor of the output amplifier increases the frequency of the oscillation
Where did you get the 55nH based notch from? I don't remember anything like that from the other LSC PD schematics. This is certainly a bad idea. You should remove it and put the notch back over by the other notch.
Why is it a bad idea?
You mean putting both the 2-omega and the 55MHz notches next to each other right after the photodiode?
We had an unexpected power shutdown for 5 sec at ~ 9:15 AM.
Chiara had to be powered up, and I am in the process of getting everything else back up again.
Steve checked the vacuum and everything looks fine with the vacuum system.
The PSL Innolight laser and the 3 IFO air conditioning units were turned on.
The vacuum system's reaction to losing power: V1 closed and the Maglev shut down. The Maglev runs on 220 VAC, so it is not connected to the VAC-UPS. The V1 interlock was triggered by the Maglev "failure" message.
The Maglev was reset and started. After Chiara was turned on manually, I could bring up the vac control screen through Nodus and open V1.
The "Vacuum Normal" valve configuration was recovered immediately.
It is arriving Thursday
For reference: the IFO recovery elog from the last time we had a power failure.
I brought back the PMC, MC and Arms.
The autolocker is now working, but I didn't change anything to make it so. I was just putting in some echo statements, to see where it was getting hung up, and it started working... This isn't the first time I've had this experience.
It turns out IOO had a bad BURT restore. I restored from 5 AM this morning, and the WFS are OK now.
After Q brought back the IR, I went to check the green situation.
1. The end lasers had to be turned ON.
2. The heaters for the doubler crystals had to be enabled. The heaters are at the set values.
3. The X arm PZTs for the steering mirrors had to be powered up (Set voltage 100V and current 6.7mA)
4. I aligned the green to the already IR-aligned arms.
Green PSL alignment has to be done after Q finishes his work on the MC WFS.
As with the other slow computers, which Chris figured out in elog 10189, I added all the rest of the slow computers to Chiara's /etc/hosts file, so that they would come back up when Manasa went and keyed the crates.
Computers that were already there:
Computers that I added today:
Manasa keyed all of these crates *except* for the vac computer, since Steve said that the vacuum system is up and running fine.
Pump spool valves V5, V4, and V3 are sweating a lot. VM3 and VC2, not so much.
They are VAT valves (F28-62887-03, -11, -14, and so on), ~15-16 years old.
I'm speculating that some plastic is aging and breaking down on the atmospheric/pneumatic side of the valves.
The vacuum side is not affected, according to the vacuum pressure readings.
Maybe some condensation from the small turbos? No.
I'm looking for an identical valve to examine, but I cannot find one.
We are using industrial grade 99.96% Nitrogen to actuate these valves.
The valves that are not affected are dry: VA6, V6, V7, and all the annulus valves.
I uninstalled gstreamer-devel and gst-plugins-base-devel on Rosalba. Here is the command I ran:
$ sudo yum remove gstreamer-devel gstreamer-plugins-base-devel
Actually, I had installed these myself a few days earlier, before I knew that I should be recording such changes in the elog. I'm sorry!
The filter's transfer function was measured with a swept sine between the "SERVO INPUT" and "PIEZO DRIVE OUTPUT" connections on the box front panel. The spectrum analyzer used for the measurement was the SR785, with the source amplitude set to 0.1 V.
The two transfer functions are clearly different. In particular the old one looks like a simple integrator, whereas the new one already includes some sort of boost.
That probably explains why the new one is unable to lock the PLL. Indeed, what the PLL needs, at least to acquire lock, is a 1/f filter.
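To make that concrete, here is a toy comparison (my own sketch with made-up numbers): a pure integrator falls as 1/f everywhere, while an integrator with a boost zero flattens out above the zero frequency, which would also explain the flat high-frequency magnitude.

import numpy as np

f = np.logspace(0, 5, 6)                  # spot checks: 1 Hz ... 100 kHz
s = 2j * np.pi * f
K, f_z = 1e3, 100.0                       # made-up gain and boost-zero frequency
H_int = K / s                             # pure integrator: 1/f everywhere
H_boost = K * (s + 2 * np.pi * f_z) / s   # integrator + zero: flat above f_z

for fi, a, b in zip(f, H_int, H_boost):
    print(f"{fi:8.0f} Hz   1/f: {20*np.log10(abs(a)):7.1f} dB"
          f"   boost: {20*np.log10(abs(b)):7.1f} dB")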
I thought the two boxes were almost identical, at least in the filter shapes. Also the two schematics available in the DCC coincide.
To me, they both look stable. I guess that the phase has to go to -180 deg to be unstable.
Why does the magnitude go flat at high frequencies? That doesn't seem like 1/f.
How about a diagram of what inputs and outputs are being measured and what the gain knob and boost switch settings are?
FYI: I stored the Universal PDH boxes in the RF cabinet in the Y arm.
[Koji, Annalisa, Gautam]
Annalisa noticed that over the weekend the Y-arm green PDH was locked to a sideband, despite nothing having been changed on the PDH box (the sign switch was left as it was). On Friday, we tried turning on and off some of the filters on the slow servo (C1ALS_Y_SLOW), which may have changed something, but this warranted further investigation. We initially thought that the demodulation phase was not at its optimal value, and decided to try introducing some capacitance in the path from the function generator to the LO input on the universal PDH box. We modelled the circuit and determined that significant phase change was introduced by capacitances between 1nF and 100nF, so we picked out some capacitors (WIMA FKP) and set up a breadboard on which to try them out.
After some trial and error, Koji dropped by and felt that the loop was optimized for the old laser; the various loop parameters had not been tweaked since the new laser was installed. The following parameters had to be optimized for the new laser:
The setup was as follows:
The PDH error signal did not have very well-defined features, so Koji tweaked the LO frequency and the modulation depth until we got a reasonably well-defined PDH signal. Then we turned the excitation off and locked the cavity to green. The servo gain was then optimized by reducing oscillations in the error signal. Eventually, we settled on values for the servo gain, LO frequency, and modulation depth such that the UGF was ~20kHz (determined by looking at the frequency of oscillation of the error signal on an oscilloscope) and the PDH signal had well-defined features (while the cavity was unlocked). The current parameters are
We then proceeded to find the optimal demodulation phase by simulating the circuit with various capacitances between the function generator and the PDH box (circuit diagram and plots attached). The simulation suggested that there was no need to introduce any additional capacitance in this path (introducing a 1nF capacitance added a phase lag of ~90 degrees; this was confirmed, as the error-signal amplitude decreased drastically when we hooked up a 1nF capacitor on our makeshift breadboard). In the current configuration, the LO is connected directly to the PDH box.
Now that we are reasonably confident that the loop parameters are optimal, we need to stabilise the C1ALS_Y_SLOW loop to stabilise the beat note itself. Appropriate filters need to be added to this servo.
Circuit Diagram: 50 ohm input impedance on the source, 50 ohm output impedance seen on the PDH box, capacitance varied between 1nF and 100nF in steps.
Plots for various capacitances: the gold-green trace (largest amplitude) is directly from the LO; the other traces are at the input to the PDH box.
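For concreteness, the effect being simulated can be estimated in a few lines (a simplified sketch: series coupling capacitor between a 50 ohm source and the 50 ohm LO input; the LO frequency below is a placeholder, not our actual setting):

import math

R_s, R_l = 50.0, 50.0    # source impedance and PDH-box input impedance [ohm]
f_lo = 230e3             # placeholder LO frequency [Hz]
w = 2 * math.pi * f_lo

for C in [1e-9, 10e-9, 100e-9]:
    H = 1j * w * C * R_l / (1 + 1j * w * C * (R_s + R_l))   # V_load / V_source
    rel = abs(H) / (R_l / (R_s + R_l))    # amplitude relative to no capacitor
    phase = math.degrees(math.atan2(H.imag, H.real))
    print(f"C = {C * 1e9:5.0f} nF: phase = {phase:5.1f} deg, amplitude x{rel:.2f}")

With 1 nF this gives ~80 degrees of phase shift and a factor of ~7 amplitude loss at a few hundred kHz, consistent with what we saw on the breadboard; by 100 nF the capacitor is essentially transparent.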
Someone for some reason added full-rate DAQ specification to some ADC3 channels in the c1sus IOP model (c1x02):
These appear to be associated with c1pem, so I'm guessing it was Den (particularly since he's the worst about making modifications to models and not telling anyone or logging or svn committing).
I'm removing them.
Does anyone know what the channels plugged into the PEM ADCU (channels 5, 6, 7, 8) are? They aren't listed in the C1ADCU_PEM.ini file, which tells the channel list/dataviewer/everything about all the rest of the signals plugged into that ADCU, so I'm not sure if they are used at all, or if they're holdovers from some earlier time. The cables are not labeled in a way that makes clear what they are. Thanks!
Steve mentioned two unlabelled optics were found at EX, relics from the Endtable upgrade.
These are now labelled and forked down on the SP table.
For the purpose of testing out the temperature sensors, I stole the PEM-SEIS_MC1X,Y,Z channels.
I unplugged the Guralp NS1b, Guralp Vert1b, and Guralp EW1b cables from the PEM ADCU (#10, #11, #12) near 1Y7 and put temp sensors in their place (temporarily).
I unplugged Guralp EW1b and Guralp Vert1b and plugged in temp sensors temporarily. Guralp NS1b is still plugged in.
Looking into the control signals and error signals of the Y arm green PDH servo:
1. The saturation of the feedback signal (PZT_OUT) at +/-4000 counts (less than 5V) comes only from the readout saturating. The signals looked fine on the oscilloscope.
2. We did a sine sweep at the PZT_OUT and optimized the LO frequency. The LO frequency did not need any change.
3. The error signal has some offset to it. We are not sure where this comes from.
We have been seeing that whenever green loses lock, the spot position moves down in pitch on the ETMYF camera and the GTRY camera. This led us to consider whether the lock loss originated from the PDH or from the cavity.
We looked into dataviewer channels of green, IR, oplev and suspension for the following cases:
1. Green and IR PDH locked
2. Green locked and arms flashing for IR
3. Green shutter closed and IR PDH locked
4. Green shutter closed and arms flashing for IR
5. Arms flashing for IR and ETMY oplev servo turned off.
Dataviewer snapshots of glitch in all the above cases are saved in masayuki's folder users/masayuki/ALS/kicked_mirror/
In all the above cases, we could still see the glitch. We concluded that the problem lay with the ETMY SUS.
Shown below is the dataviewer snapshot of the ETMY SUS and shadow sensor channels. The glitch exists even when the oplev servo is turned off, pointing to problems associated with the ETMY suspension.
Does anyone know what the purpose of the indicated optic in Attachment #1 is? Can we remove it? It will allow a little more space around the elliptical reflector...
I don't think it was used. It is not on the diagram either. You can remove it.
I was getting a missing-path error when I tried to measure the MC spot positions. Jenne pointed out that Koji had moved all the unused scripts in scripts/ASS to /scripts/ASS/OBSOLETE yesterday, and in the process one of the scripts that the MC spot position measurement script calls (MeasureSpotPositions.py) must also have been moved to the OBSOLETE directory. I moved the script to /scripts/ASS/MC so that we know it is being used, and also changed its path in the main script.
I reproduced Gautam's sketch of the 1X1 and 1X2 Eurocrates as a pdf image that contains links to the appropriate DCC entries in the legend (see attachment).
Thanks. Please update this wiki page too.
As of RCG version 2.1, recorded channels use the suffix "_DQ" instead of "_DAQ". I just rebuilt and installed the c1lsc model, which changed the channel names, therefore hosing the old daq channel ini file. Here's what I did, and how I fixed it:
$ ssh c1lsc
$ cd /opt/rtcds/caltech/c1/core/trunk
$ make c1lsc                # rebuild the model
$ make install-c1lsc        # install it; recorded channels now end in _DQ
$ cd /opt/rtcds/caltech/c1/scripts
$ cd /opt/rtcds/caltech/c1/chans/daq
# regenerate the ini from the archived copy, renaming _DAQ -> _DQ
$ cat archive/C1LSC_110517_152411.ini | sed "s/_DAQ/_DQ/g" > C1LSC.ini
$ telnet fb 8087            # daqd control port - restart it to pick up the new channel list
I created a python script to find the combination of hyperparameters that best trains the neural network. The script (nn_hyperparam_opt.py) is in the github repo. It has been running on the cluster for a few days. In the meanwhile, I tried some combinations of hyperparameters by hand.
These give a low loss value of approximately 1e-5, but there is a large error bar on the loss value since it fluctuates a lot even after 1500 epochs. The reason for this is unclear.
Input: 64*64 image frames of a simulated video, generated by applying a 0.2 Hz sine wave of beam motion at 10 frames per second. This input data is given as an hdf5 file.
Train: 100 cycles, Test: 300 cycles, Optimizer: Nadam (learning rate = 0.001)
Network: 256 -> 128 -> 1
Activation: selu, selu, linear
Case 1: batch size = 48, epochs = 1000, loss function = mean squared error
Plots of the output predicted by the neural network (NN) and the input signal are shown in the 1st graph; the variation of the loss value with epochs is in the 2nd graph.
Case 2: batch size = 32, epochs = 1500, loss function = mean squared logarithmic error
Plots of the output predicted by the neural network (NN) and the input signal are shown in the 3rd graph; the variation of the loss value with epochs is in the 4th graph.
I tried to reduce the overfitting problem in the previous neural network by reducing the number of nodes and layers, and by varying the learning rate and beta factors (exponential decay rates of the first and second moment estimates) of the Nadam optimizer, assuming an error of 5% is reasonable.
32*32 image frames (converted to a 1d array, with pixel values of 0-255 normalized) of a simulated video, generated by applying a sine signal to move the beam spot in pitch at a frequency of 0.2 Hz, at 10 frames per second.
Total: 300 cycles, Train: 60 cycles, Validation: 90 cycles, Test: 150 cycles
Input --> Hidden layer (4 nodes, selu activation) --> Output layer (1 node, linear activation)
Batch size = 32, number of epochs = 128, loss function = mean squared error
Learning rate = 0.00001, beta_1 = 0.8 (default value in Keras = 0.9), beta_2 = 0.85 (default value in Keras = 0.999)
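For reference, a minimal Keras sketch of the network described above (TF 2.x style; variable names are mine, and the data loading is omitted):

# 32x32 frames flattened to 1024 inputs -> Dense(4, selu) -> Dense(1, linear),
# trained with Nadam (lr = 1e-5, beta_1 = 0.8, beta_2 = 0.85) on MSE loss.
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense
from tensorflow.keras.optimizers import Nadam

model = Sequential([
    Dense(4, activation='selu', input_shape=(32 * 32,)),   # hidden layer
    Dense(1, activation='linear'),                         # beam position output
])
model.compile(optimizer=Nadam(learning_rate=1e-5, beta_1=0.8, beta_2=0.85),
              loss='mse')

# x: (n_frames, 1024) pixel arrays normalized from 0-255 to 0-1
# y: beam spot displacement per frame (the applied 0.2 Hz sine)
# model.fit(x_train, y_train, batch_size=32, epochs=128,
#           validation_data=(x_val, y_val))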
Plots of the predicted neural network output, the applied input signal, and the residual error are given in the 1st attachment.
Changed number of nodes in hidden layer from 4 to 8. All other parameters same.
These plots show that when the residual error increases, the output of the neural network basically has a smaller amplitude than the applied signal. The cause of this kind of training error is unclear to me.
When the beta parameters of the optimizer are moved farther from 1, the error increases.