[Jenne & Sanjit]
Good news: we successfully sent filtered output to MC1 @ SUS.
We used 7 channels (different combinations of the three seismometers and six accelerometers).
We tried some values of \mu (0.001-0.005) & gain on SUS_MC1_POSITION:MCL and C1ASS_TOP_SUS_MC1 (0.1-1).
C1:ASS-TOP_SUS_MC1_INMON is huge (it quickly grows to a few times 10,000), so gains of ~0.1 at the two places bring it down to a reasonable value.
Bad news: no difference between reference and filtered IOO-MC_L power spectra so far.
Plan of action: figure out the right values of the parameters (\mu, \tau, different gains, and maybe some delays) to make some improvement to the spectra.
** Rana: there's no reason to adjust any of the MCL gains. We are not supposed to be a part of the adaptive algorithm.
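For context, the \mu above is the step size of the adaptive filter. Here is a minimal single-channel LMS sketch of the idea (illustrative only — the real c1ass code is a multi-channel front-end filter, and the signals below are made up):

```python
import math, random

random.seed(0)
N, taps, mu = 2000, 8, 0.005   # mu in the range we scanned (0.001-0.005)
# Hypothetical witness (seismometer) and target (MC_L-like) signals.
witness = [math.sin(0.1 * n) + 0.1 * random.gauss(0, 1) for n in range(N)]
mcl = [0.7 * witness[n - 2] if n >= 2 else 0.0 for n in range(N)]

w = [0.0] * taps                       # adaptive FIR coefficients
e_start, e_end = 0.0, 0.0
for n in range(taps, N):
    x = witness[n - taps:n][::-1]      # newest sample first
    y = sum(wi * xi for wi, xi in zip(w, x))   # filter's prediction of MC_L
    e = mcl[n] - y                     # residual after subtraction
    for i in range(taps):              # LMS coefficient update, scaled by mu
        w[i] += 2 * mu * e * x[i]
    if n < taps + 200:
        e_start += e * e               # residual power, early in adaptation
    if n >= N - 200:
        e_end += e * e                 # residual power, after adaptation
```

A larger \mu adapts faster but goes unstable sooner, which is why we scanned a range of values.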
There was some uncertainty as to which channels were being input into the Adaptive Filtering screen, so I checked it out to confirm. As expected, the rows on the ASS_TOP_PEM screen directly correspond to the BNC inputs on the PEM_ADCU board in the 1Y6 (I think it's 6...) rack. So C1:ASS-TOP_PEM_1_INMON corresponds to the first BNC (#1) on the ADCU, etc.
After checking this out, I put text tags next to all the inputs on the ASS_TOP_PEM screen for all of the seismometers (which had not been there previously). Now it's nice and easy to select which witness channels you want to use for the adaptation.
I added a new database record (C1:PSL-FSS_RCPID_SETPOINT) to allow for changing of the RC setpoint while the loop is on. This will enable us to step the can's temperature and see the result in the NPRO's SLOWDC.
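For reference, a setpoint record of this kind in the slow database looks roughly like the following (the fields here are illustrative — the actual entry in the .db may differ):

```
record(ao, "C1:PSL-FSS_RCPID_SETPOINT")
{
        field(DESC, "RC thermal PID setpoint")
        field(PREC, "3")
}
```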
While I was mostly able to restart the c1ass computer earlier today, the filter banks were acting totally weird. They showed input excitations when we weren't applying any, and they showed outputs of zero even though the inputs were non-zero and both input and output were enabled. The solution ended up being to use the 2nd-to-last assfe.rtl backup file. Rana made a symbolic link from assfe.rtl to the 2nd-to-last backup, so that the startup.cmd script does not need to be changed whenever we alter the front end code.
The startup_ass script in /caltech/target/gds/, which among other things starts the awgtpman, was changed to match the instructions on the wiki Computer Restart page. We now start up /opt/gds/awgtpman. This may or may not be a good idea, though, since we are currently not able to get the C1:ASS-TOP_PEM channels in DTT and Dataviewer. When we try to run the awgtpman that the script used to start ( /caltech/target/gds/bin/ ), we get a "Floating Exception". We should figure this out, because /opt/gds/awgtpman does not offer 2 kHz as an option, which is the rate the ASS_TOP stuff seems to run at.
The last fix made was to the screen snapshot buttons on the C1:ASS_TOP screen. When the screen was made, the buttons were copied from one of the other ASS screens, so the snapshots saved on the ASS_TOP screen were of the ASS_PIT screen. Not so helpful. Now the update snapshot button will actually update the ASS_TOP snapshot, and we can view past ASS_TOP shots.
c1ass had not been rebooted since before the filesystem change, so when I was sshed into c1ass I got an error saying that the NFS was stale. Sanjit and I went out into the cleanroom and powercycled the computer. It came back just fine. We followed the instructions on the wiki, restarting the front end code, the tpman, and did a burt restore of c1assepics.
You said that the use of FAXST was forbidden for PhDs and graduate students. I had to swear never to buy another FAXST.
I (think I) restarted DMF. It's on Mafalda, running in matlab (not the compiled version which Rana was having trouble with back in the day). To start Matlab, I did "nohup matlab", ran mdv_config, then started seisBLRMS.m running. Since I used nohup, I then closed the terminal window, and am crossing my fingers in hopes that it continues to work. I would have used Screen, but that doesn't seem to work on Mafalda.
Just kidding. That plan didn't work. The new plan: I started a terminal window on Op540, which is ssh-ed into Mafalda, and started up matlab to run seisBLRMS. That window is still open.
Because Unix was being finicky, I had to open an xterm window (xterm -bg green -fg black), and then ssh to mafalda and run matlab there. The symptoms which led to this were that even though in a regular terminal window on Op540, ssh-ed to mafalda, I could access tconvert, I could not make gps.m work in matlab. When Rana ssh-ed from Allegra to Op540 to Mafalda and ran matlab, he could get gps.m to work. So it seems like it was a Unix terminal crazy thing. Anyhow, starting an xterm window on Op540m and ssh-ing to mafalda from there seemed to work.
Hopefully this having a terminal window open and running DMF will be a temporary solution, and we can get the compiled version to work again soon.
We measured the voltage noise of the heater used to control the RC can temperature. It is large.
The above scope trace shows the voltage directly on the monitor outputs of the heater power supply. The steps are from the voltage resolution of the 4116 DAC.
We also measured the voltage noise on the monitor plugs on the front panel. If these are a true representation of the voltage noise which supplies the heater jacket, then we can use it to estimate the temperature fluctuations of the can. Using the spectrum of temperature fluctuations, we can estimate the actual length changes of the reference cavity.
I used the new fax/scanner/toaster that Steve and Bob both love to scan this HP spectrum analyzer image directly to a USB stick! It can automatically make PDF from a piece of paper.
The pink trace is the analyzer noise with a 50 Ohm term. The blue trace is the heater supply with the servo turned off. With the servo on (as in the scope trace above) the noise is much much larger because of the DAC steps.
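The size of those steps follows from the DAC resolution. Assuming the 4116 is a 16-bit DAC spanning ±10 V (a guess — check the board's actual specs):

```python
bits = 16                    # assumed DAC resolution
v_span = 20.0                # assumed output span, -10 V to +10 V
lsb = v_span / 2**bits       # one-count step at the DAC output
print(round(lsb * 1e6, 1))   # 305.2 uV per step (before any gain stages)
```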
The RC thermal PID is now controllable from its own MEDM screen which is reachable from the FSS screen. The slowpid.db and psl.db have been modified to add these records and all seems to be working fine.
Also, I've attached the c1psl startup output that we got on the terminal. This is just for posterity.
I'm also done tuning the PID for now. Using Kp = -1.0, Ki = -0.01, and Kd = 0, the can servo now has a time constant of ~10 minutes and good damping, as can be seen in the StripTool snap below. These values are also now in saverestore.req, so hopefully it's fully commissioned.
I bet that it's much better now than the MINCO at holding against the 24 hour cycle, and that it can nicely handle impulses (like when Steve scans the table). Let's revisit this in a week to see if it requires more tuning.
All of the accelerometers and seismometers are plugged in and functional again. The cables to the back of the accelerometer preamp board (sitting under the BS oplev table) had been unplugged, which was unexpected. I finally figured out that that's what the problem was with half of the accelerometers, plugged them back in, and now all of the sensors are up and running.
The SEIS_GUR seismometer is under MC1, and all the others (the other Guralp, the Ranger, which is oriented vertically, and all 6 accelerometers) are under MC2.
The RGA isolation valve VM1 had been closed since Aug 24, 2009, when I installed the new UPS.
The last RGA scan in the log is from Aug 7, 2009. The vacuum rack UPS failed on Aug 15, 2009.
I opened VM1 today so we can have an IFO RGA scan tomorrow.
Alex logged in around 10:30 this morning and, at our request, adjusted the configuration of fb40m to have 20 days of lookback.
I wasn't able to get him to elog, but he did email the procedure to us:
1) create a bunch of new "Data???" directories in /frames/full
2) change the setting in /usr/controls/daqdrc file
my guess is that the next step is:
3) telnet fb0 8087
I checked, and we do in fact now have 480 directories in /frames/full and are so far using 11% of our 13 TB capacity. Let's try to remember to check up on this so that it doesn't get overfull and crash the framebuilder.
I have replaced the temporary clamps that were connecting the RC heater to its power supply with a new permanent connection.
In the IY1 rack, I connected the control signal of the RC PID temperature servo - C1:PSL-FSS_TIDALSET - to the input of the RC heater's power supply.
The signal comes from a DAC in the same rack, through a pair of wires connected to the J9-4116*3-P3 cross-connector (FLKM). I joined the pair to the wires of the BNC cable coming from the power supply by twisting and screwing them into two available clamps of the breakout FLKM in the IY1 rack - the same one connected to the ribbon cable from the RC Temperature box.
Instead of opening the BNC cable coming from the power supply, I thought it was a cleaner and more robust solution to use a BNC-to-crocodile clamp from which I had cut the clamps off.
During the transition process, I connected the power supply BNC input to a voltage source that I set at the same voltage as the control signal before I disconnected it (~1.145 V).
I monitored the temperature signals and it looked like the RC Temperature wasn't significantly affected by the operation.
I have wiped out the 2008a install of 64-bit linux matlab and installed 2009a in its place. Enjoy.
I have added the records for the RC thermal PID servo into the psl/slowpid.db file which also holds the records for the SLOW servo that uses the NPRO-SLOW to minimize the NPRO-FAST. This new database will take effect upon the next PSL boot.
The perl script which runs the servo is scripts/PSL/FSS/RCthermalPID.pl. Right now it is using hard-coded PID parameters - I will modify it to use the on-screen values after we reboot c1psl.
The new screen C1PSL_FSS_RCPID.adl, the script, and the .db have been added to the SVN.
I have got some preliminary PID parameters which seem to be pretty good: The RCTEMP recovers in ~10 minutes from a 1 deg temperature step and the closed loop system is underdamped with a Q of ~1-2.
I'm leaving it running on op340m for now - if it goes crazy feel free to do a 'pkill RCthermalPID.pl'.
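The discrete PID update presumably looks something like the following. This is a minimal Python sketch under an assumed discretization, driving a toy first-order thermal plant — the actual perl script's implementation may differ:

```python
def pid_step(err, state, Kp, Ki, Kd, dt):
    """One PID update; state carries the integrator and previous error."""
    integ, prev_err = state
    integ += err * dt
    deriv = (err - prev_err) / dt
    return Kp * err + Ki * integ + Kd * deriv, (integ, err)

# Toy closed loop: a first-order "can" (time constant tau, ambient 25 C)
# recovering from a temperature offset, using the gains quoted above.
Kp, Ki, Kd = -1.0, -0.01, 0.0
tau, dt = 600.0, 10.0                  # seconds; illustrative plant values
setpoint, T, state = 26.6, 30.0, (0.0, 0.0)
for _ in range(1000):                  # ~3 hours of simulated time
    u, state = pid_step(T - setpoint, state, Kp, Ki, Kd, dt)
    T += dt / tau * ((25.0 + u) - T)   # heater drive u shifts the equilibrium
```

The integrator (Ki) removes the steady-state offset that the proportional term alone would leave against the ambient temperature.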
Tried to lock the interferometer but arm power didn't get over 65.
Tonight, after the weekend, I resumed the work on locking.
When I started, the Mode Cleaner was unlocked because the MZ was also unlocked.
I aligned the MZ and the transmitted power reached about 2.5
Initially the interferometer lost lock at arm power of about 3-4. It looked like the alignment wasn't good enough. So I ran the alignment scripts a few times, first the scripts for the single parts and in the end the one for the full IFO.
Then I also locked the MZ again, and this time the transmitted power got to about 4.
In the following locking attempts the arm power reached 65, but then the lock was lost during the handoff of CARM to C1:LSC-PD11_I.
I'll keep working on that tomorrow night.
Since ~Aug. 27, the reference cavity has been running with no thermal control. This is not really a problem at the 40m; a 1 deg change of the glass cavity will produce a 5 x 10^-7 strain in the arm cavity. That's around 20 microns of length change.
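Checking the arithmetic (the quoted fractional length change times the 40 m arm length):

```python
alpha = 5e-7     # strain per deg C, as quoted above
L_arm = 40.0     # arm cavity length in meters
dL = alpha * L_arm
print(dL)        # 2e-05 m, i.e. ~20 microns
```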
This open loop time gave us the opportunity to see how good our cavity's vacuum can insulation is.
The first plot below shows the RCTEMP sensors and the RMTEMP sensor. RMTEMP is screwed down to the table close to the can and RCTEMP is on the can, underneath the insulation. I have added a 15 deg offset to RMTEMP so that it would line up with RCTEMP and allow us to see, by eye, what's happening.
There's not enough data here to get a good TF estimate, but if we treat the room temperature as a single frequency (1 / 24 hours) sine wave source, then we can measure the delay and treat it as a phase shift. There's a ~3 hour delay between the RMTEMP and RCTEMP. If the foam acts like a single pole low pass filter, then the phase delay of (3/24)*360 = 45 deg implies a pole at a ~3 hour period. I am not so sure that this is a good foam model, however.
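The numbers above can be checked under the same single-pole assumption:

```python
import math

T_day = 24.0                      # period of the daily temperature cycle, hours
delay = 3.0                       # measured RMTEMP -> RCTEMP lag, hours
phase = delay / T_day * 360.0     # phase shift of the 1/24h line: 45 deg
f = 1.0 / T_day                   # drive frequency, cycles/hour
f_pole = f / math.tan(math.radians(phase))   # one-pole: phase = atan(f/f_pole)
tau = 1.0 / (2 * math.pi * f_pole)           # foam time constant, hours
print(round(phase, 1), round(tau, 2))        # 45.0 3.82
```

Since 45 deg of lag puts the drive exactly at the pole, the implied time constant is 24 h / 2π ≈ 3.8 hours, consistent with the "~3 hour" figure.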
The colorful plot is a scatter plot of RCTEMP v. RMTEMP. The color denotes the time axis - it starts out blue and then becomes red after ten days.
Steve noticed the RGA was not working today. It was powered on but no other lights were lit.
Turns out the c0rga machine had not been rebooted when the file system on linux1 was moved to the raid array, and thus no longer had a valid mount to /cvs/cds/. Thus, the scripts that were run as a cron could not be called.
We rebooted c0rga, and then ran ./RGAset.py to reset all the RGA settings, which had been reset when the RGA had lost power (and thus was the reason for only the power light being lit).
Everything seems to be working now. I'll be adding c0rga to the list of computers to reboot in the wiki.
Today I aligned the beam to PD3 (POX) since Steve had moved it.
The DC power read 1.3mV when the beam was on the PD.
I was told yesterday that on Friday the construction people accidentally ripped out one of the 40m soil grounds.....AND HOW MANY MORE ARE THERE? Nobody knows.
It was ~8 ft long and 0.5" diameter, buried in the ground. No drawing has been found to identify this particular building ground. They promised to replace it on Wednesday with one 10 ft long and 0.75" in diameter.
The wall will be resealed where the conduit enters the north-west corner of IFO room 104.
There should be no concern about safety because the 40m building main ground is connected to the CES Mezzanine step-down transformer.
Atm1 shows the ground bus under the N-breaker panel in the north-west corner of the 40m IFO room.
The second ground bus is visible farther south, under the M-breaker panel.
Atm2 is the new ground that will be connected to ground bus N.
The San Gabriel mountains have been on fire for 6 days. 144,000 acres of beautiful hillsides have burned and it's still burning. Where the fires are.
The 40m lab particle counts are more affected by the building and gardening activity next door than by the fire itself.
This 100-day plot shows that.
I removed the POX RFPD to see how it is mounted on its base. It is here on the work bench, just in case someone wants to use it in the IFO over the weekend.
I put POX back in its place with markers. The PD was removed from its base, so it is for sure misaligned.
Old (pre-6/2009) LLO DCPD: 3 mm OD GTRAN photodiode
I stepped the TIDALSET and looked at what happened. The loop was closed with very low gain.
The RED guy tells us the step/impulse response of the RC can to a step in the heater voltage.
The GREY SLOWDC tells us how much the actual glass spacer of the reference cavity lags the outside can temperature.
Since MINCOMEAS is our error signal, I have sped up its SCAN period from 0.5 to 0.1 seconds in the database and reduced its SMOO from 0.9 to 0.0. I've also copied over the Fricke SLOW code and started making a perl PID loop for the reference cavity.
I made the changes to the psl.db to handle the new Temperature box hardware. The calibrations (EGUF/EGUL) are just copied directly from the LHO .db file (I have rsync'd their entire target area to here).
allegra:c1psl>diff psl.db~ psl.db
< field(DESC,"TIDALOUT- drive to the reference cavity heater")
< field(SCAN,".5 second")
< field(INP,"#C0 S28 @")
< field(DESC,"TIDALINPUT- tidal actuator input")
< field(SCAN,".5 second")
< field(INP,"#C0 S3 @")
> field(DESC,"TIDALINPUT- tidal actuator input")
> field(SCAN,".5 second")
> field(INP,"#C0 S3 @")
> field(DESC,"TIDALOUT- drive to the reference cavity heater")
> field(SCAN,".5 second")
> field(INP,"#C0 S28 @")
Summary: This afternoon we managed to get the temperature control of the reference cavity working again.
We bypassed the MINCO PID by connecting the temperature box error signal directly into EPICS.
We couldn't configure the PID so that it worked with the modified temperature box so we decided to just avoid using it.
Now the temperature control is done by a software servo by using the channel C1:PSL-FSS_MINCOMEAS as error signal and driving C1:PSL-FSS_TIDALSET (which we have clip-doodle wired directly to the heater input).
We 'successfully' used ezcaservo to stabilize the temperature:
ezcaservo -r C1:PSL-FSS_MINCOMEAS -s 26.6 -g -0.00003 C1:PSL-FSS_TIDALSET
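In essence, ezcaservo runs an integrator: each cycle it reads the readback channel (-r), subtracts the setpoint (-s), and increments the actuator channel by gain (-g) times the error. The sketch below mocks the channel I/O with a toy thermal plant (ambient 20 C, 6 C per volt of drive — purely illustrative numbers):

```python
setpoint, gain = 26.6, -0.00003     # the -s and -g arguments used above
drive, temp = 1.145, 27.0           # heater drive (V) and can temperature (C)
for _ in range(200000):
    err = temp - setpoint           # MINCOMEAS minus setpoint
    drive += gain * err             # the integrator step ezcaservo performs
    temp += 0.01 * ((20.0 + 6.0 * drive) - temp)   # toy plant response
print(round(temp, 2))
```

The negative gain gives negative feedback here because more drive means more heat; with the wrong sign the loop would run away instead of settling.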
We also recalibrated the channels (EGUF and EGUL) using ezcawrite, with Peter King on the phone, but we didn't change the database yet. So please do not reboot the PSL computer until we update the database.
More details will follow.
Basically, in addition to the replacement of the resistors with metal film ones, Peter replaced the chip that provides a voltage reference.
The old one provided about 2.5 V, whereas the new one gets to about 7 V. This reference voltage somehow depends on the room temperature and is used to generate an error signal for the temperature of the reference cavity.
Peter said that the new higher reference should work better.
It turned out that half an hour was too long. In less than that, the reference cavity temperature passed the critical point at which the temperature controller (located just below the ref cav power supply in the same rack) disables the input power to the reference cavity power supply.
The controller's display in the front shows two numbers. The first goes with the temperature of the reference cavity; the second is a threshold set for the first number. The power supply gets enabled only when the first number comes under the threshold value.
Now the cavity is cooling down and it will take about another hour for its temperature to be low enough and for the heater power supply to be powered.
The cavity temp cooled below the SP2 set point of 0.1. The Minco SP1 (present temp in volts) is now reading -0.037, so the DC power supply was turned on and set to 12 V, 1 A.
The 40m Lab reference cavity temperature box S/N BDL3002 was modified as per DCN D010238-00-C.
R1, R2, R5, R6: were 10k, now 25.5k metal film
R11, R14: were 10k, now 24.9k metal film
R10, R15: were 10k, now 127k thick film (no metal film resistors available)
R22: was 2.00k, now 2.21k
R27: was 10k, now 33.2k
U5, the LM-336/2.5 was removed
An LT1021-7, 7 V voltage reference was added. Pin 2 to +15V, pin 4 to ground, pin 6 to U6 pin 3.
Added an 8.87k metal film resistor between U6 pin 1 and U4 pin 6.
Added an 8.87k metal film resistor between U6 pin 1 and U4 pin 15.
The 10k resistor between J8 pin 1 and ground was already added in a previous modification.
In addition, R3, R4, R7, R8, R12 and R13 were swapped out for metal film resistors of the same value.
The jumper connection to the VME setpoint was removed, as per Rana's verbal instructions.
This disables the ability to set the reference cavity vacuum chamber temperature by computer.
There's no elog entry about what work has gone on today, but it looks like Peter took apart the reference cavity temperature control around 2PM.
I touched the reference cavity by putting my finger up underneath its sweater and it was nearly too hot to keep my finger in there. I looked at the heater power supply front panel and it seems that it was railed at 30 V and 3 A. The nominal value according to the sticker on the front is 11.5 V and 1 A.
So I turned down the current on the front panel and then switched it off. Otherwise, it would take a couple of days to cool down once we get the temperature box back in. So for tonight there will definitely be no locking. The original settings are in the attached photo. We should turn this back on at its 1 A setting in the morning before Peter starts, so that the RC is at a stable temperature by the evening. It's important NOT to turn it back on and let it just rail. Use the current limit to set it to 1 A. After the temperature box is back in, the current limit can be turned back up to 2 A or so. We never need the range for 3 A; I don't know why anyone set it so high.
While Peter King is still working on the reference cavity temperature box, I turned the power supply for the reference cavity's heater back on. Rana turned it off last night since the ref cav temperature box had been removed.
I just switched it on and turned the current knob in the front panel until current and voltage got back to their values as in Rana's picture.
I plan to leave it like that for half an hour so that the cavity starts warming up. After that, I'll turn the current back to the nominal value indicated on the front panel.
The reference cavity vacuum chamber temperature is plotted starting Feb 22, 2005.
This plot suggests that the MINCO temp controller is not working properly.
In nodus, I moved the elog from /export to /cvs/cds/caltech. So now it is in the cvs path instead of a local directory on nodus.
For a while, I'll leave a copy of the old directory containing the logbook subdirectory where it was. If everything works fine, I'll delete that.
I also updated the reboot instructions in the wiki. Some of it is now also in the SVN.
Is that the reason for the PSL craziness tonight? See attachment.
I just found the elog down and I restarted it.
The PSL Temperature Box (D980400-B-C - what kind of numbering scheme is that?) was modified at LHO/LLO ~8 years ago to have better resolution on the in-loop temperature sensors.
I haven't been able to find a DCN / ECN on this, but there's an elog entry from Hugh Radkins here. I'm also attaching the PDF of the latest drawing (circa 2000) from the DCC.
The schematic doesn't show it, but I am guessing that the T_SENSE inputs are connected to the AD590 chips, and that 4 of these are attached somehow to the RefCav can. IF this is true, I don't understand why there are input resistors on the LT1125 of U1; the AD590 is supposed to be a current source ?
Peter King is supposed to be coming over to work on this today so whoever spots him should force/cajole/entice him to elog what he's done. Film him if necessary.
I also think R1-8 should be swapped into metal film resistors for stability. The datasheet says that it puts out 1 uA/K, so the opamps put out 10 mV/K.
J8 and JP1 should be shorted to disable both the tidal and VME control input. Both are unused and a potential source of drift.
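The 10 mV/K figure quoted above is just the AD590 current scale times the 10k resistance (assumed to be the relevant transimpedance):

```python
i_per_K = 1e-6    # A/K, the AD590 output scale quoted above
R = 10e3          # ohms; assumed input/feedback resistor value
v_per_K = i_per_K * R
print(v_per_K)    # 0.01 V/K = 10 mV/K
```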
Peter King is updating our temp box as Hugh did at Hanford on Oct. 22, 2001. I still have not seen an updated drawing of this.
The LT 1021-7 reference chip will arrive tomorrow morning. This modification should be completed by noon.
** The link to the DCN from Hugh is here in the DCC.
The RAID array servicing the Frame builder was finally switched over to JetStor Sata 16 Bay raid array. Each bay contains a 1 TB drive. The raid is configured such that 13 TB is available, and the rest is used for fault protection.
The old Fibrenetix FX-606-U4, a 5 bay raid array which only had 1.5 TB space, has been moved over to linux1 and will be used to store /cvs/cds/.
This upgrade extends the lookback for all channels from 3-4 days to about 30 days. Final copying of old data occurred on August 5th, 2009, and the switchover happened on that date.
Sadly, this was only true in theory and we didn't actually check to make sure anything happened.
We are not able to get lookback of more than ~3 days using our new huge disk. Doing a 'du -h' it seems that this is because we have not yet set the framebuilder to keep more than its old amount of frames. Whoever sees Joe or Alex next should ask them to fix us up.
Looks like all of the accelerometers and seismometers have been disconnected since early AM last Monday when Clara disconnected them for her sensor noise measurement.
As Rob noted last Friday, the UPS which powers the Vacuum rack failed. When we were trying to move the plugs around to debug it, it made a sizzling sound and a pop. Bad smells came out of it.
Ben came over this week and measured the quiescent power consumption. The low power draw level was 11.9 A, and during a reboot it's 12.2 A. He measured this by ??? (Rob inserts method here).
So what we want is a 120 V * 12.2 A ~ 1.4 kVA UPS with ~30-50% margin. We look for this on the APC-UPS site:
On Monday, we will order the SUA2200 from APC. It should last for ~25 minutes during an outage. It's $1300. The next step down is $200 cheaper and gives 10 minutes less uptime.
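The sizing is just the measured draw times the line voltage, plus headroom:

```python
V_line, I_draw = 120.0, 12.2    # volts; measured amps during reboot
VA = V_line * I_draw            # apparent power of the load
print(round(VA))                # 1464 -> the "~1.4 kVA" above
for margin in (0.3, 0.5):       # the 30-50% margin mentioned above
    print(round(VA * (1 + margin)))   # 1903 and 2196 -> a 2200 VA unit fits
```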
The new APC Smart-UPS 2200VA is now running at the vacuum rack. Two of the five load-monitoring LEDs are on.
Maglev, dry pumps and roughing pumps are not using UPS.
The switch over went smoothly with Yoichi's help.
First we closed all vacuum valves and stopped the two small turbos.
Then we turned off power to the instruments in the vac rack and the VMEs: c1vac1 & c1vac2.
Maglev was left running.
Now we moved the AC plugs from the wall receptacles over to the back of the UPS and powered them up.
Varian turbos were restarted and vacuum valves were restored in order to reach vacuum normal condition.
See 40m Vacuum System States and Sequences Manual of 10-24-2001
The Linux 3 desktop computer at the pump spool is out of order. We should replace it.
The vacuum control screen can be pulled up on a laptop: /cvs/cds/caltech/medm/c0/ve/VacControl_BAK.adj
This morning I found all the front end computers down. A failure of the RFM network had driven all the computers down.
I was about to restart them all, but it wasn't necessary. After I power cycled and restarted C1SOSVME all the other computers and RFM network came back to their green status on the MEDM screen. After that I just had to reset and then restart C1SUSVME1/2.