TP2 drypump was changed at an intake pressure of 982 mTorr, at 84,889 hrs. This seal held up for one year.
The rebuilt pump seal is performing well at 28 mTorr.
Steve & Bob,
Bob removed the head cover from the housing to inspect the condition of the tip seal. The tip seal was fine, but the Viton cover seal had a bad hump. This misaligned the tip seal and kept it from rotating.
It was repositioned and carefully tightened. It worked. Its starting current transient measured 28 A, and operational mode 3.5 A.
This load is normal for an old pump. For comparison, the brand new DIP7 spare drypump drew 25 A at start and 3.1 A in operational mode. It is amazing how much punishment a slow-blow ceramic 10 A fuse can take [ 0215010.HXP ]
In the future one should measure the current pickup [ transient <100 ms ] after the seal change with a Fluke 330 Series current clamp.
It was swapped in, and the foreline pressure dropped to 24 mTorr after 4 hours. This is very good. TP3 rotational drive current: 0.15 A at 50k rpm, 24 C.
Gautam and Steve,
Our TP3 drypump seal is at 360 mT [0.25 A load on the small turbo] after one year. We tried to swap in the old spare drypump with a new tip seal. It kept blowing its fuse, so we could not do it.
The noisy aux drypump was turned on and opened to the TP3 foreline [ two drypumps are on the foreline now ]. The pressure is 48 mT, with a 0.17 A load on the small turbo.
Another big problem is the workstation application upgrades. The NDS protocol version has been incremented, which means that all the NDS client applications have to be upgraded. The new dataviewer is working fine (on pianosa), but dtt is not:
controls@pianosa:~ 0$ diaggui
diaggui: symbol lookup error: /ligo/apps/linux-x86_64/gds-2.15.1/lib/libligogui.so.0: undefined symbol: _ZN18TGScrollBarElement11ShowMembersER16TMemberInspector
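This looks like an ABI mismatch between the gds build and the ROOT libraries it loads at runtime. A quick way to check (a sketch, not part of the original entry; assumes binutils is installed):

echo '_ZN18TGScrollBarElement11ShowMembersER16TMemberInspector' | c++filt
# prints TGScrollBarElement::ShowMembers(TMemberInspector&), a ROOT GUI class,
# suggesting diaggui is picking up a different ROOT version than it was built against
ldd /ligo/apps/linux-x86_64/gds-2.15.1/lib/libligogui.so.0 | grep -i root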
dtt (diaggui) and dataviewer are now working on pianosa to retrieve realtime data and past data from DQ channels.
Unfortunately, it looks like there may be a problem with trend data. If I try to retrieve 1 minute of "full data" with dataviewer for channel C1:SUS-ITMX_SUSPOS_IN1_DQ around GPS 1019089138, everything works fine:
Connecting to NDS Server fb (TCP port 8088)
T0=12-04-01-00-17-45; Length=60 (s)
60 seconds of data displayed
but if I specify any trend data (second, minute, etc.) I get the following:
Connecting to NDS Server fb (TCP port 8088)
Server error 18: trend data is not available
datasrv: DataWriteTrend failed in daq_send().
T0=12-04-01-00-17-45; Length=60 (s)
No data output.
Alex warned me that this might have happened when I tested the new daqd without first turning off frame writing.
I'm not sure how to check the integrity of the frames, though. Hopefully they can help sort this out on Monday.
The default cds-crtools didn't come with some of the older ezca utils (like ezcaread, ezcawrite, etc.). These are now packaged for Debian, so I installed them with sudo apt update && sudo apt install dtt-ezca-tools on rossa. Now we don't have to needlessly substitute the commands in our old shell scripts with the more modern z read, z write, etc.
I am wondering if there is an implicit relative minus sign between the z servo and ezcaservo commands...
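For anyone tripping over old scripts, the two command sets are interchangeable for reads and writes. A minimal sketch (the channel name below is just an illustration, borrowed from the FSS screen):

ezcaread C1:PSL-FSS_SW1        # old EZCA tool, restored by dtt-ezca-tools
z read C1:PSL-FSS_SW1          # modern cdsutils equivalent
ezcawrite C1:PSL-FSS_SW1 1
z write C1:PSL-FSS_SW1 1
# before trusting z servo in place of ezcaservo in a script, check for the
# possible relative sign flip mentioned above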
Yesterday morning was dusty. I wonder why?
The PRM sus damping was restored this morning.
Yesterday afternoon at 4 the dust count peaked at 70,000 counts.
Manasa's allergy was bad at the X-end yesterday. What is going on?
There was no wind and CES neighbors did not do anything.
It is worth wiping the table top covers. Use isopropanol-soaked, lint-free wipes.
You should wipe off the table cover before you take it off next time.
It is important to turn up the PSL enclosure HEPA Variac voltage if you are working in there. It takes less than 10 minutes to get back to lab conditions.
The lab air count is normal. It is not logged; I keep a notebook of particle counts on the SP table, next to the Met One counter.
Chris replaced some air conditioning filters and ordered replacement filters today.
Please wet WIPE with methanol-soaked Kimwipes before opening a chamber or optical table!!
The Met One particle counter is located on the CES wall, just behind the ITMX chamber.
The numbers are not so bad, but have you asked the IFO lately?
The e-log was repeatedly hanging, and several attempts to start the daemon failed.
The problem was solved after clearing the (Firefox) browser cache, cookies, everything!!
I found the e-log had been down since around 3:40 pm, so I restarted it. Now it's working.
Tonight I managed to lock CARM and DARM under ALS control only
ALS error signal tuning
To find the error signals for CARM/DARM, I turned on the oscillators (at 307.8 and 313.31 Hz respectively) with 150 counts and enabled FM10 (Notch for sensing matrix) in the CARM and DARM servo banks. I then removed the ALS offsets (C1:LSC-ALSX_OFFSET, C1:LSC-ALSY_OFFSET) and looked at the transfer functions shown in Attachment #2. I optimized the ALS blending until I maximized the CARM and DARM A to B paths and minimized CARM and DARM cross couplings. The signs were chosen to leave a phase of 0.
After measuring the OLTFs for eCARM and eDARM (loop closed with the A error point) and tuning the ALS error signals, I gradually blended the A and B paths and checked the OLTFs for CARM and DARM. During this I realized I needed to disable some of the notch violin filters because they sometimes made the DARM loop unstable after >50% blending. In the end the simultaneous CARM_A/DARM_A to CARM_B/DARM_B handoff was successful in 0.5 seconds. Attachment #3 shows the OLTFs under ALS control.
After getting nominally stable ALS control, I tried adding an offset. The LSC CARM offset range was insufficient, so I ended up directly scanning C1:LSC-ALSX_OFFSET and C1:LSC-ALSY_OFFSET. For the first couple of attempts the ramp time was set to 2.0 seconds, and a step of 0.01 was enough to break the lock. I managed to hold the control with C1:LSC_CARM_A_IN1 offset by as much as ~500 counts (rms ~200 counts). I roughly estimate this to be ~5% of the CARM pole, which is 4 kHz in this case, so ~200 Hz overall, which is not that large.
I made some changes to the elog on nodus:
I saw that the current version of the elog seems to be in the svn, so I tried to commit the changes from nodus via ssh, but got this message:
"svn: This client is too old to work with working copy '/cvs/cds/caltech/elog/elog-2.7.5'; please get a newer Subversion client."
I feel I should svn this but don't want to *&#@ the svn/elog up.
For now I will leave it alone and ask a question: Is the folder /cvs/cds/caltech/elog/elog-2.7.5/ under SVN control? Is it also under CVS control?
TL;DR: New tab added to elog.
No damage. The BS sensor UR, at 0.220 V, has been low for some time.
Dataviewer does not work for long-term trends.
The 6.2M Bandon, OR earthquake did not trip any sus.
Earthquake 4.4 Leo Carrillo Beach.
Some of the watchdogs tripped out.
Quake coming through. I've re-enabled optic damping (except ETMY), and left off the oplevs for now. We can do a resonant-f check over the weekend.
Looks like it was a magnitude 5 near Olancha, where they sell really good fresh jerky.
latest news: there's actually been about a dozen earthquakes in Keeler in the last couple hours: http://earthquake.usgs.gov/eqcenter/recenteqsus/Maps/special/California_Nevada_eqs.php
Local eq shakes the lab
A large earthquake shook Baja California, Mexico, and 6 aftershocks over magnitude 5 followed. The frontend computers have been down since Friday.
A shaky day yesterday postpones the venting. We had about 11 shakes larger than mag 4.0; the largest was a mag 5.5 at 13:58 on Sunday, Aug 26, in the Salton Sea area.
Atm3: ITMX and ETMX did not come back to their positions.
I was trying to get a lossmap measurement over the weekend but had some trouble first with the IMC and then with the PMC.
For the IMC: it was a bit too misaligned to catch and maintain lock, and I had a hard time improving the alignment by hand. Fortunately, turning on the WFS quickly once it was locked restored the transmission to nominal levels and made it hold lock longer, but only for several minutes, not enough for a lossmap scan (which can take up to an hour). Using the WFS information I manually realigned the IMC, which made locking easier but wouldn't help with staying locked.
For the PMC: The PZT feedback signal had railed and the PMC had been unlocked for 8+ hours. The PMC medm screen controls were generally responsive (I could see the modes on the CCDs changing) but I just couldn't get it locked. c1psl was responding to ping but refusing telnet so I keyed the crate, followed by a burt restore and finally it worked.
Since the PMC came back, the IMC has maintained lock for more than an hour, so I'm now running the first lossmap measurements.
Southern Mexico is still shaking..... and so are we.
No sus tripped. Seismometers do not see the 5.3M?
Lompoc 4.3M and 3.7M Avalon
M4 local earthquake at 10:10 UTC There is no sign of damage.
....here is another one......... M5.8 Ferndale, CA at 16:40 UTC
I changed the names on a switch (SW1) in the C1:PSL-FSS screen. To do this I had to edit the psl.db database file in the directory /cvs/cds/caltech/target/c1psl. After this change, when I opened the screen, all fields in the C1PSL_FSS screen went blank. As a change to the database file takes effect only after we restart the c1psl machine (a slow machine), I went ahead and reset it. I then used the burt 'today' snapshot directory to locate the most recent snapshot files and used burtgooey to restore all the values from the c1psl.snap file.
Everything back to normal now.
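For the record, the same restore can be done non-interactively with burt's command-line write tool. A minimal sketch (the snapshot path below is illustrative, not the actual autoburt location):

burtwb -f /path/to/autoburt/today/c1psl.snap
# burtwb writes the snapshot values back to EPICS; burtgooey is its GUI front end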
If you come up with a good idea and want to add new things to the current RT model:
1. Go to the simLink directory and open matlab.
2. In the matlab command line, type:
3. Open a model you want to edit.
4. Edit! CDS_PARTS has useful CDS parts.
There are some traps. For example, you cannot put a cdsOsc inside a subsystem.
5. Compile your new model. See my elog #3787.
6. If you want to burt restore things:
7. Edit MEDM screens
8. A useful wiki page on making new suspension MEDM screens:
Say you want to edit all the similar medm screens named C1SUS_NAME_XXX.adl; the two-step recipe is below, with a quick sketch after it.
1. Go to /opt/rtcds/caltech/c1/medm/master and edit C1SUS_DEFAULTNAME_XXX.adl as you like.
2. Run generate_master_screens.py.
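A minimal sketch of the recipe (assuming the script lives in the master directory and is executable; the exact invocation may differ):

cd /opt/rtcds/caltech/c1/medm/master
# edit C1SUS_DEFAULTNAME_XXX.adl, then regenerate every per-optic copy:
./generate_master_screens.py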
We should completely turn off the air conditioner when working on green locking.
Even though the green beam propagates inside the chambers, the air conditioner still affects the spatial jitter of the beam.
The attached picture was taken when Steve and I were seeing how the green beam jittered.
At that time the beam was injected from the end table and went through the ETM, ITM, and BS chambers.
Eventually it came out of the chamber and hit the wall outside. It was obvious: we could see the jittering when the air conditioning was ON.
With no DARM offset, sweeping CARM shows an asymmetry between the state where we lock to a DARM spring and the state with a DARM anti-spring. This is why we have a link between the DARM and CARM optical springs.
For each DARM detune direction (positive or negative, spring or anti-spring), there is only one CARM direction which can yield a DC-based error signal lock with a CARM offset but no DARM offset, which is what we want.
For the proposed construction in the NW corner of the CES building (near the 40m BS chamber), they did a simulated construction activity on Wednesday from 12 to 1.
In the attached image, you can see the effect as seen in our seismometers.
This image was produced by the 40m summary pages code that Tega has been shepherding back to life, luckily just in time for this test.
Since our local time PDT = UTC - 7 hours, 1900 UTC = noon local. So most of the disturbance happens from 1130-1200, presumably while they are setting up the heavy equipment. If you look in the summary pages for that day, you can also see the IM lost lock. Unclear if this was due to their work or if it was coincidence. Thoughts?
I checked the effect of the arm length on the reflectance of the f2 (=5*f1) sidebands.
Conclusion: if we choose L_arm = 38.4 [m], it is sufficiently far from the resonance.
We may want to incorporate a small change of the recycling cavity lengths so that we can compensate the phase deviation from -180 deg.
f1 = 11.065399 MHz is assumed. The carrier is assumed to be locked on resonance.
Attachment 1: (left) amplitude reflectance of the arm cavity at f2 as a function of L_arm; (right) phase. Horizontal axis: arm length in meters. Vertical axes: magnitude and phase of the reflectance.
At L = 37.93 [m], the f2 sidebands become resonant in the arm cavity. Away from this length, they are not resonant.
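As a quick sanity check of those numbers (my arithmetic, not part of the original entry): the arm FSR is c/2L. At L = 37.93 m, FSR = 299792458 / (2 x 37.93) ≈ 3.952 MHz, and f2 = 5 x 11.065399 MHz ≈ 55.327 MHz ≈ 14.00 x FSR, so the f2 sideband co-resonates with the carrier. At the proposed L = 38.4 m, FSR ≈ 3.904 MHz and f2/FSR ≈ 14.17, comfortably between resonances.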
Attachment 2: close-up at around 5 f1 frequency.
The phase deviation from the true anti-resonance is ~0.7 deg. This can be compensated by adjusting both the PRC and SRC lengths.
I was told yesterday that on Friday the construction people accidentally ripped out one of the 40m soil grounds..... AND HOW MANY MORE ARE THERE? Nobody knows.
It was ~8 ft long and 0.5" in diameter, buried in the ground. No drawing could be found to identify this exact building ground. They promised to replace it on Wednesday with one 10 ft long and 0.75" in diameter.
The wall will be resealed where the conduit enters the northwest corner of IFO room 104.
There should be no concern about safety because the 40m building main ground is connected to the CES Mezzanine step-down transformer.
Atm1 shows the ground bus under the N breaker panel in the northwest corner of the 40m IFO room.
The second ground bus is visible farther south, under the M breaker panel.
Atm2 shows the new ground that will be connected to ground bus-N.
elog was acting up again (not running), so I restarted it.
And again. This makes 4 times since lunchtime yesterday....something bad is up.
I've set up nodus to start the ELOG on boot, through /etc/init/elog.conf. Also, thanks to this, we don't need to use the start-elog.csh script any more. We can now just do:
controls@nodus:~ $ sudo initctl restart elog
I also tweaked some of the ELOG settings, so that image thumbnails are produced at higher resolution and quality.
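For reference, a minimal sketch of what an upstart job for this looks like (the actual contents of /etc/init/elog.conf and the elogd paths below are assumptions, not copied from nodus):

# /etc/init/elog.conf -- minimal upstart sketch
description "ELOG daemon"
start on runlevel [2345]
stop on runlevel [016]
respawn
exec /usr/local/sbin/elogd -c /usr/local/elog/elogd.cfg

With respawn, upstart also restarts elogd if it dies, which should cut down on the manual restarts logged above.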
elog died b/c someone somewhere did something which may or may not have been innocuous. I ran the script in /cvs/cds/caltech/elog to restart the elog (thrice).
I have now banned Warren from clicking on the elog from home