We have two STS-2 readout boxes: pink and blue. The pink one outputs 12 VDC on the velocity channels, which points to a problem with its amplifier. This box has a rectifier (it runs from AC power) and an amplifier for the velocity channels; the mass position and calibration channels are wired straight from input to output. Since the velocity amplifier does not work properly, I connected the velocity channels directly to the output (the STS-2 signal is large enough even without amplification). When I plugged the STS-2 into the pink readout box, I saw ~4 VDC on the velocity output, so the STS-2 needed to be recentered. I pressed the AUTOZERO command, but that did not work. I had checked beforehand that this readout box does produce the autozero logic signal (5 VDC for ~2 sec), so I think it simply does not supply the STS-2 with enough current; the seismometer needs 0.1 A in autozero mode.
After switching the blue readout box to the 1 sec regime and zeroing the STS-2, it started to output a reasonable signal at gain = 10. I tried gain = 100, but the X velocity channel started to output noise. The gain is now 10 and the response is 120 sec, but at least this box works. Its performance and noise level are still not clear; to determine them I've put the STS-2 in the isolation box.
After putting the Guralps in the isolation box and waiting a couple of days, the Guralp noise has improved a little more.
We couldn't scan the Y arm for 1FSR last night because the ALS servo breaks while sweeping.
We thought this might come from amplitude fluctuations of the beat signal. The beat signal going into the beatbox was about -5 dBm, which is not quite enough for the beatbox to get a good LO. So we added an amplifier (and attenuators), bringing the amplitude to +1 dBm. According to our calculation, the range the beatbox can handle is about -3 dBm to +3 dBm.
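As a sanity check on the level arithmetic, a quick sketch; the 50 ohm RF impedance is my assumption (standard for this kind of chain), and the dBm values are the ones quoted above:

```python
# Sanity check of the beat-note level arithmetic (a sketch; the 50 ohm
# impedance is an assumption, the dBm values are from the entry above).
import math

def dbm_to_mw(dbm):
    """Convert a power level in dBm to milliwatts."""
    return 10 ** (dbm / 10.0)

def dbm_to_vrms(dbm, r_ohm=50.0):
    """RMS voltage corresponding to a dBm level into a given impedance."""
    p_w = dbm_to_mw(dbm) * 1e-3
    return math.sqrt(p_w * r_ohm)

before, after = -5.0, 1.0        # dBm, measured before/after the amp chain
net_gain_db = after - before     # amplifier minus attenuators
print(f"net gain needed : {net_gain_db:+.0f} dB")
print(f"LO level now    : {dbm_to_mw(after):.2f} mW, "
      f"{dbm_to_vrms(after)*1e3:.0f} mVrms")
```

So the amplifier/attenuator combination only has to supply +6 dB net, and the +1 dBm LO is about a quarter volt RMS into 50 ohms.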
This improved the stability of the lock, and we could scan the arm over 1 FSR. Below is a plot of the scanned ALS error signal (blue), the Y arm IR PDH signal (green), and TRY (red).
For each slope, we can see two TEM00 peaks, some higher-order modes (maybe 01, 02, 02) and sidebands (large 11 MHz, small 55 MHz?).
We couldn't scan for more. This is still a mystery.
Also, we need to reduce the residual Y arm length fluctuation further, because we get a funny TRY peak shape.
For C1:ALS-BEATY_COARSE_I_IN1, 1 count corresponds to 0.21 nm (see elog #6817). We swept 4000 counts peak to peak in 50 sec, so the scan speed is about 17 nm/sec.
This means it takes about 0.06 sec to cross a resonant peak.
The cavity build-up time is about 2LF/(pi*c) ~ 40 usec, so the scan is quasi-static enough.
The characteristic time scale of the Y end temperature control is about 10 sec, so the Y end frequency follows the Y arm length change via the temperature control.
Currently the sampling frequency of the DQ channels is 2048 Hz, which gives us about 100 points across a TRY peak. I think this is enough to get a peak height.
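The numbers above can be re-derived in a few lines. Note the arm length (~37.8 m) and IR finesse (~450) are my assumptions, not stated in this entry; everything else is from the text:

```python
# Re-derivation of the scan numbers above (a sketch; L_arm and finesse are
# assumptions for the 40m IR arm cavity, the rest is from the entry).
import math

c = 299792458.0     # speed of light [m/s]
lam = 1064e-9       # IR wavelength [m]
L_arm = 37.8        # arm length [m] (assumption)
finesse = 450.0     # IR finesse (assumption)

counts_pp = 4000.0  # C1:ALS-BEATY_COARSE_I_IN1 sweep, counts peak to peak
cal = 0.21e-9       # calibration, m per count (elog #6817)
t_sweep = 50.0      # sweep time [s]

v_scan = counts_pp * cal / t_sweep               # scan speed, ~17 nm/s
fwhm_len = lam / (2.0 * finesse)                 # cavity linewidth in length
t_cross = fwhm_len / v_scan                      # time to cross a resonance
t_build = 2.0 * L_arm * finesse / (math.pi * c)  # cavity build-up time
n_points = t_cross * 2048.0                      # DQ samples per TRY peak

print(f"scan speed  : {v_scan*1e9:.1f} nm/s")
print(f"crossing    : {t_cross*1e3:.0f} ms")
print(f"build-up    : {t_build*1e6:.0f} us")
print(f"points/peak : {n_points:.0f}")
```

With these assumed cavity parameters the crossing time comes out at ~0.06-0.07 s against a ~40 us build-up time, so the quasi-static claim holds with three orders of magnitude of margin.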
- Reduce RMS. We are trying to use a whitening filter.
- Find why we can't scan more. Why??
- ETMY coil gains may have some imbalance. We need to check.
- Characterize Y end green frequency control. Koji and I changed them last week (see elog #6776).
- Calculate positions of RF SBs and HOMs and compare with this result.
Tried the script 3 times and it didn't come back. Pkill'd it and then ran the script again. That worked.
It's been non-functional for 3 weeks. Has anyone else noticed this? Images have been missing since ~Sep 21.
I've re-submitted the Condor job; pages should be back within the hour.
Summary pages will be unavailable today due to LDAS server maintenance. This is unrelated to the issue that Rana reported.
Nice PSL summaries from LHO:
[Jamie, Jenne, Suresh, Steve, Koji, Kiwamu]
We got two green beams coming out from the chambers !
Summary of today's invac work :
- removed the access connector and the BS north door
- realigned the X and Y arm to the green beams.
- installed a HWP on the ETMY table to rotate the polarization of the green beam to P.
- repositioned the first periscope on the BS table.
- repositioned the second periscope on the IOO table.
- steered some green mirrors on the IOO and OMC chamber to let the Y green beam come out to the PSL table.
- installed a PBS in front of the first periscope to spatially overlap the two green beams.
- adjusted the incident angle of the PBS to maximize the power of the Y green beam, which is transmitted through it.
- steered two mirrors on the BS table to align the X green beam.
- installed two beam dumps, one is near the PBS to eliminate a ghost in the X green beam, and the other is on the back side of IPPOS/ANG pick off window.
- closed the doors.
When steering the final green mirror on the OMC table, accidentally we changed the alignment of the MC incident mirror.
So the alignment of the incident beam going into MC has changed, and we haven't re-aligned it yet.
While we were installing the PBS on the BS table, we found that the allowable incident angle for the Y beam is ~55 deg, which maximizes the amount of transmitted Y green.
Since the PBS had been assumed to sit at 45 deg incidence in our optical layout, this required several modifications to the green mirrors.
To have a clear X green beam path going into the PBS, we had to slide the PBS and periscope to the West.
The periscope is now sitting on the very edge of the BS table, and in fact ~ 20% of the bottom plate of the periscope is already sticking out.
Also, since 30% of the area of the PBS's post sits over a hole (apparently for the stack), we had to use three dog clamps instead of a fork clamp to make the contact tight.
[Rana / Jenne / Kiwamu]
The ETMY suspension tower is currently sitting on the north side of the table for some inspections.
The adjustment of the OSEMs is ongoing.
(What we did)
+ Taken out two oplev mirrors, Jamie's windmill and a lemo patch panel.
+ Put some pieces of metal as markers for the original place
+ Put some markers at a distance of dLY = -25.49 cm = -10.04 inch from the original place (see the 40m wiki).
The minus sign means it will move away from the vertex.
+ Brought the ETMY suspension tower to the north side to do some inspections
+ Did some inspections by taking the noise spectra (#5141)
+ Adjusted the OSEM range and brought the magnets on the center of the OSEM holders by rotating and translating the OSEMs
+ During the work we found the proper PIT and YAW gains were about -5, which are the opposite sign from what they used to be.
+ Trying to minimize the cross couplings
JD: There is still some funny business going on, like perhaps the LR magnet isn't quite in the OSEM beam. We leave the optic free swinging, and will continue to investigate in the morning.
Also, EQ gave us a better (and not pwd protected) URL for the summary pages. Please replace your previous links with this new one:
As Steve pointed out, the summary pages show that the Y arm transmission drifts a lot when locked. The OL summary page shows that this is all due to ITMY yaw.
It could be either that the coil driver / DAC is bad or that the suspension is poorly built. We need to dig into the ITMY OL trends over the long term to see if this is new or not.
Also, weather station needs a reboot. And does anyone know what the MC_F calibration is?
Dead again. No outputs for the past month. We really need a cron job to check this out rather than wait for someone to look at the web page.
Max tells us that some conf files were bad and that he did something and now some pages are being made. But the PEM and MEDM pages are blank. Also the ASC tab looks bogus to me.
The summary pages are working at slow-motion speed; the response time is ~12 minutes.
Last good page: May 18, 2017
Not found error message: May 19 - June 4, 2017
Blank plots: June 5, 2017
40m surfs: Nicole Ing, Iswita Saikia and Sonali Mohapatra received 40m specific safety training today.
Alex Cole and Craig Cahillane received 40m specific, basic safety training last week.
Andres Medina and Andrew "Harry" Hall received 40m specific safety training. They have already done general safety, and their laser safety training will be this afternoon.
Pooja and Keirthana received 40m specific basic safety training.
Shruti and Sandrine received 40m specific basic safety training this morning.
The 40m lab specific safety training is done. The participants were
Stephanie Erickson, Clara Bennett, Chris Zimmerman, Zach Commings, Michelle Stephen (SURFs) and Drew Cappel (postdoc).
They have already gone through the Caltech Safety Office laser and general safety training.
They still have to read, understand, and sign the SOP for the laser & lab.
ITMX and PRM moved a lot; BS and ITMY just a little, based on the oplev reference.
Remember that the Oplevs are not good references because of the temperature sensitivity. The week long trend shows lots of 24 hour fluctuations.
A plot showing that the daily variation in the OLs is sometimes almost as much as the full scale readout (-1 to +1).
ITMX, PRM and BS watchdogs are tripped. They were restored.
Stable MC was disabled so I could use the 1 W MC_REFL beam to measure green glass.
This morning, at about 12 Koji found all the front-ends down.
At 1:45pm rebooted ISCEX, ISCEY, SOSVME, SUSVME1, SUSVME2, LSC, ASC, ISCAUX
Then I burt-restored ISCEX, ISCEY, ISCAUX to April 2nd, 23:07.
The front-ends are now up and running again.
I restored damping to all SUSes except ITM-east. The ITMX OSEMs are being used in the clean assembly room.
Very, very cool!
Kiwamu (or whoever is here last tonight): please run the free-swing/kick script (/opt/rtcds/caltech/c1/scripts/SUS/freeswing) before you leave, and I'll check the matrices and update the suspensions tomorrow morning.
All suspensions were restored and the MC locked. The PRM side OSEM RMS motion was high.
Atm2: why is the PRM 2x as noisy as the SRM?
OSEM voltages to be corrected at the upcoming vent: threshold ~0.7-1.2 V (at 22 out of 50):
ITMX_UL, UR, LL, LR, SD
ETMX_UL, UR, LL, LR, SD
SRM_UL, UR, LL
MC3_UL, LR, LL
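A sketch of how one could script this check at the next vent. The channel names and voltages in the example are hypothetical placeholders; only the 0.7-1.2 V window is from above:

```python
# Sketch of a check for OSEM voltages outside the nominal window.
# The names and voltages below are hypothetical placeholders; the
# 0.7-1.2 V window is the threshold quoted in the entry above.
V_MIN, V_MAX = 0.7, 1.2

def out_of_range(osem_volts, vmin=V_MIN, vmax=V_MAX):
    """Return the (sorted) OSEM names whose readback is outside [vmin, vmax]."""
    return sorted(name for name, v in osem_volts.items()
                  if not (vmin <= v <= vmax))

# hypothetical example readbacks [V]
readings = {"ITMX_UL": 0.55, "ITMX_UR": 0.95, "SRM_LL": 1.31, "MC3_LR": 0.68}
print(out_of_range(readings))
```

Running this over the full set of 50 readbacks would reproduce the flagged list above automatically instead of by eye.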
Why are all the suspension watchdogs tripped? None of the suspension models are running on c1ioo, so they should be completely unaffected. Steve, did you find them tripped, or did you shut them off?
In either event they should be safely turned back on.
I've turned off the coils. Though none of them are on c1ioo, who knows what can happen when we try to run the models again.
Now that the replacement susaux machine is installed and fully tested, I renamed it from c1susaux2 to c1susaux and updated the DNS lookup tables on chiara accordingly.
Then we restarted daqd.
[Suresh / Kiwamu]
The c1lsc and c1sus machine were rebooted.
- - (CDS troubles)
After we restarted daqd and pressed some DAQ RELOAD buttons the c1lsc machine crashed.
The machine didn't respond to ssh, so the machine was physically rebooted by pressing the reset button.
Then we found all the realtime processes on the c1sus machine became frozen, so we restarted them by sshing and typing the start scripts.
However after that, the vertex suspensions became undamped, even though we did the burt restore correctly.
This symptom was exactly the same as Jenne reported (#5571).
We tried the same technique as Jenne did ; hardware reboot of the c1sus machine. Then everything became okay.
The burt restore was done for c1lsc, c1asc, c1sus and c1mcs.
- - (ITMX trouble)
During the damping recovery attempt, the ITMX mirror seemed stuck to an OSEM: the UL readout became zero and the rest of them went to full range.
Eventually introducing a small offset in C1:SUS-ITMX_YAW_COMM released the mirror. The amount of the offset we introduced was about +1.
[Jamie, Brett, Jenne]
We made some small modifications to the sus_single_control suspension controller library part to get in/out the signals that Brett needs for his "global damping" work. We brought out the POS signal before the SUSPOS DOF filter, and we added a new GLOBPOS input to accommodate the global damping control signals. We added a new EPICS input to control a switch between local and global damping. It's all best seen from this detail from the model:
The POSOUT goto goes to an additional output. As you can see I did a bunch of cleanup to the spaghetti in this part of the model as well.
As the part has a new input and output now we had to modify c1sus, c1scx, c1scy, and c1mcs models as well. I did a bunch of cleanup in those models as well. The models have all been compiled and installed, but a restart is still needed. I'll do this first thing tomorrow morning.
All changes were committed to the userapps SVN, like they should always be.
We still need to update the SUS MEDM screens to display these new signals, and add switches for the local/global switch. I'll do this tomorrow.
During the cleanup I found multiple broken links to the sus_single_control library part. This is not good. I assume that most of them were accidental, but we need to be careful when modifying things. If we break those links we could think we're updating controller models when in fact we're not.
The one exception I found was that the MC2 controller link was clearly broken on purpose, as the MC2 controller has additional stuff added to it ("STATE_ESTIMATE"):
I can find no elog that mentions the words "STATE" and "ESTIMATE". This is obviously very problematic. I'm assuming Den made these modifications, and I found this report: 7497, which mentions something about "state estimation" and MC2. I can't find any other record of these changes, or that the MC2 controller was broken from the library. This is complete mickey mouse bullshit. Shame shame shame. Don't ever make changes like this and not log it.
I'm going to let this sit for a day, but tomorrow I'm going to replace the MC2 controller with a proper link to the sus_single_control library part. This work was never logged, so it didn't happen as far as I'm concerned.
Most of the suspensions look OK, with "badness" levels between 4 and 5. I'm just posting the ones that look slightly less ideal below.
pit yaw pos side butt
UL 0.466 1.420 1.795 -0.322 0.866
UR 1.383 -0.580 0.516 -0.046 -0.861
LR -0.617 -0.978 0.205 0.011 0.867
LL -1.534 1.022 1.484 -0.265 -1.407
SD 0.846 -0.632 -0.651 1.000 0.555
pit yaw pos side butt
UL 0.783 1.046 1.115 -0.149 1.029
UR 1.042 -0.954 1.109 -0.060 -1.051
LR -0.958 -0.926 0.885 -0.035 0.856
LL -1.217 1.074 0.891 -0.125 -1.063
SD 0.242 0.052 1.544 1.000 0.029
pit yaw pos side butt
UL 1.536 0.714 0.371 0.283 1.042
UR 0.225 -1.286 1.715 -0.084 -0.927
LR -1.775 -0.286 1.629 -0.117 0.960
LL -0.464 1.714 0.285 0.250 -1.070
SD 0.705 0.299 -3.239 1.000 0.023
pit yaw pos side butt
UL 1.335 0.209 1.232 -0.071 0.976
UR -0.537 1.732 0.940 -0.025 -1.068
LR -2.000 -0.268 0.768 0.004 1.046
LL -0.129 -1.791 1.060 -0.043 -0.911
SD -0.069 -0.885 1.196 1.000 0.239
pit yaw pos side butt
UL 1.103 0.286 1.194 -0.039 0.994
UR -0.196 -1.643 -0.806 -0.466 -1.113
LR -2.000 0.071 -0.373 -0.209 0.744
LL -0.701 2.000 1.627 0.217 -1.149
SD 0.105 -1.007 3.893 1.000 0.290
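The "badness" figure of merit isn't defined in this entry; as a sketch, one plausible metric (my assumption, not necessarily the one actually used) is the summed absolute deviation of the face-OSEM part of the input matrix from the ideal +/-1 sensing pattern:

```python
# Sketch of one plausible "badness" metric for a suspension input matrix:
# summed |deviation| from the ideal +/-1 sensing pattern. This is an
# assumption -- the actual figure of merit is not defined in this entry.
# Rows: UL, UR, LR, LL; columns: pit, yaw, pos, butt (SD/side omitted).
IDEAL = [
    [ 1,  1, 1,  1],   # UL
    [ 1, -1, 1, -1],   # UR
    [-1, -1, 1,  1],   # LR
    [-1,  1, 1, -1],   # LL
]

def badness(matrix):
    """Sum of |element - ideal| over the face-OSEM block."""
    return sum(abs(m - i)
               for mrow, irow in zip(matrix, IDEAL)
               for m, i in zip(mrow, irow))

# face-OSEM block of the first table above (pit, yaw, pos, butt)
measured = [
    [ 0.466,  1.420, 1.795,  0.866],
    [ 1.383, -0.580, 0.516, -0.861],
    [-0.617, -0.978, 0.205,  0.867],
    [-1.534,  1.022, 1.484, -1.407],
]
print(f"badness = {badness(measured):.2f}")
```

By this metric the first table above scores ~6, i.e. worse than the 4-5 range quoted for the suspensions that look OK, which is at least consistent with it being posted as one of the less ideal ones.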
All suspension damping has been restored.
Earthquake of magnitude 5.0 shakes ETMY loose.
MC2 lost its damping later.
I just spent the last hour checking in a bunch of uncommitted changes to stuff in the SVN. We need to be MUCH BETTER about this. We must commit changes after we make them. When multiple changes get mixed together there's no way to recover from one bad one.
Last login: Fri Sep 19 00:11:44 2008 from gwave-69.ligo.c
Sun Microsystems Inc. SunOS 5.9 Generic May 2002
svn: This client is too old to work with working copy '.'; please get a newer Subversion client
SunOS nodus 5.9 Generic_118558-39 sun4u sparc SUNW,A70 Solaris
I installed svn on op440m. This involved installing the following packages from sunfreeware:
apache-2.2.6-sol9-sparc-local libiconv-1.11-sol9-sparc-local subversion-1.4.5-sol9-sparc-local
db-4.2.52.NC-sol9-sparc-local libxml2-2.6.31-sol9-sparc-local swig-1.3.29-sol9-sparc-local
expat-2.0.1-sol9-sparc-local neon-0.25.5-sol9-sparc-local zlib-1.2.3-sol9-sparc-local
The packages are located in /cvs/cds/caltech/apps/solaris/packages. The command line to install
a package is "pkgadd -d " followed by the package name. This can be repeated on nodus to get
svn over there. (Kind of egregious to require an apache installation for the svn _client_, I
I tried to commit something this afternoon and got the following error message:
Error: Commit failed (details follow):
Error: Server sent unexpected return value (405 Method Not Allowed) in response to
Error: MKCOL request for '/svn/!svn/wrk/d2523f8e-eda2-d847-b8e5-59c020170cec/trunk/frank'
Has anyone had this before? What's wrong?
Yesterday and this morning's slow NFS disk access was caused by 'svndumpfilter' being run at linux1 to carve out the Noise Budget directory. It is being moved to another server; I think the disk access is back to normal speed now.
This morning I opened the chambers and started some in-vac works.
As explained in this entry, I successfully swapped PZT mirrors (A) and (C).
The chambers are still open, so don't be surprised.
(today's missions for IOO)
- cabling for the PZT mirrors
- energizing the PZT mirrors and sliding them to their midpoints
- locking and alignment of the MC
- realignment of the PZT mirrors and other optics
- letting the beam go down to the arm cavity
As a result of the vacuum work, now the IR beam is hitting ETMX.
The spot of the transmitted beam from the cavity can be found at the end table by using an IR viewer.
BUT, what we really need (instead of just the DC sweeps) is the DC sweep with the uncertainty/noise displayed as a shaded area on the plot, as Nic did for us in the pre-CESAR modelling.
I've taken a first stab at this. Through various means, I've made an estimation of the total noise RMS of each error signal, and plotted a shaded region that shows the range of values the error signal is likely to take, when the IFO is statically sitting at one CARM offset.
I have not included any effects that would change the RMS of these signals in a CARM-offset dependent way. Since this is just a rough first pass, I didn't want to get carried away just yet.
For the transmission PDs, I measured the RMS in single-arm lock. I also measured the incident power on the QPDs and Thorlabs PDs for an estimate of shot noise, but this was ridiculously smaller than the in-loop RIN. I had originally thought of just plotting sensing noise for the traces (i.e. dark + shot), because the amount of seismic and frequency noise in the in-loop signal obviously depends on the loop, but this gives a misleadingly tiny value. In reality we have RIN from the PRC due to seismic noise, angular motion of the optics, etc., which I have not quantified at this time.
So, for this first rough pass, I am simply multiplying the single-arm transmission noise RMSs by a factor of 10 for the coupled RMS. If nothing else, this makes the SqrtInv signal look plausible in the region where we actually find it to be usable in practice.
For the REFL PDs, I misaligned the ITMs for a prompt PRM reflection for a worst-case shot noise situation, and took the RMS of the spectra. (Also wrote down the dark RMSs, which are about a factor of 2 lower). I then also multiplied these by ten, to be consistent with the transmission PDs. In reality, the shot noise component will go down as we approach zero CARM offset, but if other effects dominate, that won't matter.
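To make the shot-noise-vs-RIN comparison concrete, here is a sketch. The incident power and bandwidth are hypothetical placeholders, not the measured values; the point is only that photon shot noise is orders of magnitude below typical in-loop RIN:

```python
# Sketch of the shot-noise-vs-RIN comparison described above. The incident
# power and bandwidth are hypothetical placeholders, not the measured values.
import math

h = 6.62607015e-34   # Planck constant [J*s]
c = 299792458.0      # speed of light [m/s]
lam = 1064e-9        # laser wavelength [m]

def shot_noise_rin(p_watt):
    """Relative-intensity shot noise ASD [1/rtHz] for DC power p_watt."""
    nu = c / lam
    return math.sqrt(2.0 * h * nu / p_watt)

p_inc = 100e-6       # 100 uW on the PD (hypothetical)
bw = 1e3             # 1 kHz measurement bandwidth (hypothetical)
rms_shot = shot_noise_rin(p_inc) * math.sqrt(bw)
print(f"shot RIN ASD           : {shot_noise_rin(p_inc):.1e} /rtHz")
print(f"shot RIN RMS over {bw:.0f} Hz: {rms_shot:.1e}")
```

Even with only 100 uW on the PD, the shot-noise RIN RMS is of order 1e-6, so any in-loop RIN at the usual percent-ish level swamps it, consistent with the "ridiculously smaller" remark above.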
Enough blathering, here's the plot:
Now, in addition to the region of linearity/validity of the different signals, we can hopefully see the amount of error relative to the desired CARM offset. (Or, at least, how that error qualitatively changes over the range of offsets)
This suggests that we MAY be able to hop over to a normalized RF signal; but this is a pretty big maybe. This signal has the response of the quotient of two nontrivial optical plants, which I have not yet given much thought to; it is probably the right time to do so...