Thank you, Ben Abbott, for forwarding this information:
QPD Amplifier D990272 https://dcc.ligo.org/cgi-bin/private/DocDB/ShowDocument?.submit=Number&docid=D990272&version= at the X-end. It plugs into a Generic QPD Interface, D990692, https://dcc.ligo.org/cgi-bin/private/DocDB/ShowDocument?.submit=Number&docid=D990692&version=, which, according to my drawings, should be in 1x4-2-2A.
Wrong: this is not an interface.
This is nice - how about figuring out how to plot the measurement and model on the same plot? I guess we need to figure out how to go from counts to Watts.
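To get a rough counts-to-Watts factor you just chain the ADC volts/count, any analog gain, the transimpedance, and the PD responsivity. A minimal sketch of that chain (every hardware number below is a placeholder/assumption, not a measured value for this QPD):

# Sketch only: convert ADC counts to optical power in Watts.
# Every hardware number here is a placeholder, not a measured value for this QPD.
ADC_VOLTS_PER_COUNT = 20.0 / 2**16      # assume a +/-10 V, 16-bit ADC
WHITENING_GAIN = 1.0                    # placeholder analog gain before the ADC
TRANSIMPEDANCE_OHMS = 1e4               # placeholder transimpedance [V/A]
RESPONSIVITY_A_PER_W = 0.8              # placeholder responsivity [A/W]

def counts_to_watts(counts):
    volts = counts * ADC_VOLTS_PER_COUNT / WHITENING_GAIN
    amps = volts / TRANSIMPEDANCE_OHMS
    return amps / RESPONSIVITY_A_PER_W

print(counts_to_watts(1000.0))   # 1000 counts -> Watts, with these assumptions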
Trying to figure out what's wrong with the MC WFS:
1) The symptom seems to be that the control signals become very large in pitch and then the loop breaks when they saturate. Usually this is due to a degenerate matrix or improper inversion. Most likely some of the BURT restore is bad, or the analog gain for one of the WFS was switched while Jamie was doing the "Guardian" debugging.
2) In checking this out, I found that several buttons on the WFS screens were not working (and apparently have never worked). Please try to test things in the future... The filter bank buttons in C1IOO_MC_TRANS_QPD were using relative path names; I fixed these to use absolute path names. The buttons in the WFS_MASTER for the IOO_PIT banks were using IOO_PITCH instead...
2.5) Recentered beams on WFS heads with MC alignment good and MC unlocked.
3) Main problem in the WFS still not found - disabling this in the autolocker.
Tried a bunch of stuff, but eventually just turned off the TRANS_QPD loops, and now the loops are stable. Needs more debugging.
From the ALS overview screen, opening up the ETMX and ETMY screens gives these white fields. The PV info indicates that the blank fields were made with some macro variable substitution that didn't work well.
Why are these different from the SUS screens I get from the sitemap?
I've restarted the NDS2 process on Megatron so that we can use it for getting past data and eventually from outside the 40m.
1) from /home/controls/nds2 (which is not a good place for programs to run) I ran nds2-megatron/start-nds2
2) this is just a script that runs the binary from /usr/bin/ and then leaves a log file in ~/nds2/log/
3) I tested with DTT that I could access megatron:31200 and get data that way.
There is a script in /usr/bin called nds2_nightly which seems to be the thing we should run via cron to keep the channel list updated, but I'm not sure. Let's see if we can get an ELOG entry about how this works.
Then we want Jamie to allow some kind of tunneling so that the 40m data can be accessed from outside, etc.
For quite a while (no one knows how long), we've seen fluctuations in the 10-30 Hz seismic motion. This shows up as the purple trace on the seismic BLRMS on the wall projector.
The second plot shows that this is not only a periodic increase in the usual 29.5 Hz HVAC peak, but also an anomalous 32.2 Hz peak. Probably some malfunctioning machinery - maybe in the 40m or maybe on the roof.
Could be that this is OK, but it doesn't yet make sense to me. Can you please explain in words how this manages to apply the calibration rather than just add an extra gain to the phase tracking loop?
I'm not sure what's going on today but we're seeing ~80% packet loss on the 40MARS wireless network. This is obviously causing big problems for all of our wirelessly connected machines. The wired network seems to be fine.
I've tried power cycling the wireless router but it didn't seem to help. Not sure what's going on, or how it got this way. Investigating...
I'm still seeing some problems with this - some laptops are losing their connection and not recovering it. What's to be done next? New router?
It's an increase in the microseismic peak. Don't know what it's due to, though.
I moved the old matlab directory from /cvs/cds/caltech/apps/linux64/matlab_o to /cvs/cds/caltech/apps/linux64/matlab_oo
and moved the previously current matlab dir from /cvs/cds/caltech/apps/linux64/matlab to /cvs/cds/caltech/apps/linux64/matlab_o.
And I have installed the new Matlab 2013a into /cvs/cds/caltech/apps/linux64/matlab.
Since I'm not sure how well the new Matlab/Simulink plays with the CDS RCG, I've left the old one and we can easily revert by renaming directories.
The keyboard on the Pianosa workstation has been flaky for at least the last several days. Today, the machine was having trouble mounting the linux1 file system and was hanging on boot.
People in the control room emailed Jamie and then grew afraid of the computer. Annalisa suggested that we put garlic on it since it was clearly possessed.
Typing 'dmesg' at the command prompt, I found that there were thousands of messages like these:
[ 3148.181956] usb 2-1.2: new high speed USB device number 68 using ehci_hcd
[ 3149.773883] usb 2-1.2: USB disconnect, device number 68
[ 3150.228900] usb 2-1.2: new high speed USB device number 69 using ehci_hcd
[ 3152.076544] usb 2-1.2: USB disconnect, device number 69
[ 3152.787391] usb 2-1.2: new high speed USB device number 70 using ehci_hcd
[ 3154.123331] usb 2-1.2: USB disconnect, device number 70
[ 3154.578459] usb 2-1.2: new high speed USB device number 71 using ehci_hcd
So I replaced the existing Dell keyboard with an older Dell keyboard and the bad messages have stopped. No garlic was used.
Now that the 3f locking looks so cool for the PRMI, I suppose that the PRMI + arm stuff will be very successful.
At LLO, I've just noticed the screens that they have for the single pendulums / TTs. I'm attaching a screenshot of the one Zach is using for the steering into the OMC. We should grab these and replace our existing SUS screens with them.
I have modified the settings on the router that connects our Martian network to the outside world so that one can access the NDS2 server running on megatron:31200.
To get at the data, you point your data-getting client (Matlab, ligoDV, DTT, etc.) at our router, and the megatron port will be forwarded to you:
is what you should point to. Now, it should be possible to run DetChar jobs (e.g. our 40m Summary pages) from the outside on some remote server. You can also grab 40m data on your laptop directly by using matlab or python NDS software.
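For example, with the python nds2-client installed, fetching a minute of data looks roughly like the sketch below. The hostname is a placeholder for the address above, and the channel name and GPS times are just examples.

import nds2

# Placeholder hostname - use the forwarded address given above.
conn = nds2.connection('our.external.address', 31200)
# Example channel and GPS times only.
bufs = conn.fetch(1061880000, 1061880060, ['C1:LSC-TRX_OUT_DQ'])
print(len(bufs[0].data), bufs[0].data.mean())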
Trying to take an image or movie of the ETMY Transmon cam, we instead got the attached image.
I think it is just some scattered green light, but others in the control room think that it is a message from somewhere or someone...
Yes, this was not ELOG'd by me, unfortunately. This was the MC tickler which I described to some people in the control room when I turned it on.
As Koji points out, with the MCL path turned off this injects frequency noise and pointing fluctuations into the MC. With the MCL path back on it would have very small effect. After the pumpdown we can turn it back on and have it disabled after lock is acquired. Unfortunately, our LOCKIN modules don't have a ramp available for the excitation and so this will produce some transients (or perhaps we can ezcastep it for now). Eventually, we will modify this CDS part so that we can ramp the sine wave.
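For the time being, an ezcastep-like ramp with pyepics would look something like the sketch below; the channel name is my guess at the LOCKIN oscillator amplitude field, so treat it as a placeholder.

import time
from epics import caget, caput

# Placeholder channel name for the MC2 LOCKIN oscillator amplitude - check the real EPICS name.
CHAN = 'C1:SUS-MC2_LOCKIN2_OSC_CLKGAIN'

def ramp(target, nsteps=50, dt=0.1):
    start = caget(CHAN)
    for i in range(1, nsteps + 1):
        caput(CHAN, start + (target - start) * i / float(nsteps))
        time.sleep(dt)

ramp(0.0)   # step the tickle amplitude down to zero over ~5 seconds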
Once you install a matlab newer than 2012a, you can install ligoDV as a matlab app and get the NDS2 client software for free. So you can easily get the 40m data from the outside world now and do the analysis on your own computer rather than login through nodus.
In the past, we used to use Stefan's 'ezcademod' or Matt's 'ezlockin' to do auto phase adjustment.
JoeB / Jamie are working on python replacements for these tools, but in the near term possibly I can make a bash script to use ezcaservo and the existing LOCKINs to do this.
I took the "aso-laptop" and converted it to Ubuntu a couple of months ago. Today I added it to the Martian network and then moved it to the X End.
I followed the instructions in (https://wiki-40m.ligo.caltech.edu/Network) and added it to the files in /var/named/chroot/var/named on linux1 and did the "service named restart".
The router already had its MAC address in its list (because Yoichi was illegally using his personal laptop on the Martian). The new laptop's name is 'asia'. This is a legal name according to our computer naming conventions and this Wiktionary page (http://en.wiktionary.org/wiki/Category:Italian_female_given_names). It has been added to the Name Pool on the wiki.
The terminal on the laptop still calls itself 'aso-laptop' so I need some help in fixing that. It successfully connects to 40MARS and displays a MEDM sitemap after sshing in to pianosa.
I use 'ssh -X -C' since I find that compression actually helps when the laptops are so far from the router.
Sun Aug 18 15:52:50 2013
Found the FB lights (C1:FEC-NN_FB_NET_STATUS and C1:DAQ-DC0_C1XXX_STATUS) RED for everything on the CDS_FE_STATUS screen.
I used the (! mxstream restart) button to restart the mxstreams. Everything is green now.
PMC was out of lock - I relocked it, and the IMC locked itself, as did the X & Y arms on IR. X was already locked on green.
I noticed at LLO (?) that the LSC screen there uses up ~25-30% of the CPU time on a single core for the control room iMac workstations - this seems excessive.
Here is an accounting of CPU usage percentages for some of our screens:
These were measured using the program 'glances' on rosalba. MEDM running with only the sitemap used up 0.9% of a CPU. With the screens running, the fluctuation from sample to sample could be ~ +/- 0.5%. While the LSC screen seems to be the biggest pig, it is only big in comparison to small pigs. Certainly this pig has gotten bigger after getting sent to Louisiana.
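(If you want to repeat this without glances, a quick psutil check of the medm processes is roughly the following; the only assumption is that the process name is 'medm'.)

import psutil

# Print the CPU usage of each running medm process, averaged over a 1 s window.
for p in psutil.process_iter(['name']):
    if p.info['name'] == 'medm':
        print(p.pid, p.cpu_percent(interval=1.0), '%')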
JoeB and JamieR are working somewhat coherently on a set of python libraries to fulfill all of our command line CDS wants. This is being done mostly to satisfy The Guardian and the SkunkTools project.
I did an 'svn up' in /opt/rtcds/userapps (it might finish in ~1000 years) to get the things that they have so far (in particular, Joe's 'pyavg'). There are going to be some issues, since the pylib stuff written by Yuta/Kiwamu has never been integrated with anything and is imported as 'epics' in many python scripts. As we move over to the new stuff, there will be a lot of broken script functions, since the new libraries are also imported that way.
While Jenne was plotting, I locked and aligned the MICH with AS55_Q. Then I aligned the PRM and locked PRMI using REFL55_I/Q with triggering on POP22, but no power normalization.
I used this to set the phase for REFL11 and REFL55 (driving PRM at 111.3 Hz and minimizing the Q response using the DTT Sine Response tool). I flipped the sign on REFL11 by
The REFL11 gain is ~50x larger than REFL55; this is with the 15 dB whitening gain on REFL55 and none for REFL11 (so, accounting for the 15 dB ~ 5.6x of extra whitening gain on REFL55, the underlying signal ratio is presumably more like ~280x). What's going on here? The attached PDF shows the two time series with the free-swinging PRMI and both phases set to ~ +/- 2 deg. The REFL55 signals have been scaled up by 50x.
So then we went in and looked at the RF signals at the demod boards. To do this we disconnected the RFPD test cables and hooked the RF Mon outputs into the 50 Ohm inputs on a scope. The following PNG images show the scope traces. The REFL11 (yellow) traces are too big!! See how small the REFL55 (green) traces are. REFL11 is saturating - need to fix.
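For reference, the phase-setting step above boils down to rotating I/Q so the drive line ends up entirely in I. A minimal offline sketch of that arithmetic (this is not what the DTT Sine Response tool does internally; it assumes you have the REFL I and Q time series and the sample rate):

import numpy as np

def demod_phase_error(I, Q, fs, f_drive=111.3):
    # Digitally demodulate both quadratures at the drive frequency, then find the
    # rotation angle (deg) that nulls the line in Q.
    I = np.asarray(I, float)
    Q = np.asarray(Q, float)
    t = np.arange(len(I)) / fs
    lo = np.exp(-2j * np.pi * f_drive * t)   # digital local oscillator at the drive line
    a_i = np.mean(I * lo)                    # complex amplitude of the line in I
    a_q = np.mean(Q * lo)                    # complex amplitude of the line in Q
    # a_q/a_i should be (nearly) real; its arctangent is the demod phase error
    return np.degrees(np.arctan2(np.real(a_q * np.conj(a_i)), np.abs(a_i)**2))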
/home/cds is >98% full - below are some of the usage numbers:
controls@rosalba:/users/OLD 0$ du -h --max-depth=1
controls@rosalba:/opt/rtcds/userapps 0$ du -h --max-depth=1
linux1:cds>nice du -h --max-depth=1
du: `./llo/chans/daq/archive': Permission denied
du: `./llo/chans/daq/old': Permission denied
One of the reasons that our disk is getting full is due to the scripts_archive directory. A backup script runs on op340m and makes a tar.bz2 file of the scripts directory and puts it in scripts_archive every morning at 6 AM.
On Oct 7, 2011, Koji fixed this script to point at our new scripts directory instead of the old /cvs/cds/caltech/scripts directory. Since then, however, no one has fixed the exclude file to NOT back up the junk that's in that directory. It's a 1.6 GB directory, so it's full of it.
I've deleted a bunch of junk from the scripts directory: this directory is for scripts, not for your personal home movies or junk data files. Put those in your USER directory. Put temporary data files in /tmp/. I've also added a few more patterns to the exclude file so that fewer .mpg, .png, .pdf, .dat, etc. files get stored every day. The new daily .tar.bz2 file will be ~25 MB instead of 770 MB.
(also fixed the backup script to use 'env' to setup the perl environment and removed the hard-coded path to tar)
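(The real backup is the tar + exclude-file cron job on op340m; just to illustrate the exclude idea, a python sketch of the same thing, with the archive path assumed, would be:)

import fnmatch, os, tarfile, time

# Sketch only - the actual backup is a tar/bzip2 cron job on op340m with an exclude file.
EXCLUDE = ['*.mpg', '*.png', '*.pdf', '*.dat']     # junk patterns we don't want archived daily

def keep(tarinfo):
    name = os.path.basename(tarinfo.name)
    return None if any(fnmatch.fnmatch(name, p) for p in EXCLUDE) else tarinfo

stamp = time.strftime('%Y%m%d')
# Archive path below is assumed, not necessarily where scripts_archive actually lives.
with tarfile.open('/opt/rtcds/caltech/c1/scripts_archive/%s.tar.bz2' % stamp, 'w:bz2') as tf:
    tf.add('/opt/rtcds/caltech/c1/scripts', filter=keep)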
I've written a new TICKLE script using the newly found 'cavget' and 'cavput' programs. They are in the standard epics distribution as extension binaries. They allow multichannel read/write as well as ramping, delays, incremental steps, etc. http://www.aps.anl.gov/epics/tech-talk/2012/msg01465.php.
Running from the command line, they seem to work fine, but I've left it OFF for now. I'll switch it into the MC autolocker at some point soon.
Meh. 600 counts is too weak. You should fix the electronics so that the maximized green laser transmission gives more like ~10000 counts.
Just to rephrase somewhat:
We can put our scripts for the MICH, PRMI, and DRMI into the IFO CONFIGURE screens for now and then it should be easy to get them into the Guardian once Jamie has the bugs worked out.
This screen can also be used to setup and start the dither alignment for each configuration (once we have one working for DRMI / SRM).
Also, now that the notches/bandstop filters for the violin modes have been moved from the SUS into the LSC, we should fix the triggering to engage them a few seconds after the boosts.
I have modified one of the spare demod boards that was sitting above the electronics bench (the one which was unlabeled - the others say 33MHz, 55MHz and 165MHz) to be the new AS110 demod board. In place of the T1 coil, and the C3 and C6 resistors, I have put the commercial splitter PSCQ-2-120+. In place of U5 (the low pass for the PD input) I have put an SCLF-135+.
OK, but what kind of filter should we actually be using? i.e., what purpose does the 135 MHz low pass serve in contrast to a PHP-100+?
While we were trying to relock the MC after Jenne put back the RF box, we found there was some mysterious motion in MC2. After spending time trying to figure out where this was coming from, the source was found to be LOCKIN2 of the MC2 suspension (the "MC TICKLER"), which had been left enabled. This was turned OFF and the MC locked just fine after that.
EDIT JCD: The Tickler should be disabled if the autolocker is disabled.
Sounds like this was just incidental, since the MC had locked fine with the tickler enabled for weeks.
The tickle is disabled by the down script, but there's no way to correctly handle all possible button pushes. If you want to disable the autolocker for some reason, you should run mcdown before trying to lock. This will set things up with the correct settings.
You're right - down turns it on. Still, the fact that the same tickle now causes a problem, and didn't make 20% power fluctuations until now, tells me that it's not that the tickle amplitude is too large. Whatever changed recently is the problem.
There doesn't seem to be any coherence among the different directions of ground motion (as expected from seismic theory), so I am suspicious of such a low MICH noise.
controls@rosalba:~ 0$ cdsutils
Traceback (most recent call last):
  File "/ligo/apps/cdsutils/lib/cdsutils/__main__.py", line 7, in <module>
    from cdsutils import CMDS
  File "/ligo/apps/cdsutils/lib/cdsutils/__init__.py", line 4, in <module>
    from servo import servo
  File "/ligo/apps/cdsutils/lib/cdsutils/servo.py", line 1, in <module>
    from epics import PV
ImportError: No module named epics
controls@rosalba:~ 1$
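So the 'epics' (pyepics) module just isn't importable by that python. A quick standalone check, independent of cdsutils:

import sys

try:
    import epics
    print('epics found at', epics.__file__)
except ImportError:
    print('no epics module on this python; search path is:')
    for p in sys.path:
        print('   ', p)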
Mon Sep 16 19:40:32 2013
In May of 2013 Den wrote a PMC Autolocker because he ignored / didn't want to read anyone else's code. Later that year Yuta also wrote another one from scratch for the same reasons.
I tried to use both today, but neither one runs. Yuta's doesn't run because it uses a bunch of private library stuff from the yuta directory. That kind of programming style is pretty useless for us, since it stops working after a while.
So I re-activated and tested the PMCAutolock bash script (it is actually a symbolic link called "PMCAutolock" which points to AutoLock.sh). These scripts are all basically the same:
They turn off the loop (or turn down the gain) and then scan the PZT, look for a resonance, and then activate the loop.
One problem with the logic has been that turning off the loop makes the gain so low that the peak flashes by too fast. But leaving the loop ON and just sweeping with the gain turned down to -10 dB is also not good: that only reduces the UGF from 1 kHz to ~100 Hz. What we want is more like a 10 Hz UGF while scanning the length. So I edited the script to turn down the modulation depth on the EOM by that factor. After acquiring lock, it returns all settings to the nominal levels as defined on the PSL_SETTINGS screen.
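Schematically the acquisition logic is something like the sketch below (pyepics, with placeholder channel names and thresholds; the real thing is the AutoLock.sh bash script):

import time
from epics import caget, caput

# Placeholder channel names and numbers - this is only the logic, not the real script.
RAMP   = 'C1:PSL-PMC_RAMP'          # PZT sweep offset (placeholder)
TRANS  = 'C1:PSL-PMC_PMCTRANSPD'    # transmission PD (placeholder)
ENABLE = 'C1:PSL-PMC_BLANK'         # loop enable (placeholder)
THRESH = 0.5                        # transmission level that counts as a resonance

def acquire():
    caput(ENABLE, 0)                        # loop off (or gain / modulation depth way down)
    v = caget(RAMP)
    while caget(TRANS) < THRESH:
        v += 0.01                           # slow sweep of the PZT
        caput(RAMP, v)
        time.sleep(0.05)
        if v > 10.0:                        # crude rail check; should really re-sweep instead of giving up
            raise RuntimeError('hit the PZT rail without finding a resonance')
    caput(ENABLE, 1)                        # on a fringe: close the loop, restore nominal settings

acquire()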
I also changed the .bashrc aliases for the MEDM command so that if you type medm_good at the command line you get MEDM screens with scalable fonts. So you can stretch the screens.
I used a script (~PSL/PMC/testAutoLocker.sh) to unlock the PMC and run the autolocker ~100 times to see how robust the new autolocker is.
It failed to grab it 2 out of 137 times. During those times it just went on trying to ramp the PZT even after it had gone to a rail. Once someone resurrects Rob's 'trianglewave' script we should be OK. Even so, I think this is good enough. Please try this out via the yellow button next time the PMC needs to be locked.
It usually takes 10-30 seconds to lock, depending upon where the fringe is compared to the upper voltage rail. Good enough.
Our disk was getting full again. It turned out my "fix" to 25 MB was only a fix to 250 MB. Since we were getting disk-full warnings on our Ubuntu workstations, I deleted some COMSOL.dmg files from users/zach/ and then started deleting every other tarball from the scripts_archive directory. ~221 GB are now free. Still need to fix the exclude file for the scripts backup properly.
I used our procedure from this entry to set the IMC board offset as well as the FSS board offset.
I found this afternoon that the MC was having trouble locking: the PC path was railing as soon as the boost was engaged. Could be that there's some misalignment on the PSL which has led to some RAM having to be canceled by this new offset. Let's see if it's stable for a while.
Today I noticed that there was a lot of noise at the Bounce and Roll eigenfrequencies for ETMY. I found that the bandstop filters were set at completely the wrong frequencies, so I've remade them.
The filters were last tuned by Leo in May of 2011. Even then, he left them at the frequencies of the old MOS suspensions, which had f_bounce ~ 12 Hz.
The FOTON plot shows the OLD ones versus the NEW ones. The DTT spectra show the oplev error signals in the usual state. I have also copied these over to the SUSPOS, PIT, YAW, and SIDE filter banks and turned them all ON.
I also turned OFF and deleted the 3 Hz RG filter that was there. There's no such peak in the error signal and even if one wanted to compensate for the stack mode, it should be a low Q filter, not this monster.
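(The filters themselves were made in FOTON, but as a sanity check on the shape you can mock one up in scipy; the band edges and sample rate below are placeholders, not the actual bounce/roll frequencies or the SUS model rate.)

import numpy as np
from scipy import signal

fs = 2048.0                    # placeholder sample rate, not the actual SUS model rate
f_lo, f_hi = 15.5, 17.0        # placeholder band edges around an assumed ~16 Hz bounce mode

# 4th-order elliptic bandstop, 1 dB ripple, 40 dB stopband attenuation
b, a = signal.iirfilter(4, [f_lo, f_hi], rp=1, rs=40,
                        btype='bandstop', ftype='ellip', fs=fs)
w, h = signal.freqz(b, a, worN=2048, fs=fs)
f0 = 0.5 * (f_lo + f_hi)
print('attenuation at %.2f Hz: %.1f dB' % (f0, 20 * np.log10(abs(h[np.argmin(abs(w - f0))]))))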
controls@rosalba:/opt/rtcds/caltech/c1/scripts/SUS 0$ ./setOLtramps
Old : C1:SUS-ETMX_OLPIT_TRAMP 0
New : C1:SUS-ETMX_OLPIT_TRAMP 2
Old : C1:SUS-ETMX_OLYAW_TRAMP 0
New : C1:SUS-ETMX_OLYAW_TRAMP 2
Old : C1:SUS-ETMY_OLPIT_TRAMP 2
New : C1:SUS-ETMY_OLPIT_TRAMP 2
Old : C1:SUS-ETMY_OLYAW_TRAMP 2
New : C1:SUS-ETMY_OLYAW_TRAMP 2
Old : C1:SUS-ITMX_OLPIT_TRAMP 0
New : C1:SUS-ITMX_OLPIT_TRAMP 2
Old : C1:SUS-ITMX_OLYAW_TRAMP 0
New : C1:SUS-ITMX_OLYAW_TRAMP 2
Old : C1:SUS-ITMY_OLPIT_TRAMP 0
New : C1:SUS-ITMY_OLPIT_TRAMP 2
Old : C1:SUS-ITMY_OLYAW_TRAMP 0
New : C1:SUS-ITMY_OLYAW_TRAMP 2
Old : C1:SUS-BS_OLPIT_TRAMP 0
New : C1:SUS-BS_OLPIT_TRAMP 2
Old : C1:SUS-BS_OLYAW_TRAMP 0
New : C1:SUS-BS_OLYAW_TRAMP 2
Old : C1:SUS-PRM_OLPIT_TRAMP 0
New : C1:SUS-PRM_OLPIT_TRAMP 2
Old : C1:SUS-PRM_OLYAW_TRAMP 0
New : C1:SUS-PRM_OLYAW_TRAMP 2
Old : C1:SUS-SRM_OLPIT_TRAMP 0
New : C1:SUS-SRM_OLPIT_TRAMP 2
Old : C1:SUS-SRM_OLYAW_TRAMP 0
New : C1:SUS-SRM_OLYAW_TRAMP 2
Done setting TRAMPs
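(For reference, the equivalent logic in python, using the channel names from the output above, would be roughly the following - not necessarily what setOLtramps actually contains:)

from epics import caget, caput

# Set every oplev servo ramp time to 2 seconds (a sketch of the same operation as setOLtramps).
for optic in ['ETMX', 'ETMY', 'ITMX', 'ITMY', 'BS', 'PRM', 'SRM']:
    for dof in ['OLPIT', 'OLYAW']:
        chan = 'C1:SUS-%s_%s_TRAMP' % (optic, dof)
        print('Old :', chan, caget(chan))
        caput(chan, 2)
        print('New :', chan, caget(chan))
print('Done setting TRAMPs')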
The ETMX oplev signal looks kind of dead compared to the ETMY. It has no features in the spectra and the SUM is pretty low.
I noticed that the cal fields are still set to 1. To get them close to something reasonable, I calibrated them vs. the SUSPIT and SUSYAW values by giving the optic a step in angle and using 'tdsavg' plus some arithmetic.
OLPIT = 45 urads/ count
OLYAW = 85 urads / count
These are very rough. I don't even know what the accuracy is on the OSEM based calibration, so this ought to be redone in the way that Jenne and Gabriele did before.
The attached image shows the situation after "calibration" of ETMX. This OL system needs some noise investigation.
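For the record, a sketch of the counts-to-microradian arithmetic used for this (the numbers below are made up, not the actual measurement):

# Step the optic in pitch, average before and after with tdsavg (or similar), then:
suspit_before, suspit_after = 0.0, 50.0     # OSEM-based angle readback [urad]  (made-up numbers)
olpit_before,  olpit_after  = 0.0, 1.1      # oplev error signal [counts]       (made-up numbers)

cal = (suspit_after - suspit_before) / (olpit_after - olpit_before)
print('OLPIT calibration ~ %.0f urad/count' % cal)    # ~45 urad/count in the real measurement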
Having trouble again, starting around 1 hour ago. No one in the VEA. Adjusted the offset -seems to be OK again.
After seeing all of these spikes in the BLRMS at high frequency for a while, I power cycled the Guralp interface box (@ 10:21 PM) to see if it would randomly recenter in a different place and stop glitching.
It did - it needs to be better centered (using the paddle). The plot shows how the Z channel gets better after the power cycle.
After relocking the PMC at a good voltage, Steve and I re-aligned the beam into the PMC by walking the last two steering mirrors. After maximizing the power, we also aligned the reflected beam by maximizing the PMC_REFL_DC with the unlocked beam.
Transmission is back to 0.84 V. We need Valera's mode matching maintenance to get higher, I guess. Maybe we can get a little toaster to keep the PMC PZT more in the middle of its range?
It's an acquired taste, but it's a must since we're sending an interferometer to India.
I went down to investigate the issue with the extra noise that I found in the ETMY optical lever yesterday. There were several problems with the optical layout down there - I'm not sure if I remember them all now.
The main noise issue, however, appears to be not a layout issue at all. Instead, it's that the laser intensity noise has gone through the roof. See the attached spectra of the quadrants (this is the way to diagnose this issue).
I'll ask Steve to either heal this laser or swap it out tomorrow. After that's resolved we'll need another round of layout fixing. I've done a couple of hours today, but if we want a less useless and noisy servo we'll have to do better.
NOTE: by looking at the OL quadrants, I've found a noisy laser, but this still doesn't explain the excess noise in the ETMX. ETMX was the one with the noisier error signal, not ETMY. From the coherence in the DTT, you can see that the ETMY OL is correctly subtracting and normalizing out the intensity noise of the laser. Seems like the ETMX electronics might be the culprit down there.
Not so fast! We need to plan ahead of time so that we don't have to repeat this ETMY layout another dozen times. Please don't make any changes yet to the OL layout.
It's not enough to change the optics if we don't retune the loop. Please do buy a couple of JDSU lasers (and then we need to measure their intensity noise as you did before) and the 633 nm optics for the mode matching, and then we can plan the layout.