This is one of those unsolved door lock acquisition problems. It's been happening for years.
Please ask facilities to increase the strength of the door tensioner so that it closes with more force.
Pages still not working: PEM and MEDM blank.
The attached file is a python notebook that you can use to get data. Minimal syntax.
"## Get some 40m data using NDS"
Minute trend data does not seem to be available through the NDS2 server. It's super slow using dataviewer from the control room.
Did some digging into the NDS2 config on megatron. It hasn't been updated in 2 years.
All of the stuff is run by the user 'nds2mgr'. The CronTab for this user was running all the channel name updates and server restarts at 3 AM each day; I've moved it to 5:05 AM. I don't know the password for this user, so I just did 'sudo su nds2mgr' to become him.
On megatron, in /home/nds2mgr/nds2-megatron/ there is a list of channels and configs. The file for the minute trend (C-M-ChanList.txt) hasn't been updated since Nov-2015. ???
Did we turn off minute trend writing in one of the recent FrameBuilder debug sessions? Seems we only have second trends in 2016. Maybe this explains why it's so slow to get minute trends? Dataviewer has to rebuild them from the second trend.
controls@nodus|frames > l
drwx------ 2 root root 16384 Jun 8 2009 lost+found/
drwxr-xr-x 2 controls controls 4096 Jul 14 2015 tmp/
-rw-r--r-- 1 controls controls 0 Jul 14 2015 test-file
drwxr-xr-x 5 controls controls 4096 Apr 7 2016 trend/
drwxr-xr-x 4 root root 4096 Apr 11 2016 archive/
drwxr-xr-x 789 controls controls 36864 Jan 13 19:34 full/
controls@nodus|frames > cd trend
controls@nodus|trend > l
drwxr-xr-x 258 controls controls 3342336 Jul 6 2015 minute_raw/
drwxr-xr-x 387 controls controls 36864 Nov 5 2015 minute/
drwxr-xr-x 969 controls controls 36864 Jan 13 19:49 second/
ITMY is not like the others. Real or just OSEM madness?
The "apt-get update" was failing on some machines because it couldn't find the 'Debian squeeze' repos, so I made some changes so that Megatron could be upgraded.
I think Jamie set this up for us a long time ago, but now the LSC has stopped supporting these versions of the software. We're running Ubuntu12, and 'squeeze' is meant to support Ubuntu10. Ubuntu12 (which is what LLO is running) corresponds to 'Debian wheezy', Ubuntu14 to 'Debian jessie', and Ubuntu16 to 'Debian stretch'.
We should consider upgrading a few of our workstations to Ubuntu 14 LTS to see how painful it is to run our scripts and DTT and DV. Better to upgrade a bit before we are forced to by circumstance.
I followed the instructions from software.ligo.org (https://wiki.ligo.org/DASWG/DebianWheezy) and put the recommended lines into the /etc/apt/sources.list.d/lsc-debian.list file.
But I still got 1 error (previously there were ~7 errors):
W: Failed to fetch http://software.ligo.org/lscsoft/debian/dists/wheezy/Release Unable to find expected entry 'contrib/binary-i386/Packages' in Release file (Wrong sources.list entry or malformed file)
Restarting now to see if things work. If it's OK, we ought to change our squeeze lines into wheezy for all workstations so that our LSC software can be upgraded.
Found that the BS whitening was off. Gautam says that "it has always been that way" and "there's nothing in the elog about this" and "I have no special relationship with Putin".
I looked at DV and DTT while turning the OSEM whitening back on. As expected, the sensor noise improved by 10x above 10 Hz. The time series shows no problems - it's just less fuzzy now.
All OSEM spectra after the switch are shown on the upper panel of the plot. The lower panel shows a comparison of BS UL before/after. To rotate the DTT PDF landscape output I typed this:
pdftk BS-white.pdf cat 1N output BSwhite.pdf
"if you see something, do something"
Oot on the streets and in the chat rooms, people often ask, "What is up with the MC_F calibration?".
Not being sure of the wiring in the c1ioo model, I have made this screencap of today's model and put it here. MC_LENGTH and MC_FREQ are the filter banks which would calibrate these channels. In the filter banks there were various versions of a 'dewhite' filter. They were all approximately z=150, p=15, g = 1 @ DC, but with ~1% differences. I don't trust their provenance, so I've enforced symmetry and fixed their names to reflect what they are (150:15). I have also turned on one filter in MC_FREQ so that the whitening of the Pentek Interface board is now compensated.
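For reference, a quick sanity check of what that 150:15 shape does (a sketch, not the foton filter file itself; the only inputs are the zero, pole, and DC gain quoted above):

import numpy as np
import scipy.signal as sig

z = [-2*np.pi*150.0]     # zero at 150 Hz
p = [-2*np.pi*15.0]      # pole at 15 Hz
k = 15.0/150.0           # makes |H| = 1 at DC and 0.1 (-20 dB) at high frequency

f = np.logspace(0, 3, 400)
w, H = sig.freqs_zpk(z, p, k, worN=2*np.pi*f)
print('gain at 1 Hz ~ %.2f, at 1 kHz ~ %.2f' % (abs(H[0]), abs(H[-1])))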
Why is this TF 1/f? It should be -20 dB/decade if MC_F is in units of Hz* and MCL is a pendulum response. Perhaps it's because the combination of the Koji summing box, the Thorlabs HV driver, and the Pomona box forms an additional 1/f? If so, this would explain the TF we see. Once we get confirmation from Koji, we can load the TF into the MC_FREQ filter bank and then MC_F will be in units of Hz (as will the summary pages).
(along the way I've also turned off the craaaazzzy servo input enable tickling that gets put in the MC AutoLocker every April Fool's leap year - resist the temptation)
Since we have a frequency counter system here and some oscillators, I wonder if we can just calibrate the MC_L and MC_F directly using a mixer lashed up to one of the counters. If so, and we can get the stabilized laser frequency noise down below 10 mHz/rHz, maybe this is a viable alternative method to the photon calibrators. Counting zero crossings is more honest than counting photons.
I tried to follow these instructions today to make the Simulink Webview accessible:
controls@nodus|public_html > ln -sfn /users/public_html/FE /export/home/
The story is: we currently don't expose the whole /users/public_html folder. Instead, we symlink the folders from public_html to /export/home/ on nodus, which is where apache looks for things.
So, I fixed the links on the Core Optics page by running:
controls@nodus|~ > ln -sfn /users/public_html/40m_phasemap /export/home/
But...I got a "403 Forbidden" message. What is the secret handshake to get this to work? And why have we added this extra step of security?
Seems like this stops working every ~2 years. It's been busted since early 2016 according to cron, so I fixed up the paths, restored some missing files, and committed things to the SVN (with comments!), and now it's working and grabbing the web-viewable versions of the front end models. Just need to restore its viewability and then the world can watch our models any time.
Back in 2011, JoeB wrote some entries on how to automatically update the Simulink webview stuff.
Somehow, the cron broke down over the years. I reran the matlab file by hand today and it worked fine, so now you can see the up to date models using the internet.
I suppose before directory listings were turned off we should have fixed the script to make an index.html, but that's how it goes with "up"-grades. How about re-allowing directory listings so that our old links for webview work again?
EQ: https://nodus.ligo.caltech.edu:30889/FE is live
Might be. Or it might be in the satellite box cabling. Hard to tell without a tester. I recommend you squish the cables on there and lock the MC and get back to the usual business. I'll check on sat. box with Ben.
Is this sufficient evidence to conclude that the Satellite boxes are to blame? It's hard to explain why the glitches come and go in this fashion, and also the apparent difference in the length of time for which the glitches persist. Here, in almost 24 hours, there is one incidence of glitching, but in yesterday's trend plot, the glitching remains present over several hours... The amplitude of the glitches, and their coincidence in all 5 channels, seems consistent with what we have been seeing though...
"Why does the word wrapping not work in our browsers with ELOG?" I sometimes wonder. Some of the elogs are fine, but often the 40m one has the text run off the page.
I found that this is due to people uploading HUGE images. If you need to do this, just use the shrink feature in the elog compose window so that we only have to see the thumbnail at first. Otherwise your 12 MP images will make it hard to read everyone else's entries.
I think this cron job is running on NODUS (our gateway) instead of our scripts machine:
*/1 * * * * /opt/rtcds/caltech/c1/scripts/Admin/n2Check.sh >> /opt/rtcds/caltech/c1/scripts/Admin/n2Check.log 2>&1
Based on Jenne's chiara disk usage monitoring script, I made a script that checks the N2 pressure, which will send an email to myself, Jenne, Rana, Koji, and Steve, should the pressure fall below 60 psi. I also updated the chiara disk checking script to work on the new Nodus setup. I tested the two, only emailing myself, and they appear to work as expected.
The scripts are committed to the svn. Nodus' crontab now includes these two scripts, as well as the crontab backup script. (It occurs to me that the crontab backup script could be a little smarter, only backing it up if a change is made, but the archive is only a few MB, so it's probably not so important...)
Moreover, this script has a 90 MB log file full of errors from not finding its channel.
I wish this script were in Python instead of bash, I wish it would run on megatron instead of nodus (why can't megatron send us email too?), and I wish that this log file would get wiped out once in a while. Currently it's been spitting out errors like these since at least a month ago (a rough sketch of a Python version is below the excerpt):
Tue Jan 31 14:10:02 PST 2017 : N2 Pressure:
Channel connect timed out: 'C1:Vac-N2pres' not found.
(standard_in) 1: syntax error
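For the record, roughly what a Python version could look like (just a sketch, not tested against the real setup; the recipient list, the 'localhost' mail relay, and the use of pyepics are assumptions):

import time
import smtplib
from email.mime.text import MIMEText
import epics   # pyepics

THRESHOLD_PSI = 60.0
RECIPIENTS = ['controls@example.edu']   # placeholder addresses

pres = epics.caget('C1:Vac-N2pres', timeout=5.0)
stamp = time.strftime('%a %b %d %H:%M:%S %Z %Y')

if pres is None:
    print('%s : channel connect timed out' % stamp)
elif pres < THRESHOLD_PSI:
    msg = MIMEText('N2 pressure is %.1f psi (below %.0f psi)' % (pres, THRESHOLD_PSI))
    msg['Subject'] = '40m N2 pressure low'
    msg['From'] = 'controls@nodus'
    msg['To'] = ', '.join(RECIPIENTS)
    smtplib.SMTP('localhost').sendmail(msg['From'], RECIPIENTS, msg.as_string())
else:
    print('%s : N2 Pressure: %.1f' % (stamp, pres))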
Someone installed "Debian" on allegra. Why? Dataviewer doesn't work on there. Is there some advantage to making this thing have a different OS than the others? Any objections to going back to Ubuntu12?
I think this then allows us to have the low noise OCXO signals everywhere with enough oomph.
I don't know if anyone looked at the time series (not trend) or spectrum of the Microphone after installation, but it looks bad and featureless to me. Is the Microphone broken?
This shows the spectrum from early this morning and again from tonight. You can see that it is bi-stable in its noise properties. This thing is busted; we're now removing it from the PSL so that it doesn't light itself on fire.
and the song remains the same...
The version of SVN on these workstations is ahead of the one on the other workstations, so now we can't do 'svn up' on any of the Ubuntu12 machines. On allegra and optimus I get this error:
controls@allegra|GWsummaries> svn up
svn: E180001: Unable to connect to a repository at URL 'file:///cvs/cds/caltech/svn/trunk/GWsummaries'
svn: E180001: Unable to open an ra_local session to URL
svn: E180001: Unable to open repository 'file:///cvs/cds/caltech/svn/trunk/GWsummaries'
My elog negligence punchcard is getting pretty full... It's pretty much for the same reason as using Debian for optimus; much of the workstation software is getting packaged for Debian, which could offload our need for setting things up in a custom 40m way. Hacking the debian-focused software.ligo.org repos into Ubuntu has caused me headaches in the past. Allegra wasn't being used often, so I figured it was a good test bed for trying things out.
The dataviewer issue was dataviewer's inability to pull the `fb` out of `fb:8088` in the NDSSERVER env variable. I made a quick fix for it in the dataviewer launching script, but there is probably a better way to do it.
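The gist of the fix, sketched in Python for clarity (the real change lives in the dataviewer launch shell script, and the default value here is just an assumption):

import os

ndsserver = os.environ.get('NDSSERVER', 'fb:8088')
host = ndsserver.split(',')[0].split(':')[0]   # 'fb:8088' -> 'fb'; also handles a comma-separated list
print(host)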
I'm not sure if it's possible to downgrade our chans repo back to the old one, but I highly recommend that no one do 'svn upgrade' in any of our repos until we remove all of the Debian installs in the 40m lab or hire a full-time sysadmin.
Re-aligned the beam going into the PMC today around 5 PM. I noticed that it's all in pitch, and since I moved both of the mirrors by the same amount it is essentially a vertical translation.
I wonder if the PMC is just moving up and down due to thermal expansion in the mount? How else would we get a pure vertical translation? Need to remember next time if the beam goes up or down, and by how many knob turns, and see how it correlates to the lab temperature.
In working on automatic DARM loop design, we have this code:
The things in there like mkCost*, etc. have examples of the cost functions that are used. It may be useful to look at those and then make a similar cost function calculation for the MCL/MCF loop, something like the sketch below.
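Not the mkCost* code itself, just the general shape of such a scalar cost for a candidate MCL/MCF open-loop transfer function G(f); the weights, band, and targets are made-up placeholders:

import numpy as np

def loop_cost(f, G, f_band=(1.0, 10.0), ugf_target=100.0, phase_margin_min=30.0):
    """Scalar cost: big if suppression in f_band is poor, the UGF is far
    from target, or the phase margin at the UGF is small."""
    S = 1.0/(1.0 + G)                                   # closed-loop suppression
    band = (f >= f_band[0]) & (f <= f_band[1])
    cost_suppress = np.sum(np.abs(S[band])**2)          # want small |S| in band

    i_ugf = np.argmin(np.abs(np.abs(G) - 1.0))          # index nearest |G| = 1
    cost_ugf = (np.log10(f[i_ugf]/ugf_target))**2       # penalize wrong UGF

    pm = 180.0 + np.degrees(np.angle(G[i_ugf]))         # phase margin at the UGF
    cost_pm = max(0.0, phase_margin_min - pm)**2        # penalize small margin

    return cost_suppress + 10.0*cost_ugf + cost_pm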
True - it's an issue. Koji and I are updating zita to Ubuntu16 LTS. If it looks like it's OK with various tools, we'll swap the others over to it. Until then I figure we're best off turning allegra back into Ubuntu12 to avoid a repeat of this kind of conflict. Once the workstations in the LLO control room are running smoothly on a new OS for a year, we can transfer to that. I don't think any of us wants to be the CDS beta tester for DV or DTT.
c1iool0 was down again. Rather than key the crate, this time I just pushed the reset button on the front and it came back.
As we move towards the wonderfulness of AcroMag, we also have to buy a computer to handle all of these IOCs. Let's install the new c1iool0 over by the SUS computer.
To remind myself about how to put filter caps on the mini-circuits RF Amps, I looked at Koji's recent elog. It's mostly about op-amps, but the idea holds for us.
We want a big (~100 uF) electrolytic with a 50V rating for the +24V RF Amp, and then a 50V ceramic capacitor of ~0.1 uF close to the pins. Remember that the power feedthrough on the Mini-circuits case is itself a capacitive feedthrough (although I guess it's only ~100 pF).
Later, we should install an active EMI filter (e.g. Vicor) in this box.
I would think that we want to fix the I/Q orthogonality inside the demod board by trimming the splitter. Mixing the Q phase signal into the I would otherwise allow coupling of low frequency Q phase junk from HOMs into the MC lock point.
Of course this doesn't matter for the IMC locking as we only use the I phase signal, but
Question for Craig: What does the SNR of our lines have to be? If we're only trying to calibrate the actuator in the audio band over long time scales, it seems we could get by with more frequency noise. Assuming we want a 1% calibration at 50-500 Hz, what is the requirement on the frequency noise PSD curve?
Yikes. Please change all the WFS DQ channel sample rates from 2048 down to 512 Hz. I doubt we ever need anything above 180 Hz.
There is sometimes an issue with this: if our digital AA filters are not strong enough, the noise above 256 Hz can alias into the 0-256 Hz band. We ought to check this quantitatively and make some elog statement about our AA filters. This issue is also seen in DTT when requesting a low frequency spectrum: DTT uses FIR filters which are sometimes not sharp enough to prevent this.
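One quick way to make that check quantitative (a sketch with made-up numbers; the real check should use the actual decimation filters rather than scipy's default FIR):

import numpy as np
import scipy.signal as sig

fs, fs_new = 2048, 512
t = np.arange(0, 64, 1.0/fs)
x = np.sin(2*np.pi*300.0*t)                  # 300 Hz tone, above the new Nyquist

naive = x[::fs//fs_new]                      # no AA filter: 300 Hz aliases to 212 Hz
good = sig.decimate(x, fs//fs_new, ftype='fir', zero_phase=True)

for label, y in [('naive', naive), ('decimate', good)]:
    f, P = sig.welch(y, fs=fs_new, nperseg=4096)
    print('%s: power near 212 Hz = %.2e' % (label, P[np.argmin(np.abs(f - 212))]))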
OK, but the question still stands: "Assuming we want a 1% calibration at 50-500 Hz, what is the requirement on the frequency noise PSD curve?"
We get SNR in two ways: the amplitude of applied force and the integration time. So we are limited in two ways: stability of the lock to applied forces and time of locklosses / calibration fluctuations.
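Rough back-of-the-envelope for the question above: for a sinusoidal line of amplitude A against a local noise ASD n(f), the amplitude SNR after integrating for T seconds is roughly A*sqrt(T/2)/n(f), and a 1% calibration wants SNR ~ 100. All the numbers below are placeholders, not measured values:

import numpy as np

def line_snr(A, noise_asd, T):
    """Amplitude SNR of a sine of amplitude A (same units as ASD*sqrt(Hz))
    integrated for T seconds against a locally flat noise ASD."""
    return A*np.sqrt(T/2.0)/noise_asd

# Example: allowed noise ASD for SNR = 100 with a 1 Hz-amplitude line and 1 hour
A, T, snr_goal = 1.0, 3600.0, 100.0
max_asd = A*np.sqrt(T/2.0)/snr_goal
print('allowed noise ASD ~ %.2f Hz/rtHz' % max_asd)   # ~0.42 Hz/rtHz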
The fringes seen on the oscope are most likely due to the interference from multiple light beams. If there are laser beams hitting mirrors which are moving, the resultant interference signal could be modulated at several Hertz if, for example, one of the mirrors had its local damping disabled.
Huh? So should we ask them to put the container back? Or do you have some other theory about ETMX tripping that is not garbage related?
ETMX sus damping recovered.
Note: The giant metal garbage container was moved from the south west corner of CES months ago.
The input offset on the MC length servo board changes the lock point of the length loop (by how much? need to calibrate this slider into meters & Hz).
The SUM signal on the MC WFS is ~a few 1000. This is several times larger than the pit/yaw signals. This is bad. It means that the TEM00 mode on the WFS (or what the WFS interprets as a TEM00) is larger than the TEM01/10 that it's supposed to measure.
So if the beam moves on the WFS head it will convert this large common mode signal into a differential one.
We moved the MC Servo offset around from -3 to +3 V today and saw that it does affect the transmitted light level, but we need to think more to see how to put the offset at the real center of the resonance. This is complicated by the fact that the MCWFS loops seem to have some several minutes time constant so things are essentially always drifting.
I changed the McREFL SMOO to make it easier to use this noisy channel to diagnose small alignment changes:
caput C1:IOO-MC_RFPD_DCMON.SMOO 0.1
This measurement looks bogus - the difference between dark and not dark is not significant enough to believe. Need to figure out how to match better into the ADC range.
The MC was sort of misaligned. It was locking on some vertical HOMs. So I locked it and aligned the suspensions to the input beam (not great; we should really align the input beam to the centered spots on the MC mirrors).
With the HOMs reduced, I looked at the MC servo board gains which Gautam has been fiddling with. It seems that since the Mod Depth change we're getting a lot more HOM locks. You can recognize this by seeing the longish stretches on the strip tool where FSS-FAST is going rail-to-rail at 0.03 Hz for many minutes. This is where the MC is locked on a HOM, but the autolocker still thinks it's unlocked and so is driving the MC2 position at 0.03 Hz to find the TEM00 mode.
I lowered the input gain and the VCO gain in the mcdown script and now it very rarely locks on a HOM. The UGF in this state is ~3-4 kHz (I estimate), so it's just enough to lock, but no more. I tested it by intentionally unlocking ~15 times. It seems robust. It still ramps up to a UGF of ~150 kHz as always. 'mcdown' committed to SVN.
For sensing matrix, better to use single frequency sine response. We don't want to measure around the bounce or above the 28 Hz cutoff filters in the MC SUS.
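A sketch of the single-frequency demodulation implied here: drive one DOF with a sine at f0 (chosen between the bounce mode and the 28 Hz SUS cutoffs) and demodulate each sensor at f0 to get a complex sensing-matrix element. The drive frequency, sample rate, and channel handling below are placeholders:

import numpy as np

def demod_at(x, fs, f0):
    """Complex amplitude of time series x at drive frequency f0 (fs = sample rate)."""
    t = np.arange(len(x))/float(fs)
    lo = np.exp(-2j*np.pi*f0*t)
    return 2.0*np.mean(x*lo)          # factor 2 recovers the sine amplitude

# e.g. element = demod_at(sensor_data, fs=2048, f0=13.5) / drive_amplitude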
What readouts do we have for the PMC length? If we could have a calibrated & whitened error and control signal for the PMC up to 16 kHz, perhaps we could see at what frequencies we can use it as a faux-RefCav.
Going to the summary pages and looking at 'Today' seems to break it and crash the browser. Other tabs are OK, but 'summary' is our default page.
I've noticed this happening for a couple of days now. Today, I moved the .ini files which define the config for the pages from the old chans/ location into the /users/public_html/detcharsummary/ConfigFiles/ dir. Somehow, we should be maintaining version control of detcharsummary, but I think right now it's loose and free.
Debian doesn't like EPICS. Or our XY plots of beam spots...Sad!
No, not confused on that point. We just will not be testing OS versions at the 40m or running multiple OS's on our workstations. As I've said before, we will only move to so-called 'reference' systems once they've been in use for a long time.
Ubuntu16 is not to my knowledge used for any CDS system anywhere. I'm not sure how you expect to have better support for that. There are no pre-compiled packages of any kind available for Ubuntu16. Good luck, you big smelly doofuses. Nyah, nyah, nyah.
Very, very cool!
What you have drawn looks good to me: the cut should be between TP3 and pin 3 of the AD620. This should maintain the DC coupled response for the single-pin LEMO and backplane EPICS monitors.
We want to use the PMC signal down to low frequencies, so the filter on the input of the AD620 should have a low frequency cutoff, but we should take care not to spoil the noise of the AD620 with a high impedance resistor.
It has a noise of 100 nV/rHz and 1 pA/rHz at 1 Hz. If you use 47 uF and 10 kOhm, you'll get fc = 1/(2*pi*R*C) ~ 0.3 Hz, so that would be OK.
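Quick arithmetic check of those numbers (the only inputs are the R, C, and AD620 noise figures quoted above):

import numpy as np

R, C = 10e3, 47e-6
fc = 1.0/(2*np.pi*R*C)
print('fc = %.2f Hz' % fc)                     # ~0.34 Hz, as quoted

kB, T = 1.38e-23, 300.0
v_johnson = np.sqrt(4*kB*T*R)                  # Johnson noise of the 10k resistor
v_current = 1e-12*R                            # AD620 current noise across 10k
print('Johnson: %.1f nV/rtHz, i_n*R: %.1f nV/rtHz' % (v_johnson*1e9, v_current*1e9))
# both ~10-13 nV/rtHz, well below the 100 nV/rtHz input voltage noise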
I just did remote apt-get update, apt-get upgrade, and then reboot on nodus. ELOG started up by itself.
Good cal. I wonder if this data also gives us a good measurement of the cavity pole or if the photo-thermal self-locking effect ruins it. You should look at the data for the positive sweeps and negative sweeps and see if they give the same answer for the cavity poles. Also, maybe we can estimate the PMC cavity pole using the sidebands as well as the carrier and see if they give the same answer?
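One way to do that comparison: fit a Lorentzian to the calibrated sweep and pull out the half-width (cavity pole), fitting the up- and down-going sweeps separately. A sketch, with 'detuning' and 'trans' standing in for the calibrated sweep data and an arbitrary initial guess:

import numpy as np
from scipy.optimize import curve_fit

def lorentzian(f, A, f0, fpole, offset):
    """Cavity transmission vs detuning; fpole is the half-width (cavity pole)."""
    return A/(1.0 + ((f - f0)/fpole)**2) + offset

# popt, pcov = curve_fit(lorentzian, detuning, trans, p0=[1.0, 0.0, 1e5, 0.0])
# print('cavity pole ~ %.0f kHz' % (popt[2]/1e3))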
I'm suspicious of this temperature sensor comparison. Usually, what they mean by accuracy is not the same as what we mean. I would not buy these yet. How about we just use what Caryn used several years ago (elog search) ?
PS Steve LM34
AIC Wiki updated to latest stable version of DokuWiki: 2017-02-19b "Frusterick Manners" + CAPTCHA + Upgrade + Gallery PlugIns
Our minute trends are still not available through NDS2 from the outside world due to the bad config of the DAQ, but I can confirm that we still have the minute-raw capability. This is 111 days of Seismic BLRMS.
However, it seems we're only able to get ~1 week of lookback on our second trends, and that is a low-down dirty shame. We used to have over a month of second trend lookback before the last decade of 'upgrades'.
We installed a new curved 34" doublewide monitor on Rossa, but it seems like it has a defective dead pixel region. Unless it heals itself by morning, we should return it to Amazon. Please don't throw out the packing materials.
Shipped back 4-17-2017
What's the reasoning behind setting the gain to this new value? i.e. why do these 'margins' determine what the gain should be?
One of these signals does not look like the others: explanation?
We ought to put the camera software on the shared disk; I don't think there's any speed reasons that it needs to be local.
It's OK to use optimus as the camera server for testing at the moment, but once we have things running, we'll install a few more cameras. With ~4-5 GigE cameras running, we may not want to share with optimus, since we're also using it for comsol and skymap calculations.