Formatted and re-installed the OS on rossa for the 3rd or 4th time this year. Whoever is installing software and adjusting video settings: please stop.
If you feel you need to tinker deeply, use ottavia or zita and then be prepared to show up and fix it.
While I was moving the UPS around, the network lights went out for Rossa, so I may have damaged the network interface or cable. Debugging continues.
Wonder if it's possible to do variable-finesse locking.
Gabriele mentioned that Virgo used arm trans PDH for this, but we could possibly use POX/POY to start and then bring in the PRM with 50% MICH trans.
I'm curious to see if we really need the 1611, or if we can calibrate the diode laser vs. the 1611 one time and then just use that calibration to get the absolute cal for the DUT.
Got the network to work again just by unplugging the power cord and letting it sit for a while. But I then corrupted the OS by trying to install Nvidia drivers.
I think this offset setting business is not so good. People do this every few years, but putting offsets in servos means that you cannot maintain a stable alignment when there are changes in the laser power, PMC trans, etc. The better approach is to center the WFS spots with the unlocked beam after the control offsets have been offloaded to the suspensions.
It would be good if you and Shruti can look at how to change the parameters in Zero so as to do a fit to the measured data. Usually, in scipy.optimize we give it a function with some changeable params, so maybe there's a way to pass params to a zero object in that way. I think Ian and Anchal are doing something similar with their FSS Pockels cell simulator.
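A sketch of the scipy.optimize approach described above. The Zero object API is whatever it is; here `model_mag` is a hypothetical stand-in for evaluating a simple zero/pole magnitude response, and the "measurement" is synthetic, just to exercise the fit:

```python
# Fit measured TF data by letting scipy.optimize vary the model params.
# model_mag is a hypothetical stand-in for a Zero-style object.
import numpy as np
from scipy.optimize import least_squares

def model_mag(f, f_zero, f_pole, gain):
    """|H(f)| for one zero at f_zero and one pole at f_pole."""
    s = 1j * 2 * np.pi * f
    H = gain * (1 + s / (2 * np.pi * f_zero)) / (1 + s / (2 * np.pi * f_pole))
    return np.abs(H)

def residuals(params, f, mag_meas):
    # fit in log magnitude so each decade is weighted evenly
    f_zero, f_pole, gain = params
    return np.log(model_mag(f, f_zero, f_pole, gain)) - np.log(mag_meas)

# synthetic "measurement" just to exercise the fit
f = np.logspace(0, 4, 200)
mag_meas = model_mag(f, 30.0, 300.0, 2.0)

res = least_squares(residuals, x0=[10.0, 100.0, 1.0], args=(f, mag_meas))
f_zero_fit, f_pole_fit, gain_fit = res.x
```

The same pattern should work for any model object: wrap it in a function whose first argument is the parameter vector, and hand that to the optimizer.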
During our EX AM/PM setups, I don't think we bumped the PDH gain knob (and I hope that the knob was locked). Possible drift in the PZT response? Good thing Shruti is on the case.
Is there a loop model of green PDH that agrees with the measurement? I'm wondering if something can be done with a compensation network to up the bandwidth or if the phase lag is more like a non-invertible kind.
back on new Rossa from Xi computing
Update: Sun Nov 3 18:08:48 2019
Update: Fri Nov 15 00:00:26 2019:
this is due to the Equivalence Principle: local accelerations are indistinguishable from spacetime curvature. On a spherical Earth, the local gradient of the metric points in the direction towards the center of the Earth, which is colloquially known as "down".
I don't understand why the z-axis motion reported by the T240 is ~10x lower at 10 mHz compared to the X and Y motions. Is this some electronics noise artefact?
At 1 Hz, this effect is not large, so that's real translation. At lower frequencies, a ground tilt couples to the horizontal sensors at first order, and the apparent signal is amplified by the double integration. Drawing a free-body diagram, you can see that
x_apparent = (g / s^2) * theta
but for the vertical this is not true, because it already measures the full free fall and the tilt only shows up at second order.
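To put numbers on the first-order coupling above (the 1 nrad tilt is just an example value; only g and the frequencies matter for the scaling):

```python
# Apparent horizontal displacement from a ground tilt theta:
# x_apparent = (g / s^2) * theta, so the apparent signal grows
# as 1/f^2 at low frequency.
import numpy as np

g = 9.81  # m/s^2

def apparent_x(theta_rad, f_hz):
    """Apparent horizontal displacement [m] from tilt theta at frequency f."""
    w = 2 * np.pi * f_hz
    return g * theta_rad / w**2

# the same 1 nrad tilt, seen at 10 mHz vs 1 Hz
x_10mHz = apparent_x(1e-9, 0.01)
x_1Hz = apparent_x(1e-9, 1.0)
# the 10 mHz apparent signal is (1 / 0.01)^2 = 1e4 times larger
```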
We turned off many excessive violin mode bandstop filters in the LSC.
Due to some feedforward work by Jenne or EQ some years ago, we have had ~10 violin notches on in the LSC between the output matrix and the outputs to the SUS.
They were eating phase, computation time, and making ~3 dB gain peaking in places where we can't afford it. I have turned them off and Gautam SDF safed it.
Offending bandstop filters shown in brown and the cooler ones in blue.
To rotate the DTT landscape plot to not be sideways, use this command (note that the string is 1east, not least):
pdftk in.pdf cat 1east output out.pdf
The large ground motion at 1 Hz started up again tonight at around 23:30. I walked around the lab and nearby buildings with a flashlight and couldn't find anything whumping. The noise is very sinusoidal and seems like it must be a 1 Hz motor rather than any natural disturbance or traffic, etc. Suspect that it is a pump in the nearby CES building which is waking up and running to fill up some liquid level. Will check out in the morning.
Estimate of displacement noise based on the observed MC_F channel showing a 25 MHz peak-peak excursion for the laser:
dL = 25e6 Hz * 13 m / (c / lambda)
   ≈ 1 micron
So this is a lot. Probably our pendulum is amplifying the ground motion by 10x, so I suspect a ground noise of ~0.1 micron peak-peak.
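The arithmetic, as a quick check (assuming the 1064 nm laser and the 13 m IMC length):

```python
# dL/L = dnu/nu with nu = c / lambda
c = 299792458.0   # m/s
lam = 1064e-9     # m, assumed laser wavelength
L = 13.0          # m, IMC length
dnu = 25e6        # Hz, observed MC_F peak-peak excursion

dL = dnu * L / (c / lam)
# dL comes out to ~1.2e-6 m, i.e. about a micron, as quoted
```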
(this is a native PDF export using qtgrace rather than XMgrace. uninstall xmgrace and symlink to qtgrace.)
and so it begins...until this is finished I have turned off the projector and moved the striptools to the big TV (time to look for Black Friday deals to replace the projector with a 120 inch LED TV)
...maybe the opto-mechanical CARM plant is changing as a function of the CARM offset...
Even assuming 50% error in the calibration factors, it's hard to explain the swing of TRX/TRY when the CARM offset is brought to zero.
if the RP don't fit
u must acquit
sweep the laser amplitude
to divine the couplin w certitude
Filter Q seems too high,
but what precisely is the proper way to design the IF filter?
Seems like we should be able to do it using math instead of feelings.
Izumi made this one, so maybe he has an algorithm.
Recently, according to Gautam, the NDS2 server has been dying on Megatron ~daily or weekly. The prescription is to restart the server.
Also, megatron is running Ubuntu 12 !! Let's decide on a day to upgrade it to a Debian 18ish....word from Rolf is that Scientific Linux is fading out everywhere, so Debian is the new operating system for all conformists.
# this function gets some data (from the 40m) and saves it as
# a .mat file for the matlabs
# Ex. python -O getData.py
from scipy.io import savemat, loadmat
import scipy.signal as sig
from astropy.time import Time
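A minimal sketch of the save step, assuming the fetched data comes back as a numpy array (the channel name, sample rate, and data below are all placeholders):

```python
import numpy as np
from scipy.io import savemat

fs = 2048.0                           # Hz, placeholder sample rate
data = np.random.randn(int(60 * fs))  # placeholder for the fetched data

# save in a form matlab can load directly (filename is a placeholder)
savemat('C1_LSC_DARM.mat', {'data': data, 'fs': fs})
```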
Yehonathan, please center the EX seismometer.
The attached PDF shows the seismometer signals (I'm assuming that they're already calibrated into microns/s) during the lab tour for the art students on 11/1. The big spike which I've zoomed in on shows the time when we were in the control room and we all jumped up at the same time. There were approximately 15 students each with a mass of ~50-70 kg. I estimate that our landing times were all sync'd to within ~0.1 s.
I have re-centered the EX (and EY) seismometers. They are Guralp CMG-40T, and have no special centering procedure except cycling the power a few times. I turned off the power on their interface box, then waited 10 s before turning it back on.
The first attachment shows the comparison using data from 8-9 PM Saturday night:
The IBM laptop at EX was running Ubuntu 14, so I allowed it to start upgrading itself to Ubuntu 16 as it desired. After it is done, I will upgrade it to 18.04 LTS. We should have them all run LTS.
I noticed recently that Megatron was running Ubuntu 12, so I've started its OS upgrade.
Megatron and IMC autolocking will be down for a while, so we should use a different 'script' computer this week.
Mon Dec 9 14:52:58 2019
upgrade to Ubuntu 14 complete; now upgrading to 16
Megatron is now running Ubuntu 18.04 LTS.
We should probably be able to load all the LSC software on there by adding the appropriate Debian repos.
I have re-enabled the cron jobs in the crontab.
The MC Autolocker and the PSL NPRO Slow/Temperature control are run using 'initctl', so I'll leave that up to Shruti to run/test.
idk - I'm recently worried about the 'thermal self locking' issue we discussed. I think you should try to measure the linewidth by scanning (with low input power) and also measure the TF directly by modulating the power via the AOM and taking the ratio of input/output with the PDA55s. I'm curious to see if the ringdown is different for low and high powers
I plan to model the PD+AOM as a lowpass filter with an RC time constant of 12us and undo its filtering action on the PMC trans ringdown measurement to get the actual ringdown time.
Is this acceptable?
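Here's a sketch of the deconvolution, assuming the PD+AOM really is a single pole with tau = 12 us (the cavity decay time and sample rate below are made up). For a first-order lowpass, the input can be recovered from the output as x = y + tau * dy/dt:

```python
import numpy as np

tau_rc = 12e-6    # s, assumed PD+AOM time constant
tau_cav = 30e-6   # s, made-up "true" cavity ringdown time
fs = 50e6         # Hz, sample rate
t = np.arange(0, 200e-6, 1 / fs)

# true ringdown and what the RC-filtered PD would report (analytic result
# for a single pole driven by an exponential decay, zero initial condition)
x_true = np.exp(-t / tau_cav)
y_meas = tau_cav / (tau_cav - tau_rc) * (np.exp(-t / tau_cav) - np.exp(-t / tau_rc))

# undo the lowpass: x = y + tau_rc * dy/dt
x_rec = y_meas + tau_rc * np.gradient(y_meas, t)
```

If the recovered trace still fits a single exponential, the 12 us assumption is probably fine; if not, the PD+AOM model needs another pole.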
This is an old SURF report on thermal self-locking that may be of use (I haven't read it or checked it for errors, but Royal was pretty good analytically, so it's worth looking at)
Wiped and installed Debian 10 on rossa today.
Still to be done: configure it as a CDS workstation.
Please don't try to "fix" it in the meantime.
Doesn't seem so anomalous to me; we're getting ~25 dB of gain range and the ideal range would be 40 dB. My guess is that even though this is not perfect, the real problem is elsewhere.
I changed the office area thermostat near Steve's desk from 68F to 73F today. Please do not change it.
If anyone from facilities comes to adjust something, please put the details in the elog on the same day so that we can know to undo that change rather than chase down other drifts in the system.
Could you please put physical units on the Y-axis and also put labels in the legend which give a physical description of what each trace is?
It would also be good to add a separate plot which has the IR locking signal and the green locking signal along with this out-of-loop noise, all in the same units, so that we can see what the ratio is.
to make the comparisons meaningfully
one needs to correct for the feedback changes
when doing the AM sweeps of cavities
make sure to cross-calibrate the detectors
else you'll make of science much frivolities
much like the U.S. elections electors
Yesterday evening I took nearly all of the masks, gloves, gowns, alcohol wipes, hats, and shoe covers. These were the ones in the cleanroom cabinets at the east end of the Y-arm, as well as the many boxes under the Y-arm near those cabinets.
This photo album shows the stuff, plus some other random photos I took around the same time (6-7 PM) of the state of parts of the lab.
do you really mean awggui cannot make shaped noise injections via its foton text box ? That has always worked for me in the past.
If this is broken, I suspect there have been some package installs to the shared dirs by someone.
that's pretty great performance. maybe you can also upload some code so that we can do it later too - or maybe in the 40m GIT
I wonder how much noise is getting injected into the PRC length at 10-100 Hz due to this. Any change in the PRC error signal?
I just now modified the /etc/rsyncd.conf file as per Dan Kozak's instructions. The old conf file is still there with the file name appended with today's date.
I then enabled the rsync daemon to run on boot using 'enable'. I'll ask Dan to start the file transfers again and see if this works.
controls@nodus|etc> sudo systemctl start rsyncd.service
controls@nodus|etc> sudo systemctl enable rsyncd.service
Created symlink from /etc/systemd/system/multi-user.target.wants/rsyncd.service to /usr/lib/systemd/system/rsyncd.service.
controls@nodus|etc> sudo systemctl status rsyncd.service
● rsyncd.service - fast remote file copy program daemon
Loaded: loaded (/usr/lib/systemd/system/rsyncd.service; enabled; vendor preset: disabled)
Active: active (running) since Mon 2020-04-13 16:49:12 PDT; 1min 28s ago
Main PID: 4950 (rsync)
└─4950 /usr/bin/rsync --daemon --no-detach
Apr 13 16:49:12 nodus.martian.113.168.192.in-addr.arpa systemd: Started fast remote file copy program daemon.
Apr 13 16:49:12 nodus.martian.113.168.192.in-addr.arpa systemd: Starting fast remote file copy program daemon...
There's this elog from Stephen about better 1064 sensitivity from Basler. We should consider getting one if he finds that its actual SNR is as good as we would expect from the QE improvement.
Might allow for better scatter measurements - not that we need more signal, but it could allow us to use shorter exposure times and reduce blurring due to the wobbly beams.
I had set up the 4395 to do this automatically a few years ago, but it looked at the FSS/IMC instead. When the PCDRIVE goes high there is this excess around ~500 kHz in a broad hump.
But the IMC loop gain changes sometimes with alignment, so I don't know if it's a loop instability or if it's laser noise. However, I think we have observed PCDRIVE to go up without the IMC power dropping, so my guess is that it was true laser noise.
This works since the IMC is much more sensitive than PMC. Perhaps one way to diagnose would be to lock IMC at a low UGF without any boosts. Then the UGF would be far away from that noise making frequency. However, the PCDRIVE also wouldn't have much activity.
This is the doc from Keita Kawabe on why the WFS heads should be rotated.
apt install source-highlight
then modified bashrc to point to /usr/share instead of /usr/bin
It would be good to have a corner plot with all the distances/ RoCs. Also perhaps a Jacobian like done in this breathtaking and seminal work.
was dead again this morning - JZ notified
current restart instructions (after ssh to megatron):
sudo su nds2mgr
make -f test_restart
so far it has run through the weekend with no problems (except that there are huge log files as usual).
I have started to set up monit to run on megatron to watch this process. In principle this would send us alerts when things break and also give a web interface to watch monit. I'm not sure how to do web port forwarding between megatron and nodus, so for now it's just on the terminal. e.g.:
monit>sudo monit status
Monit 5.25.1 uptime: 4m
monitoring status Monitored
monitoring mode active
on reboot start
load average [0.15] [0.22] [0.25]
cpu 0.6%us 1.0%sy 0.2%wa
memory usage 1001.4 MB [25.0%]
swap usage 107.2 MB [1.9%]
uptime 40d 17h 55m
boot time Tue, 14 Apr 2020 17:47:49
data collected Mon, 25 May 2020 11:43:03
monitoring status Monitored
monitoring mode active
on reboot start
parent pid 1
effective uid 4666
uptime 3d 1h 22m
cpu total 0.0%
memory 19.4% [776.1 MB]
memory total 19.4% [776.1 MB]
security attribute unconfined
disk read 0 B/s [2.3 GB total]
disk write 0 B/s [17.9 MB total]
data collected Mon, 25 May 2020 11:43:03
How about a corner plot with power signals and oplevs? I think that would show not just linear couplings (like your coherence) but also quadratic couplings (Cheshire cat grin)
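A toy example of the quadratic-coupling point (made-up signals; y depends on x only through x^2, so linear correlation/coherence sees nothing, but a scatter plot shows the grin):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(10000)                # stand-in for an oplev signal
y = x**2 + 0.1 * rng.standard_normal(10000)   # stand-in for a power signal

lin_corr = np.corrcoef(x, y)[0, 1]      # ~0 despite the strong coupling
quad_corr = np.corrcoef(x**2, y)[0, 1]  # ~1: the coupling is quadratic
```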
I propose we go to all CAPS for all channel names. The lower-case names are just a holdover from Steve/Alan from the 90's. All other systems are all CAPS.
It avoids us having to force them all to UPPER in the scripts and channel lists.
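e.g. the sort of normalization the scripts have to do now (placeholder channel names):

```python
chans = ['C1:LSC-darm_err', 'C1:IOO-mc_F']  # mixed-case holdovers (placeholders)
chans = [c.upper() for c in chans]
# → ['C1:LSC-DARM_ERR', 'C1:IOO-MC_F']
```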
does the FLIR have an option to export image with a colorbar?
How about just leave the lid open? or more open? I don't know what else can be done in the near term. Maybe swap with the SRM sat box to see if that helps?
maybe we should make a "dd" copy of pianosa in case rossa has issues and someone destroys pianosa by accidentally spilling coffee on it.
So, in summary, rossa is now all set up for use during lock acquisition. However, until this machine has undergone a few months of testing, we should freeze the pianosa config and not mess with it.
In the lab, checking on the WFS.
Sun Jul 5 18:25:50 2020
I redid Gautam's measurements to get a baseline before changing the head, and my results are very different: To me it looks like the WFS2 quadrants are all OK.
I've left the setup as is in case either Gautam or I want to double-check. If we're agreed on this response, I'll remove the notches and disable the RF attenuators.
Sun Jul 5 21:42:45 2020
sudo usermod -a -G lpadmin controls
and then was able to add Grazia to the list of printers for Rossa by following the instructions on the 40m Wiki.
I installed color syntax highlighting on Rossa using the internet (https://superuser.com/questions/71588/how-to-syntax-highlight-via-less). Now if you do 'less genius_code.py', it will highlight the Python syntax.
when I try 'sitemap' on rossa I get:
medm: error while loading shared libraries: libreadline.so.6: cannot open shared object file: No such file or directory