More tomorrow, but I tried the following tonight:
Chub has placed the order for two new UPS units (115V for TP2/3 and a 220V version for TP1).
They will arrive within the next two weeks.
I looked into how the new UPS devices suggested by Chub would communicate with the vac interlocks. There are several possible ways, listed in order of preference:
I recommend we proceed with ordering the Tripp Lite 36HW20 for TP1 and the Tripp Lite 1AYA6 for TP2 and TP3 (and other 120V electronics). As far as I can tell, the only difference between the two 120V options (1AYA6 and 6FXN4) is that the 6FXN4 model is TAA-compliant.
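As a concrete sketch of the monitoring idea (an assumption on my part - I haven't checked that these particular Tripp Lite units are in the Network UPS Tools hardware compatibility list): if they present a standard USB HID interface, NUT's usbhid-ups driver would let the vac interlock code poll the UPS with something like
upsc tp1ups@localhost ups.status     # "tp1ups" is a hypothetical name defined in NUT's ups.conf
where the reply ("OL" = on line power, "OB" = on battery) is what the interlock would act on.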
We can probably learn something about the interferometer / top level BHD plan with an in-air BHD setup, even if the noise is bad. Here are some thoughts about how we would do it.
For this first attempt, we don't really care about the PRC filtering. So possible places to pick off an LO beam are:
In all cases, I think the easiest option is to route whatever beam we choose into a fiber, and then bring it over to whatever cavity we choose to use as an OMC. I'm assuming whatever phase control technique we end up using can cancel the fiber phase noise at relevant frequencies.
LO phase control
There is a question about the range, but I think these are the only two realistic options we can implement on a reasonable time scale.
Again, there are a few options. Here are some pros and cons that come to my mind.
If we can do a vent (we'd just need a single chamber open), I'd go for the option of getting the copper OMC out and using that. Attachment #1 shows the approximate sizes of the various components (OMMT, OMC cavity, DCPDs), while Attachment #2 shows a rough sketch of where things would go on the AP table, with the rectangles approximately to scale.
I'd made a c1omc model some time ago. Basically, I think we have sufficient ADC/DAC channels in the c1ioo machine for any of the options listed above - but using the copper OMC and associated peripherals would allow the easiest interfacing.
I noticed these streaky lines again today (but they were not a problem last night). It is annoying if we have to reboot this machine all the time. I wonder if this has something to do with missing drivers. When I ran sudo apt update && sudo apt upgrade, I got several lines like the following (this isn't the full output):
W: Possible missing firmware /lib/firmware/nvidia/gp108/acr/ucode_unload.bin for module nouveau
W: Possible missing firmware /lib/firmware/nvidia/gp108/acr/ucode_load.bin for module nouveau
W: Possible missing firmware /lib/firmware/nvidia/gp108/acr/unload_bl.bin for module nouveau
W: Possible missing firmware /lib/firmware/nvidia/gp108/acr/bl.bin for module nouveau
Is this indicative of the graphics drivers being installed incorrectly? I am hesitant to mess with this because I think in the past, it was always trying to update some graphics driver that crashed the whole machine into some weird state where we had to wipe the drive and do a fresh re-install of the OS.
Should we just follow these instructions? The graphics card is apparently Quadro P400, which is one of the supported ones according to the list of supported devices.
Or just swap donatella and rossa monitors and defer the problem for later?
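One other thought (untested, so treat it as a guess): the missing nouveau firmware blobs in those warnings are packaged in Debian's firmware-misc-nonfree, so before touching the proprietary driver it might be worth just installing that (assuming the non-free component is enabled in the apt sources):
sudo apt install firmware-misc-nonfree
sudo update-initramfs -u     # regenerate the initramfs so the firmware is actually picked up at boot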
yes, I rebooted yesterday to fix the 'streaking white lines' problem in the video/display
Here is the procedure for setting up the three new BHD front-ends (c1bhd, c1sus2, c1ioo - replacement). This plan is based on technical advice from Rolf Bork and Keith Thorne.
The overall topology for each machine is shown here. As all our existing front-ends use (obsolete) Dolphin PCIe Gen1 cards for IPC, we have elected to re-use Dolphin Gen1 cards removed from the sites. Different PCIe generations of Dolphin cards cannot be mixed, so the only alternative would be to upgrade every 40m machine. However, the drivers for these Gen1 Dolphin cards were last updated in 2016, so they do not support the latest Linux kernel (4.x), which forces us to install a near-obsolete OS (Debian 8) for compatibility.
I will be in the Clean and Bake lab today from 9am to 3pm
I wanted to try using rossa as my locking workstation today. However, a few problems became quickly evident. Basically, any of our scripts that rely on the cdsutils package (there are MANY) will not work on rossa, because of some library error. This machine is running Debian 10, while the cdsutils package is being loaded from a pre-compiled install on the shared drive, so perhaps this isn't surprising?
Digging a little more, I found that a version of cdsutils that works with python3 is in fact shipped with the standard cds-workstation meta-package. This is great news, and we should try and use this where possible, I guess. Deferring further debugging for daytime work.
Anyway, I added a symlink: sudo ln -s /usr/lib/x86_64-linux-gnu/libncurses.so.6 /usr/lib/x86_64-linux-gnu/libncurses.so.5, and installed wmctrl using sudo apt install wmctrl.
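A quick sanity check I'd run after this (nothing fancy), just to confirm which cdsutils python3 is actually picking up:
python3 -c "import cdsutils; print(cdsutils.__file__)"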
I will be in the clean and bake lab today from 9am to 3pm.
I want to be able to run the dither alignment servo with the PRFPMI locked - I've been thinking about what the scheme should be, and I list here some questions I had while thinking about this.
Last Tuesday evening, while attempting the PRFPMI locking, I noticed a strange feature in the LSC signals, which is shown in Attachment #1 (the PDF exported by dataviewer is 14MB so I upload the jpeg instead). As best as I can tell, the REFL33 and POP22 channels show an abrupt jump in the signal levels, while the other channels do not. POP110 shows a slight jump at around the same time, and the large excursion in AS110_Q actually occurs a few seconds later, and is probably some angular excursion of the PRC/BS. I'm struggling to interpret how this can be explained by some interferometric mechanism, but haven't come up with anything yet. The LO for the 3f error signals is the 2f field, but then why doesn't the POP110 channel show a similar jump if there is some abrupt change in the resonant condition? Is such a change even feasible from a cavity length change point of view? Or did the sideband frequency somehow abruptly jump? But if so, why is the jump much more clearly visible in one sideband than the other?
Does anyone have any ideas as to what could be going on here? This may give some clue as to what's up with the weird sensing matrices, but may also be something boring like broken electronics...
As suggested last week, Hang and I have reviewed the A+ BHD status (DRD, CDD, and reviewers' comments) and compiled a list of key unanswered questions which could be addressed through Finesse analysis.
In anticipation of others helping with this modeling effort, we've tried to break questions into self-contained projects and estimated their level of difficulty. As you'll see, they range from beginner to Finesse guru.
Indeed, this is now fixed by following instructions from here. I rebooted rossa at ~1250 PDT and confirmed that resolv.conf didn't get overwritten. The resolv.conf file also now has the following useful lines at the head:
# Dynamic resolv.conf(5) file for glibc resolver(3) generated by resolvconf(8)
# DO NOT EDIT THIS FILE BY HAND -- YOUR CHANGES WILL BE OVERWRITTEN
maybe we're supposed to edit something besides resolv.conf since that gets over-written on boot for some linux OS
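For the record, with the resolvconf package the persistent place for static nameserver entries is /etc/resolvconf/resolv.conf.d/head rather than /etc/resolv.conf itself - roughly (the IP below is a placeholder, use whatever our martian DNS server actually is):
echo "nameserver <martian-DNS-IP>" | sudo tee -a /etc/resolvconf/resolv.conf.d/head
sudo resolvconf -u     # regenerate /etc/resolv.conf from the head/base files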
I will be in the Clean and Bake lab today from 8:30am to 4pm
This is strange - I was definitely able to launch medm when I was working on this machine remotely on Friday. But now, there does seem to be a problem with this shared library being missing.
First of all, I installed mlocate to find where the shared library files are installed. Then I made the symlink, and now sitemap seems to work again.
Weirdly, my changes to /etc/resolv.conf got overwritten somehow. Was this machine rebooted? Uptime suggests it's only been running for ~6 hours at the time of writing this elog.
sudo apt install mlocate
sudo ln -s /usr/lib/x86_64-linux-gnu/libreadline.so.7 /usr/lib/x86_64-linux-gnu/libreadline.so.6
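The mlocate step amounts to something like this (refresh the file database, then search for the library):
sudo updatedb
locate libreadline.so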
when I try 'sitemap' on rossa I get:
medm: error while loading shared libraries: libreadline.so.6: cannot open shared object file: No such file or directory
sudo usermod -a -G lpadmin controls
and then was able to add Grazia to the list of printers for Rossa by following the instructions on the 40m Wiki.
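For reference, the equivalent CLI route would be something like the following (a sketch with a placeholder URI; the "-m everywhere" bit assumes Grazia speaks IPP Everywhere, otherwise a PPD is needed - the wiki's CUPS web-interface instructions get to the same place):
sudo lpadmin -p Grazia -E -v <printer-URI> -m everywhere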
I installed color syntax highlighting on Rossa using the internet (https://superuser.com/questions/71588/how-to-syntax-highlight-via-less). Now if you do 'less genius_code.py', it will be highlighting the python syntax.
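For anyone wanting to replicate this: the recipe in that link (as I remember it) uses GNU source-highlight. Install the package, then add the two exports to ~/.bashrc (the script path is where Debian's source-highlight package puts it):
sudo apt install source-highlight
export LESSOPEN="| /usr/share/source-highlight/src-hilite-lesspipe.sh %s"
export LESS=' -R '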
in the lab, checking on the WFS
Sun Jul 5 18:25:50 2020
I redid Gautam's measurements to get a baseline before changing the head, and my results are very different: To me it looks like the WFS2 quadrants are all OK.
I've left the setup as is in case either Gautam or I want to double-check. If we're agreed on this response, I'll remove the notches and disable the RF attenuators.
Sun Jul 5 21:42:45 2020
maybe we should make a "dd" copy of pianosa in case rossa has issues and someone destroys pianosa by accidentally spilling coffee on it.
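A minimal sketch of what that would look like (device names below are placeholders - check with lsblk first, and boot from a live USB so the source disk isn't mounted while it's copied):
sudo dd if=/dev/sdX of=/dev/sdY bs=64M status=progress     # /dev/sdX = pianosa's disk, /dev/sdY = the backup disk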
So, in summary, rossa is now all set up for use during lock acquisition. However, until this machine has undergone a few months of testing, we should freeze the pianosa config and not mess with it.
As part of an ongoing effort to improve airflow in workspaces/bathrooms on campus, I have installed an air scrubber unit in each of the bathrooms at the 40m lab.
In an effort to make a second usable workstation, I did the following (remotely) on rossa today (not necessarily in this order, I wasn't maintaining a live log so I forgot):
Note that this version of the "crtools" is rather new. Please use them, and if there is an issue, report the errors! I am going to occasionally try lock acquisition using rossa.
wiped and installed Debian 10 on rossa today
still to be done: configure it as a CDS workstation
please don't try to "fix" it in the meantime
I re-connected the 3 accelerometers located near the MC1/MC3 chamber. It was a bit tedious to get the cabling sorted - I estimate the cable is ~80m long, and the excess length had to be wound around a spool (see Attachment #1), which wasn't really a one-person job. It's neat-ish for now, but I'm not entirely satisfied. I think we should get shorter cables (~20m), and also mount the pre-amp/power units in a rack instead of leaving them on the floor. The pre-amp settings are x100 for all three channels. The MC2 channels are powered, but are not connected to the accelerometers - it was too tedious to unroll the other spool yesterday. Apart from this, the cable for the "Z" channel had to be re-seated in the strain relief clamp.
I did not enable any of the CDS filters that convert the raw signal into physical units, so for now, these channels are just recording raw counts.
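For when we do get around to it, the calibration would be roughly the following (assuming the usual 16-bit, ±10 V ADC and the x100 preamp gain noted above; the accelerometer sensitivity S is left symbolic since I don't have the datasheet at hand):
\[ a\;[\mathrm{m\,s^{-2}}] \approx \mathrm{counts} \times \frac{20\,\mathrm{V}/2^{16}\,\mathrm{cts}}{100 \times S\,[\mathrm{V/(m\,s^{-2})}]} \]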
Update 7pm: the spectra in the current config are here - not sure what to make of the MC2_Z channel appearing to show lower noise?
Update July 13 2020 430pm: This afternoon, I hooked up the MC2 accelerometer channels too...
I will be in the Clean and Bake lab from 9am to 4pm today.
This problem reared its ugly head again. I am inclined to believe the problem is electronic and not on the light, since the POY channels seem immune to this issue (see Attachment #1). I will investigate in the daytime tomorrow. Note that while the POX photodiode head has ~twice the transimpedance of POY (per measurement), the POY signal gets amplified by a ZHL-500-HLN amplifier before heading to the demod electronics (nominal gain is 19dB = x9). There is also some imbalance in the light levels at the photodiodes I guess, because overall, the PDH fringe is ~twice as large for the Y arm as for the X arm. Basically, the y-axes of the attached plot cannot be directly compared between POX and POY.
Mostly this is an annoyance - right now, the POX signal is only used for locking and dither aligning the X arm cavity, and so once that is done, the locking can proceed (as long as the other channels, e.g. REFL11, aren't glitching as well...)
I injected some sensing lines and measured their responses in the various photodiodes, with the interferometer in a few different configurations. The results are summarized in Attachments #1 - #3. Even with the PRMI (no arm cavities) locked on 1f error signals, the MICH and PRCL signals show up in nearly the same quadrature in the REFL port photodiodes, except REFL165. I am now wondering whether the output (actuation) matrix has something to do with this - part of the MICH control signal is fed back to the PRM in order to minimize the appearance of the MICH dither in the PRCL error signal, but maybe this matrix element is somehow horribly mistuned?
Some other mysteries that I will investigate further:
I blew the long lock last night because I cleared the ASS offsets (which I shouldn't have) while trying to find the right settings for running the ASS system at high power. Will try again tonight...
Lock the PRMI on carrier and measure the sensing matrix, see if the MICH and PRCL signals look sensible in 1f and 3f photodiodes.
I will be in the clean and bake lab today from 9am to 4pm.
Sigh. Do we have a spare sat box?
A more comprehensive report has been uploaded here. I'll zip the data files and add them there too. In summary:
I'll upload the data and analysis notebook + liso fit files to the wiki as well shortly. The data, a Jupyter notebook making the plots, and the LISO fit files have been uploaded here.
I didn't do it this time but it'd be nice to also do the noise measurement and get an estimate for the shot-noise intercept current.
While I have the data, I will fit this and post a more complete report on the wiki.
There was no improvement to the situation overnight. So, I did the following today:
IMC is now locked again, I will monitor for glitching/stability.
Update 6pm PDT: as shown in Attachment #1, there is a huge difference in the stability of the lock after the sat box swap. Let's hope it stays this way for a while...
I'll leave the MC1 box open overnight and see if that improves the situation, and if not, I'll switch in the SRM satellite box tomorrow.
I will be in the Clean and Bake lab today from 11:30am to 4pm
Hmm I can't seem to export with the colorbar, might be just my phone though. I tried to add some "cursors" with the temperature at a few spots, but the font color contrast is poor so you have to squint really hard to see the temperatures in the photo I attached.
does the FLIR have an option to export image with a colorbar?
How about just leave the lid open? or more open? I don't know what else can be done in the near term. Maybe swap with the SRM sat box to see if that helps?
Judging by the summary pages, some 18 hours after this change was made and the board re-installed, the MC1 shadow sensors began to report frequent glitches. I can't think of a plausible causal connection, especially given the 18 hour time lag, but it's also hard to believe there isn't one. As a result, the IMC is no longer able to stay locked for extended periods of time. I did the usual cable squishing, and also took off the lid to see if that helps the situation.
While the reduced series resistance means there is more current flowing through the slow path,
The attached FLIR camera image reinforces what we already know: the thermal environment inside the satellite box is horrible. The absolute temperature calibration may be off, but it was difficult to touch the components with a bare finger, so I'd say it's definitely > 70 C.
I implemented this change today. We only had 100 ohm, 3W resistors in stock (no 200 ohm with an adequate power rating). Assuming 10 V is dropped across this resistor, the power dissipation is V^2/R ~ 1 W, so we should have sufficient margin. The DCC entry has been updated with the new schematic and a photo of the component side of the board. Note that the series resistance of the fast actuation path was untouched.
While the vacuum system was knocked out, I measured the RF transimpedance (using the AM laser setup, didn't do the shot noise intercept current measurement for now) of all the RFPDs (except PMC REFL). At the very least, the following photodiodes are suspect:
For the remaining photodiodes, I measure a transimpedance that is within ~20% of what is on the wiki page. The notches may benefit from some retuning. While I have the data, I will fit this and post a more complete report on the wiki.
Update July 6 1145am: WFS response plots now have legends mapping quadrants, and I've also added the response of a spare PDA10CF (which is now the new POP22/POP110 photodiode).
I will be in the Clean and Bake lab today from 11am to 4pm.
As expected, the requested voltage no longer exceeds the Acromag DAC range; it is now more like 2.5 V. However, I still notice that the MC REFL spot moves somewhat diagonally on the camera image - so maybe the coil gains are seriously imbalanced? Anyway, the WFS control signals can once again be safely offloaded to the slow bias voltages, preserving the fast DAC range for other actuation.
The Johnson noise of the series resistor has now increased by a factor of 2, from ~6.4 pA/rtHz to 12.8 pA/rtHz. Assuming a current to force coefficient of 1.6 mN/A per coil, the length noise of the cavity is expected to be 12.8e-12 * 0.064/0.25/(2*pi*100)^2 ~ 8e-18 m/rtHz at 100 Hz. In frequency units, this is 80 uHz/rtHz. I think our IMC noise is at least 10 times higher than this at 100 Hz (in any case, the noise of the coil driver is NOT dominated by the series resistance). Attachment #1 confirms that there isn't any significant MCF noise increase, and I will check with the arm cavity too. Nevertheless, we should, if possible, align the optic better and use as high a series resistance as possible.
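Spelling out the formula behind those numbers (taking T ~ 300 K, R_s = 100 ohm, the 0.064 N/A total current-to-force coefficient used above, and m = 0.25 kg):
\[ i_n = \sqrt{4 k_B T / R_s} = \sqrt{1.66\times10^{-20}\,\mathrm{J}/100\,\Omega} \approx 12.8\,\mathrm{pA}/\sqrt{\mathrm{Hz}} \]
\[ x(f) = \frac{i_n\,\alpha}{m\,(2\pi f)^2} = \frac{12.8\times10^{-12}\times 0.064}{0.25\times(2\pi\times100)^2} \approx 8\times10^{-18}\,\mathrm{m}/\sqrt{\mathrm{Hz}} \]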
The watchdog for MC1 was disabled and the board was pulled out for this work. After it was replaced, the IMC re-locks readily.
But this does not solve the MC1 issue. The only thing we can do right now is to halve the output resistor, for example.
I will be in the Clean and Bake lab from 11am to 4pm
This earthquake tripped all suspensions and ITMX got stuck. The watchdogs were restored and the stuck optic was released. The IFO was re-aligned, POX/POY and PRMI on carrier locking all work okay.
Per the discussion at the meeting today, the plan of action is:
If I missed something, please add here.
I want some input about what the short-term (next two weeks) commissioning goals should be.
I will be in the Clean and Bake lab today from 10am to 4pm.
I propose we go for all CAPS for all channel names. The lower case names are just a holdover from Steve/Alan from the 90's. All other systems are all CAPS.
It avoids us having to force them all to UPPER in the scripts and channel lists.
This work is finally complete. The dry pump replacement was finished quickly but the controls updates required some substantial debugging.
For one, the mailer code I had been given to install would not run against Python 3.4 on c1vac, the version run by the vac controls since about a year ago. There were some missing dependencies that proved difficult to install (related to Debian Jessie becoming unsupported). I ultimately solved the problem by migrating the whole system to Python 3.5. Getting the Python keyring working within systemd (for email account authentication) also took some time.
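For the record (service/account names omitted here on purpose), the email credential itself lives in the Python keyring, which can be set and checked from the command line with the keyring package's own CLI:
keyring set <hypothetical-mail-service> <hypothetical-account>     # prompts for and stores the password
keyring get <hypothetical-mail-service> <hypothetical-account>     # confirms the stored credential can be retrieved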
Edit: The new interlock flag channel is named C1:Vac-interlock_flag.
Along the way, I discovered why the interlocks had been failing to auto-close the PSL shutter: The interlock was pointed to the channel C1:AUX-PSL_ShutterRqst. During the recent c1psl upgrade, we renamed this channel C1:PSL-PSL_ShutterRqst. This has been fixed.
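A quick way to confirm both channels are alive, using the standard EPICS CLI:
caget C1:PSL-PSL_ShutterRqst C1:Vac-interlock_flag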
The main volume is being pumped down, for now still in a TP3-backed configuration. As of 8:30 pm the pressure had fallen back to the upper 1E-6 torr range. The interlock protection is fully restored. Any time an interlock is triggered in the future, the system will send an immediate notification to the 40m mailing list. 👍
The vac system is going down at 11 am today for planned maintenance:
The machine needed a hard reboot as it was un-ssh-able.
The exact time that the machine went down is unknown because the blinkys were not DQ-ed. I've now added these to the EDCU to make these channels actually useful, and we may look back on the reliability (or otherwise) of the Acromag system. To my memory, this is the ~5th time one of the new Acromag servers has needed a hard reboot. While this may be less frequent (?) than the VME machines, perhaps there is some other reason for these dropouts. Maybe something to do with the martian network?
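For reference (format from memory - safest to copy an existing entry in the file), adding a slow channel to the EDCU just means appending a bracketed entry for it to the DAQ ini file, roughly:
[C1:XXX-blinky_channel_name]     # placeholder name; the real entries go in C0EDCU.ini alongside the other slow channels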
Anyway the machine is back up and running now.
I will be at the 40m today from 11am to 4pm.
We will advise when the work is completed.
The PSL shutter was closed from the vacuum interlock trip. Today, I did the following:
All looks good for now. I will probably get back to PRFPMI locking Monday.
Before the vacuum fracas, the locking was pretty robust. With some human servoing of the input beam, I could maintain locks for ~1 hour. My primary goals were:
I didn't succeed in either so far.
Apart from this, I guess we want to run the ALS scan to try and infer something about the absorption-induced thermal lens. At this point, the costs probably outweigh the benefits of trying to bring in the SRC as well, since we will be changing the SRC config?
In ELOG 15368, I had claimed that the POP QPD based feedback servo actuating on the PRM stabilized the lock. I now believe this scheme of sensing using the POP QPD and feeding back to the PRM is not a good topology for stabilizing the PRC angular motion.
I would also like to bring up the topic of implementing some WFS for the interferometer fields again; there doesn't seem to be any mention of this in the procurement/planning for the BHD. It is not obvious to me yet that we need WFS and not just DC QPDs from a noise point of view, but at least we should discuss this.
Tip Seals were replaced on the forepumps for TP2 and TP3, and both are ready to be installed back onto the forelines.
TP2 Forepump Ultimate Pressure: 180 mtorr
TP3 Forepump Ultimate Pressure: 120 mtorr