Following instructions here, I installed ndscope on this machine. DTT still could not be run from this machine, and I want to use it today - so I ran the following commands from the K. Thorne setup instructions.
yum clean metadata
yum install cds-workstation pcaspy subversion redhat-lsb gnuradio google-chrome-stable xorg-x11-drv-nvidia epel-release redhat-lsb
Now diaggui can be opened, and spectra can be made. I'm moving this laptop to its new home at EY.
Milind pointed out that all boxes on the medm screens were white. I didn't have diagnostics from the medm screens, so I started following the troubleshooting steps on the restart procedures page.
It seemed like maybe a front-end problem. I tried telnet-ing into several of the front ends, and wasn't able to access c1psl. The section on c1psl mentions that if this machine crashes, the screens will go white and the crate needs to be turned off and on. Milind did this.
Now, most of the status lights are restored (screenshot).
Milind: I did a burtrestore following this and locked the PMC following the steps described in this elog.
Kruthi noticed that she could not login to rossa from giada.
I checked /etc/resolv.conf and found that it referred to localhost (i.e. giada itself) as the nameserver, which is obviously useless.
I copied our usual resolv.conf to giada as follows:
Giada's ssh known_hosts file had a stale entry for rossa, so I had to clean it up, but after that we can connect to rossa from giada just with "ssh rossa".
I think rana made some more changes to this workstation to make it useful for commissioning activities - but the MEDM screens were still white blanks. The problem was that firewalld wasn't disabled (the last two steps of the KThorne setup wiki). I disabled it. Now donatella can run MEDM, ndscope and StripTool. DTT doesn't work for getting online data because of a "Synchronization Error"; I'm not bothering with this for now. I think Kruthi successfully demonstrated the fetching of offline data with DTT.
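For reference, a minimal sketch of the firewall-disabling step, assuming the stock systemd firewalld service (the KThorne wiki may list slightly different commands):
sudo systemctl stop firewalld      # stop the running firewall
sudo systemctl disable firewalld   # keep it from starting on the next boot
sudo systemctl status firewalld    # should now report inactive (dead)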
I brought one CPU (Dell T3500) and one 28" monitor from Mike Pedraza's office in Downs to the 40m. It is on Steve's desk right now, pending setup. The machine already has Solidworks and Altium installed on it, so we can set it up at our leisure. The login credentials are pasted on the CPU with a post-it should anyone wish to set it up.
Chub brought the replacement Supermicro we ordered to the 40m today. I stored it at the SW entrance to the VEA, along with the other Supermicro. At the time of writing, we have, in hand, two (unused) Supermicro machines. One is meant for EY and the other is meant for c1psl/c1iool0. DDR3 RAM and 120 GB SSD drives have also been ordered, but have not yet arrived (I think, Chub, please correct me if I'm wrong).
Update 20190802: The DDR3 RAM and 120 GB SSD drives arrived, and are stored in the FE hardware cabinet along the east arm. So at the time of writing, we have 2 sets of (Supermicro + 120GB HD + 4GB RAM).
We should ask Chub to reorder several more SuperMicro rackmount machines, SSD drives, and DRAM cards. Gautam has the list of parts from Johannes' last order.
The VEA laptop asia was configured to be able to connect to too many WiFi networks - it was getting conflicted in its default position at the vertex and trying to hop between networks, for some reason trying to connect to networks that had poor signal strength. I deleted all options from the known networks except 40MARS. Now the network connection seems much more stable and reliable.
cdsutils is not working on rossa.
Importing cdsutils produces this error:
In [2]: import cdsutils
OSError                                   Traceback (most recent call last)
<ipython-input-2-949babce8459> in <module>()
----> 1 import cdsutils

/ligo/apps/linux-x86_64/cdsutils-480/lib/python2.7/site-packages/cdsutils/__init__.py in <module>()
---> 55     import awg
     56 except ImportError:

/ligo/apps/linux-x86_64/cdsutils-480/lib/python2.7/site-packages/cdsutils/awg.py in <module>()
---> 32 import sys, numpy, awgbase
     33 from time import sleep
     34 from threading import Thread, Event, Lock

/ligo/apps/linux-x86_64/cdsutils-480/lib/python2.7/site-packages/cdsutils/awgbase.py in <module>()
     17 libawg = CDLL('libawg.so')
     18 libtestpoint = CDLL('libtestpoint.so')
---> 19 libSIStr = CDLL('libSIStr.so')

/ligo/apps/anaconda/lib/python2.7/ctypes/__init__.pyc in __init__(self, name, mode, handle, use_errno, use_last_error)
    365     if handle is None:
--> 366         self._handle = _dlopen(self._name, mode)
    368         self._handle = handle

OSError: libSIStr.so: cannot open shared object file: No such file or directory
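A hedged sketch of the kind of fix this error usually calls for, assuming libSIStr.so actually exists somewhere on the machine (the paths below are placeholders, not verified on rossa):
find /ligo /usr/lib -name 'libSIStr.so*' 2>/dev/null                        # locate the library, if it is installed at all
export LD_LIBRARY_PATH=/path/to/dir/containing/libSIStr:$LD_LIBRARY_PATH    # placeholder directory
python -c 'import cdsutils'                                                 # re-test the import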
Please, no one touch the UPS: last time it destroyed ROSSA. Please ask Chub to order the replacement batteries so we can do this in a controlled way (fully shutting down ALL workstations first). Last time we wasted 8 hours rebuilding ROSSA.
There was an alarm sound from the Smart-UPS 2200 sitting under the workstation. I see that the 'replace battery' light is red, and this elog tells me that these batteries are replaced every ~1-4 years; the last replacement was March 2016. Holding down the 'test' button for 2-3 seconds results in the alarm sound and does not clear the replace battery indicator.
Chub wanted to get the correct part number for the replacement UPS batteries, which necessitated opening up the UPS. To be cautious, all the workstations were shut down at ~9:30 am while the unit was pulled out and inspected. While looking at the UPS, we found that the insulation on the main power cord is damaged at both ends. Chub will post photos.
However, despite these precautions, rossa reports some error on boot up (not the same xdisp junk that happened before). pianosa and donatella came back up just fine. rossa is remotely accessible (ssh-able), though, so maybe we can recover it...
I booted Rossa in rescue mode; though I see no errors on bootup, I still see the same error ("a problem has occurred") after boot, and a prompt to logout. I powered rossa off/on (single short press of power button), no change.
Booting in debug mode, I see that the error occurs when mounting /cvs/cds, with the error
This is odd, because when I boot in recovery mode, it mounts /cvs/cds successfully.
I booted in emergency mode by adding to the boot command
but didn't have the appropriate root password to troubleshoot further (the usual two didn't work).
Formatted and re-installed the OS on rossa for the 3rd or 4th time this year. I suggest that whoever is installing software and adjusting video settings please stop.
If you feel you need to tinker deeply, use ottavia or zita and then be prepared to show up and fix it.
While I was moving the UPS around, the network lights went out for Rossa, so I may have damaged the network interface or cable. Debugging continues.
Got the network to work again just by unplugging the power cord and letting it sit for a while. But then the OS got corrupted by an attempt to install Nvidia drivers.
Back on the new Rossa from Xi Computing.
Update: Sun Nov 3 18:08:48 2019
Update: Fri Nov 15 00:00:26 2019:
And so it begins... Until this is finished, I have turned off the projector and moved the StripTools to the big TV (time to look for Black Friday deals to replace the projector with a 120-inch LED TV).
The upgrade seems to have been successfully executed - the machine was restarted at ~4:30 pm local time. The projector remains off and the diagnostic StripTools are on the Samsung.
The IBM laptop at EX was running Ubuntu 14, so I allowed it to start upgrading itself to Ubuntu 16 as it desired. After it is done, I will upgrade it to 18.04 LTS. We should have them all run LTS.
I noticed recently that Megatron was running Ubuntu 12, so I've started its OS upgrade.
Megatron and IMC autolocking will be down for awhile, so we should use a different 'script' computer this week.
Mon Dec 9 14:52:58 2019
upgrade to Ubuntu 14 complete; now upgrading to 16
Megatron is now running Ubuntu 18.04 LTS.
We should probably be able to load all the LSC software on there by adding the appropriate Debian repos.
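As a hedged sketch of what that might look like (the repo line is the one quoted later in this log for Debian stretch; the release name would need to match megatron's actual base, and the archive signing key also has to be installed):
echo "deb http://software.ligo.org/lscsoft/debian/ stretch contrib" | sudo tee /etc/apt/sources.list.d/lscsoft.list
sudo apt update
sudo apt install cds-workstation   # the meta-package mentioned elsewhere in this log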
I have re-enabled the cron jobs in the crontab.
The MC Autolocker and the PSL NPRO Slow/Temperature control are run using 'initctl', so I'll leave that up to Shruti to run/test.
Wiped and installed Debian 10 on rossa today.
Still to be done: configure it as a CDS workstation.
Please don't try to "fix" it in the meantime.
Upgrade was done.
Cron jobs weren't tested one by one 😢.
Burt snapshots were gone.
I brought them back home 🏠.
The burt snapshotting is still not so reliable - for whatever reason, the number of snapshot files that actually get written looks random. For example, the 14:19 backup today got all the snaps, but 15:19 did not. There are no obvious red flags in either the cron job logs or the autoburt log files. I also don't see any clues when I run the script in a shell. It'll be good if someone can take a look at this.
I've also been noticing that the IMC Autolocker scripts are running rather sluggishly on Megatron recently. Some evidence - on Feb 11 2019, the time between the mcup script starting and finishing is ~10 seconds (I don't post the raw log output here to keep the elog short). However, post upgrade, the mean time is more like ~45-50 seconds. Rana mentioned he didn't install any of the modern LIGO software tools post upgrade, so maybe we are using some ancient EPICS binaries. I suspect the cron job for the burt snapshot is also just timing out due to the high latency in channel access. Rana is doing the software install on the new rossa, and once he verifies things are working, we will try implementing the same solution on megatron. The machine is an old Sun Microsystems one, but the system diagnostics don't signal any CPU timeouts or memory overflows, so I'm thinking the problem is software related...
There were a bunch of medm processes stalled on megatron (connected with screenshot taking). To see if they were interfering with the other scripts, I killed all of the medm processes, and commented out the line in the crontab that runs the screenshots every 10 mins. Let's see if this improves stability.
We found that the caput commands were taking much longer to execute on megatron than on pianosa (for example). Suspecting that this had something to do with the fact that megatron was using EPICS binaries from the shared NFS drive which were compiled for a much older OS, I installed the latest stable release of EPICS on megatron. The new caput commands execute much faster. I also added the local EPICS directory to the head of the $PATH variable used by the MC autolocker and FSS Slow scripts, so that they use the new caput command. But mcup is still slow - maybe my new path definition isn't picked up and it is still using the NFS binaries? To be looked into...
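For the record, the PATH change amounts to something like the following, assuming the new EPICS landed under /usr/local/epics (placeholder path; adjust to wherever it was actually installed):
export PATH=/usr/local/epics/base/bin/linux-x86_64:$PATH   # put the local EPICS binaries ahead of the NFS ones
which caput                                                # should now point at the local install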
Allegra had no network cable and no mouse. We found Allegra's network cable (black) and connected it.
I found a dirty old school mouse and connected it.
I wiped Allegra and I'm currently installing Debian 10 on it following Jon's elog.
04/01 update: I forgot to mention that I tried installing cds software by following Jamie's instructions: I added the line in /etc/apt/sources.list.d/lscsoft.list: "deb http://software.ligo.org/lscsoft/debian/ stretch contrib". But this is the only thing I managed to do. The next command in the instructions failed.
deb http://software.ligo.org/lscsoft/debian/ stretch contrib
Today I finished implementing loopback monitors of the up/down state of the slow controls machines. They are visible on a new MEDM screen accessible from Sitemap > CDS > Slow Machine Status (pictured in attachment 1). Each monitor is a single EPICS binary channel hosted by the slow machine, which toggles its state at 1 Hz (an alive "blinker"). For each machine, the monitor is defined by a separate database file named c1[machine]_state.db located in the target directory.
This is implemented for all upgraded machines, which includes every slow machine except for c1auxey. This is the next and final one slated for replacement.
The blinkers are currently implemented as soft channels, but I'd like to ultimately convert them to hard channels using two sinking/sourcing BIO units. This will require new wiring inside each Acromag chassis, however. For now, as soft channels, the monitors are sensitive to a failure of the host machine or a failure of the EPICS IOC. As hard channels, they will additionally be sensitive to a failure of the secondary network interface, as has been known to happen.
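A quick way to spot-check a blinker from a workstation, sketched here with a placeholder channel name (the real names live in the c1[machine]_state.db files):
for i in 1 2 3 4; do
    caget -t C1:XXX_STATE_BLINK   # placeholder; substitute the real blinker channel for the machine of interest
    sleep 1                       # the value should alternate 0/1 on successive reads if host and IOC are alive
done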
Each slow machine's IOC had to be restarted this afternoon to pick up the new channels. The IOCs were restarted according to the following procedure.
The initial recovery of c1susaux did not succeed. Most visibly, the alignment state of the IFO was not restored. After some debugging, we found that the restart of the modbus service was partially failing at the final burt-restore stage. The latest snapshot file /opt/rtcds/caltech/c1/burt/autoburt/latest/c1susaux.snap was not found. I manually restored a known good snapshot from earlier in the day (15:19) and we were able to relock the IMC and XARM. GV and I were just talking earlier today about eliminating these burt-restores from the systemd files. I think we should.
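For completeness, the manual recovery amounted to something like the sketch below; the systemd unit name is an assumption on my part, and the snapshot path is a placeholder for the known good 15:19 file:
sudo systemctl restart modbusIOC.service        # assumed name of the EPICS/modbus IOC unit on c1susaux
burtwb -f /path/to/known_good/c1susaux.snap     # placeholder path; restore the good snapshot by hand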
In an effort to make a second usable workstation, I did the following (remotely) on rossa today (not necessarily in this order; I wasn't keeping a live log, so I've forgotten the exact sequence):
So, in summary, rossa is now all set up for use during lock acquisition. However, until this machine has undergone a few months of testing, we should freeze the pianosa config and not mess with it.
Note that this version of the "crtools" is rather new. Please use them, and if there is an issue, report the errors! I am going to occasionally try lock acquisition using rossa.
Maybe we should make a "dd" copy of pianosa's disk, in case rossa has issues and someone destroys pianosa by accidentally spilling coffee on it.
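If we do go down that road, a hedged sketch of a full-disk clone (device names are placeholders; check lsblk first, since dd will happily overwrite the wrong disk):
lsblk                                                                 # identify the source and destination disks
sudo dd if=/dev/sdX of=/dev/sdY bs=64M status=progress conv=fsync     # placeholder device names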
sudo usermod -a -G lpadmin controls
and then I was able to add Grazia to the list of printers for Rossa by following the instructions on the 40m Wiki.
I installed color syntax highlighting on Rossa using the internet (https://superuser.com/questions/71588/how-to-syntax-highlight-via-less). Now if you do 'less genius_code.py', it will highlight the Python syntax.
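For the record, the setup from that thread boils down to something like the following (the highlighting script path varies by distro, so locate it first):
sudo apt install source-highlight                                             # provides the highlighting filter
export LESSOPEN="| /usr/share/source-highlight/src-hilite-lesspipe.sh %s"     # path may differ; check with dpkg -L source-highlight
export LESS=' -R '                                                            # let less pass the color escape codes through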
when I try 'sitemap' on rossa I get:
medm: error while loading shared libraries: libreadline.so.6: cannot open shared object file: No such file or directory
This is strange - I was definitely able to launch medm when I was working on this machine remotely on Friday. But now, there does seem to be a problem with this shared library being missing.
First of all, I installed mlocate to find where the shared library files are installed. Then I made the symlink, and now sitemap seems to work again.
Weirdly, my changes to /etc/resolv.conf got overwritten somehow. Was this machine rebooted? Uptime suggests it's only been running for ~6 hours at the time of writing of this elog.
sudo apt install mlocate
sudo ln -s /usr/lib/x86_64-linux-gnu/libreadline.so.7 /usr/lib/x86_64-linux-gnu/libreadline.so.6
Yes, I rebooted yesterday to fix the 'streaking white lines' problem in the video/display.
Maybe we're supposed to edit something besides resolv.conf, since that gets overwritten on boot on some Linux OSes.
Indeed, this is now fixed by following instructions from here. I rebooted rossa at ~1250 PDT and confirmed that resolv.conf didn't get overwritten. The resolv.conf file also now has the following useful lines at the head:
# Dynamic resolv.conf(5) file for glibc resolver(3) generated by resolvconf(8)
# DO NOT EDIT THIS FILE BY HAND -- YOUR CHANGES WILL BE OVERWRITTEN
I wanted to try using rossa as my locking workstation today. However, a few problems became quickly evident. Basically, any of our scripts that rely on the cdsutils package (there are MANY) will not work on rossa, because of some library error. This machine is running Debian 10, while the cdsutils package is being loaded from a pre-compiled install on the shared drive, so perhaps this isn't surprising?
Digging a little more, I found that a version of cdsutils that works with python3 is shipped with the standard cds-workstation meta-package. This is great news, and we should use it where possible. Deferring further debugging for daytime work.
Anyway, I added a symlink: sudo ln -s /usr/lib/x86_64-linux-gnu/libncurses.so.6 /usr/lib/x86_64-linux-gnu/libncurses.so.5, and installed wmctrl using sudo apt install wmctrl.
I noticed these streaky lines again today (but they were not a problem last night). It is annoying if we have to reboot this machine all the time. I wonder if this has something to do with missing drivers. When I ran sudo apt update && sudo apt upgrade, I got several warnings like these (this isn't the full list):
W: Possible missing firmware /lib/firmware/nvidia/gp108/acr/ucode_unload.bin for module nouveau
W: Possible missing firmware /lib/firmware/nvidia/gp108/acr/ucode_load.bin for module nouveau
W: Possible missing firmware /lib/firmware/nvidia/gp108/acr/unload_bl.bin for module nouveau
W: Possible missing firmware /lib/firmware/nvidia/gp108/acr/bl.bin for module nouveau
Is this indicative of the graphics drivers being installed incorrectly? I am hesitant to mess with this because, in the past, it was always an attempt to update some graphics driver that crashed the whole machine into some weird state from which we had to wipe the drive and do a fresh re-install of the OS.
Should we just follow these instructions? The graphics card is apparently Quadro P400, which is one of the supported ones according to the list of supported devices.
Or just swap donatella and rossa monitors and defer the problem for later?
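If we do decide to chase the nouveau warnings rather than install the proprietary driver, a hedged sketch of the usual Debian fix (assumes contrib/non-free are enabled in /etc/apt/sources.list; the package name is the stock Debian one, not verified against this exact card):
sudo apt update
sudo apt install firmware-misc-nonfree   # ships the nvidia/gp108 firmware blobs for nouveau
sudo update-initramfs -u                 # rebuild the initramfs so the firmware is picked up at boot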
After some consultation with Erik von Reis at LHO, this workstation is progressing towards being usable for most commissioning tasks. DTT, awggui, foton, and MEDM are all now working well. The main limitation now comes from the fact that many of our python scripts are written for python2, and rossa doesn't have many dependencies installed for python2. I see no reason to build these dependencies on rossa for python2, we should not have to work with an unsupported language. But at the same time, I don't want to completely wipe all our python2 scripts, and make them python3, because this would involve a lot of tedious testing that I'm not prepared to undertake at the moment (the problem is compounded by the fact that pianosa does not have many dependencies installed for python3).
So what I have done in the interim is make python3 versions of the most important scripts I need to get the PRFPMI locking working - they are in the scripts directory and have the same names as their python2 counterparts, but with a 3 appended to their names. So when working on rossa, these are the scripts that are called. Eventually, after a lot more testing, we can deprecate the old scripts. Currently, where applicable, the MEDM screens allow for either the python2 or python3 version of the script to be called.
Please, for the time being, do not try and install any new packages on rossa unless you are prepared to debug any problems caused and return the machine to a workable state. If you find some issue with a missing package on rossa, (i) make a note of it on the elog, and (ii) if possible, set up your own conda environment for testing and install dependencies to that environment only.
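As a concrete (hedged) example of the kind of sandboxed testing meant above, with an arbitrary environment name and a hypothetical package:
conda create -n scratch python=3    # throwaway environment, does not touch the system python
conda activate scratch
pip install somepackage             # hypothetical dependency; stays inside the environment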
I, too, would prefer py3 for everything, but aren't all the cdsutils / guardian things still python2?
Is it possible to just make a python2 conda environment on rossa? I would guess that it's simple and won't interfere with the regular operation of that machine.
In fact, all these utilities are now available in python3. There may be some bugs (e.g. this), but I've checked basic functionality and things look usable enough for development to proceed. While we can have a python2 env on rossa, I think it's unnecessary.
I installed QTgrace using yum on donatella. Both Grace and XMgrace are broken due to some boring fight between the Fedora package maintainers and the (non-existent) Grace support team. So I have symlinked it:
controls@donatella|bin> sudo mv xmgrace xmgrace_bak
controls@donatella|bin> sudo ln -s qtgrace xmgrace
I checked that dataviewer now works for realtime and playback, although the middle-click paste on the mouse doesn't work yet.
Activated MATLAB license on megatron
Activated MATLAB license on donatella
So actually, it was the C1PSL channels that had died. We did the following to get them back:
I wanted to try hosting some docker images on a "private" server, so I installed Docker on nodus following the instructions here. The install seems to have succeeded, and as far as I can tell, none of the functionality of nodus has been disturbed (I can ssh in, access shared drive, elog seems to work fine etc). But if you find a problem, maybe this action is responsible. Note that nodus is running Scientific Linux 7.3 (Nitrogen).
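For anyone wanting to verify the install without disturbing anything, a minimal sanity check (standard Docker smoke test, nothing 40m-specific):
sudo docker run hello-world    # pulls a tiny test image and prints a confirmation message
sudo docker ps -a              # list containers, confirm the daemon responds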
At 09:34 PST I noted a glitch in the control room as the machines went down, except for c1ioo. Briefly, the video feeds disappeared from the screens, though the screens themselves didn't lose power. At first I thought this was some kind of power glitch, but upon checking with Jordan, it was most likely related to some system crash. Coming back to the control room, I could see the MC reflection beam swinging, but unfortunately all the FE models came down. I noticed that the DAQ status channels were blank.
I ssh'd into c1ioo with no problem and ran "rtcds stop c1ioo c1als c1omc", then "rtcds restart c1x03" to do a soft restart. This worked, but the DAQ status was still blank. I then tried to ssh into c1sus and c1lsc without success; similarly, c1iscex and c1iscey were unreachable. I went and did a hard restart on c1iscex by switching it off, then its extension chassis, then unplugging the power cords, then inverting these steps, and could then ssh into it from rossa. I ran "rtcds start c1x01" and saw the same blank DAQ status. I noticed the elog was also down... so nodus was also affected?
Anchal got on zoom to offer some assistance. We discovered that the fb1 and nodus were subject to some kind of system reboot at precisely 09:34. The "systemctl --failed" command on fb1 displayed both the daqd_dc.service and rc-local.service as loaded but failed (inactive). Is it a good idea to try and reboot the fb1 machine? ... Anchal was able to bring elog back up from nodus (ergo, this post).
Although it probably needs the DAQ service from the fb1 machine to be up and running, I tried running the scripts/cds/rebootC1LSC.sh script. This didn't work. I tried running sudo systemctl restart daqd_dc from the fb1 machine without success. Running systemctl reset-failed "worked" for the daqd_dc and rc-local services on fb1 in the sense that they were no longer output by systemctl --failed, but they remained inactive (dead) when running systemctl status on them. Following 15303, I succeeded in restarting the daqd services. It turned out I needed to manually start the open-mx and mx services on fb1. I re-ran the rebootC1LSC.sh script without success. The script fails because some machines need to be rebooted by hand.
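To summarize the fb1 part of the recovery as a hedged sketch (unit names as referred to above; the exact names on fb1 may differ slightly):
sudo systemctl start open-mx mx    # the network services that daqd depends on
sudo systemctl restart daqd_dc     # then restart the data concentrator
sudo systemctl status daqd_dc      # confirm it stays active this time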
[paco, koji, tega, ian]
Today in the morning the name server / network file system running on chiara failed. This caused the donatella/pianosa/rossa shell prompts to hang forever. It also made sitemap crash, and even dropping into a bash shell and just listing files from some directory in the file system froze the computer. Remote ssh sessions on nodus also showed the same symptoms.
A little after 1 pm, we started debugging this issue with help from Koji. He suggested we hook a monitor, keyboard, and mouse up to chiara, as it should still work locally even if something with the NFS (network file system) failed. We did this, and then tried for a while to unmount /dev/sdc1 from /home/cds (main file system) and mount /dev/sdb1 from /media/40mBackup (backup copy) so that they swap places. We had no trouble unmounting the backup drive, but only succeeded in unmounting the main drive with a "lazy" unmount ("umount -l"). Running "df" we could see that the disk space was 100% used, with only ~1 GB free, which may have been the cause of the issue. After editing /etc/fstab to implement the swap, we rebooted chiara and recovered the shell prompts on all workstations, sitemap, etc., since the backup drive was now mounted. We then started investigating what caused the main drive to fill up so quickly, and noted that, weirdly, the usage was now at 85% (after reboot and remount), about 500 GB less than before - so some large file was probably dumped onto chiara, which froze the NFS and caused the issue.
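The swap itself boils down to something like the following sketch (device names and mount points as given above; in practice /etc/fstab was edited by hand, and UUIDs would be safer than raw device names):
sudo umount /media/40mBackup    # the backup drive unmounted cleanly
sudo umount -l /home/cds        # the main drive only let go with a lazy unmount
# edit /etc/fstab so /dev/sdb1 mounts at /home/cds and /dev/sdc1 at /media/40mBackup, then:
sudo reboot
df -h /home/cds                 # after reboot, check usage on the newly mounted drive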
At this point we tried opening the PSL shutter to recover the IMC. The shutter would not open and we suspected the vacuum interlock was still tripped... and indeed there was an uncleared error in the VAC screen. So with Koji's guidance we walked to the c1vac near the HV station and did the following at ~ 5:13 PM -->
We made sure that P1a (main vacuum pressure) was dropping and before continuing we decided to look back to see what the nominal vacuum state was that we should try to restore.
We are currently searching the two systems for differences to see if we can narrow down the culprit of the failure.
We are pumping the main volume with TP2. Once P1a reached ~2.2 mtorr, we could open the PSL shutter. The TP2 voltage went up once but came down to ~20 V. It's close to nominal now.
We wondered if we should use TP3 or not. I checked the vacuum pressure trends and found that the annulus pressures were going up. So we decided to open the annulus valves.
The current vacuum status is as shown in the MEDM screenshot.
There is no trend data of the valve status (sad)
We found the files that took up the excess space in the chiara filesystem (see Attachment 1). They were error files from the summary pages, ~50 GB in size or so, located under /home/cds/caltech/users/public_html/detcharsummary/logs/. We manually removed them and then copied the rest of the summary page contents onto the main file system drive (this is to preserve the backup before it gets deleted by the cron job at the end of today), and checked carefully to identify why these files were so large in the first place.
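For reference, the kind of check that turns these up (paths as given above):
du -h --max-depth=1 /home/cds/caltech/users/public_html/detcharsummary/ | sort -rh | head   # which subdirectory is eating the space
ls -lhS /home/cds/caltech/users/public_html/detcharsummary/logs/ | head                     # largest files in the logs directory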
We then copied the /detcharsummary directory from /media/40mBackup into /home/cds to match the two disks.