40m Log, Page 95 of 344
ID   Date   Author   Type   Category   Subject
  13461   Sun Dec 3 05:25:59 2017   gautam   Configuration   Computers   sendmail installed on nodus

Pizza mail didn't go out last weekend - looking at logfile, it seems like the "sendmail" service was missing. I installed sendmail following the instructions here: https://tecadmin.net/install-sendmail-server-on-centos-rhel-server/

Except that to start the sendmail service, I used systemctl and not init.d. i.e. I ran systemctl start sendmail.service (as root). Test email to myself works. Let's see if it works this weekend. Of course this isn't so critical, more important are the maintenance emails that may need to go out (e.g. disk usage alert on chiara / N2 pressure check, which looks like nodus' responsibilities). 
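For reference, the sequence amounts to something like the following (a sketch based on the linked instructions, not a verbatim record of what was run; the test address is a placeholder):

sudo yum install sendmail sendmail-cf mailx
sudo systemctl start sendmail.service
sudo systemctl enable sendmail.service    # so it comes back after a reboot
echo "sendmail test from nodus" | mail -s "nodus mail test" controls@example.edu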

  13462   Sun Dec 3 17:01:08 2017   Koji   Configuration   Computers   sendmail installed on nodus

An email came at 5 PM on Dec 3rd.

 

  13463   Mon Dec 4 22:06:07 2017   johannes   Omnistructure   Computers   Acromag XEND progress

I wired up the power distribution and Ethernet cables in the Acromag chassis today. For the time being it's all kind of loose in there, but tomorrow the last parts should arrive from McMaster to put everything in its place. I had to unplug some of the wiring that Aaron had already done, but I labeled everything before I did so. I finalized the IP configuration via USB for all the units, which are now powered through the chassis and active on the network.

I started transcribing the database file ETMXaux.db that is loaded by c1auxex in the format required by the Acromags and made sure that the new c1auxex2 properly functions as a server, which it does.
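For illustration, a record in the Acromag/modbus format looks roughly like the following. This is a made-up channel; the asyn port name "XT1221A" and the register offset are placeholders, not lines from the actual ETMXaux.db:

record(ai, "C1:ASC-EXAMPLE_MON")
{
    # DTYP/INP tie the record to the modbus asyn port configured in the IOC startup file
    field(DTYP, "asynInt32")
    field(INP,  "@asyn(XT1221A 0)MODBUS_DATA")
    field(SCAN, ".1 second")
    field(EGU,  "V")
}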

ToDo-list:

  • Need to calibrate the +/- 10V swing of the analog channels via the USB utility, but that requires wiring the channels to the connectors and should probably be done once the unit sits in the rack
  • Need to wire power from the Sorensens into the chassis. There are +/- 5V, +/- 15V and +/- 20V present. The Acromags need only +12V-32V, for which I plan to use the +20V, and an excitation voltage for the binary channels, for which I'm going to wire the +5V. Should do this through the fuse rails on the side.
  • The current slow binary channels are sinking outputs, same as the XT1111 16-channel module we have. The additional 4 binary outputs of the XT1541 are sourcing, and I'm currently not sure if we can use them with the sos driver and whitening vme boards that get their binary control signals from the slow system.
  • Confirm switching of binary channels (haven't used model XT1111 before, but I assume the definitions are identical to XT1121)
  • Setup remaining essential EPICS channels and confirm that dimensions are the same (as in both give the same voltage for the same requested value)
  • Disconnect DIN cables, attach adapter boards + DSUB cables
  • Testing

 

Quote:

[Aaron, Johannes]

We configured the AtomServer for the Martian network today. Hostname is c1auxex2, IP is 192.168.113.49. Remote access over SSH is enabled.

There will be 6 acromag units served by c1auxex2.

Hostname Type IP Address
c1auxex-xt1221a 1221 192.168.113.130
c1auxex-xt1221b 1221 192.168.113.131
c1auxex-xt1221c 1221 192.168.113.132
c1auxex-xt1541a 1541 192.168.113.133
c1auxex-xt1541b 1541 192.168.113.134
c1auxex-xt1111a 1111 192.168.113.135

Some hardware to assemble the Acromag box and adapter PCBs are still missing, and the wiring and channel definitions have to be finalized. The port driver initialization instructions and channel definitions are currently locally stored in /home/controls/modbusIOC/ but will eventually be migrated to a shared location, but we need to decide how exactly we want to set up this infrastructure.

  • Should the new machines have the same hostnames as the ones they're replacing? For the transition we simply named it c1auxex2.
  • Because the communication of the server machine with the DAQ modules is happening over TCP/IP and not some VME backplane bus we could consolidate machines, particularly in the vertex area.
  • It would be good to use the fact that these SuperMicro servers have 2+ ethernet ports to separate CDS EPICS traffic from the modbus traffic. That would also keep the 30+ IPs for the Acromag thingies off the Martian host tables.
  13468   Thu Dec 7 22:24:04 2017   johannes   Omnistructure   Computers   Acromag XEND progress

 

Quote:
 
  • Need to calibrate the +/- 10V swing of the analog channels via the USB utility, but that requires wiring the channels to the connectors and should probably be done once the unit sits in the rack
  • Need to wire power from the Sorensens into the chassis. There are +/- 5V, +/- 15V and +/- 20V present. The Acromags need only +12V-32V, for which I plan to use the +20V, and an excitation voltage for the binary channels, for which I'm going to wire the +5V. Should do this through the fuse rails on the side.
  • The current slow binary channels are sinking outputs, same as the XT1111 16-channel module we have. The additional 4 binary outputs of the XT1541 are sourcing, and I'm currently not sure if we can use them with the sos driver and whitening vme boards that get their binary control signals from the slow system.
  • Confirm switching of binary channels (haven't used model XT1111 before, but I assume the definitions are identical to XT1121)
  • Setup remaining essential EPICS channels and confirm that dimensions are the same (as in both give the same voltage for the same requested value)
  • Disconnect DIN cables, attach adapter boards + DSUB cables
  • Testing

Getting the chassis ready took a little longer than anticipated, mostly because I had not looked into the channel list myself before and forgot about Lydia's post which mentions that some of the switching controls have to be moved from the fast to the slow DAQ. We would need a total of 5+5+4+8=22 binary outputs. With the existing Acromag units we have 16 sinking outputs and 8 sourcing outputs. I looked through all the Eurocrate modules and confirmed that they all use the same switch topology which has sourcing inputs.

While one can use a pull-down resistor to control a sourcing input with a sourcing output, pulling down the MAX333A input (datasheet says logic low is <0.8V) requires something like 100 Ohms for the pull-down resistor, which would require ~150mA of current PER CHANNEL, which is unreasonable. Instead, I asked Steve to buy a second XT1111 and modified the chassis to accommodate more Acromag units.

I have now finished wiring the chassis (except for 8 remaining bypass controls to the whitening board which need the second XT1111), calibrated all channels in use, confirmed all pin locations via the existing breakout boards and DCC drawings for the eurocrate modules, and today Steve and I added more fuses to the DIN rail power distribution for +20V and +15V.

There was not enough contiguous free space in the XEND rack to mount the chassis, so for now I placed it next to it.

c1auxex2 is currently hosting all original physical c1auxex channels (not yet calc records) under their original name with an _XT added at the end to avoid duplicate channel names. c1auxex is still in control of ETMX. All EPICS channels hosted by c1auxex2 are in dimensions of Volts. The plan for tomorrow is to take c1auxex off the grid, rename the c1auxex2 hosted channels and transfer ETMX controls to it, provided we can find enough 37pin DSub cables (8). I made 5 adapter boards for the 5 Eurocrate modules that need to talk to the slow DAQ through their backplane connector.

  13469   Fri Dec 8 12:06:59 2017   johannes   Omnistructure   Computers   c1auxex2 ready - but need more cables

The new slow machine c1auxex2 is ready to deploy. Unfortunately we don't have enough 37pin DSub cables to connect all channels. In fact, we need a total of 8, and I found only three male-male cables and one gender changer. I asked Steve to buy more.

Over the past week I have transferred all EPICS records - soft channels and physical ones - from c1auxex to c1auxex2, making changes where needed. Today I started the in-situ testing:

  1. Unplugged ETMX's satellite box
  2. Unplugged the eurocrate backplane DIN cables from the SOS Driver and QPD Whitening filter modules (the ones that receive ao channels)
  3. Measured output voltages on the relevant pins for comparison after the swap
  4. Turned off c1auxex by key, removed ethernet cable
  5. Started the modbus ioc on c1auxex2
  6. Slow machine indicator channels came online, ETMX Watchdog was responsive (but didn't have anything to do due to missing inputs) and reporting. PIT/YAW sliders function as expected
  7. Restoring the previous settings gives output voltages close to the previous values, in fact the exact values requested (due to fresh calibration)
  8. Last step is to go live with c1auxex2 and confirm the remaining channels work as expected.

I copied the relevant files to start the modbus server to /cvs/cds/caltech/target/c1auxex2, although I kept local copies in /home/controls/modbusIOC/, from which they're still run.

I wonder what's the best practice for this. Probably to store the database files centrally and load them over the network on server start?
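One way to do that (a sketch only; the procServ port and startup-file name below are placeholders, not the contents of an actual service file) would be a systemd unit that waits for the NFS mount and runs the IOC out of the target directory:

[Unit]
Description=Modbus/EPICS IOC for c1auxex slow controls
After=network.target remote-fs.target

[Service]
User=controls
WorkingDirectory=/cvs/cds/caltech/target/c1auxex2
ExecStart=/usr/bin/procServ --foreground 8008 ./ETMXaux_example.cmd
Restart=on-failure

[Install]
WantedBy=multi-user.target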

  13487   Mon Dec 18 17:48:09 2017   rana   Update   Computers   rossa: SL7.3 upgrade continues

Following instructions from LLO-CDS for the rossa upgrade. Last time there were some issues with not being able to access the LLO EPEL repos, but this time it seems to be working fine.

After adding font aliases, need to run 'sudo xset fp rehash' to get the new aliases to take hold. Afterwards, am able to use MEDM and sitemap just fine.

But diaggui won't run because of a lib-sasl error. Try 'sudo yum install gds-all'.

diaggui: error while loading shared libraries: libsasl2.so.2: cannot open shared object file: No such file or directory (have contacted LLO CDS admins)

X-windows keeps crashing with SL7 and this big monitor. Followed instructions on the internet to remove the generic 'Nouveau' driver and install the proprietary NVIDIA drivers by dropping to run level 3 and running some command line hoodoo to modify the X-files. Now I can even put the mouse on the left side of the screen and it doesn't crash.

  13504   Fri Jan 5 17:50:47 2018   rana   Configuration   Computers   motif on nodus

I had to do 'sudo yum install motif' on nodus so that we could get libXm.so.4 so that we could run MEDM. Works now.

  13539   Fri Jan 12 12:31:04 2018   gautam   Configuration   Computers   sendmail troubles on nodus

I'm having trouble getting the sendmail service going on nodus since the Christmas day power failure - for some reason, it seems like the mail server that sendmail uses to send out emails on nodus (mx1.caltech.iphmx.com, IP=68.232.148.132) is on a blacklist! Not sure how exactly to go about remedying this.

Running sudo systemctl status sendmail.service -l also shows a bunch of suspicious lines:

Jan 12 10:15:27 nodus.ligo.caltech.edu sendmail[6958]: STARTTLS=client, relay=cluster6a.us.messagelabs.com., version=TLSv1/SSLv3, verify=FAIL, cipher=DHE-RSA-AES256-GCM-SHA384, bits=256/256
Jan 12 10:15:45 nodus.ligo.caltech.edu sendmail[6958]: w0A7QThE032091: to=<umakant.rapol@iiserpune.ac.in>, ctladdr=<controls@nodus.ligo.caltech.edu> (1001/1001), delay=2+10:49:16, xdelay=00:00:39, mailer=esmtp, pri=5432408, relay=cluster6a.us.messagelabs.com. [216.82.251.230], dsn=4.0.0, stat=Deferred: 421 Service Temporarily Unavailable
Jan 12 11:15:23 nodus.ligo.caltech.edu sendmail[10334]: STARTTLS=client, relay=cluster6a.us.messagelabs.com., version=TLSv1/SSLv3, verify=FAIL, cipher=DHE-RSA-AES256-GCM-SHA384, bits=256/256
Jan 12 11:15:31 nodus.ligo.caltech.edu sendmail[10334]: w0A7QThE032091: to=<umakant.rapol@iiserpune.ac.in>, ctladdr=<controls@nodus.ligo.caltech.edu> (1001/1001), delay=2+11:49:02, xdelay=00:00:27, mailer=esmtp, pri=5522408, relay=cluster6a.us.messagelabs.com. [216.82.251.230], dsn=4.0.0, stat=Deferred: 421 Service Temporarily Unavailable
Jan 12 12:15:25 nodus.ligo.caltech.edu sendmail[13747]: STARTTLS=client, relay=cluster6a.us.messagelabs.com., version=TLSv1/SSLv3, verify=FAIL, cipher=DHE-RSA-AES256-GCM-SHA384, bits=256/256
Jan 12 12:15:42 nodus.ligo.caltech.edu sendmail[13747]: w0A7QThE032091: to=<umakant.rapol@iiserpune.ac.in>, ctladdr=<controls@nodus.ligo.caltech.edu> (1001/1001), delay=2+12:49:13, xdelay=00:00:33, mailer=esmtp, pri=5612408, relay=cluster6a.us.messagelabs.com. [216.82.251.230], dsn=4.0.0, stat=Deferred: 421 Service Temporarily Unavailable

 

Why is nodus attempting to email umakant.rapol@iiserpune.ac.in?

  13540   Fri Jan 12 16:01:27 2018   Koji   Configuration   Computers   sendmail troubles on nodus

I personally don't like the idea of having sendmail (or something similar like postfix) on a personal server, as it incurs a lot of maintenance cost (security updates, configuration, etc.). If we can use an external mail service (like Gmail) via the Gmail API in Python, that would ease our worry, I thought.

  13542   Fri Jan 12 18:22:09 2018   gautam   Configuration   Computers   sendmail troubles on nodus

Okay I will port awade's python mailer stuff for this purpose.

gautam 14Jan2018 1730: Python mailer has been implemented: see here for the files. On shared drive, the files are at /opt/rtcds/caltech/c1/scripts/general/pizza/pythonMailer/

gautam 11Feb2018 1730: The python mailer had never once worked successfully in automatically sending the message. I realized this may be because I had put the script on the root user's crontab, but had set up the authentication keyring with the mailer password under the controls user. So I have now set up a controls-user crontab, which for now just runs the pizza mailing. Let's see if this works next Sunday...
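For reference, the controls crontab entry is just a single line along these lines (the script filename is a placeholder; the real one lives in the pythonMailer directory above):

# m h dom mon dow  command  -- e.g. every Sunday at 17:00
0 17 * * 0 /usr/bin/python /opt/rtcds/caltech/c1/scripts/general/pizza/pythonMailer/pizzaMailer.py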

Quote:

I personally don't like the idea of having sendmail (or something similar like postfix) on a personal server, as it incurs a lot of maintenance cost (security updates, configuration, etc.). If we can use an external mail service (like Gmail) via the Gmail API in Python, that would ease our worry, I thought.

 

  13545   Sat Jan 13 02:36:51 2018   rana   Configuration   Computers   sendmail troubles on nodus

I think sendmail is required on nodus since that's how the dokuwiki works. That's why the dokuwiki was trying to send an email to Umakant.

  13546   Sat Jan 13 03:20:55 2018   Koji   Configuration   Computers   sendmail troubles on nodus

I know it, and I don't like it. DokuWiki seems to allow us to use an external server for notification emails. That would be the way to go.

  13681   Tue Mar 13 20:03:16 2018   johannes   Configuration   Computers   c1auxex replacement

I assembled the rack-mount server that will long-term replace c1auxex, so we can return the borrowed unit to Larry.

SUPERMICRO SYS-5017A-EP Specs:

  • Intel Atom N2800 (2 cores, 1.8GHz, 1MB, 64-bit)
  • 4GB (2x2GB) DDR3 RAM
  • 128 GB SSD


I installed a standard Debian Jessie distribution, with the LXDE option for minimal resource usage. Steps taken after the fresh install:

  1. Give controls sudo permission: usermod -aG sudo controls
  2. mkdir /cvs/cds
  3. apt-get install nfs-common
  4. Added line "chiara:/home/cds              /cvs/cds        nfs     rw,bg,nfsvers=3" to end of /etc/fstab
  5. Configured network adapter in /etc/network/interfaces
            iface eth0 inet static
            address 192.168.113.48
            netmask 255.255.255.0
            gateway 192.168.113.2
            dns-nameservers 192.168.113.104 131.215.125.1 131.215.139.100
            dns-search martian

    I first assigned the IP 192.168.113.59 of the original c1auxex, but for some reason my ssh connections kept failing mid-session. After I switched to a different IP the disruption no longer happened.
  6. Add lines "search martian" and "nameserver 192.168.113.104" to /etc/resolv.conf
  7. apt-get install openssh-server
    At this point the unit was ready for remote connections on the martian network, and I moved it to the XEND.
  8. Added lines to /home/controls/.bashrc to set paths and environment variables:
    export PATH=/cvs/cds/rtapps/epics-3.14.12.2_long/base/bin/linux-x86_64:/cvs/cds/rtapps/epics-3.14.12.2_long/extensions/bin/linux-x86_64:$PATH
    export HOST_ARCH=linux-x86_64
    export EPICS_HOST_ARCH=linux-x86_64
    export RPN_DEFNS=~/.defns.rpn
    export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/cvs/cds/rtapps/epics-3.14.12.2_long/base/lib/linux-x86_64:/cvs/cds/rtapps/epics-3.14.12.2_long/modules/modbus/lib/linux-x86_64/:/cvs/cds/rtapps/epics-3.14.12.2_long/modules/asyn/lib/linux-x86_64
  9. apt-get install libmotif-common libmotif4 libxp6 (required to run burtwb utility)

The server is ready to take over for c1auxex2 and does not need any local epics compiled, since it can run the 3.14.12.2_long binaries in /cvs/cds.

Attachment 1: IMG_20180313_105154890.jpg
Attachment 2: IMG_20180313_133031002.jpg
  13682   Wed Mar 14 23:58:30 2018   johannes   Configuration   Computers   c1auxex replacement

I replaced the borrowed server with the permanent one today. Before removing the current server, I performed several additional preparations:

  • Updated the chiara host tables to IP 192.168.113.48 for c1auxex
  • apt-get install procserv
  • copied ETMXaux2.* files in /cvs/cds/caltech/target/c1auxex2 to ETMXaux.* and changed references from /opt/rtcds/epics (which was a local directory on c1auxex2) to /cvs/cds/rtapps/epics-3.14.12.2_long in the copied files
  • Added instruction
    Environment="LD_LIBRARY_PATH=/cvs/cds/rtapps/epics-3.14.12.2_long/base/lib/linux-x86_64:/cvs/cds/rtapps/epics-3.14.12.2_long/modules/modbus/lib/linux-x86_64/:/cvs/cds/rtapps/epics-3.14.12.2_long/modules/asyn/lib/linux-x86_64"
    to /etc/systemd/system/modbusIOC.service  (required for burtwb dependencies)

Then I replaced the server:

  1. IFO was in LSC mode with both arms locked
  2. Backed up ETMX alignment using save feature in IFOalign screen
  3. Disengaged LSC mode
  4. Shut down ETMX watchdog
  5. Disconnected ETMX satellite box
  6. Shut down c1auxex2 and c1auxex
  7. Performed the server swap
  8. Booted c1auxex
  9. Made sure EPICS channels were back online and channel defaults were restored
  10. Reconnected satellite box
  11. Turned on watchdog
  12. Turned on OpLevs
  13. Engaged LSC mode -> both arms were instantly locked

I returned c1auxex2 to Larry, who needed it back asap because of some hardware failure.

Steve: Acromag XT1221 ordered 3-15-18

  13683   Thu Mar 15 16:00:25 2018   Larry Wallace   Summary   Computers   Cert renewal for NODUS

The cert for nodus has been renewed for another 2 years.

The following is the basic procedure for getting a new cert: (Note certs are only good for two years as of 2018)
openssl req -sha256 -nodes -newkey rsa:2048 -keyout nodus.ligo.caltech.edu.key -out nodus.ligo.caltech.edu.csr
Country Name (2 letter code) [AU]:US
State or Province Name (full name) [Some-State]:California
Locality Name (eg, city) []:Pasadena
Organization Name (eg, company) [Internet Widgits Pty Ltd]:California Institute of Technology
Organizational Unit Name (eg, section) []:LIGO
Common Name (eg, YOUR name) []:nodus.ligo.caltech.edu

Leave the e-mail address, challenge password and optional company name blank. A new private key will be generated.
chown root nodus.ligo.caltech.edu.key
chgrp root nodus.ligo.caltech.edu.key
chmod 0600 nodus.ligo.caltech.edu.key

The nodus.ligo.caltech.edu.csr file is what is sent in for the cert.
This file should be sent to either ryan@ligo.caltech.edu or security@caltech.edu and copy wallace_l@ligo.caltech.edu.

A URL link with the new cert to be downloaded will be sent to the requestor.

Once the files are downloaded (the new cert and the intermediate cert), they can be copied and renamed.

The PEM-encoded host certificate by itself is saved at:

  /etc/httpd/ssl/nodus.ligo.caltech.edu.crt

The nodus.ligo.caltech.edu.key file should be in the same directory or whichever directory is indicated in the ssl.conf located in /etc/httpd/conf.d/  directory.

httpd will need to be restarted in order for it to see the new cert.
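For completeness, the restart and a quick check of the new expiry date would be something like (assuming the systemd-managed httpd on nodus):

sudo systemctl restart httpd
openssl x509 -in /etc/httpd/ssl/nodus.ligo.caltech.edu.crt -noout -dates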

 

  13687   Mon Mar 19 14:39:09 2018   johannes   Configuration   Computers   c1auxex replacement

[gautam, johannes]

The temperature control output channel for the XEND seismometer wasn't working properly. The EPICS channel existed, could be written to and read from, but no physical voltage was observed on the (confirmed properly) wired connector.

The Acromag DAC that outputs this channel was completely spare in the original scheme and does not serve any other channels at the moment. We found it to be unresponsive to ping from the host machine (reminder: the Acromags are on their own subnet with IPs 192.168.114.xxx connected to the secondary ethernet adapter of c1auxex), while all others returned the ping just fine. The modules have daisy-chained ethernet connections, and the one Acromag unit behind the unresponsive one in the chain was still responding to ping and its channels were working, so it couldn't have been a problem with the (ethernet) cabling.

Gautam and I power-cycled the chassis and server, which resolved the issue. The channel is now outputting the requested voltage on the Out1 BNC connector of the chassis (front). When I was setting up the whole system and did frequent rebooting and IP-redefinitions I have seen network issues arise between server and Acromags. In particular, when changing the network settings server-side, the Acromags needed to reboot occasionally. So this whole problem was probably due to the recent server-swap, as the chassis had not been power-cycled since.

 

During the debugging we also found that the c1psl2 channels were not working. This was because I had overlooked updating the EPICS environment variables for the modbus path defined in /cvs/cds/caltech/target/c1psl2/npro_config.cmd from the local installation /opt/epics/ (which doesn't exist on the new server anymore) to the network location /cvs/cds/rtapps/epics-3.14.12.2_long/. This has been fixed and the slow diagnostic PSL channels are recording again.

  13761   Wed Apr 18 17:15:35 2018   rana   Configuration   Computers   NODUS: no xmgrace for dataviewer

Turns out, there is no RPM for XmGrace on Scientific Linux 7. Since this is the graphic output of dataviewer, we can't use dataviewer through X windows until this gets fixed. CDS is looking into an XmGrace replacement, but it would be better if we can hijack an alt RH repo to steal a temporary xmgrace RPM. KT has been pinged.

  13897   Wed May 30 12:13:13 2018   rana   Update   Computers   NODUS: rsyncd + frames

To get our rsync back to LDAS back up, I followed instructions from Dan Kozak:

  1. mounted /frames from fb1: I modified /etc/fstab
  2. modified /etc/rsyncd.conf to allow access from LDAS
  3. restarted rsync as daemon: 'sudo /usr/bin/rsync --daemon --config=/etc/rsyncd.conf'

Next need to figure out what the SL7 protocol is for running this as a daemon after boot - some kind of init.d thing probably
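If SL7 ships the stock rsyncd.service (which reads /etc/rsyncd.conf), the systemd way would presumably just be:

sudo systemctl enable rsyncd
sudo systemctl start rsyncd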

  13904   Thu May 31 17:47:12 2018   Koji   Update   Computers   megatron process cleaning up

megatron was full of zombie medm processes due to some of the screenshot scripts.

I also found that apache2 is running on megatron without any configuration. I just disabled it with:

sudo update-rc.d apache2 disable

  13906   Thu May 31 22:59:27 2018   Koji   Configuration   Computers   Shorewall on nodus

[Jonathan Koji]

Shorewall (http://shorewall.net/), a tool to configure iptables, was installed on nodus.
The description about shorewall setting on nodus can be found here: https://wiki-40m.ligo.caltech.edu/NodusShorewallSetting

NDS (31200) on megatron is not enabled outside of the firewall yet.

  14023   Tue Jun 26 22:06:33 2018   rana   Update   Computers   rossa: SL7.3 upgrade continues: DTT is back

I used the following commands to get diaggui to run on rossa/SL7:

controls@rossa|lib64> ls -lrt libsasl*
-rwxr-xr-x. 1 root root 121296 Feb 16  2016 libsasl2.so.3.0.0
lrwxrwxrwx. 1 root root     17 Dec 18  2017 libsasl2.so -> libsasl2.so.3.0.0
lrwxrwxrwx. 1 root root     17 Dec 18  2017 libsasl2.so.3 -> libsasl2.so.3.0.0
controls@rossa|lib64> sudo ln -s libsasl2.so.3.0.0 libsasl2.so.2
controls@rossa|lib64> ls -lrt libsasl*
-rwxr-xr-x. 1 root root 121296 Feb 16  2016 libsasl2.so.3.0.0
lrwxrwxrwx. 1 root root     17 Dec 18  2017 libsasl2.so -> libsasl2.so.3.0.0
lrwxrwxrwx. 1 root root     17 Dec 18  2017 libsasl2.so.3 -> libsasl2.so.3.0.0
lrwxrwxrwx. 1 root root     17 Jun 26 22:02 libsasl2.so.2 -> libsasl2.so.3.0.0

 

Basically, I have set up a symbolic link pointing libsasl2.so.2 to libsasl2.so.3.0.0. I've asked LLO again for some guidance on whether or not to find some backport in a non-standard SL7 repo. If they reply, we may later replace this link with a regular file.

For the nonce, diaggui runs and is able to show us the spectra. We also got swept sine to work. But the FOTON launched from inside of AWGGUI doesn't inherit the sample frequency of the excitation channel so we can't filter noise injections from awggui yet.

  14025   Wed Jun 27 19:05:20 2018   rana   Update   Computers   rossa: SL7.3 upgrade continues: DTT is back

UNELOGGED: someone has changed Pianosa from Ubuntu/Dumbian into SL7. Might be hackers.

Donatella is now the only Ubuntu machine in the control room. I propose we keep it this way for another month and then go fully SL7 if we find no bugs with Pianosa/Rossa.

  14026   Wed Jun 27 19:37:16 2018   Koji   Configuration   Computers   New NAT router installed

[Larry, Koji]

We replaced the NAT router between martian and the campus net. The administrative web page for the NAT router is available, but it is accessible only from inside (=martian), as expected.

We changed the IP address registration of nodus for the internet so that packets to nodus are directed to the NAT router. The NAT router then forwards the packets to the actual nodus, but only for the allowed ports. Because of this change of the IP we had a few confusions. First of all, the martian net relies on chiara for DNS resolution, so chiara's DNS cache had to be refreshed (see below). The 40m wifi router also seemed to have an internal DNS cache and required a reboot to make the IP change effective.

The WAN side cable of nodus was removed.

We needed to run "sudo rndc flush" to force chiara's bind9 to refresh the cache. We also needed to restart httpd ("sudo systemctl restart httpd") on nodus to make the port 8081 work properly. 

So far, ssh (22), web services (30889), and elog (8081, 8080) were tested. We also need to test megatron NDS port forwarding and rsync for nodus, too.

Finally I turned off the firewall rules of shorewall on nodus as it is no longer necessary.

More details are found on the wiki page. https://wiki-40m.ligo.caltech.edu/FirewallSetting

Attachment 1: P_20180627_193357.jpg
  14104   Wed Jul 25 22:46:15 2018   gautam   Configuration   Computers   NDS access from outside

After this work, I've been having some trouble getting data with Python NDS. Eventually, I figured out that the nds connection request has to be pointed at '131.215.115.200' (the address of the NAT router which faces the outside world), port 31200 (it used to work with 'nds40.ligo.caltech.edu' or '131.215.115.189'). So the following snippet in python allows a connection to be opened. Offline access of frame data via NDS2 now seems possible.

import nds2
conn = nds2.connection('131.215.115.200',31200)
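Once the connection is open, data can be fetched in the usual way, e.g. (channel name and GPS times below are just placeholders):

bufs = conn.fetch(1186741861, 1186741871, ['C1:LSC-TRX_OUT_DQ'])
print(bufs[0].data)   # numpy array of samples for the first channel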
Quote:
 

So far, ssh (22), web services (30889), and elog (8081, 8080) were tested. We also need to test megatron NDS port forwarding and rsync for nodus, too.

Finally I turned off the firewall rules of shorewall on nodus as it is no longer necessary.

  14121   Wed Aug 1 16:23:48 2018   Koji   Summary   Computers   Transition of the main NFS disk on chiara

[Gautam Koji]

Taking the opportunity of shutting down c1ioo to add a DAC card, we shut down chiara and worked on moving the main disk to its bigger home.

We shut down most of the martian machines, including the control machines, megatron, optimus, and nodus.

- Before shutting down chiara, we ran rsync to sync the 4TB disk (which used to be the backup) with /cvs/cds.

sudo rsync -a --progress /home/cds/ /media/40mBackup

- Modified /etc/fstab

proc            /proc           proc    nodev,noexec,nosuid 0       0
# / was on /dev/sda1 during installation
UUID=972db769-4020-4b74-b943-9b868c26043a /               ext4    errors=remount-ro 0       1
# swap was on /dev/sda5 during installation
UUID=a3f5d977-72d7-47c9-a059-38633d16413e none            swap    sw              0       0
UUID="90a5c98a-22fb-4685-9c17-77ed07a5e000"    /media/40mBackup       ext4      defaults,relatime,commit=60       0         0
#fb:/frames      /frames nfs     ro,bg

UUID=92dc7073-bf4d-4c58-8052-63129ff5755b   /home/cds    ext4    defaults,relatime,commit=60    0   0

- Shut down chiara. Put the 4TB disk in the chassis. We also installed a new disk (but later it turned out that it only has 2TB...)

- Restarted the machine. This already mounted the 4TB disk as /cvs/cds.

- Restart bind9 with DHCP for the diskless clients (cf. https://wiki-40m.ligo.caltech.edu/CDS/How_to_join_martian)

sudo service bind9 restart
sudo service isc-dhcp-server restart

- Looks like /etc/resolv.conf is automatically overwritten by a tool or something every time we restart the machine!? I still don't know how to avoid this (cf. https://www.ctrl.blog/entry/resolvconf-tutorial). But at least for today we manually wrote /etc/resolv.conf:

controls@chiara|backup> cat /etc/resolv.conf
# Dynamic resolv.conf(5) file for glibc resolver(3) generated by resolvconf(8)
#     DO NOT EDIT THIS FILE BY HAND -- YOUR CHANGES WILL BE OVERWRITTEN
nameserver 192.168.113.104
nameserver 131.215.125.1
nameserver 8.8.8.8

search martian
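One candidate fix (not tried yet), assuming chiara's resolv.conf is generated by the Ubuntu resolvconf package, would be to put the static entries into the "head" file so they survive regeneration:

sudo tee -a /etc/resolvconf/resolv.conf.d/head <<'EOF'
nameserver 192.168.113.104
nameserver 131.215.125.1
nameserver 8.8.8.8
search martian
EOF
sudo resolvconf -u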

  14122   Wed Aug 1 19:41:15 2018   gautam   Summary   Computers   RTCDS recovery, c1ioo changes

[Gautam Koji]

After this work, we recovered the nominal RTCDS state. The main points were:

  1. We needed to restart the bind9 service on chiara such that the FEs knew their IP addresses upon reboot and hence, could get their root filesystems over NFS.
  2. We recovered suspension local damping, IMC locking and POX/POY locking with nominal arm transmission.

Some stuff that is not working as usual:

  1. The EX QPD is reporting strange transmission values - even with the PRM completely misaligned, it reports transmission of ~30. But we were able to lock the Xarm with the Thorlabs PD and recover transmission of ~1.15.
  2. The X arm green does not stay locked to the cavity - the alignment looks fine, and the green flashes are strong, but the lock does not hold. This shouldn't be directly connected to anything we did today since the Green PDH servo is entirely analog.

I made a model change in c1x03 (the IOP model on c1ioo) to add a DAC part. The model compiled, installed and started correctly, and looking at dmesg on c1ioo, it recognises the DAC card as what it is. Next step is to use a core on c1ioo for a c1omc model, and actually try driving some signals.

Note that the only change made to the c1ioo expansion chassis was that a DAC card was installed into the PCIe bus. The adaptor card which allows interfacing the DAC card to an AI board was already in the expansion chassis, presumably from whenever the DAC was removed from this machine.

*I think I forgot to restart optimus after this work...

Attachment 1: CDS_overview.png
  14123   Wed Aug 1 20:44:57 2018   gautam   Summary   Computers   c1omc model (re?)created

The main motivation behind adding a DAC card in c1ioo was to setup an RTCDS model for the OMC. Attachment #1 shows the new look CDS overview screen. Here is what I did.

Mostly, I followed instructions from when I set up the model for the EX green PZTs.


Simulink model:

The model is just a toy for now (CDS parameters, ADC block and 2 CDS filter modules). I leave it to Aaron to actually populate it, check functionality etc. The path to the model is /opt/rtcds/caltech/c1/userapps/release/isc/c1/models/c1omc.mdl. I am listing the parameters set on the CDS_PARAMETERS block:

  • host = c1ioo
  • site = c1
  • rate = 16k
  • dcuid = 27 (which I chose after making sure that this dcuid was not used on this list which I also updated by adding c1omc and moving c1imc to "old")
  • specific_cpu = 6 (again chosen after checking the available CPUs in the above list and confirming using the cset utility).
  • adc_Slave = 1
  • shmem_daq = 1
  • no_rfm_dma = 1
  • biquad = 1

Building and installing model:

With the model file in place, I logged into c1ioo and built and installed it using the usual rtcds make and rtcds install commands (see the sketch below). Before starting the model, I edited /diskless/root.jessie/etc/rtsystab to allow c1omc to be run on c1ioo. Using sudo cset set, I verified that CPU #6 is no longer listed (if I understand correctly, the RTCDS system takes over the core).
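For the record, the sequence on c1ioo is roughly the following (the make and install steps are the ones referenced above; rtcds start is assumed here as the usual way of starting the model once rtsystab is edited):

rtcds make c1omc
rtcds install c1omc
rtcds start c1omc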


MEDM:

To reflect all this on the MEDM CDS OVERVIEW screen, I just edited the screen.

  • Moved the orange explanation of bits over to the c1iscey panel to make space in the c1ioo panel.
  • Edited the macros to reflect the c1omc parameters.

DAQD:

Finally, I followed the instructions here to get the channels into frames and make all the indicators green. Went into fb and restarted the daqd processes. All looks good. I'm going to leave the model running overnight to investigate stability. I forgot to svn commit the model tonight, will do it tomorrow.


The testing plan (at least initially) is to install the AA and AI boards from the OMC rack in 1X1/1X2. Then we will have short SCSI cables running from the ADC/DAC to these. The actual HV driving stages will remain in the OMC rack (NE corner of AS table).

@Steve, can we get 10 Male-Female D9 cables so that we can run them from 1X1/1X2 to the OMC rack?


Unrelated to this work: There were 2 crashes of the models on c1lsc, one ~6pm and one right now ~1030pm. The restart script brought everything back gracefully...

Attachment 1: CDS_OVERVIEW_withOMC.png
  14126   Thu Aug 2 20:54:18 2018   gautam   Summary   Computers   c1omc model looks stable

Actually, c1lsc had crashed again sometime last night so I had to reboot everything this morning. I used the reboot script again, but I increased the sleep time between trying to start up the models again so that I could walk into the VEA and power cycle the c1lsc expansion chassis, as this kind of frequent model crash has been fixed by doing so in the past. Sure enough, there have been no issues since I rebooted everything at ~1030 in the morning. 

The c1omc model itself has been stable as well, though of course, there is nothing in there at the moment. I may do a check of the newly installed DAC tomorrow just to see that we can put out a sine wave.

Steve has ordered the D-sub cabling that will allow us to route signals between AA/AI boards in 1X1/1X2 to the HV PZT electronics in the OMC rack. Things look setup for a measurement next week. Aaron will post a block diagram + photoz of what box goes where in the electronics racks.

  14127   Thu Aug 2 23:09:25 2018   rana   Summary   Computers   X Green "Mystery" solved

I'm going to guess that this was me: I was disconnecting some octopus power strip nonsense down there (in particular illuminators and cameras), so I might have turned off the AUX rack by mistake.

Quote:

I walked down to the X end and found that the entire AUX laser electronics rack isn't getting any power. There was no elog about this.

I couldn't find any free points in the power strip where I think all this stuff was plugged in so I'm going to hold off on resurrecting this until tomorrow when I'll work with Steve.

Quote:

The X arm green does not stay locked to the cavity - the alignment looks fine, and the green flashes are strong, but the lock does not hold. This shouldn't be directly connected to anything we did today since the Green PDH servo is entirely analog.

  14138   Mon Aug 6 09:42:10 2018   Koji   Summary   Computers   Transition of the main NFS disk on chiara

Follow up:

- At least it was confirmed that the local backup (4TB->2TB) is regularly running every morning.

- The 2TB disk was used up to 95%. To ease the pressure on the remaining space, I further compressed the burt snapshot folders (through ~2016). This released another 150GB. The 2TB disk is currently at 87%.

Prev

Filesystem      1K-blocks       Used  Available Use% Mounted on
/dev/sdc1      3845709644 1731391748 1918967020  48% /home/cds
/dev/sdd1      2113786796 1886162780  120249888  95% /media/40mBackup

Now

Filesystem      1K-blocks       Used  Available Use% Mounted on
/dev/sdc1      3845709644 1731706744 1918652024  48% /home/cds
/dev/sdd1      2113786796 1728124828  278287840  87% /media/40mBackup

 

  14197   Wed Sep 12 22:22:30 2018   Koji   Update   Computers   SSL2.0, SSL3.0 disabled

LIGO GC notified us that nodus had SSL2.0 and SSL3.0 enabled. This has been disabled now.
The details are described on 40m wiki.

  14283   Wed Nov 7 19:20:53 2018   gautam   Update   Computers   Paola Battery Error

The VEA vertex laptop, paola, has a flashing orange indicator which I take to mean some kind of battery issue. When the laptop is disconnected from its AC power adaptor, it immediately shuts down. So this machine is kind of useless for its intended purpose of being a portable computer we can work with at optical tables. The actual battery diagnostics (using upower) don't report any errors.

  14382   Thu Jan 3 21:17:49 2019   rana   Configuration   Computers   Workstation Upgrade: Donatella -> Scientific Linux 7.2

donatella was one of our last workstations running Ubuntu 12. We installed SL7 on it today:

  1. had to use a DVD; wouldn't boot from USB stick
  2. made sure to use userID=1001 and groupID=1001 at the initial install part
  3. went to the Keith Thorne LLO wiki on SL7
  4. The 'yum update' command failed due to a gstreamer conflict. I did "yum remove gstreamer1-plugins-ugly-free-1.10.4-3.el7.x86_64" and then it continued a bit more.
  5. Then there are ~20 errors related to gds-crtools that look like this: Error: Package: gds-crtools-2.18.12-1.el7.x86_64 (lscsoft-production) Requires: libMatrix.so.6.14()(64bit)

  6. I re-ran the yum install .... command using the --skip-broken option and that seemed to complete, although I guess the GDS stuff will not work.
  7. Installed: terminator, inconsolata-fonts, 
  8. Installed XFCE desktop as per K Thorne:  yum groupinstall "Xfce" -y
  9.  
Attachment 1: IMG_20190103_205158.jpg
  14464   Mon Feb 18 19:16:55 2019   rana   Summary   Computers   new laptop setup: ASIA

The old IBM laptop (Asia) has died from a fan error after 7 years. We have a new Lenovo 330 IdeaPad to replace it:

  1. to enter bios, the usual FN keys don't work. Power off laptop. Insert paperclip into small hole on laptop side with upside-down U symbol. Laptop powers up into BIOS setup.
  2. Insert SL 7.6 DVD into drive
  3. Change all settings from modern UEFI into Legacy support. Change Boot order to put CDROM first.
  4. Boot.
  5. Touchpad is not detected. Hookup mouse for setup.
  6. Delete windows partition.
  7. Setup wireless network according to (https://wiki-40m.ligo.caltech.edu/Network). Computer name = asia.martian. 
  8. Set root password. Do not create user (we want to make the controls acct later using the command line so that we can set userID and groupID both to 1001).
  9. Begin install...lots of disk access noises for awhile...

Install done. Touchpad not recognized by linux - lots of forum posts about kernel patching...Arrgh!

  14465   Tue Feb 19 19:03:18 2019   rana   Update   Computers   Martian router -> WPA2

I have swapped our martian router's WiFi security over to WPA2 (AES) from the previous, less-secure, system. Creds are in the secrets-40-red.

  14543   Mon Apr 15 18:29:07 2019   rana   Summary   Computers   new laptop setup: ASIA - yum issues

Had trouble using YUM to update. This turned out to be a config problem with our Martian router, not the new laptop. Since I changed the WiFi pwd a while ago for martian access for the CDS laptops, you'll have to enter that in order to use the laptops.

Turned out to be some Access Control nonsense inside of the router. Even logging in as admin with a cable gave some of the fields the greyed-out color (had to hover over the link and then type the URL directly in the browser window). ASIA is now able to connect and use YUM + usual connections. Gautam and I have also moved the router a little to get an easier view of its LED lights and not block its WiFi signal with the cable tray. We'll get a little shelf so that we can mount it ~1 foot off of the wall.

Still, this seems like a bad laptop choice: the Lenovo IdeaPad 330 will not have its touchpad supported by SL7 without compiling a new version of the kernel.

  14589   Thu May 2 15:15:15 2019   Jon   Omnistructure   Computers   susaux machine renamed

Now that the replacement susaux machine is installed and fully tested, I renamed it from c1susaux2 to c1susaux and updated the DNS lookup tables on chiara accordingly.

  14598   Wed May 8 22:11:46 2019   rana   Summary   Computers   new laptop setup: ASIA - yum issues
  • setup controls user using K Thorne LLO CDS offsite workstation instructions
  • modified /etc/fstab ala pianosa to NFS mount disks
  • set up symlinks as other workstations
  • troubles with libsasl2 and libmetaio libraries as usual for SL7 - doing symlink tricks
  • setup shared .bashrc
  • now running 'yum install gds-all' to see if we need more local libraries to run GDS from the shared disks...
  14624   Mon May 20 13:16:57 2019   gautam   Summary   Computers   new laptop setup: ASIA - ndscope and diaggui

Following instructions here, I installed ndscope on this machine. DTT still could not be run from this machine, and I want to use this today - so I ran the following commands from the K. Thorne setup instructions.

yum clean metadata
yum update
yum install cds-workstation pcaspy subversion redhat-lsb  gnuradio google-chrome-stable xorg-x11-drv-nvidia epel-release redhat-lsb

Now diaggui can be opened, and spectra can be made. I'm moving this laptop to its new home at EY.

Quote:
  • now running 'yum install gds-all' to see if we need more local libraries to run GDS from the shared disks...
  14679   Mon Jun 17 16:02:17 2019   aaron   Update   Computers   keyed PSL crate

Milind pointed out that all boxes on the medm screens were white. I didn't have diagnostics from the medm screens, so I started following the troubleshooting steps on the restart procedures page.

It seemed like maybe a frontend problem. I tried telnet-ing into several of the FEs, and wasn't able to access c1psl. The section on c1psl mentions that if this machine crashes, the screens will go white and the crate needs to be turned off and on. Milind did this.

Now, most of the status lights are restored (screenshot).

 


Milind: I did a burtrestore following this and locked the PMC following the steps described in this elog.

Attachment 1: after_keying_crate.png
  14767   Wed Jul 17 17:56:18 2019   Koji   Configuration   Computers   Gave resolv.conf to giada

Kruthi noticed that she could not log in to rossa from giada.

I checked /etc/resolv.conf and it was

nameserver 127.0.0.1

so obviously it is useless to refer to localhost (i.e. giada itself) as a nameserver.

I copied our usual resolv.conf to giada as follows:

nameserver 192.168.113.104
nameserver 131.215.125.1
nameserver 8.8.8.8

search martian

Giada's ssh known_hosts had an outdated entry for rossa, so I had to clean it up, but after that we can connect to rossa from giada just by "ssh rossa".

Case closed.

  14799   Mon Jul 22 21:04:40 2019   rana   Update   Computers   making rossa great again
  • copied over /etc/fstab lines from pianosa so that the NFS mounts work correctly
  • added symlinks so that the NFS dirs mount in the right dirs
  • installed Opera browser
  • symlink libsasl2.so.3 -> libsasl2.so.2 and now DTT runs and can get data now and in the past
  • DTT can natively produce PDF so you don't have to take screen caps of your camera phone and make a chalk drawing of that anymore
  • sitemap/MEDM is working
  • after editing fonts.alias as detailed in Thorne wiki, I ran 'sudo xset fp rehash' to get MEDM to notice new fonts. MEDM Scalable fonts are back!!
  • installed Grace and now dataviewer works
  • ndscope not running: yum install ndscope breaks because it can't find a couple of python34 packages
  • tested that AWGGUI also runs and puts in real sine waves
Attachment 1: seis.pdf
  14812   Thu Jul 25 14:28:03 2019   gautam   Configuration   Computers   firewalld disabled for EPICS CA

I think rana made some more changes to this workstation to make it useful for commissioning activities - but the MEDM screens were still white blanks. The problem was that firewalld wasn't disabled (last two steps of the KThorne setup wiki). I disabled it (see below). Now donatella can run MEDM, ndscope and StripTool. DTT doesn't work for getting online data because of a "Synchronization Error"; I'm not bothering with this for now. I think Kruthi successfully demonstrated the fetching of offline data with DTT.
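The fix presumably amounts to the standard systemd commands:

sudo systemctl stop firewalld
sudo systemctl disable firewalld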

Attachment 1: donatellaCommissioning.png
  14813   Thu Jul 25 20:08:36 2019   gautam   Update   Computers   Solidworks machine

I brought one CPU (Dell T3500) and one 28" monitor from Mike Pedraza's office in Downs to the 40m. It is on Steve's desk right now, pending setup. The machine already has Solidworks and Altium installed on it, so we can set it up at our leisure. The login credentials are pasted on the CPU with a post-it should anyone wish to set it up.

  14820   Wed Jul 31 14:44:11 2019   gautam   Update   Computers   Supermicro inventory

Chub brought the replacement Supermicro we ordered to the 40m today. I stored it at the SW entrance to the VEA, along with the other Supermicro. At the time of writing, we have, in hand, two (unused) Supermicro machines. One is meant for EY and the other is meant for c1psl/c1iool0. DDR3 RAM and 120 GB SSD drives have also been ordered, but have not yet arrived (I think, Chub, please correct me if I'm wrong).

Update 20190802: The DDR3 RAM and 120 GB SSD drives arrived, and are stored in the FE hardware cabinet along the east arm. So at the time of writing, we have 2 sets of (Supermicro + 120GB HD + 4GB RAM).

Quote:

We should ask Chub to reorder several more SuperMicro rackmount machines, SSD drives, and DRAM cards. Gautam has the list of parts from Johannes' last order.

  14829   Mon Aug 5 17:23:26 2019   gautam   Summary   Computers   WiFi Settings on asia

The VEA laptop asia was configured to be able to connect to too many WiFi networks - it was getting conflicted in its default position at the vertex and trying to hop between networks, for some reason trying to connect to networks that had poor signal strength. I deleted all options from the known networks except 40MARS. Now the network connection seems much more stable and reliable.

  14831   Tue Aug 6 14:12:02 2019   yehonathan   Update   Computers   making rossa great again

cdsutils is not working on rossa.

Importing cdsutils produces this error:

In [2]: import cdsutils
---------------------------------------------------------------------------
OSError                                   Traceback (most recent call last)
<ipython-input-2-949babce8459> in <module>()
----> 1 import cdsutils

/ligo/apps/linux-x86_64/cdsutils-480/lib/python2.7/site-packages/cdsutils/__init__.py in <module>()
     53 
     54 try:
---> 55     import awg
     56 except ImportError:
     57     pass

/ligo/apps/linux-x86_64/cdsutils-480/lib/python2.7/site-packages/cdsutils/awg.py in <module>()
     30 """
     31 
---> 32 import sys, numpy, awgbase
     33 from time import sleep
     34 from threading import Thread, Event, Lock

/ligo/apps/linux-x86_64/cdsutils-480/lib/python2.7/site-packages/cdsutils/awgbase.py in <module>()
     17 libawg = CDLL('libawg.so')
     18 libtestpoint = CDLL('libtestpoint.so')
---> 19 libSIStr = CDLL('libSIStr.so')
     20 
     21 ####

/ligo/apps/anaconda/lib/python2.7/ctypes/__init__.pyc in __init__(self, name, mode, handle, use_errno, use_last_error)
    364 
    365         if handle is None:
--> 366             self._handle = _dlopen(self._name, mode)
    367         else:
    368             self._handle = handle

OSError: libSIStr.so: cannot open shared object file: No such file or directory

  14864   Fri Sep 6 18:08:29 2019   rana   Update   Computers   Alarm noise from smart-ups machine under workstation?

Please, no one touch the UPS: last time it destroyed ROSSA. Please ask Chub to order the replacement batteries so we can do this in a controlled way (fully shutting down ALL workstations first). Last time we wasted 8 hours rebuilding ROSSA.

Quote:

There was an alarm sound from the Smart-UPS 2200 sitting under the workstation. I see that the 'replace battery' light is red, and this elog tells me that these batteries are replaced every ~1-4 years; the last replacement was march 2016. Holding down the 'test' button for 2-3 seconds results in the alarm sound and does not clear the replace battery indicator.

  14873   Thu Sep 12 09:49:07 2019   gautam   Update   Computers   control rm wkstns shutdown

Chub wanted to get the correct part number for the replacement UPS batteries which necessitated opening up the UPS. To be cautious, all the workstations were shutdown at ~9:30am while the unit is pulled out and inspected. While looking at the UPS, we found that the insulation on the main power cord is damaged at both ends. Chub will post photos.

However, despite these precautions, rossa reports some error on boot up (not the same xdisp junk that happened before). pianosa and donatella came back up just fine. rossa is remotely accessible (ssh-able) though, so maybe we can recover it...

Quote:

please no one touch the UPS: last time it destroyed ROSSA. Please ask Chub to order the replacement batteries so we can do this in a controlled way (fully shutting down ALL workstations first). Last time we wasted 8 hours on ROSSA rebuilding

Attachment 1: IMG_7943.JPG
  14913   Mon Sep 30 11:42:36 2019   aaron   Update   Computers   control rm wkstns shutdown

I booted Rossa in rescue mode; though I see no errors on bootup, I still see the same error ("a problem has occurred") after boot, and a prompt to logout. I powered rossa off/on (single short press of power button), no change.

Booting in debug mode, I see that the error occurs when mounting /cvs/cds, with the error

[FAILED] Failed to mount /cvs/cds.
See `systemctl status cvs-cds.mount` for details.
[DEPEND] Dependency failed for Remote File System

Which is odd, because when I boot in recovery mode, it mounts /cvs/cds successfully.

I booted in emergency mode by adding to the boot command

systemd.unit=emergency.target

but didn't have the appropriate root password to troubleshoot further (the usual two didn't work).
