ID |
Date |
Author |
Type |
Category |
Subject |
11661
|
Sun Oct 4 12:07:11 2015 |
jamie | Configuration | CDS | CDS network tests in progress |
I'm about to start conducting some tests on the CDS network. Things will probably be offline for a bit. Will post when things are back to normal. |
11663
|
Sun Oct 4 14:23:42 2015 |
jamie | Configuration | CDS | CDS network test complete |
I've finished, for now, the CDS network tests that I was conducting. Everything should be back to normal.
What I did:
I wanted to see if I could make the EPICS glitches we've been seeing go away if I unplugged everything from the CDS martian switch in 1X6 except for:
- fb
- fb1
- chiara
- all the front end machines
What I unplugged were things like megatron, nodus, the slow computers, etc. The control room workstations were still connected, so that I could monitor.
I then used StripTool to plot the output of a front end oscillator that I had set up to generate a 0.1 Hz sine wave (see elog 11662). The slow sine wave makes it easy to see the glitches, which show up as flatlines in the trace.
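A glitch of this kind shows up as a run of repeated samples in the recorded trace. As a rough illustration (not part of the actual test setup; the channel acquisition itself is not shown), a short Python sketch that flags flatline stretches in a sampled sine wave:

```python
import numpy as np

def find_flatlines(samples, min_run=3):
    """Return (start, length) pairs where the trace repeats the same
    value for at least min_run consecutive samples (an EPICS 'freeze')."""
    samples = np.asarray(samples, dtype=float)
    runs = []
    start = 0
    for i in range(1, len(samples) + 1):
        if i == len(samples) or samples[i] != samples[start]:
            if i - start >= min_run:
                runs.append((start, i - start))
            start = i
    return runs

# Synthetic 0.1 Hz sine sampled at 10 Hz with a frozen stretch injected
t = np.arange(0, 60, 0.1)
trace = np.sin(2 * np.pi * 0.1 * t)
trace[200:230] = trace[200]          # simulate a ~3 s EPICS freeze
print(find_flatlines(trace, min_run=10))  # -> [(200, 30)]
```

The threshold min_run is a judgment call: EPICS values legitimately repeat for a sample or two near the sine extrema, so only longish runs should count as glitches.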
More tests are needed, but there was evidence that unplugging all the extra stuff from the switch did make the EPICS glitches go away. During the test I did not see any EPICS glitches. Once I plugged everything back in, I started to see them again. However, I'm currently not seeing many glitches (with everything plugged back in), so I'm not sure what that means. If unplugging everything did help, we still need to figure out which machine is the culprit. |
11665
|
Sun Oct 4 14:32:49 2015 |
jamie | Configuration | CDS | CDS network test complete |
Here's an example of the glitches we've been seeing, as seen in the StripTool trace of the front end oscillator:

You can clearly see the glitch at around T = -18. Obviously during non-glitch times the sine wave is nice and cleanish (there are still very small discretisation steps from the EPICS sample times). |
11754
|
Wed Nov 11 22:50:39 2015 |
Koji | Configuration | CDS | Slow machine time&date |
I was gazing at the log file for the Autolocker script (/opt/rtcds/caltech/c1/scripts/MC/logs/AutoLocker.log)
and found quite old time stamps, e.g.
Old : C1:IOO-MC_VCO_GAIN 1991-08-08 14:36:28.889032 -3
New : C1:IOO-MC_VCO_GAIN 1991-08-08 14:36:36.705699 18
Old : C1:PSL-FSS_FASTGAIN 1991-08-09 19:05:39.972376 14
New : C1:PSL-FSS_FASTGAIN 1991-08-09 19:05:44.939043 18
It was found that the date/time setting of some of the slow machines (at least c1psl and c1iool0) is not correct.
I could not figure out how to fix it.
Question: Is this anything critical?
Another thing: While I was in c1iool0, I frequently saw messages like
c1iool0 > 0xc461f0 (CA event): Events lost, discard count was 514
Is this anything related to EPICS Freeze?
controls@nodus|~ > telnet c1psl.martian
Trying 192.168.113.53...
Connected to c1psl.martian.
Escape character is '^]'.
c1psl > date
Aug 09, 1991 19:13:26.439024274
value = 32 = 0x20 = ' '
c1psl >
telnet> q
Connection closed.
controls@nodus|~ > telnet c1iool0.martian
Trying 192.168.113.57...
Connected to c1iool0.martian.
Escape character is '^]'.
c1iool0 > date
Aug 08, 1991 14:44:39.755679528
value = 32 = 0x20 = ' '
c1iool0 > 0xc461f0 (CA event): Events lost, discard count was 514
Change MC VCO gain to -3.
0xc461f0 (CA event): Events lost, discard count was 423
Change MC VCO gain to 18.
Change boost gain to 1.
Change boost gain to 2.
|
|
11773
|
Tue Nov 17 15:49:23 2015 |
Koji | Configuration | IOO | MC Autolocker modified |
/opt/rtcds/caltech/c1/scripts/MC/AutoLockMC.csh was modified last night.
1. The Autolocker sometimes forgets to turn off the MC2 tickle. I added the following lines to make sure it gets turned off.
echo autolockMCmain: MC locked, nothing for me to do >> ${lfnam}
echo just in case turn off MC2 tickle >> ${lfnam}
${SCRIPTS}/MC/MC2tickleOFF
2. During lock acquisition, the Autolocker frequently got stuck on a weak mode, so the following lines were added
to make the Autolocker toggle the servo switch while waiting for the lock.
echo autolockMCmain: Mon=$mclockstatus, Waiting for MC to lock .. >> ${lfnam}
# Turn off MC Servo Input button
ezcawrite C1:IOO-MC_SW1 1
date >> ${lfnam}
sleep 0.5;
# Turn on MC Servo Input button
ezcawrite C1:IOO-MC_SW1 0
sleep 0.5;
|
11791
|
Thu Nov 19 17:06:57 2015 |
Koji | Configuration | CDS | Disabled auto-launching RT processes upon FE booting |
We want to startup the RT processes one by one on boot of FE machines.
Therefore /diskless/root/etc/rc.local on FB was modified as follows. The last sudo line was commented out.
for sys in $(/etc/rt.sh); do
#sudo -u controls sh -c ". /opt/rtapps/rtapps-user-env.sh && /opt/rtcds/caltech/c1/scripts/start${sys}"
# NOTE: we need epics stuff AND iniChk.pl in PATH
# we use -i here so that the .bashrc is sourced, which should also
# source rtapps and rtcds user env (for epics and scripts paths)
# commented out Nov 19, 2015, KA
# see ELOG 11791 http://nodus.ligo.caltech.edu:8080/40m/11791
# sudo -u controls -i /opt/rtcds/caltech/c1/scripts/start${sys}
done
|
11905
|
Mon Jan 4 14:45:41 2016 |
rana, eq, koji | Configuration | Computer Scripts / Programs | nodus pwd change |
We changed the password for controls on nodus this afternoon. We also zeroed out the authorized_keys file and then added back in the couple that we want in there for automatic backups / detchar.
Also did the recommended Ubuntu updates on there. Everything seems to be going OK so far. We think nothing on the interferometer side cares about the nodus password.
We also decided to dis-allow personal laptops on the new Martian router (to be installed soon). |
12158
|
Wed Jun 8 13:50:39 2016 |
jamie | Configuration | CDS | Spectracom IRIG-B card installed on fb1 |
[EDIT: corrected name of installed card]
We just installed a Spectracom TSync-PCIe timing card on fb1. The hope is that this will help with the GPS timing synchronization issues we've been seeing in the new daqd on fb1, hopefully eliminating some of the potential failure modes.
The driver, called "symmetricom" in the advLigoRTS source (name of product from competing vendor), was built/installed (from DCC T1500227):
controls@fb1:~/rtscore/tests/advLigoRTS-40m 0$ cd src/drv/symmetricom/
controls@fb1:~/rtscore/tests/advLigoRTS-40m/src/drv/symmetricom 0$ ls
Makefile stest.c symmetricom.c symmetricom.h
controls@fb1:~/rtscore/tests/advLigoRTS-40m/src/drv/symmetricom 0$ make
make -C /lib/modules/3.2.0-4-amd64/build SUBDIRS=/home/controls/rtscore/tests/advLigoRTS-40m/src/drv/symmetricom modules
make[1]: Entering directory `/usr/src/linux-headers-3.2.0-4-amd64'
CC [M] /home/controls/rtscore/tests/advLigoRTS-40m/src/drv/symmetricom/symmetricom.o
/home/controls/rtscore/tests/advLigoRTS-40m/src/drv/symmetricom/symmetricom.c:59:9: warning: initialization from incompatible pointer type [enabled by default]
/home/controls/rtscore/tests/advLigoRTS-40m/src/drv/symmetricom/symmetricom.c:59:9: warning: (near initialization for ‘symmetricom_fops.unlocked_ioctl’) [enabled by default]
/home/controls/rtscore/tests/advLigoRTS-40m/src/drv/symmetricom/symmetricom.c: In function ‘get_cur_time’:
/home/controls/rtscore/tests/advLigoRTS-40m/src/drv/symmetricom/symmetricom.c:89:2: warning: ISO C90 forbids mixed declarations and code [-Wdeclaration-after-statement]
/home/controls/rtscore/tests/advLigoRTS-40m/src/drv/symmetricom/symmetricom.c: In function ‘symmetricom_init’:
/home/controls/rtscore/tests/advLigoRTS-40m/src/drv/symmetricom/symmetricom.c:188:2: warning: ISO C90 forbids mixed declarations and code [-Wdeclaration-after-statement]
/home/controls/rtscore/tests/advLigoRTS-40m/src/drv/symmetricom/symmetricom.c:222:3: warning: label ‘out_remove_proc_entry’ defined but not used [-Wunused-label]
/home/controls/rtscore/tests/advLigoRTS-40m/src/drv/symmetricom/symmetricom.c:158:22: warning: unused variable ‘pci_io_addr’ [-Wunused-variable]
/home/controls/rtscore/tests/advLigoRTS-40m/src/drv/symmetricom/symmetricom.c:156:6: warning: unused variable ‘i’ [-Wunused-variable]
Building modules, stage 2.
MODPOST 1 modules
CC /home/controls/rtscore/tests/advLigoRTS-40m/src/drv/symmetricom/symmetricom.mod.o
LD [M] /home/controls/rtscore/tests/advLigoRTS-40m/src/drv/symmetricom/symmetricom.ko
make[1]: Leaving directory `/usr/src/linux-headers-3.2.0-4-amd64'
controls@fb1:~/rtscore/tests/advLigoRTS-40m/src/drv/symmetricom 0$ sudo make install
#remove all old versions of the driver
find /lib/modules/3.2.0-4-amd64 -name symmetricom.ko -exec rm -f {} \; || true
find /lib/modules/3.2.0-4-amd64 -name symmetricom.ko.gz -exec rm -f {} \; || true
# Install new driver
install -D -m 644 symmetricom.ko /lib/modules/3.2.0-4-amd64/extra/symmetricom.ko
/sbin/depmod -a || true
/sbin/modprobe symmetricom
if [ -e /dev/symmetricom ] ; then \
rm -f /dev/symmetricom ; \
fi
mknod /dev/symmetricom c `grep symmetricom /proc/devices|awk '{print $1}'` 0
chown controls /dev/symmetricom
controls@fb1:~/rtscore/tests/advLigoRTS-40m/src/drv/symmetricom 0$ ls /dev/symmetricom
/dev/symmetricom
controls@fb1:~/rtscore/tests/advLigoRTS-40m/src/drv/symmetricom 0$ ls -al /dev/symmetricom
crw-r--r-- 1 controls root 250, 0 Jun 8 13:42 /dev/symmetricom
controls@fb1:~/rtscore/tests/advLigoRTS-40m/src/drv/symmetricom 0$
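For reference, the mknod line in the install output above pulls the driver's dynamically assigned major number out of /proc/devices. A sketch of that extraction against a synthetic copy of the file, so it can run anywhere (on fb1 the real path is /proc/devices, and the actual mknod needs root):

```shell
#!/bin/sh
# Extract a char device's major number the same way the Makefile's
# mknod line does, but from a synthetic /proc/devices copy.
tmp=$(mktemp)
cat > "$tmp" <<'EOF'
Character devices:
  1 mem
  5 tty
250 symmetricom
EOF
major=$(grep symmetricom "$tmp" | awk '{print $1}')
echo "major=$major"
# mknod /dev/symmetricom c "$major" 0   # the real step (needs root)
rm -f "$tmp"
```

This is why the device node above shows up as major 250: the kernel assigned that number at module load, and the install rule reads it back rather than hard-coding it.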
|
12161
|
Thu Jun 9 13:28:07 2016 |
jamie | Configuration | CDS | Spectracom IRIG-B card installed on fb1 |
Something is wrong with the timing we're getting out of the symmetricom driver, associated with the new spectracom card.
controls@fb1:~/rtscore/tests/advLigoRTS-40m/src/drv/symmetricom 127$ lalapps_tconvert
1149538884
controls@fb1:~/rtscore/tests/advLigoRTS-40m/src/drv/symmetricom 0$ cat /proc/gps
704637380.00
controls@fb1:~/rtscore/tests/advLigoRTS-40m/src/drv/symmetricom 0$
The GPS time is way off, and it's counting up at something like 900 seconds per second of wall time. Something is misconfigured, but I haven't figured out what yet.
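One way to pin down the "~900 seconds/second" figure is to read the card's clock twice against system time and take the ratio. A minimal sketch; the two /proc/gps readings below are made-up numbers chosen to reproduce the observed runaway rate, not actual measurements:

```python
def clock_rate(t0_sys, gps0, t1_sys, gps1):
    """Apparent rate of the card clock in seconds per second of wall time."""
    return (gps1 - gps0) / (t1_sys - t0_sys)

# Hypothetical pair of readings taken 10 s apart:
#   cat /proc/gps at t=0  -> 704637380.00
#   cat /proc/gps at t=10 -> 704646380.00
rate = clock_rate(0.0, 704637380.00, 10.0, 704646380.00)
print(rate)  # 900.0
```

A healthy clock should give a ratio of 1.0; repeating the measurement over a longer baseline would average out read latency.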
The timing distribution module we're using is spitting out what appears to be an IRIG B122 signal (amplitude-modulated 1 kHz carrier), which I think is what we expect. This is being fed into the "AM IRIG input" connector on the card.
Not sure why the driver is spinning so fast, though, with the wrong baseline time. Reboot of the machine didn't help. |
12166
|
Fri Jun 10 12:09:01 2016 |
jamie | Configuration | CDS | IRIG-B debugging |
Looks like we might have a problem with the IRIG-B output of the GPS receiver.
Rolf came over this morning to help debug the strange symmetricom driver behavior on fb1 with the new Spectracom card. We restarted the machine again, and this time when we loaded the driver it was clocking at a normal rate (one second per second). However, the overall GPS time was still wrong, showing a time in October of this year.
The IRIG-B122 output is supposed to encode the time of year via amplitude modulation of a 1kHz carrier. The current time of year is:
controls@fb1:~ 0$ TZ=utc date +'%j day, %T'
162 day, 18:57:35
controls@fb1:~ 0$
The absolute year is not encoded, though, so the symmetricom driver has the year offset hard-coded into the driver (yuck), to which it adds the time of year from the IRIG-B signal to get the correct GPS time.
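That reconstruction can be sketched as follows. The function name is hypothetical, and leap seconds are deliberately ignored, so the result is off from true GPS time by the accumulated leap-second count; this is only meant to show how a hard-coded year plus IRIG-B time-of-year combine:

```python
from datetime import datetime, timedelta

GPS_EPOCH = datetime(1980, 1, 6)  # GPS time zero (UTC midnight)

def gps_from_irigb(year, day_of_year, hh, mm, ss):
    """Sketch: hard-coded year + IRIG-B time-of-year -> GPS seconds.
    Leap seconds are ignored, so this is a few tens of seconds off."""
    t = datetime(year, 1, 1) + timedelta(days=day_of_year - 1,
                                         hours=hh, minutes=mm, seconds=ss)
    return int((t - GPS_EPOCH).total_seconds())

# Day 162, 18:57:35 of 2016 (the time-of-year from the date command above):
print(gps_from_irigb(2016, 162, 18, 57, 35))
# Sanity check: the GPS epoch itself maps to zero
print(gps_from_irigb(1980, 6, 0, 0, 0))  # 0
```

With a wrong day-of-year from the receiver (299 instead of 162), this mapping lands months in the future, which matches the bogus GPS time the driver reports.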
However, loading the symmetricom module shows the following:
...
[ 1601.607403] Spectracom GPS card on bus 1; device 0
[ 1601.607408] TSYNC PIC BASE 0 address = fb500000
[ 1601.607429] Remapped 0xffffc90017012000
[ 1606.606164] TSYNC NOT receiving YEAR info, defaulting to by year patch
[ 1606.606168] date = 299 days 18:28:1161455320
[ 1606.606169] bcd time = 1161455320 sec 959 milliseconds 398 microseconds 959398630 nanosec
[ 1606.606171] Board sync = 1
[ 1606.616076] TSYNC NOT receiving YEAR info, defaulting to by year patch
[ 1606.616079] date = 299 days 18:28:1161455320
[ 1606.616080] bcd time = 1161455320 sec 969 milliseconds 331 microseconds 969331350 nanosec
[ 1606.616081] Board sync = 1
controls@fb1:~ 0$
Apparently the symmetricom driver thinks it's the 299th day of the year, which corresponds to some time in October, consistent with the GPS time the driver is spitting out.
Rolf then noticed that the timing module in the VME crate in the adjacent rack, which also receives an IRIG-B signal from the distribution box, was also showing day 299 on its front panel display. We checked and confirmed that the symmetricom card and the VME timing module both agree on the wrong time of year, strongly suggesting that the GPS receiver is outputting bogus data on its IRIG-B output, even though it's showing the correct time on its front panel. We played around with settings in the GPS receiver to no avail. Finally we rebooted the GPS receiver, but it seemed to come up with the same bogus IRIG-B output (again, both the symmetricom driver and the VME timing module agree on the wrong day).
So maybe our GPS receiver is busted? Not sure what to try now.
|
12167
|
Fri Jun 10 12:21:54 2016 |
jamie | Configuration | CDS | GPS receiver not resetting properly |
The GPS receiver (EndRun Technologies box in 1Y5? (rack closest to the door)) does not seem to be coming back up properly after the reboot. The front panel says that it's "LKD", but the "sync" LED is flashing instead of solid, and the time of year displayed on the front panel is showing day 6. The fb1 symmetricom driver and VME timing module are still both seeing day 299, though. So something may definitely be screwy with the GPS receiver. |
12182
|
Wed Jun 15 10:19:10 2016 |
Steve | Configuration | CDS | GPS antennas... debugging |
Visual inspection of rooftop GPS antennae:
The 2 GPS antennas are located on the northwest corner of the CES roof. They look well weathered. I'd be surprised if they work.
The BNC connector of the 1553 head is inside the conduit, so it is likely to have a better connection than the other one.
I have not had a chance to look into the "GPS Time Server" unit. |
Attachment 1: GPStimeServer.jpg
|
|
Attachment 2: GPSsn.jpg
|
|
Attachment 3: GPSantennas.jpg
|
|
Attachment 4: GPSantHead.jpg
|
|
Attachment 5: GPSantHead2.jpg
|
|
Attachment 6: GPSantCabels.jpg
|
|
12200
|
Mon Jun 20 11:11:20 2016 |
Steve | Configuration | CDS | GPS receiver not resetting properly |
I called EndRun Technologies (manual: https://www.endruntechnologies.com/pdf/USM3014-0000-000.pdf) and they said it very likely just needs a software update. They will email Jamie the details.
Quote: |
The GPS receiver (EndRun Technologies box in 1Y5? (rack closest to the door)) does not seem to be coming back up properly after the reboot. The front panel says that it's "LKD", but the "sync" LED is flashing instead of solid, and the time of year displayed on the front panel is showing day 6. The fb1 symmetricom driver and VME timing module are still both seeing day 299, though. So something may definitely be screwy with the GPS receiver.
|
|
12201
|
Mon Jun 20 11:19:41 2016 |
jamie | Configuration | CDS | GPS receiver not resetting properly |
I got the email from them. There was apparently a bug that manifested on February 14, 2016. I'll try to do the software update today.
http://endruntechnologies.com/pdf/FSB160218.pdf
http://endruntechnologies.com/upgradetemplx.htm |
12202
|
Mon Jun 20 14:03:04 2016 |
jamie | Configuration | CDS | EndRun GPS receiver upgraded, fixed |
I just upgraded the EndRun Technologies Tempus LX GPS receiver timing unit, and it seems to have fixed all the problems. 
Thanks to Steve for getting the info from EndRun. There was indeed a bug in the firmware that was fixed with a firmware upgrade.
I upgraded both the system firmware and the firmware of the GPS subsystem:
Tempus LX GPS(root@Tempus:~)-> gntpversion
Tempus LX GPS 6010-0044-000 v 5.70 - Wed Oct 1 04:28:34 UTC 2014
Tempus LX GPS(root@Tempus:~)-> gpsversion
F/W 5.10 FPGA 0416
Tempus LX GPS(root@Tempus:~)->
After reboot the system is fully functional, displaying the correct time, and outputting the correct IRIG-B data, as confirmed by the VME timing unit.
I added a wiki page for the unit: https://wiki-40m.ligo.caltech.edu/NTP
Steve added this picture |
Attachment 1: GPSreceiverWorking.jpg
|
|
12376
|
Thu Aug 4 17:57:09 2016 |
Koji | Configuration | General | Don't restart apache2 - nodus /etc/apache2/sites-available/* accidentally deleted |
Late-coming elog about the deletion of the apache config files
Thu Aug 4 8:50ish 2016
Please don't restart apache2
I accidentally deleted four files in /etc/apache2/sites-available/ on nodus. The deleted files were
elog nodus public_html svn
I believe public_html is not used as it is not linked from /etc/apache2/sites-enabled
They are the web server config files and need to be reconfigured manually. We have no backup.
Currently all the web services are running as they were. However, once apache2 is restarted, we'll lose the services.
|
12558
|
Thu Oct 13 14:49:57 2016 |
Koji | Configuration | PEM | XLR(F)-XLR(M) cable taken from the fibox for the Blue microphone |
[Gautam Koji]
The XLR(F)-XLR(M) cable for the Blue microphone is missing. Steve ordered one.
We found one in the fibox setup. As the fibox is not used during the vent, we are using this cable for the microphone.
Once we get the new one, it will go back to the fibox setup.
|
12598
|
Thu Nov 3 16:30:42 2016 |
Lydia | Configuration | SUS | ETMX to coil matrix expanded |
[ericq, lydia]
Background:
We believe the optimal OSEM damping would use an input matrix diagonalized to the free swing modes of the optic, and an output matrix which drives the coils appropriately to damp these free swing modes. As was discovered, a free swinging optic does not necessarily have eigenmodes that match up perfectly with pitch and yaw. However, in the current state the "TO_COIL" output matrix that determines the drive signals in response to the diagonalized sensor output also controls the drive signals for the oplevs, LSC/ASC, and alignment biases. So attempts to diagonalize the output matrix to agree with the input matrix have resulted in problems elsewhere (see previous elog). We therefore want to expand the "TO_COIL" matrices to treat the OSEM sensor inputs separately from the others.
Today:
- We modified the ETMX suspension model (c1scx) to use a modified copy of the sus_single_control block (sus_single_control_mod) that has 3 additional input columns. These are for the sensing modes determined by the input matrix, and are labeled "MODAL POS", "MODAL PIT", and "MODAL YAW."
- The regular POS, PIT, and YAW columns no longer include the diagonalized OSEM sensor signals for ETMX.
- The suspension screen is now out of date; it doesn't show the new columns under Output Filters, and the summed values displayed for each damping loop do not include the OSEM damping.
- The new matrix can be accessed at /opt/rtcds/caltech/c1/medm/c1scx/C1SUS_ETMX_TO_COIL.adl (see Attachment 1). For now, it has the naive values in the new columns, so the damping behavior is the same.
- In trying to get a properly generated MEDM screen for the larger matrix, we discovered that the Simulink block for TO_COIL specifies in its description a custom template for the MEDM autogeneration. We made a new version of that template with extra columns and new labels, which can be reused for the other suspensions. These templates are in /opt/rtcds/userapps/release/sus/c1/medm/templates; the new one is SUS_TO_COIL_MTRX_EXTRA.adl.
- I will be setting the new column values to ones that represent the diagonalized free swing modes given by the input matrix. Hopefully this will improve OSEM damping without getting in the way of anything else. If this works well, the other SUS models can be changed the same way.
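The idea behind the expansion can be sketched numerically: with the naive values copied into the new modal columns, the expanded matrix reproduces the old coil drive exactly, so nothing changes until the modal columns are retuned. The matrix values below are illustrative only, not the real ETMX settings:

```python
import numpy as np

# Illustrative 4-coil output matrix: columns POS, PIT, YAW (naive values).
old_to_coil = np.array([[1.0,  1.0,  1.0],   # UL
                        [1.0,  1.0, -1.0],   # UR
                        [1.0, -1.0,  1.0],   # LL
                        [1.0, -1.0, -1.0]])  # LR

# Expanded matrix: three extra MODAL POS/PIT/YAW columns for the
# diagonalized OSEM signals, seeded with the same naive values.
new_to_coil = np.hstack([old_to_coil, old_to_coil])

# Before the change, OSEM damping entered through the first three columns;
# now it enters only through the modal columns.
osem = np.array([0.2, -0.1, 0.05])   # diagonalized OSEM damping signals
other = np.array([0.0, 0.3, 0.0])    # oplev/LSC/ASC/bias drives
old_drive = old_to_coil @ (other + osem)
new_drive = new_to_coil @ np.concatenate([other, osem])
print(np.allclose(old_drive, new_drive))  # True
```

Retuning then means replacing only the three modal columns with the diagonalized free-swing modes, leaving the oplev/LSC/ASC/bias columns untouched.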
|
Attachment 1: 01.png
|
|
12721
|
Mon Jan 16 12:49:06 2017 |
rana | Configuration | Computers | Megatron update |
The "apt-get update" was failing on some machines because it couldn't find the 'Debian squeeze' repos, so I made some changes so that Megatron could be upgraded.
I think Jamie set this up for us a long time ago, but now the LSC has stopped supporting these versions of the software. We're running Ubuntu12, and 'squeeze' is meant to support Ubuntu10. Ubuntu12 (which is what LLO is running) corresponds to 'Debian wheezy', Ubuntu14 to 'Debian jessie', and Ubuntu16 to 'Debian stretch'.
We should consider upgrading a few of our workstations to Ubuntu 14 LTS to see how painful it is to run our scripts and DTT and DV. Better to upgrade a bit before we are forced to by circumstance.
I followed the instructions from software.ligo.org (https://wiki.ligo.org/DASWG/DebianWheezy) and put the recommended lines into the /etc/apt/sources.list.d/lsc-debian.list file.
but I still got 1 error (previously there were ~7 errors):
W: Failed to fetch http://software.ligo.org/lscsoft/debian/dists/wheezy/Release Unable to find expected entry 'contrib/binary-i386/Packages' in Release file (Wrong sources.list entry or malformed file)
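The squeeze-to-wheezy edit on each workstation amounts to a one-line sed. A sketch, demonstrated on a temporary copy so it runs anywhere (the repo line is illustrative; on a real workstation the target would be /etc/apt/sources.list.d/lsc-debian.list, edited with sudo, followed by apt-get update):

```shell
#!/bin/sh
# Switch an apt sources file from squeeze to wheezy, demoed on a temp copy.
f=$(mktemp)
echo "deb http://software.ligo.org/lscsoft/debian squeeze contrib" > "$f"
sed -i 's/squeeze/wheezy/g' "$f"
content=$(cat "$f")
echo "$content"
rm -f "$f"
```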
Restarting now to see if things work. If it's OK, we ought to change our squeeze lines into wheezy for all workstations so that our LSC software can be upgraded. |
12724
|
Mon Jan 16 22:03:30 2017 |
jamie | Configuration | Computers | Megatron update |
Quote: |
We should consider upgrading a few of our workstations to Ubuntu 14 LTS to see how painful it is to run our scripts and DTT and DV. Better to upgrade a bit before we are forced to by circumstance.
|
I would recommend upgrading the workstations to one of the reference operating systems, either SL7 or Debian squeeze, since that's what the sites are moving towards. If you do that you can just install all the control room software from the supported repos, and not worry about having to compile things from source anymore. |
12825
|
Mon Feb 13 17:19:41 2017 |
yinzi | Configuration | | configuring ethernet for raspberry pi |
Gautam and I were able to get the Raspberry Pi up and running today, including being able to ssh into it from the control room.
Below are some details about the setup/procedure that might be helpful to anyone trying to establish Ethernet connection for a new RPi, or a new operating system/SD card.
Here is the physical setup:


The changes that need to be made for a new Raspbian OS in order to communicate with it over ssh are as follows, with links to tutorials on how to do them:
1. Edit dhcpcd.conf file: https://www.modmypi.com/blog/how-to-give-your-raspberry-pi-a-static-ip-address-update
2. Edit interfaces file: https://www.mathworks.com/help/supportpkg/raspberrypi/ug/getting-the-raspberry_pi-ip-address.html
3. Enable ssh server: http://www.instructables.com/id/Use-ssh-to-talk-with-your-Raspberry-Pi/
The specific addresses for the RPi we set up today are:
IP Address: 192.168.113.107
Gateway/Routers/Domain Name Servers: 192.168.113.2
Netmask: 255.255.255.0
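For reference, a dhcpcd.conf static-IP stanza using these addresses might look like the following (the interface name eth0 is an assumption; check the actual name on the RPi):

```
# /etc/dhcpcd.conf -- static-IP stanza (interface name assumed)
interface eth0
static ip_address=192.168.113.107/24
static routers=192.168.113.2
static domain_name_servers=192.168.113.2
```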
GV: I looked through /etc/var/bind/martian.hosts on chiara and decided to recycle the IP address for Domenica.martian as no RPis are plugged in right now... I'm also removing some of the attachments that seem to have been uploaded multiple times. |
12878
|
Thu Mar 9 20:38:19 2017 |
rana | Configuration | IOO | MC lock acquisition settings changed; no more HOM locks |
The MC was sort of misaligned. It was locking on some vertical HOMs. So I locked it and aligned the suspensions to the input beam (not great; we should really align the input beam to the centered spots on the MC mirrors).
With the HOMs reduced I looked at the MC servo board gains which Gautam has been fiddling with. It seems that since the mod depth change we're getting a lot more HOM locks. You can recognize this by seeing the longish stretches on the strip tool where FSS-FAST is going rail-to-rail at 0.03 Hz for many minutes. This is where the MC is locked on a HOM, but the autolocker still thinks it's unlocked and so is driving the MC2 position at 0.03 Hz to find the TEM00 mode.
I lowered the input gain and the VCO gain in the mcdown script and now it very rarely locks on a HOM. The UGF in this state is ~3-4 kHz (I estimate), so it's just enough to lock, but no more. I tested it by intentionally unlocking ~15 times. It seems robust. It still ramps up to a UGF of ~150 kHz as always. 'mcdown' committed to SVN. |
12935
|
Mon Apr 10 15:22:46 2017 |
rana | Configuration | Wiki | DokuWikis are back up |
AIC Wiki updated to latest stable version of DokuWiki: 2017-02-19b "Frusterick Manners" + CAPTCHA + Upgrade + Gallery PlugIns |
12943
|
Thu Apr 13 21:01:20 2017 |
rana | Configuration | Computers | LG UltraWide on Rossa |
We installed a new curved 34" doublewide monitor on Rossa, but it seems to have a defective dead-pixel region. Unless it heals itself by morning, we should return it to Amazon. Please don't throw out the packing materials.
Steve, 8am next morning: it is still bad. The monitor is cracked; it got kicked while traveling. Its box is damaged in the same place.
Shipped back 4-17-2017 |
Attachment 1: LG34c.jpg
|
|
Attachment 2: crack.jpg
|
|
12965
|
Wed May 3 16:12:36 2017 |
johannes | Configuration | Computers | catastrophic multiple monitor failures |
It seems we lost three monitors basically overnight.
The main (landscape, left) displays of Pianosa, Rossa and Allegra are all broken with the same failure mode:
their backlights failed. Gautam and I confirmed that there is still an image displayed on all three, just incredibly faint. While Allegra hasn't been used much, we can narrow down that Pianosa's and Rossa's monitors must have failed within 5 or 6 hours of each other, last night.
One could say ... they turned to the dark side 

Quick edit: There was a functioning Dell 24" monitor next to the iMac that we used as a replacement for Pianosa's primary display. Once the new curved display is paired with Rossa, we can use its old display for Donatella or Allegra. |
12966
|
Wed May 3 16:46:18 2017 |
Koji | Configuration | Computers | catastrophic multiple monitor failures |
- Is there any machine that can handle 4K? I have one 4K LCD for no use.
- I also can donate one 24" Dell |
12971
|
Thu May 4 09:52:43 2017 |
rana | Configuration | Computers | catastrophic multiple monitor failures |
That's a new failure mode. Probably we can't trust the power to be safe anymore.
Need Steve to order a couple of surge suppressing power strips for the monitors. The computers are already on the UPS, so they don't need it. |
12978
|
Tue May 9 15:23:12 2017 |
Steve | Configuration | Computers | catastrophic multiple monitor failures |
Gautam and Steve,
A surge-protective power strip was installed on Friday, May 5, in the Control Room.
Computers not connected to the UPS are plugged into Isobar12ultra.
Quote: |
That's a new failure mode. Probably we can't trust the power to be safe anymore.
Need Steve to order a couple of surge suppressing power strips for the monitors. The computers are already on the UPS, so they don't need it.
|
|
Attachment 1: Trip-Lite.jpg
|
|
12993
|
Mon May 15 20:43:25 2017 |
rana | Configuration | Computers | catastrophic multiple monitor failures |
This is not the right one; this Ethernet-controlled strip is what we want in the racks for remote control.
Buy some of these for the MONITORS.
Quote: |
A surge-protective power strip was installed on Friday, May 5, in the Control Room.
Computers not connected to the UPS are plugged into Isobar12ultra.
Quote: |
That's a new failure mode. Probably we can't trust the power to be safe anymore.
Need Steve to order a couple of surge suppressing power strips for the monitors. The computers are already on the UPS, so they don't need it.
|
|
|
13009
|
Tue May 23 18:09:18 2017 |
Kaustubh | Configuration | General | Testing ET-3010 PD |
In continuation of the previous (ET-3040 PD) test:
The ET-3010 PD needs to be fiber-coupled for optimal use. I will try to test this model without the fiber coupling tomorrow and see whether it works or not. |
13070
|
Fri Jun 16 18:21:40 2017 |
jigyasa | Configuration | Cameras | GigE camera IP |
One of the additional GigE cameras has been IP configured for use and installation.
Static IP assigned to the camera- 192.168.113.152
Subnet mask- 255.255.255.0
Gateway- 192.168.113.2
|
13160
|
Wed Aug 2 15:04:15 2017 |
gautam | Configuration | Computers | control room workstation power distribution |
The 4 control room workstation CPUs (Rossa, Pianosa, Donatella and Allegra) are now connected to the UPS.
The 5 monitors are connected to the recently acquired surge-protecting power strips.
Rack-mountable power strip + spare APC Surge Arrest power strip have been stored in the electronics cabinet.
Quote: |
this is not the right one; this Ethernet controlled strip we want in the racks for remote control.
Buy some of these for the MONITORS.
|
|
13440
|
Tue Nov 21 17:51:01 2017 |
Koji | Configuration | Computers | nodus post OS migration admin |
The post-OS-migration admin notes for nodus, about apache, elogd, svn, iptables, etc., can be found at https://wiki-40m.ligo.caltech.edu/NodusUpgradeNov2017
Update: The svn dump from the old svn was done, and it was imported to the new svn repository structure. Now the svn command line and the (simple) web interface are running. "websvn" was also implemented. |
13442
|
Tue Nov 21 23:47:51 2017 |
gautam | Configuration | Computers | nodus post OS migration admin |
I restored the nodus crontab (copied over from the Nov 17 backup of the same at /opt/rtcds/caltech/c1/scripts/crontab/crontab_nodus.20171117080001). There wasn't a crontab, so I made one using sudo crontab -e.
This crontab is supposed to execute some backup scripts, send pizza emails, check chiara disk usage, and backup the crontab itself.
I've commented out the backup of nodus' /etc and /export for now, while we get back to fully operational nodus (though we also have a backup of /cvs/cds/caltech/nodus_backup on the external LaCie drive), they can be re-enabled by un-commenting the appropriate lines in the crontab.
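As an illustration only (the real entries differ), a daily self-backup line consistent with the 080001 timestamp in the backup filename above might look like this; the exact time and command are assumptions, and only the directory path is taken from the entry:

```
# Hypothetical entry: daily 8:00 self-backup of the crontab. The real
# nodus crontab has more entries (backups, pizza email, chiara disk check).
0 8 * * * /usr/bin/crontab -l > /opt/rtcds/caltech/c1/scripts/crontab/crontab_nodus.`date +\%Y\%m\%d\%H\%M\%S`
```

Note the escaped % characters: cron treats a bare % as a newline, so date format strings in crontab lines must use \%.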
Quote: |
The post-OS-migration admin notes for nodus, about apache, elogd, svn, iptables, etc., can be found at https://wiki-40m.ligo.caltech.edu/NodusUpgradeNov2017
Update: The svn dump from the old svn was done, and it was imported to the new svn repository structure. Now the svn command line and (simple) web interface is running. "websvn" is not installed.
|
|
13445
|
Wed Nov 22 11:51:38 2017 |
gautam | Configuration | Computers | nodus post OS migration admin |
Confirmed that this crontab is running - the daily backup of the crontab seems to have successfully executed, and there is now a file crontab_nodus.ligo.caltech.edu.20171122080001 in the directory quoted below. The $HOSTNAME seems to be "nodus.ligo.caltech.edu" whereas it was just "nodus", so the file names are a bit longer now, but I guess that's fine...
Quote: |
I restored the nodus crontab (copied over from the Nov 17 backup of the same at /opt/rtcds/caltech/c1/scripts/crontab/crontab_nodus.20171117080001. There wasn't a crontab, so I made one using sudo crontab -e.
This crontab is supposed to execute some backup scripts, send pizza emails, check chiara disk usage, and backup the crontab itself.
I've commented out the backup of nodus' /etc and /export for now, while we get back to fully operational nodus (though we also have a backup of /cvs/cds/caltech/nodus_backup on the external LaCie drive), they can be re-enabled by un-commenting the appropriate lines in the crontab.
|
|
13461
|
Sun Dec 3 05:25:59 2017 |
gautam | Configuration | Computers | sendmail installed on nodus |
Pizza mail didn't go out last weekend - looking at the logfile, it seems like the "sendmail" service was missing. I installed sendmail following the instructions here: https://tecadmin.net/install-sendmail-server-on-centos-rhel-server/
Except that to start the sendmail service, I used systemctl and not init.d. i.e. I ran systemctl start sendmail.service (as root). Test email to myself works. Let's see if it works this weekend. Of course this isn't so critical, more important are the maintenance emails that may need to go out (e.g. disk usage alert on chiara / N2 pressure check, which looks like nodus' responsibilities). |
13462
|
Sun Dec 3 17:01:08 2017 |
Koji | Configuration | Computers | sendmail installed on nodus |
An email has come at 5PM on Dec 3rd.
|
13504
|
Fri Jan 5 17:50:47 2018 |
rana | Configuration | Computers | motif on nodus |
I had to do 'sudo yum install motif' on nodus to get libXm.so.4, which MEDM needs to run. Works now. |
13505
|
Fri Jan 5 19:19:25 2018 |
rana | Configuration | SEI | Barry Controls 'air puck' instead of 'VOPO style' breadboard |
We've been thinking about putting in a blade spring / wire based aluminum breadboard on top of the ETM & ITM stacks to get an extra factor of 10 in seismic attenuation.
Today Koji and I wondered about whether we could instead put something on the outside of the chambers. We have frozen the STACIS system because it produces a lot of excess noise below 1 Hz while isolating in the 5-50 Hz band.
But there is a small gap between the STACIS and the blue crossbeams that attach to the beams that go into the vacuum to support the stack. One possibility is to put a small compliant piece in there to give us some isolation in the 10-30 Hz band, where we are using up a lot of the control range. The SLM series mounts from Barry Controls seem to do the trick. Depending on the load, we can get a 3-4 Hz resonant frequency.
Steve, can you please figure out how to measure what the vertical load is on each of the STACIS? |
Attachment 1: mm_slm.jpg
|
|
Attachment 2: Screen_Shot_2018-01-05_at_7.25.47_PM.png
|
|
13524
|
Wed Jan 10 14:17:57 2018 |
johannes | Configuration | Computer Scripts / Programs | autoburt no longer making backups |
I was looking into setting up autoburt for the new c1auxex2 and found that it stopped making automatic backups for all machines after the beginning of the new year. There is no 2018 folder (it was the only one missing) in /opt/rtcds/caltech/c1/burt/autoburt/snapshots and the /latest/ link in /opt/rtcds/caltech/c1/burt/autoburt/ leads to the last backup of 2017 on 12/31/17 at 23:19.
The autoburt log file shows that the backup script last ran today, 01/10/18 at 14:19, as it should have, but it doesn't show any errors and ends with "You are at the 40m".
I'm not familiar with the autoburt scheduling using cronjobs. If I'm not mistaken, the relevant cronjob file is /cvs/cds/rtcds/caltech/c1/scripts/autoburt/autoburt.cron, which executes /cvs/cds/rtcds/caltech/c1/scripts/autoburt/autoburt.pl
I've never used perl, but there's the following statement where the directory for the new backup is established:
$yearpath = $autoburtpath."/snapshots/".$thisyear;
# print "scanning for path $yearpath\n";
if (!-e $yearpath) {
    die "ERROR: Year directory $yearpath does not exist\n";
}
I manually created the /opt/rtcds/caltech/c1/burt/autoburt/snapshots/2018/ directory. Maybe this fixes the hiccup? Gotta wait about 30 minutes. |
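The perl `die` branch above could instead just create the missing year directory. An equivalent guard in Python (an illustrative sketch of the logic, not the actual autoburt code):

```python
import os

def ensure_year_dir(autoburt_path, year):
    """Return the snapshots/<year> directory, creating it if it doesn't exist,
    instead of dying like the perl script's die branch does."""
    yearpath = os.path.join(autoburt_path, "snapshots", str(year))
    os.makedirs(yearpath, exist_ok=True)  # no-op if the directory already exists
    return yearpath
```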
13525
|
Wed Jan 10 15:25:43 2018 |
johannes | Configuration | Computer Scripts / Programs | autoburt making backups again |
Quote: |
I manually created the /opt/rtcds/caltech/c1/burt/autoburt/snapshots/2018/ directory. Maybe this fixes the hiccup? Gotta wait about 30 minutes.
|
It worked. The first backup of the year is now from Wednesday, 01/10/18 at 15:19. Ten days of automatic backups are missing. Up until 2204, the year folders had been pre-emptively created, so why was 2018 missing?
gautam: this is a bit suspect still - the snapshot file for c1auxex, at least, seemed to be too light on channels recorded. This was before any c1auxex switching. To be investigated. |
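The "too light on channels" concern can be checked mechanically. A rough sketch of a channel counter for a burt .snap file (it assumes the usual "--- Start/End BURT header" markers and treats every remaining non-blank line as one channel; a heuristic sanity check, not a real parser):

```python
def count_burt_channels(snapshot_path):
    """Count channel lines in a burt .snap file, skipping the header block
    and blank lines. Useful for comparing snapshot sizes across days."""
    n = 0
    in_header = False
    with open(snapshot_path) as f:
        for line in f:
            s = line.strip()
            if s.startswith("--- Start BURT header"):
                in_header = True
                continue
            if s.startswith("--- End BURT header"):
                in_header = False
                continue
            if s and not in_header:
                n += 1
    return n
```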
13526
|
Wed Jan 10 16:27:02 2018 |
Steve | Configuration | SEI | load cell for weight measurement |
We could use similar load cells to make the actual weight measurement on the STACIS legs. This seems practical in our case.
I have had bad experience with pneumatic Barry isolators.
Our approximate max compression loads are 1500 lbs on 2 feet and 2500 lbs on the 3rd one.
Quote: |
We've been thinking about putting in a blade spring / wire based aluminum breadboard on top of the ETM & ITM stacks to get an extra factor of 10 in seismic attenuation.
Today Koji and I wondered about whether we could instead put something on the outside of the chambers. We have frozen the STACIS system because it produces a lot of excess noise below 1 Hz while isolating in the 5-50 Hz band.
But there is a small gap between the STACIS and the blue crossbeams that attach to the beams that go into the vacuum to support the stack. One possibility is to put a small compliant piece in there to give us some isolation in the 10-30 Hz band, where we are using up a lot of the control range. The SLM series mounts from Barry Controls seem to do the trick. Depending on the load, we can get a 3-4 Hz resonant frequency.
Steve, can you please figure out how to measure what the vertical load is on each of the STACIS?
|
|
Attachment 1: stacis3LoadCells.png
|
|
13539
|
Fri Jan 12 12:31:04 2018 |
gautam | Configuration | Computers | sendmail troubles on nodus |
I'm having trouble getting the sendmail service going on nodus since the Christmas day power failure - for some reason, it seems like the mail server that sendmail uses to send out emails on nodus (mx1.caltech.iphmx.com, IP=68.232.148.132) is on a blacklist! Not sure how exactly to go about remedying this.
Running sudo systemctl status sendmail.service -l also shows a bunch of suspicious lines:
Jan 12 10:15:27 nodus.ligo.caltech.edu sendmail[6958]: STARTTLS=client, relay=cluster6a.us.messagelabs.com., version=TLSv1/SSLv3, verify=FAIL, cipher=DHE-RSA-AES256-GCM-SHA384, bits=256/256
Jan 12 10:15:45 nodus.ligo.caltech.edu sendmail[6958]: w0A7QThE032091: to=<umakant.rapol@iiserpune.ac.in>, ctladdr=<controls@nodus.ligo.caltech.edu> (1001/1001), delay=2+10:49:16, xdelay=00:00:39, mailer=esmtp, pri=5432408, relay=cluster6a.us.messagelabs.com. [216.82.251.230], dsn=4.0.0, stat=Deferred: 421 Service Temporarily Unavailable
Jan 12 11:15:23 nodus.ligo.caltech.edu sendmail[10334]: STARTTLS=client, relay=cluster6a.us.messagelabs.com., version=TLSv1/SSLv3, verify=FAIL, cipher=DHE-RSA-AES256-GCM-SHA384, bits=256/256
Jan 12 11:15:31 nodus.ligo.caltech.edu sendmail[10334]: w0A7QThE032091: to=<umakant.rapol@iiserpune.ac.in>, ctladdr=<controls@nodus.ligo.caltech.edu> (1001/1001), delay=2+11:49:02, xdelay=00:00:27, mailer=esmtp, pri=5522408, relay=cluster6a.us.messagelabs.com. [216.82.251.230], dsn=4.0.0, stat=Deferred: 421 Service Temporarily Unavailable
Jan 12 12:15:25 nodus.ligo.caltech.edu sendmail[13747]: STARTTLS=client, relay=cluster6a.us.messagelabs.com., version=TLSv1/SSLv3, verify=FAIL, cipher=DHE-RSA-AES256-GCM-SHA384, bits=256/256
Jan 12 12:15:42 nodus.ligo.caltech.edu sendmail[13747]: w0A7QThE032091: to=<umakant.rapol@iiserpune.ac.in>, ctladdr=<controls@nodus.ligo.caltech.edu> (1001/1001), delay=2+12:49:13, xdelay=00:00:33, mailer=esmtp, pri=5612408, relay=cluster6a.us.messagelabs.com. [216.82.251.230], dsn=4.0.0, stat=Deferred: 421 Service Temporarily Unavailable
Why is nodus attempting to email umakant.rapol@iiserpune.ac.in? |
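The stuck recipients can be pulled out of such log output mechanically. A small sketch based on the log format shown above (the regex keys on the to=<...> field and the stat=Deferred status):

```python
import re

# Matches the recipient address in a sendmail log line whose delivery was deferred
DEFER_RE = re.compile(r"to=<([^>]+)>.*stat=Deferred")

def deferred_recipients(log_lines):
    """Return the recipient address from each 'stat=Deferred' sendmail log line."""
    out = []
    for line in log_lines:
        m = DEFER_RE.search(line)
        if m:
            out.append(m.group(1))
    return out
```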
13540
|
Fri Jan 12 16:01:27 2018 |
Koji | Configuration | Computers | sendmail troubles on nodus |
I personally don't like the idea of having sendmail (or something similar, like postfix) on a personal server, as it incurs a lot of maintenance cost (security updates, configuration, etc.). If we can use an external mail service (like gmail) via the gmail API in python, that would ease our worries, I thought. |
13542
|
Fri Jan 12 18:22:09 2018 |
gautam | Configuration | Computers | sendmail troubles on nodus |
Okay I will port awade's python mailer stuff for this purpose.
gautam 14Jan2018 1730: Python mailer has been implemented: see here for the files. On shared drive, the files are at /opt/rtcds/caltech/c1/scripts/general/pizza/pythonMailer/
gautam 11Feb2018 1730: The python mailer had never once worked successfully in automatically sending the message. I realized this may be because I had put the script on the root user's crontab, but had set up the authentication keyring with the mailer's password as the controls user. So I have now set up a controls user crontab, which for now just runs the pizza mailing. Let's see if this works next Sunday...
Quote: |
I personally don't like the idea of having sendmail (or something similar, like postfix) on a personal server, as it incurs a lot of maintenance cost (security updates, configuration, etc.). If we can use an external mail service (like gmail) via the gmail API in python, that would ease our worries, I thought.
|
|
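A minimal sketch of an external-SMTP mailer along these lines (the addresses and subject are hypothetical, and this is not the actual pythonMailer code; as the entry above notes, the credential must be retrievable by the same user whose crontab runs the script):

```python
import smtplib
from email.mime.text import MIMEText

def build_message(body, sender, recipient, subject="Pizza reminder"):
    """Assemble the notification email (hypothetical addresses/subject)."""
    msg = MIMEText(body)
    msg["Subject"] = subject
    msg["From"] = sender
    msg["To"] = recipient
    return msg

def send_via_gmail(msg, password):
    """Hand the message to Gmail's SMTP server over SSL. The password would
    come from a keyring set up for the user running the cron job."""
    with smtplib.SMTP_SSL("smtp.gmail.com", 465) as server:
        server.login(msg["From"], password)
        server.send_message(msg)
```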
13545
|
Sat Jan 13 02:36:51 2018 |
rana | Configuration | Computers | sendmail troubles on nodus |
I think sendmail is required on nodus since that's how the dokuwiki works. That's why the dokuwiki was trying to send an email to Umakant. |
13546
|
Sat Jan 13 03:20:55 2018 |
Koji | Configuration | Computers | sendmail troubles on nodus |
I know it, and I don't like it. DokuWiki seems to allow us to use an external server for notification emails. That would be the way to go. |
13555
|
Wed Jan 17 23:36:12 2018 |
johannes | Configuration | General | AS port laser injection |
Status of the AS-port auxiliary laser injection
- Auxiliary laser with AOM setup exists, first order diffracted beam is coupled into fiber that leads to the AS table.
- There is a post-PMC picked-off beam available that is currently just dumped (see picture). I want to use it for a beat note with the auxiliary laser pre-AOM so we can phaselock the lasers and then fast-switch the phaselocked light on and off.
- I was going to use the ET3010 PD for the beat note unless someone else has plans for it.
- I obtained a fixed triple-aspheric-lens collimator which is supposed to have a very small M^2 value for the collimation on the AS table. I still have the PSL-lab beam profiler and will measure its output mode.
- Second attached picture shows the space on the AS table that we have for mode-matching into the IFO. Need to figure out the desired mode and how to merge the beams best.
|
Attachment 1: PSLbeat.svg.png
|
|
Attachment 2: ASpath.svg.png
|
|
13570
|
Tue Jan 23 16:02:05 2018 |
Steve | Configuration | SEI | load cells |
1500 and 2000 lbs load cells arrived from MIT to measure the vertical loads on each leg.
Quote: |
We've been thinking about putting in a blade spring / wire based aluminum breadboard on top of the ETM & ITM stacks to get an extra factor of 10 in seismic attenuation.
Today Koji and I wondered about whether we could instead put something on the outside of the chambers. We have frozen the STACIS system because it produces a lot of excess noise below 1 Hz while isolating in the 5-50 Hz band.
But there is a small gap between the STACIS and the blue crossbeams that attach to the beams that go into the vacuum to support the stack. One possibility is to put a small compliant piece in there to give us some isolation in the 10-30 Hz band, where we are using up a lot of the control range. The SLM series mounts from Barry Controls seem to do the trick. Depending on the load, we can get a 3-4 Hz resonant frequency.
Steve, can you please figure out how to measure what the vertical load is on each of the STACIS?
|
|
Attachment 1: stacis3LoadCells.png
|
|
Attachment 2: DSC00025.JPG
|
|
Attachment 3: DSC00026.JPG
|
|
13681
|
Tue Mar 13 20:03:16 2018 |
johannes | Configuration | Computers | c1auxex replacement |
I assembled the rack-mount server that will long-term replace c1auxex, so we can return the borrowed unit to Larry.
SUPERMICRO SYS-5017A-EP Specs:
- Intel Atom N2800 (2 cores, 1.8GHz, 1MB, 64-bit)
- 4GB (2x2GB) DDR3 RAM
- 128 GB SSD

I installed a standard Debian Jessie distribution, with the LXDE option for minimal resource usage. Steps taken after the fresh install:
- Give controls sudo permission: usermod -aG sudo controls
- mkdir /cvs/cds
- apt-get install nfs-common
- Added line "chiara:/home/cds /cvs/cds nfs rw,bg,nfsvers=3" to end of /etc/fstab
- Configured network adapter in /etc/network/interfaces
iface eth0 inet static
address 192.168.113.48
netmask 255.255.255.0
gateway 192.168.113.2
dns-nameservers 192.168.113.104 131.215.125.1 131.215.139.100
dns-search martian
I first assigned it the IP 192.168.113.59 of the original c1auxex, but for some reason my ssh connections kept failing mid-session. After I switched to a different IP, the disruptions no longer happened.
- Add lines "search martian" and "nameserver 192.168.113.104" to /etc/resolv.conf
- apt-get install openssh-server
At this point the unit was ready for remote connections on the martian network, and I moved it to the XEND.
- Added lines to /home/controls/.bashrc to set paths and environment variables:
export PATH=/cvs/cds/rtapps/epics-3.14.12.2_long/base/bin/linux-x86_64:/cvs/cds/rtapps/epics-3.14.12.2_long/extensions/bin/linux-x86_64:$PATH
export HOST_ARCH=linux-x86_64
export EPICS_HOST_ARCH=linux-x86_64
export RPN_DEFNS=~/.defns.rpn
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/cvs/cds/rtapps/epics-3.14.12.2_long/base/lib/linux-x86_64:/cvs/cds/rtapps/epics-3.14.12.2_long/modules/modbus/lib/linux-x86_64/:/cvs/cds/rtapps/epics-3.14.12.2_long/modules/asyn/lib/linux-x86_64
- apt-get install libmotif-common libmotif4 libxp6 (required to run the burtwb utility)
The server is ready to take over for c1auxex2 and does not need a locally compiled EPICS, since it can run the 3.14.12.2_long binaries in /cvs/cds. |
Attachment 1: IMG_20180313_105154890.jpg
|
|
Attachment 2: IMG_20180313_133031002.jpg
|
|