ID | Date | Author | Type | Category | Subject
15872
|
Fri Mar 5 17:48:25 2021 |
Jon | Update | CDS | Front-end testing | Today I moved the c1bhd machine from the control room to a new test area set up behind (west of) the 1X6 rack. The test stand is pictured in Attachment 1. I assembled one of the new IO chassis and connected it to the host.
I/O Chassis Assembly
- LIGO-style 24V feedthrough replaced with an ATX 650W switching power supply
- Timing slave installed
- Contec DO-1616L-PE card installed for timing control
- One 16-bit ADC and one 32-channel DO module were installed for testing
The chassis was then powered on, and LEDs illuminated indicating that all the components have power. The assembled chassis is pictured in Attachment 2.
Chassis-Host Communications Testing
Following the procedure outlined in T1900700, the system failed the very first test of the communications link between chassis and host, which is to check that all PCIe cards installed in both the host and the expansion chassis are detected. The Dolphin host adapter card is detected:
07:06.0 PCI bridge: Stargen Inc. Device 0102 (rev 02) (prog-if 00 [Normal decode])
Flags: bus master, fast devsel, latency 0
Bus: primary=07, secondary=0e, subordinate=0e, sec-latency=0
I/O behind bridge: 00002000-00002fff
Prefetchable memory behind bridge: 00000000c0200000-00000000c03fffff
Capabilities: [40] Power Management version 2
Capabilities: [50] MSI: Enable- Count=1/1 Maskable- 64bit+
Capabilities: [60] Express Downstream Port (Slot+), MSI 00
Capabilities: [80] Subsystem: Device 0000:0000
Kernel driver in use: pcieport
However, the OSS PCIe adapter card linking the host to the IO chassis was not detected, nor were any of the cards in the expansion chassis. Gautam previously reported that the OSS card was not detected by the host (though it was not connected to the chassis then). Even now, with the card connected to the IO chassis, it is still not detected. On the chassis-side OSS card, there is a red LED illuminated indicating "HOST CARD RESET", as pictured in Attachment 3. This may indicate a problem with the card on the host side. Still more debugging to be done. |
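For anyone retracing this check, the detection test boils down to looking for the expected bridges in the lspci output; something like the following (illustrative search strings only, not the exact commands from T1900700):
$ lspci -nn | grep -i -e stargen -e dolphin -e plx -e "one stop"   # host adapter and chassis-side bridges
$ echo 1 | sudo tee /sys/bus/pci/rescan   # force a PCI bus rescan after reseating/power-cycling the chassis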
Attachment 1: image_67203585.JPG
|
|
Attachment 2: image_67216641.JPG
|
|
Attachment 3: image_17185537.JPG
|
|
15890
|
Tue Mar 9 16:52:47 2021 |
Jon | Update | CDS | Front-end testing | Today I continued with assembly and testing of the new front-ends. The main progress is that the IO chassis is now communicating with the host, resolving the previously reported issue.
Hardware Issues to be Resolved
Unfortunately, though, it turns out one of the two (host-side) One Stop Systems PCIe cards sent from Hanford is bad. After some investigation, I ultimately resolved the problem by swapping in the second card, with no other changes. I'll try to procure another from Keith Thorne, along with some spares.
Also, two of the three switching power supplies sent from Livingston (250W Channel Well PSG400P-89) appear to be incompatible with the Trenton BPX6806 PCIe backplanes in these chassis. The power supply cable has 20 conductors and the connector on the board has 24. The third supply, a 650W Antec EA-650, does have the correct cable and is currently powering one of the IO chassis. I'll confirm this situation with Keith and see whether they have any more Antecs. If not, I think these supplies can still be bought (not obsolete).
I've gone through all the hardware we've received and checked it against the procurement spreadsheet. There are still some missing items:
- 18-bit DACs (Qty 14; but 7 are spares)
- ADC adapter boards (Qty 5)
- DAC adapter boards (Qty 9)
- 32-channel DO modules (Qty 2/10 in hand)
Testing Progress
Once the PCIe communications link between host and IO chassis was working, I carried out the testing procedure outlined in T1900700. This performs a series of checks to confirm basic operation/compatibility of the hardware and PCIe drivers. All of the cards installed in both the host and the expansion chassis are detected and appear correctly configured, according to T1900700. In the tree below, there is one ADC, one 16-ch DIO, one 32-ch DO, and one DolphinDX card:
+-05.0-[05-20]----00.0-[06-20]--+-00.0-[07-08]----00.0-[08]----00.0 Contec Co., Ltd Device 86e2
| +-01.0-[09]--
| +-03.0-[0a]--
| +-08.0-[0b-15]----00.0-[0c-15]--+-02.0-[0d]--
| | +-03.0-[0e]--
| | +-04.0-[0f]--
| | +-06.0-[10-11]----00.0-[11]----04.0 PLX Technology, Inc. PCI9056 32-bit 66MHz PCI <-> IOBus Bridge
| | +-07.0-[12]--
| | +-08.0-[13]--
| | +-0a.0-[14]--
| | \-0b.0-[15]--
| \-09.0-[16-20]----00.0-[17-20]--+-02.0-[18]--
| +-03.0-[19]--
| +-04.0-[1a]--
| +-06.0-[1b]--
| +-07.0-[1c]--
| +-08.0-[1d]--
| +-0a.0-[1e-1f]----00.0-[1f]----00.0 Contec Co., Ltd Device 8632
| \-0b.0-[20]--
\-08.0-[21-2a]--+-00.0 Stargen Inc. Device 0101
\-00.1-[22-2a]--+-00.0-[23]--
+-01.0-[24]--
+-02.0-[25]--
+-03.0-[26]--
+-04.0-[27]--
+-05.0-[28]--
+-06.0-[29]--
\-07.0-[2a]--
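For reference, a tree like the one above is the output of the standard pciutils tree view; the exact invocation used here is my assumption:
$ lspci -tv   # PCIe topology as a tree; -v adds vendor/device names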
Standalone Subnet
Before I start building/testing RTCDS models, I'd like to move the new front ends to an isolated subnet. This is guaranteed to prevent any contention with the current system, or inadvertent changes to it.
Today I set up another of the Supermicro servers sent by Livingston in the 1X6 test stand area. The intention is for this machine to run a cloned, bootable image of the current fb1 system, allowing it to function as a bootserver and DAQ server for the FEs on the subnet.
However, the hard disk containing the fb1 image appears to be corrupted and will not boot. It seems to have been sitting disconnected in a box since ~2018, which is not a stable way to store data long term. I wasn't immediately able to recover the disk using fsck. I could spend some more time trying, but it might be most time-effective to just make a new clone of the fb1 system as it is now. |
Attachment 1: image_72192707.JPG
|
|
15924
|
Tue Mar 16 16:27:22 2021 |
Jon | Update | CDS | Front-end testing | Some progress today towards setting up an isolated subnet for testing the new front-ends. I was able to recover the fb1 backup disk using the Rescatux disk-rescue utility and successfully booted an fb1 clone on the subnet. This machine will function as the boot server and DAQ server for the front-ends under test. (None of these machines are connected to the Martian network or, currently, even the outside Internet.)
Despite the success with the framebuilder, front-ends cannot yet be booted locally because we are still missing the DHCP and FTP servers required for network booting. On the Martian net, these processes are running not on fb1 but on chiara. And to be able to compile and run models later in the testing, we will need the contents of the /opt/rtcds directory also hosted on chiara.
For these reasons, I think it will be easiest to create another clone of chiara to run on the subnet. There is a backup disk of chiara and I attempted to boot it on one of the LLO front-ends, but without success. The repair tool I used to recover the fb1 disk does not find a problem with the chiara disk. However, the chiara disk is an external USB drive, so I suspect there could be a compatibility problem with these old (~2010) machines. Some of them don't even recognize USB keyboards pre-boot-up. I may try booting the USB drive from a newer computer.
Edit: I removed one of the new, unused Supermicros from the 1Y2 rack and set it up in the test stand. This newer machine is able to boot the chiara USB disk without issue. Next time I'll continue with the networking setup. |
15925
|
Tue Mar 16 19:04:20 2021 |
gautam | Update | CDS | Front-end testing | Now that I think about it, I may only have backed up the root file system of chiara, and not /home/cds/ (symlinked to /opt/ over NFS). I think we never revived the rsync backup to LDAS after the FB fiasco of 2017, else that'd have been the most convenient way to get files. So you may have to resort to some other technique (e.g. configure the second network interface of the chiara clone to be on the martian network and copy over files to the local disk, and then disconnect the chiara clone from the martian network (if we really want to keep this test stand completely isolated from the existing CDS network) - the /home/cds/ directory is rather large IIRC, but with 2TB on the FB clone, you may be able to get everything needed to get the rtcds system working). It may then be necessary to hook up a separate disk to write frames to if you want to test that part of the system out.
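A rough sketch of what that copy could look like, if one goes the second-interface route (addresses and paths below are placeholders, not a tested recipe):
$ sudo ip addr add 192.168.113.200/24 dev eth1   # placeholder address -- pick an unused one on the martian net
$ sudo ip link set eth1 up
$ rsync -aH controls@chiara:/home/cds/rtcds/ /home/cds/rtcds/
$ rsync -aH controls@chiara:/home/cds/rtapps/ /home/cds/rtapps/
$ sudo ip link set eth1 down   # re-isolate the test stand when done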
Good to hear the backup disk was able to boot though!
Quote: |
And to be able to compile and run models later in the testing, we will need the contents of the /opt/rtcds directory also hosted on chiara.
For these reasons, I think it will be easiest to create another clone of chiara to run on the subnet. There is a backup disk of chiara and I attempted to boot it on one of the LLO front-ends, but without success.
|
|
15947
|
Fri Mar 19 18:14:56 2021 |
Jon | Update | CDS | Front-end testing | Summary
Today I finished setting up the subnet for new FE testing. There are clones of both fb1 and chiara running on this subnet (pictured in Attachment 2), which are able to boot FEs completely independently of the Martian network. I then assembled a second FE system (Supermicro host and IO chassis) to serve as c1sus2, using a new OSS host adapter card received yesterday from LLO. I ran the same set of PCIe hardware/driver tests as was done on the c1bhd system in 15890. All the PCIe tests pass.
Subnet setup
For future reference, below is the procedure used to configure the bootserver subnet.
- Select "Network" as highest boot priority in FE BIOS settings
- Connect all machines to subnet switch. Verify fb1 and chiara eth0 interfaces are enabled and assigned correct IP address.
- Add c1bhd and c1sus2 entries to chiara:/etc/dhcp/dhcpd.conf:
host c1bhd {
hardware ethernet 00:25:90:05:AB:46;
fixed-address 192.168.113.91;
}
host c1sus2 {
hardware ethernet 00:25:90:06:69:C2;
fixed-address 192.168.113.92;
}
- Restart DHCP server to pick up changes:
$ sudo service isc-dhcp-server restart
- Add c1bhd and c1sus2 entries to fb1:/etc/hosts:
192.168.113.91 c1bhd
192.168.113.92 c1sus2
- Power on the FEs. If all was configured correctly, the machines will boot.
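A few sanity checks of the sort that confirm the above worked (illustrative commands; the syslog path assumes a stock Debian install):
$ sudo service isc-dhcp-server status          # on chiara: DHCP server running?
$ grep -i dhcp /var/log/syslog | tail -n 20    # look for DISCOVER/OFFER from the FE MAC addresses
$ ping -c 3 c1bhd                              # on fb1: names from /etc/hosts resolve and the FEs respond
$ ping -c 3 c1sus2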
C1SUS2 I/O chassis assembly
- Installed in host:
- DolphinDX host adapter
- One Stop Systems PCIe x4 host adapter (new card sent from LLO)
- Installed in chassis:
- Channel Well 250 W power supply (replaces aLIGO-style 24 V feedthrough)
- Timing slave
- Contec DIO-1616L-PE module for timing control
Next time, on to RTCDS model compilation and testing. This will require first obtaining a clone of the /opt/rtcds disk hosted on chiara. |
Attachment 1: image_72192707_(1).JPG
|
|
Attachment 2: image_50412545.JPG
|
|
15959
|
Wed Mar 24 19:02:21 2021 |
Jon | Update | CDS | Front-end testing | This evening I prepared a new 2 TB 3.5" disk to hold a copy of /opt/rtcds and /opt/rtapps from chiara. This is the final piece of setup before model compilation can be tested on the new front-ends. However chiara does not appear to support hot-swapping of disks, as the disk is not recognized when connected to the live machine. I will await confirmation before rebooting it. The new disk is not currently connected. |
15976
|
Mon Mar 29 17:55:50 2021 |
Jon | Update | CDS | Front-end testing | Cloning of chiara:/home/cvs underway
I returned today with a beefier USB-SATA adapter, which has an integrated 12 V supply for powering 3.5" disks. I used this to interface a new 6 TB 3.5" disk found in the FE supplies cabinet.
I decided to go with a larger disk and copy the full contents of chiara:/home/cds. Strictly speaking, the FEs only need the RTS executables in /home/cvs/rtcds and /home/cvs/rtapps. However, to independently develop models, the shared matlab binaries in /home/cvs/caltech/... also need to be exposed. And there may be others I've missed.
I began the clone around 12:30 pm today. To preserve bandwidth to the main disk, I am copying not the /home/cds disk directly, but rather its backup image at /media/40mBackup .
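For the record, the copy itself would look something like this (device name, mount point, and rsync options are my guesses, not the exact command used):
$ sudo mount /dev/sdX1 /mnt/new6TB                                  # /dev/sdX1 = the new 6 TB disk (placeholder)
$ sudo rsync -aHAX --info=progress2 /media/40mBackup/ /mnt/new6TB/  # copy from the backup image, not the live disk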
Set up of dedicated SimPlant host
Although not directly related to the FE testing, today I added a new machine to the test stand which will be dedicated to running sim models. Chris has developed a virtual cymac which we plan to run on this machine. It will provide a dedicated testbed for SimPlant and other development, and can host up to 10 user models.
I used one of the spare 12-core Supermicro servers from LLO, which I have named c1sim. I assigned it the IP address 192.168.113.93 on the Martian network. This machine will run in a self-contained way that will not depend on any 40m CDS services and also should not interfere with them. However, if there are concerns about having it present on the network, it can be moved to the outside-facing switch in the office area. It is not currently running any RTCDS processes.
Set-up was carried out via the following procedure:
- Installed Debian 10.9 on an internal 480 GB SSD.
- Installed cdssoft repos following Jamie's instructions.
- Installed RTS and Docker dependencies:
$ sudo apt install cpuset advligorts-mbuf-dkms advligorts-gpstime-dkms docker.io docker-compose
- Configured scheduler for real-time operation:
$ sudo /sbin/sysctl kernel.sched_rt_runtime_us=-1
- Reserved 10 cores for RTS user models (plus one for IOP model) by adding the following line to /etc/default/grub:
GRUB_CMDLINE_LINUX_DEFAULT="isolcpus=nohz,domain,1-11 nohz_full=1-11 tsc=reliable mce=off"
followed by the commands:
$ sudo update-grub
$ sudo reboot now
- Downloaded virtual cymac repo to /home/controls/docker-cymac.
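After the reboot, the CPU isolation from the grub step above can be checked with the standard kernel interfaces (listed here as a sketch):
$ cat /proc/cmdline                        # should contain the isolcpus=... and nohz_full=1-11 options
$ cat /sys/devices/system/cpu/isolated     # expect 1-11
$ cat /sys/devices/system/cpu/nohz_full    # expect 1-11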
I need to talk to Chris before I can take the setup further. |
15979
|
Tue Mar 30 18:21:34 2021 |
Jon | Update | CDS | Front-end testing | Progress today:
Outside Internet access for FE test stand
This morning Jordan and I ran an 85-foot Cat 6 Ethernet cable from the campus network switch in the office area (on the ligo.caltech.edu domain) to the FE test stand near 1X6. This is to allow the test-stand subnet to be accessed for remote testing, while keeping it invisible to the parallel Martian subnet.
Successful RTCDS model compilation on new FEs
The clone of the chiara:/home/cds disk completed overnight. Today I installed the disk in the chiara clone. The NFS mounts (/opt/rtcds, /opt/rtapps ) shared with the other test-stand machines mounted without issue.
Next, I attempted to open the shared Matlab executable (/cvs/cds/caltech/apps/linux64/matlab/bin/matlab) and launch Simulink. The existing Matlab license (/cvs/cds/caltech/apps/linux64/matlab/licenses/license_chiara_865865_R2015b.lic) did not work on this new machine, as licenses are machine-specific, so I updated the license file. I linked this license to my personal license, so that the machine license for the real chiara would not get replaced. The original license file is saved in the same directory with a *.bak postfix. If this disk is ever used in the real chiara machine, this file should be restored. After the machine license was updated, Matlab and Simulink loaded and allowed model editing.
Finally, I tested RTCDS model compilation on the new FEs using the c1lsc model as a trial case. It encountered one path issue due to the model being located at /opt/rtcds/userapps/release/isc/c1/models/isc/ instead of /opt/rtcds/userapps/release/isc/c1/models/ . This seems to be a relic of the migration of the 40m models from the SVN to a standalone git repo. This was resolved by simply symlinking to the expected location:
$ sudo ln -s /opt/rtcds/userapps/release/isc/c1/models/isc/c1lsc.mdl /opt/rtcds/userapps/release/isc/c1/models/c1lsc.mdl
The model compilation then succeeded:
controls@c1bhd$ cd /opt/rtcds/caltech/c1/rtbuild/release
controls@c1bhd$ make clean-c1lsc
Cleaning c1lsc...
Done
controls@c1bhd$ make c1lsc
Cleaning c1lsc...
Done
Parsing the model c1lsc...
Done
Building EPICS sequencers...
Done
Building front-end Linux kernel module c1lsc...
make[1]: Warning: File 'GNUmakefile' has modification time 28830 s in the future
make[1]: warning: Clock skew detected. Your build may be incomplete.
Done
RCG source code directory:
/opt/rtcds/rtscore/branches/branch-3.4
The following files were used for this build:
/opt/rtcds/caltech/c1/userapps/release/cds/common/src/cdsToggle.c
/opt/rtcds/userapps/release/cds/c1/src/inmtrxparse.c
/opt/rtcds/userapps/release/cds/common/models/FILTBANK_MASK.mdl
/opt/rtcds/userapps/release/cds/common/models/rtbitget.mdl
/opt/rtcds/userapps/release/cds/common/models/SCHMITTTRIGGER.mdl
/opt/rtcds/userapps/release/cds/common/models/SQRT_SWITCH.mdl
/opt/rtcds/userapps/release/cds/common/src/DB2MAG.c
/opt/rtcds/userapps/release/cds/common/src/OSC_WITH_CONTROL.c
/opt/rtcds/userapps/release/cds/common/src/wait.c
/opt/rtcds/userapps/release/isc/c1/models/c1lsc.mdl
/opt/rtcds/userapps/release/isc/c1/models/IQLOCK_WHITENING_TRIGGERING.mdl
/opt/rtcds/userapps/release/isc/c1/models/PHASEROT.mdl
/opt/rtcds/userapps/release/isc/c1/models/RF_PD_WITH_WHITENING_TRIGGERING.mdl
/opt/rtcds/userapps/release/isc/c1/models/UGF_SERVO_40m.mdl
/opt/rtcds/userapps/release/isc/common/models/FILTBANK_TRIGGER.mdl
/opt/rtcds/userapps/release/isc/common/models/LSC_TRIGGER.mdl
Successfully compiled c1lsc
***********************************************
Compile Warnings, found in c1lsc_warnings.log:
***********************************************
[warnings suppressed]
As did the installation:
controls@c1bhd$ make install-c1lsc
Installing system=c1lsc site=caltech ifo=C1,c1
Installing /opt/rtcds/caltech/c1/chans/C1LSC.txt
Installing /opt/rtcds/caltech/c1/target/c1lsc/c1lscepics
Installing /opt/rtcds/caltech/c1/target/c1lsc
Installing start and stop scripts
/opt/rtcds/caltech/c1/scripts/killc1lsc
/opt/rtcds/caltech/c1/scripts/startc1lsc
Performing install-daq
Updating testpoint.par config file
/opt/rtcds/caltech/c1/target/gds/param/testpoint.par
/opt/rtcds/rtscore/branches/branch-3.4/src/epics/util/updateTestpointPar.pl
-par_file=/opt/rtcds/caltech/c1/target/gds/param/archive/testpoint_210330_170634.par
-gds_node=42 -site_letter=C -system=c1lsc -host=c1lsc
Installing GDS node 42 configuration file
/opt/rtcds/caltech/c1/target/gds/param/tpchn_c1lsc.par
Installing auto-generated DAQ configuration file
/opt/rtcds/caltech/c1/chans/daq/C1LSC.ini
Installing Epics MEDM screens
Running post-build script
safe.snap exists
We are ready to start building and testing models. |
2772
|
Mon Apr 5 13:52:45 2010 |
Alberto | Update | Computers | Front-ends down. Rebooted | This morning, at about 12, Koji found all the front-ends down.
At 1:45pm rebooted ISCEX, ISCEY, SOSVME, SUSVME1, SUSVME2, LSC, ASC, ISCAUX
Then I burt-restored ISCEX, ISCEY, ISCAUX to April 2nd, 23:07.
The front-ends are now up and running again. |
2376
|
Thu Dec 10 08:40:12 2009 |
Alberto | Update | Computers | Front-ends down | I found all the front-ends, except for C1SUSVME1 and C0DCU1, down this morning. DAQAWG shows up green on the C0DAQ_DETAIL screen but it is in a "bad" status.
I'll go for a big boot fest. |
2378
|
Thu Dec 10 08:50:33 2009 |
Alberto | Update | Computers | Front-ends down |
Quote: |
I found all the front-ends, except for C1SUSVME1 and C0DCU1, down this morning. DAQAWG shows up green on the C0DAQ_DETAIL screen but it is in a "bad" status.
I'll go for a big boot fest.
|
Since I wanted to single out the faulting system when these situations occur, I tried to reboot the computers one by one.
1) I reset the RFM Network by pushing the reset button on the bypass switch on the 1Y7 rack. Then I tried to bring C1SOSVME up by power-cycling and restarting it as in the procedure in the wiki. I repeated this a second time but it didn't work. At some point in the restarting process I got the error message "No response from EPICS".
2) I also tried rebooting only C1DCUEPICS but it didn't work: I kept having the same response when restarting C1SOSVME
3) I tried to reboot C0DAQCTRL and C1DCU1 by power cycling their crate; power-cycled and restarted C1SOSVME. Nada. Same response from C1SOSVME.
4) I restarted the framebuilder; power-cycled and restarted C1SOSVME. Nothing. Same response from C1SOSVME.
5) I restarted the framebuilder, then rebooted C0DAQCTRL and C1DCU, then power-cycled and restarted C1SOSVME. Niente. Same response from C1SOSVME.
Then I did the so-called "Nuclear Option", the only solution that so far has proven to work in these circumstances. I executed the steps in the order they are listed, waiting for each step to be completed before moving on to the next one.
0) Switch off: the frame builder, the C0DAQCTRL and C1DCU crate, C1DCUEPICS
1) turn on the frame builder
2) reset the RFM Network switch on 1Y7 (although it's not clear whether this step is really necessary, it costs nothing)
3) turn on C1DCUEPICS
4) turn on the C0DAQCTRL and C1DCU crate
5) power-cycle and restart the single front-ends
6) burt-restore all the snapshots
When I tried to restart C1SOSVME by power-cycling it, I still got the same response: "No response from EPICS". But after I reset C1SUSVME1 and C1SUSVME2, I was able to restart C1SOSVME.
It turned out that while I was checking the efficacy of the steps of the Grand Reboot to single out the crucial one, I was getting fooled by C1SOSVME's status. C1SOSVME was stuck, hanging on C1SUSVME1 and C1SUSVME2.
So the Nuclear Option is still unproven as the only working procedure. It might not be necessary.
Maybe resetting BOTH RFM switches, the one in 1Y7 and the one in 1Y6, would be sufficient. Or maybe just power-cycling the C0DAQCTRL and C1DCU1 is sufficient. This has to be confirmed next time we run into the same problem. |
2382
|
Thu Dec 10 10:01:16 2009 |
Jenne | Update | Computers | Front-ends down | All the front ends are back up.
Quote: |
Quote: |
I found all the front-ends, except for C1SUSVME1 and C0DCU1, down this morning. DAQAWG shows up green on the C0DAQ_DETAIL screen but it is in a "bad" status.
I'll go for a big boot fest.
|
Since I wanted to understand once and for all what the faulting system is when these situations occur, I tried to reboot the computers one by one.
1) I reset the RFM Network by pushing the reset button on the bypass switch on the 1Y7 rack. Then I tried to bring C1SOSVME up by power-cycling and restarting it as in the procedure in the wiki. I repeated a second time but it didn't work. At some point of the restarting process I get the error message "No response from EPICS".
2) I also tried rebooting only C1DCUEPICS but it didn't work: I kept having the same response when restarting C1SOSVME
3) I tried to reboot C0DAQCTRL and C1DCU1 by power cycling their crate; power-cycled and restarted C1SOSVME. Nada. Same response from C1SOSVME.
4) I restarted the framebuilder; power-cycled and restarted C1SOSVME. Nothing. Same response from C1SOSVME.
5) I restarted the framebuilder, then rebooted C0DAQCTRL and C1DCU, then power-cycled and restarted C1SOSVME. Niente. Same response from C1SOSVME.
The following is the so called "Nuclear Option", the only solution that so far has proven to work in these circumstances. Execute the following steps in the order they are listed, waiting for each step to be completed before passing to the next one.
0) Switch off: the frame builder, the C0DAQCTRL and C1DCU crate, C1DCUEPICS
1) turn on the frame builder
2) reset of the RFM Network switch on 1Y7 (although, it's not sure whether this step is really necessary; but it's costless)
3) turn on C1DCUEPICS
4) turn on the C0DAQCTRL and C1DCU crate
One other possibility remains to be explored to avoid the Nuclear Option. And that is to just try to reset both RFM Network switches: the one in 1Y7 and the one in 1Y6.
|
|
2383
|
Thu Dec 10 10:31:18 2009 |
Jenne | Update | Computers | Front-ends down |
Quote: |
All the front ends are back up.
Quote: |
Quote: |
I found all the front-ends, except for C1SUSVME1 and C0DCU1, down this morning. DAQAWG shows up green on the C0DAQ_DETAIL screen but it is in a "bad" status.
I'll go for a big boot fest.
|
Since I wanted to understand once and for all what the faulting system is when these situations occur, I tried to reboot the computers one by one.
1) I reset the RFM Network by pushing the reset button on the bypass switch on the 1Y7 rack. Then I tried to bring C1SOSVME up by power-cycling and restarting it as in the procedure in the wiki. I repeated a second time but it didn't work. At some point of the restarting process I get the error message "No response from EPICS".
2) I also tried rebooting only C1DCUEPICS but it didn't work: I kept having the same response when restarting C1SOSVME
3) I tried to reboot C0DAQCTRL and C1DCU1 by power cycling their crate; power-cycled and restarted C1SOSVME. Nada. Same response from C1SOSVME.
4) I restarted the framebuilder; power-cycled and restarted C1SOSVME. Nothing. Same response from C1SOSVME.
5) I restarted the framebuilder, then rebooted C0DAQCTRL and C1DCU, then power-cycled and restarted C1SOSVME. Niente. Same response from C1SOSVME.
The following is the so called "Nuclear Option", the only solution that so far has proven to work in these circumstances. Execute the following steps in the order they are listed, waiting for each step to be completed before passing to the next one.
0) Switch off: the frame builder, the C0DAQCTRL and C1DCU crate, C1DCUEPICS
1) turn on the frame builder
2) reset of the RFM Network switch on 1Y7 (although, it's not sure whether this step is really necessary; but it's costless)
3) turn on C1DCUEPICS
4) turn on the C0DAQCTRL and C1DCU crate
One other possibility remains to be explored to avoid the Nuclear Option. And that is to just try to reset both RFM Network switches: the one in 1Y7 and the one in 1Y6.
|
|
I burtrestored all the snapshots to Dec 9 2009 at 18:00. |
16336
|
Thu Sep 16 01:16:48 2021 |
Koji | Update | General | Frozen 2 | It happened again. Defrosting required. |
Attachment 1: P_20210916_003406_1.jpg
|
|
10756
|
Thu Dec 4 23:45:30 2014 |
Jenne | Update | CDS | Frozen? | [Jenne, Q, Diego]
I don't know why, but everything in EPICS-land froze for a few minutes just now. It also happened yesterday, but I was bad and didn't elog it.
Anyhow, the arms stayed locked (on IR) for the whole time it was frozen, so the fast things must have still been working. We didn't see anything funny going on on the frame builder, although that shouldn't have much to do with the EPICS service. The seismic rainbow on the wall went to zeros during the freeze, although the MC and PSL strip charts are still fine.
After a few minutes, while we were still trying to think of things to check, things went back to normal. We're going to just keep locking for now.... |
3782
|
Tue Oct 26 01:53:21 2010 |
Joonho Lee | Update | Electronics | Function Generator removed. | Today I worked on how to measure cable impedance directly.
In order to measure the impedance in RF range, I used a function generator which could generate 50MHz signal and was initially connected to the table on the right of the decks.
The reason I am checking the cables is to replace the cables with impedance of 50 or 52 ohm by ones with impedance of 75 ohm.
After I figure out which cables do not have the proper impedance, I will make new cables and substitute them in order to match the impedance, which would lead to a better VIDEO signal.
To test the VIDEO cables, I need a function generator generating signal of frequency 50 MHz.
In the deck on the right of the PSL table, there was only one such generator, which was connected to the table on the right of the deck.
Therefore, I disconnected it from the cable and took it to the control room to use it because Rana said it was not used.
Then I tried to find out how to measure the impedance of a cable directly, but I did not finish yet.
When I finished today's work, I put the generator back on the deck, but I did not connect it to the cable that was initially connected to the generator.
Next time, I will finalize the practical method of measuring the cable impedance, and then I will measure the cables with unknown impedance.
Any suggestion would be appreciated. |
8908
|
Tue Jul 23 16:39:31 2013 |
Koji | Update | General | Full IFO alignment recovered | [Annalisa, Koji]
Full alignment of the IFO was recovered. The arms were locked with the green beams first, and then locked with the IR.
In order to use the ASS with lower power, C1:LSC-OUTPUT_MTRX_9_6 and C1:LSC-OUTPUT_MTRX_10_7 were reduced to 0.05.
This compensates for the gain imbalance between the TRX/Y signals and the A2L component in the arm feedback signals.
Although the IFO was aligned, we did not touch the OPLEVs or align the green beams to the vented IFO. |
Attachment 1: alignment.png
|
|
8912
|
Tue Jul 23 20:41:40 2013 |
gautam | Configuration | endtable upgrade | Full range calibration and installation of PZT-mounted mirrors | Given that the green beam is to be used as the reference during the vent, it was decided to first test the PZT mounted mirrors at the X-endtable rather than the Y-endtable as originally planned. Yesterday, I prepared a second PZT mounted mirror, completed the full range calibration, and with Manasa, installed the mirrors on the X-endtable as mentioned in this elog. The calibration constants have been determined to be (see attached plots for approximate range of actuation):
M1-pitch: 0.1106 mrad/V
M1-yaw: 0.143 mrad/V
M2-pitch: 0.197 mrad/V
M2-yaw: 0.27 mrad/V
Second 2-inch mirror glued to tip-tilt and mounted:
- The spot sizes on the steering mirrors at the X-end are fairly large, and so two 2-inch steering mirrors were required.
- The mirrors already glued to the PZTs were a CVI 2-inch and a Laseroptik 1-inch mirror.
- I prepared another Laseroptik 2-inch mirror (45 degree with HR and AR coatings for 532 nm) and glued it to a PZT mounted in a modified mount as before.
- Another important point regarding mounting the PZTs: there are two perforated rings (see attached picture) that run around the PZT about 1cm below the surface on which the mirror is to be glued. The PZT has to be pushed in through the mount till these are clear of the mount, or the actuation will not be as desired. In the first CVI 2-inch mirror, this was not the case, which probably explains the unexpectedly large pitch-yaw coupling that was observed during the calibration [Thanks Manasa for pointing this out].
Full range calibration of PZT:
Having prepared the two steering mirrors, I calibrated them for the full range of input voltages, to get a rough idea of whether the tilt varied linearly and also the range of actuation.
Methodology:
- The QPD setup described in my previous elogs was used for this calibration.
- The linear range of the QPD was gauged to be where the output voltage lay between -0.5V and 0.5V. The calibration constants are as determined during the QPD calibration, details of which are here.
- In order to keep the spot always in the linear range of the QPD, I started with an input signal of -10V or +10V (i.e. one extreme), and moved both the X and Y micrometers on the translational stage till both these coordinates were at one end of the linear range (i.e. -0.5V or 0.5V). I then increased the input voltage in steps of ~1V through the full range from -10V to +10V DC. The signal was applied using a SR function generator with the signal amplitude kept to 0, and a DC offset in the range -5V to 5V DC, which gave the desired input voltages to the PZT driver board (between -10V DC and 10V DC).
- When the output of the QPD amp reached the end of the linear regime (i.e 0.5V or -0.5V), I moved the appropriate micrometer dial on the translational stage to take it to the other end of the linear range, before continuing with the measurements. The distance moved was noted.
- Both the X and Y coordinates were noted in order to investigate pitch-yaw coupling.
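For context, the conversion from QPD readout to mirror tilt presumably follows the usual lever-arm relation (a sketch under the small-angle assumption; L is the mirror-to-QPD distance, which is not quoted here): the spot displacement is \Delta x = 2 L \alpha for a mirror tilt \alpha (the factor of 2 from reflection), so the calibration constant is \alpha / \Delta V = (\Delta x / \Delta V) / (2 L) in rad/V, with \Delta x obtained from the QPD voltage via the earlier QPD calibration.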
Analysis and remarks:
- The results of the calibration are presented in the plots below.
- Though the measurement technique was crude (and maybe flawed because of a possible z-displacement while moving the translational stage), the calibration was meant to be rough, and I think the results obtained are satisfactory.
- Fitting the data linearly is only an approximation, as there is evidence of hysteresis. Also, PZTs appear to have some drift, though I have not been able to quantify this (I did observe that the output of the QPD amp shifted by an amount equal to ~0.05mm while I left the setup standing for an hour or so).
- The range of actuation seems to be different for the two PZTs, and also for each degree of freedom, though the measured data is consistent with the minimum range given in the datasheet (3.5 mrad for input voltages in the range -20V to 120V DC).
PZT Calibration Plots
The circles are datapoints for the degree of freedom to which the input is applied, while the 'x's are for the other degree of freedom. Different colours correspond to data measured with the translational stage at different positions.
M1 Pitch M1 Yaw

M2 Pitch M2 Yaw

Installation of the mirrors at the X-endtable:
The calibrated mirrors were taken to the X-endtable for installation. The steering mirrors in place were swapped out for the PZT mounted pair. Manasa managed (after considerable tweaking) to mode-match the green beam to the cavity with the new steering mirror configuration. In order to fine tune the alignment, Koji moved ITMx and ETMx in pitch and yaw so as to maximise green TRX. We then got an idea of which way the input pointing had to be moved in order to maximise the green transmission.
|
Attachment 5: PI_S330.20L.pdf
|
|
8967
|
Mon Aug 5 18:48:44 2013 |
gautam | Configuration | endtable upgrade | Full range calibration of PZT mounted mirrors for Y-endtable | I had prepared two more PZT mounted mirrors for the Y-end some time back. These are:
- A 2-inch CVI mirror (45 degree, HR and AR for 532nm, was originally one of the steering mirrors at the X-endtable, and was removed while switching those out for the PZT mounted mirrors).
- A 1-inch Laseroptik mirror (45 degree, HR and AR for 532nm).
I used the same QPD set-up and the methodology described here to do a full-range calibration of these PZTs. Plots attached. The calibration constants have been determined to be:
CVI-pitch: 0.316 mrad/V
CVI-yaw: 0.4018 mrad/V
Laseroptik pitch: 0.2447 mrad/V
Laseroptik yaw: 0.2822 mrad/V
Remarks:
- These PZTs, like their X-end counterparts, showed evidence of drift and hysteresis. We just have to deal with this.
- One of the PZTs (the one on which the CVI mirror is mounted) is a used one. While testing it, I thought that its behaviour was a little anomalous, but the plots do not seem to suggest that anything is amiss.
Plots:
CVI YAW CVI PITCH

Laseroptik YAW Laseroptik PITCH
|
2279
|
Tue Nov 17 10:09:57 2009 |
josephb | Update | Environment | Fumes | The smell of diesel is particularly bad this morning. Its concentrated enough to be causing me a headache. I'm heading off to Millikan and will be working remotely on Megatron. |
6191
|
Thu Jan 12 11:08:23 2012 |
Leo Singer | Update | PEM | Funky spectrum from STS-2 | I am trying to stitch together spectra from seismometers and accelerometers to produce a ground motion spectrum from Hz to 100's of Hz. I was able to retrieve data from two seismometers, GUR1 and STS_1, but not from any of the accelerometers. The GUR1 spectrum is qualitatively similar to other plots that I have seen, but the STS_1 spectrum looks strange: the X axis spectrum is falling off as ~1/f, but the Y and Z spectra are pretty flat. All three axes have a few lines that they may share in common and that they may share with GUR1.
See attached plot. |
Attachment 1: spectrum.jpg
|
|
932
|
Fri Sep 5 09:56:14 2008 |
josephb, Eric | Configuration | Computers | Funny channels, reboots, and ethernet connections | 1) Apparently the IOO-ICS type channels had gotten into a funny state last night, where they were showing just noise, exactly when Rana changed the accelerometer gains and did major reboots. A power cycle of the c1ioo crate and appropriate restarts fixed this.
2) c1asc looks like it was down all night. When I walked out to look at the terminal, it claimed to be unable to read the input file from the command line I had entered the previous night ( < /cvs/cds/caltech/target/c1asc/startup.cmd). In addition we were unable to telnet in, suggesting an ethernet breakdown and inability to mount the appropriate files. So we have temporarily run a new cat6 cable from the c1asc board to the ITMX prosafe switch (since there's a nice knee high cable tray right there). One last power cycle and we were able to telnet in and get it running. |
687
|
Thu Jul 17 00:59:18 2008 |
Jenne | Summary | General | Funny signal coming out of VCO | While working on calibrating the MC_F signal, Rana and I noticed a funny signal coming out of the VCO. We expect the output to be a nice sine wave at about 80MHz. What we see is the 80MHz signal plus higher harmonics. The reason behind the craziness is to be determined. For now, here's what the signal looks like, in both time and frequency domains.
The first plot is a regular screen capture of a 'scope. The second is the output of the SR spectrum analyzer, as seen on a 'scope screen. The leftmost tall peak is the 80MHz peak, and the others are the harmonics. |
Attachment 1: VCOout_time.PNG
|
|
Attachment 2: VCOout_freq.PNG
|
|
9583
|
Tue Jan 28 22:24:46 2014 |
ericq | Update | General | Further Alignment | [Manasa, ericq]
Having no luck doing things remotely, we went into the ITMX chamber and roughly aligned the IR beam. Using the little sliding alignment target, we moved the BS to get the IR beam centered on ITMX, then moved ITMX to get good michelson fringes with ITMY. Using an IR card, found the retroflection and moved ETMX to make it overlap with the beam transmitted through the ITM. With the PRM flashing, X-arm cavity flashes could be seen. So, at that point, both the y-arm and x-arm were flashing low order modes. |
12107
|
Thu May 5 14:03:52 2016 |
ericq | Update | LSC | Further Aux X PDH tweaks | This morning I poked around with the green layout a bit. I found that the iris immediately preceding the viewport was clipping the ingoing green beam too much, opening it up allowed for better coupling to the arm. I also tweaked the positions of the mode matching lenses and did some alignment, and have since been able to achieve GTRX values of around 0.5.
I also removed the 20db attenuator after the mixer, and turned the servo gain way down and was able to lock easily. I then adjusted the gain while measuring the CLG, and set it where the maximum gain peaking was 6dB, which worked out to be a UGF of around 8kHz. On the input monitor, the PDH horn-to-horn voltage going into the VGA is 2.44V, which shouldn't saturate the G=4 preamp stage of the AD8336, which seems ok.
The ALS sensitivity is now approaching the good nominal state:

Some things remain to be done, including comprehensive dumping of all beams at the end table (especially the reflections off of the viewport) and the new filters to replace the current post-mixer LPF, but things look pretty good. |
Attachment 1: 2016-05-05_newals.pdf
|
|
4581
|
Thu Apr 28 12:25:11 2011 |
josephb | Update | CDS | Further adventures in Hyper-threading | First, I disabled front end starts on boot up, and brought c1sus up. I rebuilt the models for the c1sus computer so they had a new specific_cpu numbers, making the assumption that 0-1 were one real core, 2-3 were another, etc.
Then I ran the startc1SYS scripts one by one to bring up the models. Upon just loading the c1x02 on "core 2" (the IOP), I saw it fluctuate from about 5 to 12. After bringing up c1sus on "core 3", I saw the IOP settle down to about 7 consistently. Prior to hyper-threading it was generally 5.
Unfortunately, the c1sus model was between 60 and 70 microseconds, and was producing error messages a few times a second
[ 1052.876368] c1sus: cycle 14432 time 65; adcWait 0; write1 0; write2 0; longest write2 0
[ 1052.936698] c1sus: cycle 15421 time 74; adcWait 0; write1 0; write2 0; longest write2 0
Bringing up the rest of the models (c1mcs on 4, c1rfm on 5, and c1pem on 6), saw c1mcs occasionally jumping above the 60 microsecond line, perhaps once a minute. It was generally hovering around 45 microseconds. Prior to hyper-threading it was around 25-28 microseconds.
c1rfm was rock solid at 38, which it was prior to hyper-threading. This is most likely due to the fact it has almost no calculation and only RFM reads slowing it down.
c1pem continued to use negligible time, 3 microseconds out of its 480.
I tried moving c1sus to core 8 from core 3, which seemed to bring it to the 58 to 65 microsecond range, with long cycles every few seconds.
I built 5 dummy models (dua on 7, dub on 9, duc on 10, dud on 11, due on 1) to ensure that each virtual core had a model on it, to see if it helped with stabilizing things. The models were basically copies of the c1pem model.
Interestingly, c1mcs seemed to get somewhat better, only taking 30-32 microseconds, although still not as good as its pre-hyper-threading 25-28. Over the course of several minutes it was no longer having long cycles.
c1sus got worse again, and was running long cycles 4-5 times a second.
At this point, without surgery on which models are controlling which optics (i.e. splitting the c1sus model up) I am not able to have hyper-threading on and have things working. I am proceeding to revert the control models and c1sus computer to the non-hyper-threading state.
|
13741
|
Mon Apr 9 18:46:03 2018 |
gautam | Update | IOO | Further debugging |
- I analyzed the data from the free swinging MC test conducted over the weekend. Attachment #1 shows the spectra. Color scheme is same for all panels.
- I am suspicious of MC3: why does the LR coil see almost no Yaw motion?
- The "equilibrium" values of all the sensor signals (at the IN1 of the coil input filters) are within 20% of each other (for MC3, but also MC1 and MC2).
- The position resonance is also sensed more by the side coil than by the LR coil.
- To rule out satellite box shenanigans, I just switched the SRM and MC3 satellite boxes. But the coherence between frequency noise as sensed by the arms remains.
- I decided to clean up my IMC noise budget a bit more.
- Attachment #2 shows the NB as of today. I'll choose a better color palette for the next update.
- "Seismic" trace is estimated using the 40m gwinc file - the MC2 stack is probably different from the others and so it's contribution is probably more, but I think this will suffice for a first estimate.
- "RAM" trace is measured at the CM board input, with MC2 misaligned.
- The unaccounted noise is evident from above ~8 Hz.
- More noises will be added as they are measured.
- I am going to spend some time working on modeling the CM board noise and TF in LTspice. I tried getting a measurement of the transfer function from IN1 to the FAST output of the CM board with the SR785 (motivation being to add the contribution of the input referred CM board noise to the NB plot), but I suspect I screwed up something w.r.t. the excitation amplitude, as I am getting a totally nonsensical shape, which also seems to depend on my input excitation amplitude. I don't think the output is saturated (viewed during measurement on a scope), but perhaps there are some subtle effects going on.
|
Attachment 1: MC_Freeswinging.pdf
|
|
Attachment 2: IMC_NB_20180409.pdf
|
|
13744
|
Tue Apr 10 14:28:44 2018 |
gautam | Update | IOO | Further debugging | I am working on IMC electronics. IMC is misaligned until further notice. |
2654
|
Thu Mar 4 02:25:14 2010 |
Jenne | Update | COC | Further details on the magnet story, and SRM guiderod glued | [Koji, Jenne]
First, the easy story: SRM got its guiderod & standoff glued on this evening. It will be ready for magnets (assuming everything is sorted out....see below) as early as tomorrow. We can also begin to glue PRM guiderods as early as tomorrow.
The magnet story is not as short.....
Problem: ITMX and ITMY's side magnets are not glued in the correct places along the z-axis of the optic (z-axis as in beam propagation direction).
ITMX (as reported the other day) has the side magnet placement off by ~2mm. ITMX side was glued using the magnet fixture from MIT and the teflon pads that Kiwamu and I improvised.
It was determined that the improvised teflon pads were too thin (maybe about 1mm thick), so I took those out, and replaced them with the teflon pads stolen from the 40m's magnet gluing fixture. (The teflon pad from the MIT fixture and the ones from the 40m fixture are the same within my measuring ability, using a flat surface and feeling for a step between them. I haven't yet measured the MIT pad thickness with calipers.) The pads from the 40m fixture, which were used in the MIT fixture to glue the ITMY side last night, were measured to be ~1.7mm thick.
Today when Koji hung ITMY, he discovered that the side magnet is off by ~1mm. This improvement is consistent with the switching of the teflon pads to the ones from the 40m fixture.
We compared the 40m fixture with the one from MIT, and it looks like the distance from the edge of where the optic should sit to the center of the hole for the side magnet is different by ~1.1mm. This explains the remaining ~1mm that ITMY is off by.
We should put the teflon pads back into the 40m fixture, and only use that one from now on, unless we find an easy way to make thicker teflon pads for the fixture we received from MIT. (The pads that are in there are about the maximum thickness that will fit). I'm going to use my thickness measurements of SRM (taken in the process of gluing the guiderods) to see what thickness of pads / what fixture we want to actually use, but I'm sure that the fixture we found in the 40m is correct. We can't use this fixture however, until we get some clean 1/4-28 screws. I've emailed Steve and Bob, so hopefully they'll have something for us by ~lunchtime tomorrow.
The ITMX side magnet is so far off in the Z-direction that we'll have to remove it and reglue it in the correct position in order for the shadow sensor to do anything. For ITMY, we'll check it out tomorrow, whether the magnet is in the LED beam at all or not. If it's not blocking the LED beam enough, we'll have to remove and reglue it too.
Why someone made 2 almost identical fixtures, with a 1mm height difference and different threads for the set screws, I don't know. But I don't think whoever that person was can be my friend this week. |
12507
|
Mon Sep 19 22:03:10 2016 |
ericq | Update | General | Further recovery progress | [ericq, Lydia, Teng]
Brief summary of this afternoon's activities:
- PMC alignment adjusted (Transmission of 0.74)
- IMC locked, hand aligned. Transmission slightly over 15k. Measured spot positions to be all under 2mm.
- Set DC offsets of MC2 Trans + WFS1 + WFS2 (WFS2 DC offsets had wandered so much that DC "centered" left some quadrants almost totally dark)
- Set demod offsets of WFS1+WFS2
- Note to self: WFS script area is a mess. I can never remember which scripts are the right ones to run. I should clean this up
- WFS loops activated, tested. All clear.
- Locked Yarm, dither aligned. Transmission 0.8
- Moved BS to center ITMY reflection on AS camera
- Misaligned ETMY, aligned PRM to make a flashing PRY AS beam. REFL camera spot confirmed to be on the screen, which is nice
- Wandered ITMX around until its AS spot was found. ITMX OSEMs not too far from their half max. (todo: update with numbers)
- Wandered SRM around until full DRMI flashes seen
- Centered all vertex oplevs
- Made a brief attempt at locking X arm, could only get some crazy high order mode to lock. BS and ITMX alignments have changed substantially from the in-air locks, so probably need to adjust ETMX much more.
Addendum: I had a suspicion that the alignment had moved so much, we were missing the TRX PDs. I misaligned the Y arm, and used AS110 as a proxy for X arm power, as we've done in the past for this kind of thing. Indeed, I could maximize the signal and lock a TM00 mode. Both the high gain PD and QPD in the TRX path are totally dark. This needs realignment on the end table. |
12508
|
Tue Sep 20 10:45:06 2016 |
rana | Update | General | Further recovery progress | Rana suspicious. We had arms locked before pumpdown with beams on Transmon PDs. If they're off now, the beams must be far off on the mirrors. Try A2L to estimate spot positions before walking the beams too far. |
12510
|
Wed Sep 21 01:08:02 2016 |
ericq | Update | General | Further recovery progress | The misalignment wasn't as bad as I had intially feared; the spot was indeed pretty high on ETMX at first. Both transmon QPDs did need a reasonable amount of steering to center once the dither had centered the beam spots on the optics.
Arms, PRMI and DRMI have all been locked and dither aligned. All oplevs and transmon QPDs have been centered. All AS and REFL photodiodes have been centered.
Green TM00 modes are seen in each arm; I'll do ALS recovery tomorrow. |
11414
|
Tue Jul 14 17:14:23 2015 |
Eve | Summary | Summary Pages | Future summary pages improvements | Here is a list of suggested improvements to the summary pages. Let me know if there's something you'd like for me to add to this list!
- A lot of plots are missing axis labels and titles, and I often don't know what to call these labels. I could use some help with this.
- Check the weather and vacuum tabs to make sure that we're getting the expected output. Set the axis labels accordingly.
- Investigate past periods of missing data on DataViewer to see if the problem was with the data acquisition process, the summary page production process, or something else.
- Based on trends in data over the past three months, set axis ranges accordingly to encapsulate the full data range.
- Create a CDS tab to store statistics of our digital systems. We will use the CDS signals to determine when the digital system is running and when the minute trend is missing. This will allow us to exclude irrelevant parts of the data.
- Provide duty ratio statistics for the IMC.
- Set triggers for certain plots. For example, for channels C1:LSC-XARM_OUT_DQ and C1:LSC-YARM_OUT_DQ to be plotted in the Arm LSC Control signals figures, C1:LSC-TRX_OUT_DQ and C1:LSC-TRY_OUT_DQ must be higher than 0.5, thus acting as triggers.
- Include some flag or other marking indicating when data is not being represented at a certain time for specific plots.
- Maybe include some cool features like interactive plots.
|
11437
|
Wed Jul 22 22:06:42 2015 |
Eve | Summary | Summary Pages | Future summary pages improvements | - CDS Tab
We want to monitor the status of the digital control system.
1st plot
Title: EPICS DAQ Status
I wonder if we can plot the binary numbers as statuses of the data acquisition for the realtime codes.
We want to use the status indicators. Like this:
https://ldas-jobs.ligo-wa.caltech.edu/~detchar/summary/day/20150722/plots/H1-MULTI_A8CE50_SEGMENTS-1121558417-86400.png
channels:
C1:DAQ-DC0_C1X04_STATUS
C1:DAQ-DC0_C1LSC_STATUS
C1:DAQ-DC0_C1ASS_STATUS
C1:DAQ-DC0_C1OAF_STATUS
C1:DAQ-DC0_C1CAL_STATUS
C1:DAQ-DC0_C1X02_STATUS
C1:DAQ-DC0_C1SUS_STATUS
C1:DAQ-DC0_C1MCS_STATUS
C1:DAQ-DC0_C1RFM_STATUS
C1:DAQ-DC0_C1PEM_STATUS
C1:DAQ-DC0_C1X03_STATUS
C1:DAQ-DC0_C1IOO_STATUS
C1:DAQ-DC0_C1ALS_STATUS
C1:DAQ-DC0_C1X01_STATUS
C1:DAQ-DC0_C1SCX_STATUS
C1:DAQ-DC0_C1ASX_STATUS
C1:DAQ-DC0_C1X05_STATUS
C1:DAQ-DC0_C1SCY_STATUS
C1:DAQ-DC0_C1TST_STATUS
2nd plot
Title: IOP Fast Channel DAQ Status
These have two bits each. How can we handle it?
If we need to shrink it to a single bit, take the "AND" of them.
C1:FEC-40_FB_NET_STATUS (legend: c1x04, if a legend is placeable)
C1:FEC-20_FB_NET_STATUS (legend: c1x02)
C1:FEC-33_FB_NET_STATUS (legend: c1x03)
C1:FEC-19_FB_NET_STATUS (legend: c1x01)
C1:FEC-46_FB_NET_STATUS (legend: c1x05)
3rd plot
Title C1LSC CPU Meters
channels:
C1:FEC-40_CPU_METER (legend: c1x04)
C1:FEC-42_CPU_METER (legend: c1lsc)
C1:FEC-48_CPU_METER (legend: c1ass)
C1:FEC-22_CPU_METER (legend: c1oaf)
C1:FEC-50_CPU_METER (legend: c1cal)
The range is from 0 to 75, except for c1oaf, which could go to 500.
Can we plot c1oaf with the value divided by 8? (Then the legend should be c1oaf /8)
4th plot
Title C1SUS CPU Meters
channels:
C1:FEC-20_CPU_METER (legend: c1x02)
C1:FEC-21_CPU_METER (legend: c1sus)
C1:FEC-36_CPU_METER (legend: c1mcs)
C1:FEC-38_CPU_METER (legend: c1rfm)
C1:FEC-39_CPU_METER (legend: c1pem)
The range is from 0 to 75, except for c1pem, which could go to 500.
Can we plot c1pem with the value divided by 8? (Then the legend should be c1pem /8)
5th plot
Title C1IOO CPU Meters
channels:
C1:FEC-33_CPU_METER (legend: c1x03)
C1:FEC-34_CPU_METER (legend: c1ioo)
C1:FEC-28_CPU_METER (legend: c1als)
The range is from 0 to 75.
6th plot
Title C1ISCEX CPU Meters
channels:
C1:FEC-19_CPU_METER (legend: c1x01)
C1:FEC-45_CPU_METER (legend: c1scx)
C1:FEC-44_CPU_METER (legend: c1asx)
The range is from 0 to 75.
7th plot
Title C1ISCEY CPU Meters
channels:
C1:FEC-46_CPU_METER (legend: c1x05)
C1:FEC-47_CPU_METER (legend: c1scy)
C1:FEC-91_CPU_METER (legend: c1tst)
The range is from 0 to 75.
=====================
IOO
We want a duty ratio plot for the IMC. C1:IOO-MC_TRANS_SUM >1e4 is the good period.
Duty ratio plot looks like the right plot of the following link
https://ldas-jobs.ligo-wa.caltech.edu/~detchar/summary/day/20150722/lock/segments/
=====================
SUS: OPLEV
OL_PIT_INMON and OL_YAW_INMON are good for the slow drift monitor.
But their sampling rate is too slow for the PSDs.
Can you use
C1:SUS-ETM_OPLEV_PERROR
C1:SUS-ETM_OPLEV_YERROR
etc...
For the PSDs? They are 2kHz sampling DQ channels. You would be able to plot
it up to ~1kHz. In fact, we want to monitor the PSD from 100mHz to 1kHz.
How can you set up the resolution (=FFT length)?
=====================
LSC / ASC / ALS tabs
Let's make new tabs LSC, ASC, and ALS
LSC:
We should have a plot for
C1:LSC-TRX_OUT_DQ
C1:LSC-TRY_OUT_DQ
C1:LSC-POPDC_OUT_DQ
It's OK to use the minute trend for now.
You can check the range using dataviewer.
ASC:
Let's use
C1:SUS-MC1_ASCPIT_OUT16 (legend: IMC WFS)
C1:ASS-XARM_ITM_YAW_OSC_CLKGAIN (legend: XARM ASS)
C1:ASS-YARM_ITM_YAW_OSC_CLKGAIN (legend: YARM ASS)
C1:ASX-XARM_M1_PIT_OSC_CLKGAIN (legend: XARM Green ASS)
as the status indicators. There is no YARM Green ASS yet.
ALS:
Title: ALS Green transmission
We want a time series of
ALS-TRX_OUT16
ALS-TRY_OUT16
Title: ALS Green beatnote
Another time series
ALS-BEATX_FINE_Q_MON
ALS-BEATY_FINE_Q_MON
Title: Frequency monitor
We have frequency counter outputs, but I have to talk to Eric to know the channel names |
8045
|
Fri Feb 8 21:14:52 2013 |
Manasa | Update | Optics | G&H - AR Reflectivity | Hours of struggle and still no data 
I tried to measure the AR reflectivity and the loss due to flipping of G&H mirrors
With almost no wedge angle, separating the AR reflected beam from the HR reflected beam seems to need more tricks.

The separation between the 2 reflected rays is expected to be 0.8mm. After using a lens along the incident beam, this distance was still not enough to be separable by an iris.
The first trick: I found a prism and tried to refract the beams at the edge of the prism...but the edges weren't sharp enough to separate the beams (in fact, I thought an axicon would do the job better...but I think we don't have any of those).
Next from the bag of tricks: I installed a camera to see if the spots can actually be resolved.
The camera image shows the 2 sets of focal spots; the bright set to the left corresponds to the HR reflected beam and the other to the AR surface. I expect the ghost images to arise from the 15 arcsec wedge of the mirror. I tried to mask one of the sets using a razor blade to see if I could separate them and get some data using a PD. But it turns out that even the blade edge is not sharp enough to separate them.
If there are any more intelligent ideas...go ahead and suggest!

|
8046
|
Fri Feb 8 22:49:31 2013 |
Koji | Update | Optics | G&H - AR Reflectivity | How about to measure the AR reflectivity at larger (but small) angles the extrapolate the function to smaller angle,
or estimate an upper limit?
The spot separation is
D = 2 d Tan(\phi) Cos(\theta), where \phi = ArcSin(Sin(\theta) * n)
D = 2 d Tan(\phi) Cos(\theta), where \phi = ArcSin(Sin(\theta) / n) (<== correction by Manasa's entry)
\theta is the angle of incidence. For a small \theta, D is proportional to \theta.
So if you double the incident angle, the beam separation will be doubled,
while the reflectivity is an even function of the incident angle (i.e. the lowest order is quadratic).
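Put another way (a sketch of the proposed extrapolation, using the small-angle forms): D(\theta) ~ 2 d \theta / n, while R_AR(\theta) ~ R_0 + a \theta^2, so measuring R_AR at a few larger angles and fitting the quadratic gives R_0 as the \theta -> 0 intercept.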
I am not sure up to how large an angle you can use the quadratic function rather than a quartic function.
But given the difficulty you are having, it might be worth a try. |
8047
|
Fri Feb 8 23:04:40 2013 |
Manasa | Update | Optics | G&H - AR Reflectivity |
Quote: |
D = 2 d Tan(\phi) Cos(\theta), where \phi = ArcSin(Sin(\theta) * n)
\theta is the angle of incidence. For a small \theta, D is propotional to \theta.
|
n1 Sin(\theta1) = n2 Sin(\theta2)
So it should be
\phi = ArcSin(Sin(\theta) / n)
I did check the reflected images at larger angles of incidence, about 20 deg, and visibly (on the IR card) I did not see much change in the separation. But I will check it with the camera again to confirm. |
8051
|
Sat Feb 9 19:34:34 2013 |
rana | Update | Optics | G&H - AR Reflectivity |
Use the trick I suggested:
Focus the beam so that the beam size at the detector is smaller than the beam separation. Use math to calculate the beam size and choose the lens size and position. You should be able to achieve a waist size of < 0.1 mm for the reflected beam. |
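A back-of-envelope sketch of the lens-choice arithmetic suggested above; the input beam radius at the lens is an assumed number, not a measurement:

import numpy as np

lam = 1064e-9    # wavelength in m
w_in = 1.0e-3    # beam radius at the lens (assumption)
for f in (0.10, 0.20, 0.30):               # candidate focal lengths in m
    w_focus = lam * f / (np.pi * w_in)     # focused waist for a roughly collimated input
    print('f = %.2f m -> waist ~ %.0f um' % (f, 1e6 * w_focus))
# all of these are well below the ~0.8 mm spot separation and the < 0.1 mm target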
8063
|
Mon Feb 11 19:55:47 2013 |
Manasa | Update | Optics | G&H - AR Reflectivity |
I adjusted the focal length of the focusing lens and reduced the spot size enough to mask one beam with the razor blade edge while watching the camera, and then made measurements using the PD.
I am still not satisfied with these data because the R of the HR surface measured after flipping seems totally unbelievable (around 0.45).
G&H AR reflectivity (R, in ppm) vs angle of incidence:
11 ppm @ 4 deg
19.8 ppm @ 6 deg
20 ppm @ 8 deg
30 ppm @ 20 deg |
8075
|
Wed Feb 13 09:28:56 2013 |
Steve | Update | Optics | G&H - HR plots |
Gooch & Housego optics order specification from 03-13-2010
Side 1: HR Reflectivity >99.99 % at 1064 nm for 0-45 degrees for S & P polarization
Side 2: AR coat R <0.15%
The HR coating scans were uploaded to the 40m wiki / Aux optics page today |
8018
|
Wed Feb 6 20:19:52 2013 |
Manasa | Update | Optics | G&H and LaserOptik mirrors | [Koji, Manasa]
We measured the wedge angle of the G&H and LaserOptik mirrors at the OMC lab using an autocollimator and rotation stage.
The wedge angles:
G&H : 18 arc seconds (rough measurement)
LaserOptik : 1.887 deg |
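As a rough check of how these wedges translate into ghost-beam separation (cf. the AR reflectivity entries above), a python sketch with an assumed substrate index and lever arm:

import numpy as np

n = 1.45    # assumed substrate index
L = 0.5     # distance from the optic to the observation plane in m (assumption)
wedges = {'G&H': np.radians(18.0 / 3600.0),   # 18 arcsec
          'LaserOptik': np.radians(1.887)}
for name, alpha in wedges.items():
    # near normal incidence the back-surface reflection exits at roughly
    # 2*n*alpha relative to the front-surface reflection
    print('%-10s -> %.2f mm separation after %.1f m' % (name, 1e3 * 2 * n * alpha * L, L))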
12102
|
Mon May 2 17:06:58 2016 |
rana | Summary | COC | G&H optics to Fullerton/HWS for anneal testing | Steve sent 4 of our 1" diameter G&H HR mirrors to Josh Smith at Fullerton for scatter testing. Attached photo is our total stock before sending. |
Attachment 1: 20160427_182305.jpg
|
|
530
|
Wed Jun 11 15:30:55 2008 |
josephb | Configuration | Cameras | GC1280 | The trial-use GC1280 has arrived. This is a higher-resolution CMOS camera (similar to the GC750). Other than the higher resolution, it has a piece of glass covering and protecting the sensor, as opposed to the plastic piece used in the GC750. This may explain the reduced sensitivity to 1064 nm light that the camera seems to exhibit. For example, the image averages presented here required a 60,000 microsecond exposure time, compared to 1000-3000 microseconds for similar images from the GC750. This is an inexact comparison, and the actual sensitivity difference will be determined once we have identical beams on both cameras.
The attached pdfs (same image, different angles of view) are from 200 averaged images looking at 1064nm laser light scattering from a piece of paper. The important thing to note is there doesn't seem to be any definite structure, as was seen in the GC750 scatter images.
One possibility is that too much power is reaching the CMOS detector, penetrating, and then reflecting back to the back side of the detector. Lower power and higher exposure times may avoid this problem, and the glass of the GC1280 is simply cutting down on the amount passing through.
This theory will be tested either this evening or tomorrow morning, by reducing the power on the GC750 to the point at which it needs to be exposed for 60,000 microseconds to get a decent image.
The other possibility is that the GC750 was damaged at some point by too much incident power, although it's unclear what kind of failure mode would generate the images we have seen recently from the GC750. |
Attachment 1: GC1280_60000E_scatter_2d.pdf
|
|
Attachment 2: GC1280_60000E_scatter_3d.pdf
|
|
649
|
Tue Jul 8 21:46:38 2008 |
Yoichi | Configuration | PSL | GC650M moved to the PMC transmission | I moved a GC650M, which was monitoring the light coming out of the PSL, to the transmission port of the PMC to see the transmitted mode shape.
It will stay there unless someone finds another use for it.
Just FYI, you can see the picture from the control computers by the following procedure:
ssh -X mafalda
cd /cvs/cds/caltech/target/Prosilica/40mCode
./SampleViewer
Choose 02-2210A-06223 and click on the Live View icon. |
521
|
Thu Jun 5 13:35:23 2008 |
josephb | Configuration | Cameras | GC750 looking at 1064nm scattered light | I've taken 200 images with the GC750 (CMOS) camera while holding it by hand up to a beam card (also held by hand) in the path of ~5 mW of beam power. I then averaged the images to produce the fourth attached plot.
Rob has pointed out the image looks a lot like PCB traces. So perhaps we're seeing the electronics behind the CMOS sensor?
I repeated the same experiment with HeNe laser light (again scattered off a card). These show none of the detailed structure (just what looks to be a large reflection from the card moving around depending on how steady my hand was). These are the first 3 attached plots. So only 1064nm light so far sees these features.
As a possible solution, I did a quick and dirty calibration by dividing a previous PSL output beam by the 1064 average scatter light values. These produce the last attached pdf (with multiple images). The original uncalibrated image is on top, while the very simply calibrated image is on the bottom of each plot.
It seems the effect may be power dependent (it could still be calibrated properly, but that would take a bit more effort than simply dividing), judging by the edges of the calibrated plot. |
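For reference, a minimal sketch of the divide-out calibration described above, assuming the scatter frames and the beam image are available as .tiff files (the file names are placeholders):

import numpy as np
from PIL import Image

frames = [np.asarray(Image.open('scatter_%03d.tiff' % i), dtype=float)
          for i in range(200)]
flat = np.mean(frames, axis=0)     # averaged 1064 nm scatter pattern
flat /= flat.mean()                # normalize so the average gain is 1
raw = np.asarray(Image.open('psl_beam.tiff'), dtype=float)
calibrated = raw / flat            # simple division; ignores any power dependence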
Attachment 1: GC750_HeNe_scatter_avg.pdf
|
|
Attachment 2: GC750_HeNe_scatter_avg2.pdf
|
|
Attachment 3: GC750_HeNe_scatter_avg3.pdf
|
|
Attachment 4: GC750_scatter_avg.pdf
|
|
Attachment 5: GC750_nitrogen_white.pdf
|
|
378
|
Fri Mar 14 12:06:29 2008 |
josephb | Configuration | Cameras | GC750 looking at ETMX while locked | The GC750 (CMOS) is currently looking at the front of ETMX. Unfortunately, it's being routed through a 10 Mbit connection (which I will be purchasing a replacement for today), so getting it to send images to Mafalda/Linux 2 or 3 isn't working well, but by using a local gigabit switch and a laptop I can get sufficient speed for full images with the sample viewer.
The attached image is at the full 752x480 resolution with a 10,000 microsecond exposure and the X-arm locked, although it looks like I still need to work on the focusing. Will be switching the GC750 with the GC650 (CCD) later today and comparing the resulting images. |
Attachment 1: ETMX_zoom_exp_10000_750.tiff
|
558
|
Tue Jun 24 17:12:10 2008 |
josephb, Eric | Configuration | Cameras | GC750 setup, 1X4 Hub connected, ETMX images | The GC750 camera has been set up to look at ETMX. In addition, the new 1X4 rack-mounted switch (131.215.113.200) has been connected via new cat6 cable to the control room hub (131.215.113.1?), thanks to Eric. The camera is now plugged into the 1X4 rack switch and has a gigabit connection to the control room computers as well as Mafalda (131.215.113.23).
By using ssh -X mafalda or ssh -X 131.215.113.23, then typing:
target
cd Prosilica/bin-pc/x86/
./Sampleviewer
A viewer will be brought up. Clicking on the 3rd icon from the left (it looks like an eye) will bring up a live view.
Close the viewer, then cd ../../40mCode; running ./Snap --help will tell you how to use a simple program for taking .tiff images as well as setting things such as exposure length and image size (in pixels).
When the interferometer was set to an X-arm only configuration, we took two series of 200 images each, with two different exposure lengths.
Attached are three pdf images. The first is just a black and white single image, the second is an average of 100 images, and the third is the standard deviation of the 100 images. |
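A minimal sketch of how the averaged and standard-deviation images could be computed from a stack of Snap frames (the file names are placeholders):

import numpy as np
from PIL import Image

stack = np.stack([np.asarray(Image.open('ETMX_E30000_%03d.tiff' % i), dtype=float)
                  for i in range(100)])
avg_img = stack.mean(axis=0)    # averaged image
std_img = stack.std(axis=0)     # pixel-by-pixel standard deviation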
Attachment 1: GC750_ETMX_E30000_single.pdf
|
|
Attachment 2: GC750_ETMX_E30000_avg.pdf
|
|
Attachment 3: GC750_ETMX_E30000_std.pdf
|
|
10854
|
Mon Jan 5 20:17:26 2015 |
jamie | Configuration | CDS | GDS upgraded to 2.16.14 | I upgraded the GDS and ROOT installations in /ligo/apps/ubuntu12 on the control room workstations:
- GDS 2.16.14
- ROOT 5.34.18 (dependency of GDS)
My cursory tests indicate that they seem to be working:

Now that the control room environment has become somewhat uniform at Ubuntu 12, I modified the /ligo/cdscfg/workstationrc.sh file to source the ubuntu12 configuration:
controls@nodus|apps > cat /ligo/cdscfg/workstationrc.sh
# CDS WORKSTATION ENVIRONMENT
source /ligo/apps/ligoapps-user-env.sh
source /ligo/apps/ubuntu12/ligoapps-user-env.sh
source /opt/rtcds/rtcds-user-env.sh
controls@nodus|apps >
This should make all the newer versions available everywhere on login. |
7238
|
Tue Aug 21 00:02:05 2012 |
rana | Summary | Computer Scripts / Programs | GDS/DTT bug: 10 digit GPS times not accepted | I've noticed that we're experiencing this bug which was previously seen at LHO. We cannot enter 10 digit GPS times into the time fields for DTT due to a limit in TLGEntry.cc, which Jim Batch fixed in September of last year. Seems like we're running an old version of the GDS tools.
I checked the Lidax tool (which you can get from the GDS Mainmenu). It does, in fact, allow 10 digit entries. |
13
|
Thu Oct 25 00:01:21 2007 |
rana | Software Installation | CDS | GEO DV => LIGO DV | Martin Hewitson of GEO600 fame has modified the cool GEO DV
to work with the LIGO NDS system with some NDS advice from Rolf (who's over in Germany this week).
I've moved it onto the 40m CDS system and installed it on the AdhikariLab computer named 'django'. It worked immediately.
I modified the main .m file to include the 40m's NDS server. When you run it you have to include the path to the NDS
client written by Ben Johnson.
The attached is a screenshot of it working on a Mac; it looks as cool on Linux.
Its installed in /cvs/cds/caltech/apps/ligoDV/. In matlab you navigate to that directory and then
type addpath('/cvs/cds/caltech/apps/linux/UNIX_NDS_Client_beta2/') to add the NDS client.
On the Solaris machines, type addpath('/cvs/cds/caltech/apps/solaris9/UNIX_NDS_Client_beta2/') instead.
Then type ligoDV to start it up. Then click away and have fun.
In the example I've selected C1:PEM-BS_ACC_EAST_Z and plotted its specgram.
 |
Attachment 1: Picture_1.png
|
|
|