ID | Date | Author | Type | Category | Subject
15738 | Fri Dec 18 22:59:12 2020 | Jon | Configuration | CDS | Updated CDS upgrade plan
Attached is the layout for the "intermediate" CDS upgrade option, as was discussed on Wednesday. Under this plan:
- Existing FEs stay where they are (they are not moved to a single rack)
- Dolphin IPC remains PCIe Gen 1
- RFM network is entirely replaced with Dolphin IPC
Please send me any omissions or corrections to the layout. |
15739 | Sat Dec 19 00:25:20 2020 | Jon | Update | | New SMA cables on order
I re-ordered the below cables, this time going with flexible, double-shielded RG316-DS. Jordan will pick up and return the RG-405 cables after the holidays.
Quote: As requested, I placed an order for an assortment of new RF cables: SMA male-male, RG405. |
15764 | Thu Jan 14 12:19:43 2021 | Jon | Update | CDS | Expansion chassis from LHO
That's fine, we didn't actually request those. We bought and already have in hand new PCIe x4 cables for the chassis-host connection. They're 3 m copper cables, which was based on the assumption at the time that host and chassis would be installed in the same rack.
Quote:
- Regarding the fibers - one of the fibers is pre-2012. These are known to fail (according to Rolf). One of the two that LHO shipped is from 2012 (judging by S/N, I can't find an online lookup for the serial number), the other is 2011. IIRC, Rolf offered us some fibers so we may want to take him up on that. We may also be able to use copper cables if the distances b/w server and expansion chassis are short.
|
15766 | Fri Jan 15 15:06:42 2021 | Jon | Update | CDS | Expansion chassis from LHO
Koji asked me to assemble a detailed breakdown of the parts received from LHO, which I've done based on the high-res photos that Gautam posted of the shipment.
Parts in hand:
Qty | Part | Note(s)
2 | Chassis body |
2 | Power board and cooling fans | As noted in 15763, these have the standard LIGO +24V input connector which we may want to change
2 | IO interface backplane |
2 | PCIe backplane |
2 | Chassis-side OSS PCIe x4 card |
2 | CX4 fiber cables | These were not requested and are not needed
Parts still needed:
Qty | Part | Note(s)
2 | Host-side OSS PCIe x4 card | These were requested but missing from the LHO shipment
2 | Timing slave | These were not originally requested, but we have recently learned they will be replaced at LHO soon
Issue with PCIe slots in new FEs
Also, I looked into the mix-up regarding the number of PCIe slots in the new Supermicro servers. The motherboard actually has six PCIe slots and is on the CDS list of boards known to be compatible. The mistake (mine) was in selecting a low-profile (1U) chassis that only exposes one of these slots. But at least it's not a fundamental limitation.
One option is to install an external PCIe expansion chassis that would be rack-mounted right above the FE. It is automatically configured by the system BIOS, so it doesn't require any special drivers. It also supports hot-swapping of PCIe cards. There are also cheap ribbon-cable riser cards that would allow more cards to be connected for testing, although these are less suitable for permanent mounting.
It may still be better to use the machines offered by Keith Thorne from LLO, as they're more powerful anyway. But if there is going to be an extended delay before those can be received, we should be able to use the machines we already have in conjunction with one of these PCIe expansion options. |
15770 | Tue Jan 19 13:19:24 2021 | Jon | Update | CDS | Expansion chassis from LHO
Indeed T1800302 is the document I was alluding to, but I completely missed the statement about >3 GHz speed. There is an option for 3.4 GHz processors on the X10SRi-F board, but back in 2019 I chose against it because it would double the cost of the systems. At the time I thought I had saved us $5k. Hopefully we can get the LLO machines in the near term---but if not, I wonder if it's worth testing one of these to see whether the performance is tolerable.
Quote: Can you please provide a link to this "list of boards"? The only document I can find is T1800302....
I confirm that PCIe 2.0 motherboards are backwards compatible with PCIe 1.x cards, so there's no hardware issue. My main concern is whether the obsolete Dolphin drivers (requiring linux kernel <=3.x) will work on a new system, albeit one running Debian 8. The OSS PCIe card is automatically configured by the BIOS, so no external drivers are required for that one.
Quote: Please also confirm that there are no conflicts w.r.t. the generation of PCIe slots, and the interfaces (Dolphin, OSSI) we are planning to use - the new machines we have are "PCIe 2.0" (though I have no idea if this is the same as Gen 2). |
15771 | Tue Jan 19 14:05:25 2021 | Jon | Configuration | CDS | Updated CDS upgrade plan
I've produced updated diagrams of the CDS layout, taking the comments in 15476 into account. I've also converted the 40m's diagrams from OmniGraffle ($150/license) to the free, cloud-based platform draw.io. I had never heard of draw.io, but I found that it has almost all of the same functionality. It also integrates nicely with Google Drive.
Attachment 1: The planned CDS upgrade (2 new FEs, fully replace RFM network with Gen 1 Dolphin IPC)
Attachment 2: The current 40m CDS topology
The most up-to-date diagrams are hosted at the following links:
Please send me any further corrections or omissions. Anyone logged in with LIGO.ORG credentials can also directly edit the diagrams. |
15842 | Wed Feb 24 22:13:47 2021 | Jon | Update | CDS | Planning document for front-end testing
I've started writing up a rough testing sequence for getting the three new front-ends operational (c1bhd, c1sus2, c1ioo). Since I anticipate this plan undergoing many updates, I've set it up as a Google doc which everyone can edit (log in with LIGO.ORG credentials).
Link to planning document
Please have a look and add any more tests, details, or concerns. I will continue adding to it as I read up on CDS documentation. |
15872 | Fri Mar 5 17:48:25 2021 | Jon | Update | CDS | Front-end testing
Today I moved the c1bhd machine from the control room to a new test area set up behind (west of) the 1X6 rack. The test stand is pictured in Attachment 1. I assembled one of the new IO chassis and connected it to the host.
I/O Chassis Assembly
- LIGO-style 24V feedthrough replaced with an ATX 650W switching power supply
- Timing slave installed
- Contec DIO-1616L-PE card installed for timing control
- One 16-bit ADC and one 32-channel DO module were installed for testing
The chassis was then powered on, and LEDs illuminated, indicating that all the components have power. The assembled chassis is pictured in Attachment 2.
Chassis-Host Communications Testing
Following the procedure outlined in T1900700, the system failed the very first test of the communications link between chassis and host, which is to check that all PCIe cards installed in both the host and the expansion chassis are detected. The Dolphin host adapter card is detected:
07:06.0 PCI bridge: Stargen Inc. Device 0102 (rev 02) (prog-if 00 [Normal decode])
Flags: bus master, fast devsel, latency 0
Bus: primary=07, secondary=0e, subordinate=0e, sec-latency=0
I/O behind bridge: 00002000-00002fff
Prefetchable memory behind bridge: 00000000c0200000-00000000c03fffff
Capabilities: [40] Power Management version 2
Capabilities: [50] MSI: Enable- Count=1/1 Maskable- 64bit+
Capabilities: [60] Express Downstream Port (Slot+), MSI 00
Capabilities: [80] Subsystem: Device 0000:0000
Kernel driver in use: pcieport
However the OSS PCIe adapter card linking the host to the IO chassis was not detected, nor were any of the cards in the expansion chassis. Gautam previously reported that the OSS card was not detected by the host (though it was not connected to the chassis then). Even now connected to the IO chassis, the card is still not detected. On the chassis-side OSS card, there is a red LED illuminated indicating "HOST CARD RESET" as pictured in Attachment 3. This may indicate a problem with the card on the host side. Still more debugging to be done. |
15890 | Tue Mar 9 16:52:47 2021 | Jon | Update | CDS | Front-end testing
Today I continued with assembly and testing of the new front-ends. The main progress is that the IO chassis is now communicating with the host, resolving the previously reported issue.
Hardware Issues to be Resolved
Unfortunately, though, it turns out one of the two (host-side) One Stop Systems PCIe cards sent from Hanford is bad. After some investigation, I ultimately resolved the problem by swapping in the second card, with no other changes. I'll try to procure another from Keith Thorne, along with some spares.
Also, two of the three switching power supplies sent from Livingston (250W Channel Well PSG400P-89) appear to be incompatible with the Trenton BPX6806 PCIe backplanes in these chassis. The power supply cable has 20 conductors and the connector on the board has 24. The third supply, a 650W Antec EA-650, does have the correct cable and is currently powering one of the IO chassis. I'll confirm this situation with Keith and see whether they have any more Antecs. If not, I think these supplies can still be bought (not obsolete).
I've gone through all the hardware we've received and checked it against the procurement spreadsheet. There are still some missing items:
- 18-bit DACs (Qty 14; but 7 are spares)
- ADC adapter boards (Qty 5)
- DAC adapter boards (Qty 9)
- 32-channel DO modules (Qty 2/10 in hand)
Testing Progress
Once the PCIe communications link between host and IO chassis was working, I carried out the testing procedure outlined in T1900700. This performs a series of checks to confirm basic operation/compatibility of the hardware and PCIe drivers. All of the cards installed in both the host and the expansion chassis are detected and appear correctly configured, according to T1900700. In the below tree, there is one ADC, one 16-ch DIO, one 32-ch DO, and one DolphinDX card:
+-05.0-[05-20]----00.0-[06-20]--+-00.0-[07-08]----00.0-[08]----00.0 Contec Co., Ltd Device 86e2
| +-01.0-[09]--
| +-03.0-[0a]--
| +-08.0-[0b-15]----00.0-[0c-15]--+-02.0-[0d]--
| | +-03.0-[0e]--
| | +-04.0-[0f]--
| | +-06.0-[10-11]----00.0-[11]----04.0 PLX Technology, Inc. PCI9056 32-bit 66MHz PCI <-> IOBus Bridge
| | +-07.0-[12]--
| | +-08.0-[13]--
| | +-0a.0-[14]--
| | \-0b.0-[15]--
| \-09.0-[16-20]----00.0-[17-20]--+-02.0-[18]--
| +-03.0-[19]--
| +-04.0-[1a]--
| +-06.0-[1b]--
| +-07.0-[1c]--
| +-08.0-[1d]--
| +-0a.0-[1e-1f]----00.0-[1f]----00.0 Contec Co., Ltd Device 8632
| \-0b.0-[20]--
\-08.0-[21-2a]--+-00.0 Stargen Inc. Device 0101
\-00.1-[22-2a]--+-00.0-[23]--
+-01.0-[24]--
+-02.0-[25]--
+-03.0-[26]--
+-04.0-[27]--
+-05.0-[28]--
+-06.0-[29]--
\-07.0-[2a]--
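For reference, a device tree like the one above can be generated with lspci's tree view; a minimal sketch (run on the host with the chassis powered on; the exact flags used for this listing are not recorded here):
$ sudo lspci -tv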
Standalone Subnet
Before I start building/testing RTCDS models, I'd like to move the new front ends to an isolated subnet. This is guaranteed to prevent any contention with the current system, or inadvertent changes to it.
Today I set up another of the Supermicro servers sent by Livingston in the 1X6 test stand area. The intention is for this machine to run a cloned, bootable image of the current fb1 system, allowing it to function as a bootserver and DAQ server for the FEs on the subnet.
However, the hard disk containing the fb1 image appears to be corrupted and will not boot. It seems to have been sitting disconnected in a box since ~2018, which is not a stable way to store data long term. I wasn't immediately able to recover the disk using fsck. I could spend some more time trying, but it might be most time-effective to just make a new clone of the fb1 system as it is now.
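For the record, the recovery attempt amounted to a standard filesystem check on the disk, along the lines of the following (a sketch; /dev/sdX1 is a placeholder for however the backup disk enumerates on this machine):
$ sudo fsck -fy /dev/sdX1    # force a full check and auto-accept the suggested repairs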
15924 | Tue Mar 16 16:27:22 2021 | Jon | Update | CDS | Front-end testing
Some progress today towards setting up an isolated subnet for testing the new front-ends. I was able to recover the fb1 backup disk using the Rescatux disk-rescue utility and successfully booted an fb1 clone on the subnet. This machine will function as the boot server and DAQ server for the front-ends under test. (None of these machines are connected to the Martian network or, currently, even the outside Internet.)
Despite the success with the framebuilder, front-ends cannot yet be booted locally because we are still missing the DHCP and FTP servers required for network booting. On the Martian net, these processes are running not on fb1 but on chiara. And to be able to compile and run models later in the testing, we will need the contents of the /opt/rtcds directory also hosted on chiara.
For these reasons, I think it will be easiest to create another clone of chiara to run on the subnet. There is a backup disk of chiara and I attempted to boot it on one of the LLO front-ends, but without success. The repair tool I used to recover the fb1 disk does not find a problem with the chiara disk. However, the chiara disk is an external USB drive, so I suspect there could be a compatibility problem with these old (~2010) machines. Some of them don't even recognize USB keyboards pre-boot-up. I may try booting the USB drive from a newer computer.
Edit: I removed one of the new, unused Supermicros from the 1Y2 rack and set it up in the test stand. This newer machine is able to boot the chiara USB disk without issue. Next time I'll continue with the networking setup. |
15947 | Fri Mar 19 18:14:56 2021 | Jon | Update | CDS | Front-end testing
Summary
Today I finished setting up the subnet for new FE testing. There are clones of both fb1 and chiara running on this subnet (pictured in Attachment 2), which are able to boot FEs completely independently of the Martian network. I then assembled a second FE system (Supermicro host and IO chassis) to serve as c1sus2, using a new OSS host adapter card received yesterday from LLO. I ran the same set of PCIe hardware/driver tests as was done on the c1bhd system in 15890. All the PCIe tests pass.
Subnet setup
For future reference, below is the procedure used to configure the bootserver subnet.
- Select "Network" as highest boot priority in FE BIOS settings
- Connect all machines to subnet switch. Verify fb1 and chiara eth0 interfaces are enabled and assigned correct IP address.
- Add c1bhd and c1sus2 entries to chiara:/etc/dhcp/dhcpd.conf:
host c1bhd {
hardware ethernet 00:25:90:05:AB:46;
fixed-address 192.168.113.91;
}
host c1sus2 {
hardware ethernet 00:25:90:06:69:C2;
fixed-address 192.168.113.92;
}
- Restart DHCP server to pick up changes:
$ sudo service isc-dhcp-server restart
- Add c1bhd and c1sus2 entries to fb1:/etc/hosts:
192.168.113.91 c1bhd
192.168.113.92 c1sus2
- Power on the FEs. If all was configured correctly, the machines will boot.
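If a machine fails to network-boot, a quick sanity check is to watch the DHCP exchange on the server side while powering the FE on (a sketch; the log path is the Debian default and may differ):
$ sudo tail -f /var/log/syslog | grep -i dhcp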
C1SUS2 I/O chassis assembly
- Installed in host:
- DolphinDX host adapter
- One Stop Systems PCIe x4 host adapter (new card sent from LLO)
- Installed in chassis:
- Channel Well 250 W power supply (replaces aLIGO-style 24 V feedthrough)
- Timing slave
- Contec DIO-1616L-PE module for timing control
Next time, on to RTCDS model compilation and testing. This will require first obtaining a clone of the /opt/rtcds disk hosted on chiara. |
15948 | Fri Mar 19 19:15:13 2021 | Jon | Update | CDS | c1auxey assembly
Today I helped Yehonathan get started with assembly of the c1auxey (slow controls) Acromag chassis. This will replace the final remaining VME crate. We cleared the far left end of the electronics bench in the office area, as discussed on Wednesday. The high-voltage supplies and test equipment were moved together to the desk across the aisle.
Yehonathan has begun assembling the chassis frame (it required some light machining to mount the DIN rails that hold the Acromag units). Next, he will wire up the switches, LED indicator lights, and Acromag power connectors following the documented procedure. |
15959 | Wed Mar 24 19:02:21 2021 | Jon | Update | CDS | Front-end testing
This evening I prepared a new 2 TB 3.5" disk to hold a copy of /opt/rtcds and /opt/rtapps from chiara. This is the final piece of setup before model compilation can be tested on the new front-ends. However, chiara does not appear to support hot-swapping of disks, as the disk is not recognized when connected to the live machine. I will await confirmation before rebooting it. The new disk is not currently connected. |
15976 | Mon Mar 29 17:55:50 2021 | Jon | Update | CDS | Front-end testing
Cloning of chiara:/home/cvs underway
I returned today with a beefier USB-SATA adapter, which has an integrated 12 V supply for powering 3.5" disks. I used this to interface a new 6 TB 3.5" disk found in the FE supplies cabinet.
I decided to go with a larger disk and copy the full contents of chiara:/home/cds. Strictly speaking, the FEs only need the RTS executables in /home/cvs/rtcds and /home/cvs/rtapps. However, to independently develop models, the shared Matlab binaries in /home/cvs/caltech/... also need to be exposed. And there may be others I've missed.
I began the clone around 12:30 pm today. To preserve bandwidth to the main disk, I am copying not the /home/cds disk directly, but rather its backup image at /media/40mBackup .
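The copy itself is just a bulk file transfer; a minimal sketch of such a clone (the mount point of the new disk is a placeholder, and this is not necessarily the exact command used):
$ sudo rsync -aHAX --info=progress2 /media/40mBackup/ /mnt/new-cds-disk/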
Set up of dedicated SimPlant host
Although not directly related to the FE testing, today I added a new machine to the test stand which will be dedicated to running sim models. Chris has developed a virtual cymac which we plan to run on this machine. It will provide a dedicated testbed for SimPlant and other development, and can host up to 10 user models.
I used one of the spare 12-core Supermicro servers from LLO, which I have named c1sim. I assigned it the IP address 192.168.113.93 on the Martian network. This machine will run in a self-contained way that will not depend on any 40m CDS services and also should not interfere with them. However, if there are concerns about having it present on the network, it can be moved to the outside-facing switch in the office area. It is not currently running any RTCDS processes.
Set-up was carried out via the following procedure:
- Installed Debian 10.9 on an internal 480 GB SSD.
- Installed cdssoft repos following Jamie's instructions.
- Installed RTS and Docker dependencies:
$ sudo apt install cpuset advligorts-mbuf-dkms advligorts-gpstime-dkms docker.io docker-compose
- Configured scheduler for real-time operation:
$ sudo /sbin/sysctl kernel.sched_rt_runtime_us=-1
- Reserved 10 cores for RTS user models (plus one for IOP model) by adding the following line to /etc/default/grub:
GRUB_CMDLINE_LINUX_DEFAULT="isolcpus=nohz,domain,1-11 nohz_full=1-11 tsc=reliable mce=off"
followed by the commands:
$ sudo update-grub
$ sudo reboot now
- Downloaded virtual cymac repo to /home/controls/docker-cymac.
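As a quick check that the CPU-isolation settings take effect after the reboot, the kernel's view can be inspected (generic Linux checks, not part of the original procedure):
$ cat /proc/cmdline                      # confirm the isolcpus/nohz_full arguments were applied
$ cat /sys/devices/system/cpu/isolated   # should list cores 1-11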
I need to talk to Chris before I can take the setup further. |
15979 | Tue Mar 30 18:21:34 2021 | Jon | Update | CDS | Front-end testing
Progress today:
Outside Internet access for FE test stand
This morning Jordan and I ran an 85-foot Cat 6 Ethernet cable from the campus network switch in the office area (on the ligo.caltech.edu domain) to the FE test stand near 1X6. This is to allow the test-stand subnet to be accessed for remote testing, while keeping it invisible to the parallel Martian subnet.
Successful RTCDS model compilation on new FEs
The clone of the chiara:/home/cds disk completed overnight. Today I installed the disk in the chiara clone. The NFS mounts (/opt/rtcds, /opt/rtapps ) shared with the other test-stand machines mounted without issue.
Next, I attempted to open the shared Matlab executable (/cvs/cds/caltech/apps/linux64/matlab/bin/matlab) and launch Simulink. The existing Matlab license (/cvs/cds/caltech/apps/linux64/matlab/licenses/license_chiara_865865_R2015b.lic) did not work on this new machine, as licenses are machine-specific, so I updated the license file. I linked this license to my personal license, so that the machine license for the real chiara would not get replaced. The original license file is saved in the same directory with a *.bak postfix. If this disk is ever used in the real chiara machine, this file should be restored. After the machine license was updated, Matlab and Simulink loaded and allowed model editing.
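The license swap itself is just a file backup and replacement in the licenses directory; a sketch of the steps, assuming the paths quoted above:
$ cd /cvs/cds/caltech/apps/linux64/matlab/licenses
$ cp license_chiara_865865_R2015b.lic license_chiara_865865_R2015b.lic.bak   # preserve the real chiara license
# then overwrite the .lic file with the new machine-specific license before relaunching Matlab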
Finally, I tested RTCDS model compilation on the new FEs using the c1lsc model as a trial case. It encountered one path issue due to the model being located at /opt/rtcds/userapps/release/isc/c1/models/isc/ instead of /opt/rtcds/userapps/release/isc/c1/models/ . This seems to be a relic of the migration of the 40m models from the SVN to a standalone git repo. This was resolved by simply symlinking to the expected location:
$ sudo ln -s /opt/rtcds/userapps/release/isc/c1/models/isc/c1lsc.mdl /opt/rtcds/userapps/release/isc/c1/models/c1lsc.mdl
The model compilation then succeeded:
controls@c1bhd$ cd /opt/rtcds/caltech/c1/rtbuild/release
controls@c1bhd$ make clean-c1lsc
Cleaning c1lsc...
Done
controls@c1bhd$ make c1lsc
Cleaning c1lsc...
Done
Parsing the model c1lsc...
Done
Building EPICS sequencers...
Done
Building front-end Linux kernel module c1lsc...
make[1]: Warning: File 'GNUmakefile' has modification time 28830 s in the future
make[1]: warning: Clock skew detected. Your build may be incomplete.
Done
RCG source code directory:
/opt/rtcds/rtscore/branches/branch-3.4
The following files were used for this build:
/opt/rtcds/caltech/c1/userapps/release/cds/common/src/cdsToggle.c
/opt/rtcds/userapps/release/cds/c1/src/inmtrxparse.c
/opt/rtcds/userapps/release/cds/common/models/FILTBANK_MASK.mdl
/opt/rtcds/userapps/release/cds/common/models/rtbitget.mdl
/opt/rtcds/userapps/release/cds/common/models/SCHMITTTRIGGER.mdl
/opt/rtcds/userapps/release/cds/common/models/SQRT_SWITCH.mdl
/opt/rtcds/userapps/release/cds/common/src/DB2MAG.c
/opt/rtcds/userapps/release/cds/common/src/OSC_WITH_CONTROL.c
/opt/rtcds/userapps/release/cds/common/src/wait.c
/opt/rtcds/userapps/release/isc/c1/models/c1lsc.mdl
/opt/rtcds/userapps/release/isc/c1/models/IQLOCK_WHITENING_TRIGGERING.mdl
/opt/rtcds/userapps/release/isc/c1/models/PHASEROT.mdl
/opt/rtcds/userapps/release/isc/c1/models/RF_PD_WITH_WHITENING_TRIGGERING.mdl
/opt/rtcds/userapps/release/isc/c1/models/UGF_SERVO_40m.mdl
/opt/rtcds/userapps/release/isc/common/models/FILTBANK_TRIGGER.mdl
/opt/rtcds/userapps/release/isc/common/models/LSC_TRIGGER.mdl
Successfully compiled c1lsc
***********************************************
Compile Warnings, found in c1lsc_warnings.log:
***********************************************
[warnings suppressed]
As did the installation:
controls@c1bhd$ make install-c1lsc
Installing system=c1lsc site=caltech ifo=C1,c1
Installing /opt/rtcds/caltech/c1/chans/C1LSC.txt
Installing /opt/rtcds/caltech/c1/target/c1lsc/c1lscepics
Installing /opt/rtcds/caltech/c1/target/c1lsc
Installing start and stop scripts
/opt/rtcds/caltech/c1/scripts/killc1lsc
/opt/rtcds/caltech/c1/scripts/startc1lsc
Performing install-daq
Updating testpoint.par config file
/opt/rtcds/caltech/c1/target/gds/param/testpoint.par
/opt/rtcds/rtscore/branches/branch-3.4/src/epics/util/updateTestpointPar.pl
-par_file=/opt/rtcds/caltech/c1/target/gds/param/archive/testpoint_210330_170634.par
-gds_node=42 -site_letter=C -system=c1lsc -host=c1lsc
Installing GDS node 42 configuration file
/opt/rtcds/caltech/c1/target/gds/param/tpchn_c1lsc.par
Installing auto-generated DAQ configuration file
/opt/rtcds/caltech/c1/chans/daq/C1LSC.ini
Installing Epics MEDM screens
Running post-build script
safe.snap exists
We are ready to start building and testing models. |
15997 | Tue Apr 6 07:19:11 2021 | Jon | Update | CDS | New SimPlant cymac
Yesterday Chris and I completed setup of the Supermicro machine that will serve as a dedicated host for developing and testing RTCDS sim models. It is currently sitting in the stack of machines in the FE test stand, though it should eventually be moved into a permanent rack.
It turns out the machine cannot run 10 user models, only 4. Hyperthreading was enabled in the BIOS settings, which created the illusion of there being 12 rather than 6 physical cores. Between Chris and Ian's sim models, we already have a fully-loaded machine. There are several more of these spare 6-core machines that could be set up to run additional models. But in the long term, and especially in Ian's case where the IFO sim models will all need to communicate with one another (this is a self-contained cymac, not a distributed FE system), we may need to buy a larger machine with 16 or 32 cores.
IPMI was set up for the c1sim cymac. I assigned the IPMI interface a static IP address on the Martian network (192.168.113.45) and registered it in the usual way with the domain name server on chiara. After updating the BIOS settings and rebooting, I was able to remotely power off and back on the machine following these instructions.
Quote:
Set up of dedicated SimPlant host
Although not directly related to the FE testing, today I added a new machine to the test stand which will be dedicated to running sim models. Chris has developed a virtual cymac which we plan to run on this machine. It will provide a dedicated testbed for SimPlant and other development, and can host up to 10 user models.
I used one of the spare 12-core Supermicro servers from LLO, which I have named c1sim. I assigned it the IP address 192.168.113.93 on the Martian network. This machine will run in a self-contained way that will not depend on any 40m CDS services and also should not interfere with them.
|
15998 | Tue Apr 6 11:13:01 2021 | Jon | Update | CDS | FE testing
I/O chassis assembly
Yesterday I installed all the available ADC/DAC/BIO modules and adapter boards into the new I/O chassis (c1bhd, c1sus2). We are still missing three ADC adapter boards and six 18-bit DACs. A thorough search of the FE cabinet turned up several 16-bit DACs, but only one adapter board. Since one 16-bit DAC is required anyway for c1sus2, I installed the one complete set in that chassis.
Below is the current state of each chassis. Missing components are highlighted in yellow. We cannot proceed to loopback testing until at least some of the missing hardware is in hand.
C1BHD
Component | Qty Required | Qty Installed
16-bit ADC | 1 | 1
16-bit ADC adapter | 1 | 0
18-bit DAC | 1 | 0
18-bit DAC adapter | 1 | 1
16-ch DIO | 1 | 1
C1SUS2
Component | Qty Required | Qty Installed
16-bit ADC | 2 | 2
16-bit ADC adapter | 2 | 0
16-bit DAC | 1 | 1
16-bit DAC adapter | 1 | 1
18-bit DAC | 5 | 0
18-bit DAC adapter | 5 | 5
32-ch DO | 6 | 6
16-ch DIO | 1 | 1
Gateway for remote access
To enable remote access to the machines on the test stand subnet, one machine must function as a gateway server. Initially, I tried to set this up using the second network interface of the chiara clone. However, having two active interfaces caused problems for the DHCP and FTP servers and broke the diskless FE booting. Debugging this would have required making changes to the network configuration that would have to be remembered and reverted, were the chiara disk ever to be used in the original machine.
So instead, I simply grabbed another of the (unused) 1U Supermicro servers from the 1Y1 rack and set it up on the subnet as a standalone gateway server. The machine is named c1teststand. Its first network interface is connected to the general computing network (ligo.caltech.edu) and the second to the test-stand subnet. It has no connection to the Martian subnet. I installed Debian 10.9, anticipating that when the machine is no longer needed in the test stand, it can be converted into another docker-cymac to run additional sim models.
Currently, the outside-facing IP address is assigned via DHCP and so periodically changes. I've asked Larry to assign it a static IP on the ligo.caltech.edu domain, so that it can be accessed analogously to nodus. |
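Once the static IP is in place, the test-stand machines should be reachable by hopping through the gateway; a sketch (hypothetical hostname, and assuming SSH access is set up on both hops):
$ ssh -J controls@c1teststand.ligo.caltech.edu controls@192.168.113.91   # jump via the gateway to c1bhd on the subnet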
16012 | Sat Apr 10 08:51:32 2021 | Jon | Update | CDS | I/O Chassis Assembly
I installed three of the 16-bit ADC adapter boards assembled by Koji. Now, the only missing hardware is the 18-bit DACs (quantities below). As I mentioned this week, there are 2-3 16-bit DACs available in the FE cabinet. They could be used if more 16-bit adapter boards could be procured.
C1BHD
Component | Qty Required | Qty Installed
16-bit ADC | 1 | 1
16-bit ADC adapter | 1 | 1
18-bit DAC | 1 | 0
18-bit DAC adapter | 1 | 1
16-ch DIO | 1 | 1
C1SUS2
Component | Qty Required | Qty Installed
16-bit ADC | 2 | 2
16-bit ADC adapter | 2 | 2
16-bit DAC | 1 | 1
16-bit DAC adapter | 1 | 1
18-bit DAC | 5 | 0
18-bit DAC adapter | 5 | 5
32-ch DO | 6 | 6
16-ch DIO | 1 | 1
|
16015 | Sat Apr 10 11:56:14 2021 | Jon | Update | CDS | 40m LSC simPlant model
Summary
Yesterday I resurrected the 40m's LSC simPlant model, c1lsp. It is running on c1sim, a virtual, self-contained cymac that Chris and I set up for developing sim models (see 15997). I think the next step towards an integrated IFO model is incorporating the suspension plants. I am going to hand development largely over to Ian at this point, with continued support from me and Chris.

LSC Plant
This model dates back to around 2012 and appears to have last been used in ~2015. According to the old CDS documentation:
Name | Description | Communicates directly with
LSP | Simulated length sensing model of the physical plant; handles light propagation between mirrors, also handles alignment modeling and would have to communicate ground motion to all the suspensions for ASS to work | LSC, XEP, YEP, VSP
Here XEP, YEP, and VSP are respectively the x-end, y-end, and vertex suspension plant models. I haven't found any evidence that these were ever fully implemented for the entire IFO. However, it looks like SUS plants were later implemented for a single arm cavity, at least, using two models named c1sup and c1spx (appear in more recent CDS documentation). These suspension plants could likely be updated and then copied for the other suspended optics.
To represent the optical transfer functions, the model loads a set of SOS filter coefficients generated by an Optickle model of the interferometer. The filter-generating code and instructions on how to use it are located here. In particular, it contains a Matlab script named opt40m.m which defines the interferometer. It should be updated to match the parameters in the latest 40m Finesse model, C1_w_BHD.kat. The calibrations from watts to sensor voltages will also need to be checked and likely updated.
Model-Porting Procedure
For future reference, below are the steps followed to port this model to the virtual cymac.
- Copy over model files.
- The c1lsp model, chiara:/opt/rtcds/userapps/release/isc/c1/models/c1lsp.mdl, was copied to the userapps directory on the virtual cymac, c1sim:/home/controls/docker-cymac/userapps/x1lsp.mdl. In the filename, note the change in IFO prefix "c1" --> "x1," since this cymac is not part of the C1 CDS network.
- This model also depends on a custom library file, chiara:/opt/rtcds/userapps/release/isc/c1/models/SIMPLANT.mdl, which was copied to c1sim:/home/controls/docker-cymac/userapps/lib/SIMPLANT.mdl.
- Update model parameters in Simulink. To edit models in Simulink, see the instructions here and also here.
- The main changes are to the cdsParameters block, which was updated as shown below. Note the values of dcuid and specific_cpu are specifically assigned to x1lsp and will vary for other models. The other parameters will be the same.

- I also had to change the name of one of the user-defined objects from "ADC0" --> "ADC" and then re-copy the cdsAdc object (shown above) from the CDS_PARTS.mdl library. At least in newer RCG code, the cdsAdc object must be named "ADC0." This namespace collision was causing the compiler to fail.
- Note: Since Matlab is not yet set up on c1sim, I actually made these edits on one of the 40m machines (chiara) prior to copying the model.
- Compile and launch the models. Execute the following commands on c1sim:
$ cd ~/docker-cymac
$ ./kill_cymac
$ ./start_cymac debug
- The optional debug flag will print the full set of compilation messages to the terminal. If compilation fails, search the traceback for lines containing "ERROR" to determine what is causing the failure.
- Accessing MEDM screens. Once the model is running, a button should be added to the sitemap screen (located at c1sim:/home/controls/docker-cymac/userapps/medm/sitemap.adl) to access one or more screens specific to the newly added model.
- Custom-made screens should be added to c1sim:/home/controls/docker-cymac/userapps/medm/x1lsp (where the final subdirectory is the name of the particular model).
- The set of available auto-generated screens for the model can be viewed by entering the virtual environment:
$ cd ~/docker-cymac
$ ./login_cymac   #drops into virtual shell
# cd /opt/rtcds/tst/x1/medm/x1lsp   #last subdirectory is model name
# ls -l *.adl
# exit   #return to host shell
- The sitemap screen and any subscreens can link to the auto-generated screens in the usual way (by pointing to their virtual /opt/rtcds path). Currently, for the virtual path resolution to work, an environment script has to be run prior to launching sitemap, which sets the location of a virtual MEDM server (this will be auto-scripted in the future):
$ cd ~/docker-cymac
$ eval $(./env_cymac)
$ sitemap
- One important auto-generated screen that should be linked for every model is the CDS runtime diagnostics screen, which indicates the success/fail state of the model and all its dependencies. T1100625 details the meaning of all the various indicator lights.
|
16037 | Thu Apr 15 17:24:08 2021 | Jon | Update | CDS | Updated c1auxey wiring plan
I've updated the c1auxey wiring plan for compatibility with the new suspension electronics. Specifically it is based on wiring schematics for the new HAM-A coil driver (D1100117), satellite amplifier (D1002818), and HV bias driver (D1900163).
Changes:
- The PDMon, VMon, CoilEnable, and BiasAdj channels all move from DB37 to various DB9 breakout boards.
- The DB9 cables (x2) connecting the CoilEnable channels to the coil drivers must be spliced with the dewhitening switching signals from the RTS.
- As suggested, I added five new BI channels to monitor the state of the CoilEnable switches. For lack of a better name, they follow the naming convention C1:SUS-ETMY_xx_ENABLEMon.
@Yehonathan can proceed with wiring the chassis.
Quote:
I finished prewiring the new c1auxey Acromag chassis (see attached pictures). I connected all grounds to the DIN rail to save some wiring. The power switches and LEDs work as expected.
I configured the DAQ modules using the old Windows machine. I configured the gateway to be 192.168.114.1. The host machine still needs to be set up.
Next, the feedthroughs need to be wired and the channels need to be bench tested.
|
16090 | Wed Apr 28 11:31:40 2021 | Jon | Update | VAC | Empty N2 Tanks
I checked out what happened on c1vac. There are actually two independent monitoring codes running:
- The interlock service, which monitors the line directly connected to the valves.
- A separate convenience mailer, running as a cronjob, that monitors the tanks.
The interlocks did not trip because the low-pressure delivery line, downstream of the dual-tank regulator, never fell below the minimum pressure to operate the valves (65 PSI). This would have eventually occurred, had Jordan been slower to replace the tanks. So I see no problem with the interlocks.
On the other hand, the N2 mailer should have sent an email at 2021-04-18 15:00, which was the first time C1:Vac-N2T1_pressure dropped below the 600 PSI threshold. N2check.log shows these pressures were recorded at this time, but does not log that an email was sent. Why did this fail? Not sure, but I found two problems which I did fix:
- One was that the code was checking the sensor on the low-pressure side (C1:Vac-N2_pressure; nominally 75 PSI) against the same 600 PSI threshold as the tanks. This channel should either be removed or a separate threshold (65 PSI) defined just for it. I just removed it from the list because monitoring of this channel is redundant with the interlock service. This does not explain the failure to send an email.
- The second issue was that the pyN2check.sh script appeared to be calling Python 3 to run a Python 2 script. At least that was the situation when I tested it, and this was causing it to fail partway through. This might well explain the problem with no email. I explicitly set python --> python2 in the shell script.
The code then ran fine for me when I retested it. I don't see any further issues.
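For reference, the fix amounts to pinning the interpreter inside the wrapper script; schematically (hypothetical lines, not the literal contents of pyN2check.sh):
# before: 'python' resolves to Python 3 on this machine, but the check script is Python 2
python N2check.py
# after: call the Python 2 interpreter explicitly
python2 N2check.py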
Quote:
Installed T2 today, and leak checked the entire line. No issues found. It could have been a bad valve on the tank itself. Monitored T2 pressure for ~2 hours to see if there was any change. All seems ok.
Quote:
When I came into the lab this morning, I noticed that both N2 tanks were empty. I had swapped one on Friday (4-16-21) before I left the lab. Looking at the logs, the right tank (T2) sprung a leak shortly after install. I leak checked the tank coupling after install but did not see a leak. There could be a leak further down the line, possibly at the pressure transducer.
The left tank (T1) emptied normally over the weekend, and I quickly swapped the left tank for a full one, which is currently at ~2700 psi. It was my understanding that if both tanks emptied, V1 would close automatically and a mailer would be sent out to the 40m group. I did not receive an email over the weekend, and I checked the Vac status just now and V1 was still open.
I will keep an eye on the tank pressure throughout the day, and will try to leak check the T2 line this afternoon, but someone should check the vacuum interlocks and verify.
|
16093 | Thu Apr 29 10:51:35 2021 | Jon | Update | CDS | I/O Chassis Assembly
Summary
Yesterday I unpacked and installed the three 18-bit DAC cards received from Hanford. I then repeated the low-level PCIe testing outlined in T1900700, which is expanded upon below. I did not make it to DAC-ADC loopback testing because these tests in fact revealed a problem with the new hardware. After a combinatorial investigation that involved swapping cards around between known-to-be-working PCIe slots, I determined that one of the three 18-bit DAC cards is bad. Although its "voltage present" LED illuminates, the card is not detected by the host in either I/O chassis.
I installed one of the two working DACs in the c1bhd chassis, which is now 100% complete. I installed the other DAC in the c1sus2 chassis, which still requires four more 18-bit DACs. Lastly, I reran the PCIe tests for the final configurations of both chassis.
PCIe Card Detection Tests
For future reference, below is the set of command line tests to verify proper detection and initialization of ADC/DAC/BIO cards in I/O chassis. This summarizes the procedure described in T1900700 and also adds the tests for 18-bit DAC and 32-channel BO cards, which are not included in the original document.
Each command should be executed on the host machine with the I/O chassis powered on:
$ sudo lspci -v | grep -B 1 xxxx
where xxxx is a four-digit device code given in the following table.
Device | Device Code
General Standards 16-bit ADC | 3101
General Standards 16-bit DAC | 3120
General Standards 18-bit DAC | 3357
Contec 16-channel BIO | 8632
Contec 32-channel BO | 86e2
Dolphin IPC host adapter | 0101
The command will return a two-line entry for each PCIe device of the specified type that is detected. For example, on a system with a single ADC this command should return:
10:04.0 Bridge: PLX Technology, Inc. PCI9056 32-bit 66MHz PCI <-> IOBus Bridge (rev ac)
Subsystem: PLX Technology, Inc. Device 3101 |
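To run the same check for every card type in one pass, the device codes from the table can simply be looped over (a sketch using the codes listed above):
$ for code in 3101 3120 3357 8632 86e2 0101; do echo "== $code =="; sudo lspci -v | grep -B 1 "$code"; done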
16116 | Tue May 4 07:38:36 2021 | Jon | Update | CDS | I/O Chassis Assembly
IOP models created
With all the PCIe issues now resolved, yesterday I proceeded to build an IOP model for each of the new FEs. I assigned them names and DCUIDs consistent with the 40m convention, listed below. These models currently exist only on the cloned copy of /opt/rtcds running on the test stand. They will be copied to the main network disk later, once the new systems are fully tested.
Model | Host | CPU | DCUID
c1x06 | c1bhd | 1 | 23
c1x07 | c1sus2 | 1 | 24
The models compile and install successfully. The RCG runtime diagnostics indicate that all is working except for the timing synchronization and DAQD data transmission. This is as expected because neither of these have been set up yet.
Timing system set-up
The next step is to provide the 65 kHz clock signals from the timing fanout via LC optical fiber. I overlooked the fact that an SFP optical transceiver is required to interface the fiber to the timing slave board. These were not provided with the timing slaves we received. The timing slaves require a particular type of transceiver, 100Base-FX/OC-3, which we did not have on hand. (For future reference, there is a handy list of compatible transceivers in E080541, p. 14.) I placed a Digikey order for two Finisar FTLF1217P2BTL, which should arrive within two days. |
16130 | Tue May 11 16:29:55 2021 | Jon | Update | CDS | I/O Chassis Assembly
Quote:
Timing system set-up
The next step is to provide the 65 kHz clock signals from the timing fanout via LC optical fiber. I overlooked the fact that an SFP optical transceiver is required to interface the fiber to the timing slave board. These were not provided with the timing slaves we received. The timing slaves require a particular type of transceiver, 100Base-FX/OC-3, which we did not have on hand. (For future reference, there is a handy list of compatible transceivers in E080541, p. 14.) I placed a Digikey order for two Finisar FTLF1217P2BTL, which should arrive within two days.
|
Today I brought and installed the new optical transceivers (Finisar FTLF1217P2BTL) for the two timing slaves. The timing slaves appear to phase-lock to the clocking signal from the master fanout. A few seconds after each timing slave is powered on, its status LED begins steadily blinking at 1 Hz, just as in the existing 40m systems.
However, some other timing issue remains unresolved. When the IOP model is started (on either FE), the DACKILL watchdog appears to start in a tripped state. Then after a few minutes of running, the TIM and ADC indicators go down as well. This makes me suspect the sample clocks are not really phase-locked. However, the models do start up with no error messages. Will continue to debug... |
16154 | Sun May 23 18:28:54 2021 | Jon | Update | CDS | Opto-isolator for c1auxey
The new HAM-A coil drivers have a single DB9 connector for all the binary inputs. This requires that the dewhitening switching signals from the fast system be spliced with the coil enable signals from c1auxey. There is a common return for all the binary inputs. To avoid directly connecting the grounds of the two systems, I have looked for a suitable opto-isolator for the c1auxey signals.
The best option I found is the Ocean Controls KTD-258, a 4-channel, DIN-rail-mounted opto-isolator supporting input/output voltages of up to 30 V DC. It is an active device and can be powered using the same 15 V supply as is currently powering both the Acromags and excitation. I ordered one unit to be trialed in c1auxey. If this is found to be a good solution, we will order more for the upgrades of c1auxex and c1susaux, as required for compatibility with the new suspension electronics.

|
16166 | Fri May 28 10:54:59 2021 | Jon | Update | CDS | Opto-isolator for c1auxey
I have received the opto-isolator needed to complete the new c1auxey system. I left it sitting on the electronics bench next to the Acromag chassis.
Here is the manufacturer's wiring manual. It should be wired to the +15V chassis power and to the common return from the coil driver, following the instructions therein for NPN-style signals. Note that there are two sets of DIP switches (one on the input side and one on the output side) for selecting the mode of operation. These should all be set to "NPN" mode. |
16167 | Fri May 28 11:16:21 2021 | Jon | Update | CDS | Front-End Assembly and Testing
An update on recent progress in the lab towards building and testing the new FEs.
1. Timing problems resolved / FE BIOS changes
The previously reported problem with the IOPs losing sync after a few minutes (16130) was resolved through a change in BIOS settings. However, there are many required settings and it is not trivial to get these right, so I document the procedure here for future reference.
The CDS group has a document (T1300430) listing the correct settings for each type of motherboard used in aLIGO. All of the machines received from LLO contain the oldest motherboards: the Supermicro X8DTU. Quoting from the document, the BIOS must be configured to enforce the following:
• Remove hyper-threading so the CPU doesn’t try to run stuff on the idle core, as hyperthreading simulates two cores for every physical core.
• Minimize any system interrupts from hardware, such as USB and Serial Ports, that might get through to the ‘idled’ core. This is needed on the older machines.
• Prevent the computer from reducing the clock speed on any cores to ‘save power’, etc. We need to have a constant clock speed on every ‘idled’ CPU core.
I generally followed the T1300430 instructions but found a few adjustments were necessary for diskless and deterministic operation, as noted below. The procedure for configuring the FE BIOS is as follows:
- At boot-up, hit the delete key to enter the BIOS setup screen.
- Before changing anything, I recommend photographing or otherwise documenting the current working settings on all the subscreens, in case for some reason it is necessary to revert.
- T1300430 assumes the process is started from a known state and lists only the non-default settings that must be changed. To put the BIOS into this known state, first navigate to Exit > Load Failsafe Defaults > Enter.
- Configure the non-default settings following T1300430 (Sec. 5 for the X8DTU motherboard). On the IPMI screen, set the static IP address and netmask to their specific assigned values, but do set the gateway address to all zeros as the document indicates. This is to prevent the IPMI from trying to initiate outgoing connections.
- For diskless booting to continue to work, it is also necessary to set Advanced > PCI/PnP Configuration > Load Onboard LAN 1 Option Rom > Enabled.
- I also found it was necessary to re-enable IDE direct memory access and WHEA (Windows Hardware Error Architecture) support. Since these machines have neither hard disks nor Windows, I have no idea why these are needed, but I found that without them, one of the FEs would hang during boot about 50% of the time.
- Advanced > PCI/PnP configuration > PCI IDE BusMaster > Enabled.
- Advanced > ACPI Configuration > WHEA Support > Enabled.
After completing the BIOS setup, I rebooted the new FEs about six times each to make sure the configuration was stable (i.e., would never hang during boot).
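Once a machine is back up, a quick OS-level confirmation that hyper-threading really is off (generic Linux checks, not part of the T1300430 procedure):
$ lscpu | grep -E 'Thread|Core'              # expect 1 thread per core
$ cat /sys/devices/system/cpu/smt/active     # 0 when SMT/hyper-threading is disabled (newer kernels only)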
2. User models created for FE testing
With the timing issue resolved, I proceeded to build basic user models for c1bhd and c1sus2 for testing purposes. Each one has a simple structure where M ADC inputs are routed through IIR filters to an M x N output matrix, which forms linear signal combinations that are routed to N DAC outputs. This is shown in Attachment 1 for the c1bhd case, where the signals from a single ADC are conditioned and routed to a single 18-bit DAC. The c1sus2 case is similar; however the Contec BO modules still needed to be added to this model.
The FEs are now running two models each: the IOP model and one user model. The assigned parameters of each model are documented below.
Model | Host | CPU | DCUID | Path
c1x06 | c1bhd | 1 | 23 | /opt/rtcds/userapps/release/cds/c1/models/c1x06.mdl
c1x07 | c1sus2 | 1 | 24 | /opt/rtcds/userapps/release/cds/c1/models/c1x07.mdl
c1bhd | c1bhd | 2 | 25 | /opt/rtcds/userapps/release/isc/c1/models/c1bhd.mdl
c1sus2 | c1sus2 | 2 | 26 | /opt/rtcds/userapps/release/sus/c1/models/c1sus2.mdl
The user models were compiled and installed following the previously documented procedure (15979). As shown in Attachment 2, all the RTS processes are now working, with the exception of the DAQ server (for which we're still awaiting hardware). Note that these models currently exist only on the cloned copy of the /opt/rtcds disk running on the test stand. The plan is to copy these models to the main 40m disk later, once the new FEs are ready to be installed.
3. AA and AI chassis installed
I installed several new AA and AI chassis in the test stand to interface with the ADC and DAC cards. This includes three 16-bit AA chassis, one 16-bit AI chassis, and one 18-bit AI chassis, as pictured in Attachment 3. All of the AA/AI chassis are powered by one of the new 15V DC power strips connected to a bench supply, which is housed underneath the computers as pictured in Attachment 4.
These chassis have not yet been tested, beyond verifying that the LEDs all illuminate to indicate that power is present. |
16185 | Sun Jun 6 08:42:05 2021 | Jon | Update | CDS | Front-End Assembly and Testing
Here is an update and status report on the new BHD front-ends (FEs).
Timing
The changes to the FE BIOS settings documented in [16167] do seem to have solved the timing issues. The RTS models ran for one week with no more timing failures. The IOP model on c1sus2 did die due to an unrelated "Channel hopping detected" error. This was traced back to a bug in the Simulink model, where two identical CDS parts were both mapped to ADC_0 instead of ADC_0/1. I made this correction and recompiled the model following the procedure in [15979].
Model naming standardization
For lack of a better name, I had originally set up the user model on c1sus2 as "c1sus2.mdl". This week I standardized the name to follow the three-letter subsystem convention, as four letters lead to some inconsistency in the naming of the auto-generated MEDM screens. I renamed the model c1sus2.mdl -> c1su2.mdl. The updated table of models is below.
Model | Host | CPU | DCUID | Path
c1x06 | c1bhd | 1 | 23 | /opt/rtcds/userapps/release/cds/c1/models/c1x06.mdl
c1x07 | c1sus2 | 1 | 24 | /opt/rtcds/userapps/release/cds/c1/models/c1x07.mdl
c1bhd | c1bhd | 2 | 25 | /opt/rtcds/userapps/release/isc/c1/models/c1bhd.mdl
c1su2 | c1sus2 | 2 | 26 | /opt/rtcds/userapps/release/sus/c1/models/c1su2.mdl
Renaming an RTS model requires several steps to fully propagate the change, so I've documented the procedure below for future reference.
On the target FE, first stop the model to be renamed:
controls@c1sus2$ rtcds stop c1sus2
Then, navigate to the build directory and run the uninstall and cleanup scripts:
controls@c1sus2$ cd /opt/rtcds/caltech/c1/rtbuild/release
controls@c1sus2$ make uninstall-c1sus2
controls@c1sus2$ make clean-c1sus2
Unfortunately, the uninstall script does not remove every vestige of the old model, so some manual cleanup is required. First, open the file /opt/rtcds/caltech/c1/target/gds/param/testpoint.par and manually delete the three-line entry corresponding to the old model:
[C-node26]
hostname=c1sus2
system=c1sus2
If this is not removed, reinstallation of the renamed model will fail because its assigned DCUID will appear to already be in use. Next, find all relics of the old model using:
controls@c1sus2$ find /opt/rtcds/caltech/c1 -iname "*sus2*"
and manually delete each file and subdirectory containing the "sus2" name. Finally, rename, recompile, reinstall, and relaunch the model:
controls@c1sus2$ cd /opt/rtcds/userapps/release/sus/c1/models
controls@c1sus2$ mv c1sus2.mdl c1su2.mdl
controls@c1sus2$ cd /opt/rtcds/caltech/c1/rtbuild/release
controls@c1sus2$ make c1su2
controls@c1sus2$ make install-c1su2
controls@c1sus2$ rtcds start c1su2
Sitemap screens
I used a tool developed by Chris, mdl2adl, to auto-generate a set of temporary sitemap/model MEDM screens. This package parses each Simulink file and generates an MEDM screen whose background is an .svg image of the Simulink model. Each object in the image is overlaid with a clickable button linked to the auto-generated RTS screens. An example of the screen for the C1BHD model is shown in Attachment 1. Having these screens will make the testing much faster and less user-error prone.
I generated these screens following the instructions in Chris' README. However, I ran this script on the c1sim machine, where all the dependencies including Matlab 2021 are already set up. I simply copied the target .mdl files to the root level of the mdl2adl repo, ran the script (./mdl2adl.sh c1x06 c1x07 c1bhd c1su2), and then copied the output to /opt/rtcds/caltech/c1/medm/medm_teststand. Then I redefined the "sitemap" environment variable on the chiara clone to point to this new location, so that they can be launched in the teststand via the usual "sitemap" command.
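Schematically, that redefinition just points the sitemap environment at the new directory; something like the following (a hypothetical form; the actual variable/alias definition on the chiara clone may differ):
$ export sitemap=/opt/rtcds/caltech/c1/medm/medm_teststand/sitemap.adl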
Current status and plans
Is it possible to convert 18-bit AO channels to 16-bit?
Currently, we are missing five 18-bit DACs needed to complete the c1sus2 system (the c1bhd system is complete). Since the first shipment, we have had no luck getting additional 18-bit DACs from the sites, and I don't know when more will become available. So, this week I took an inventory of all the 16-bit DACs available at the 40m. I located four 16-bit DACs, pictured in Attachment 2. Their operational states are unknown, but none were labeled as known not to work.
The original CDS design would call for 40 more 18-bit DAC channels. Between the four 16-bit DACs there are 64 channels, so if only 3/4 of these DACs work, we would have enough AO channels. However, my search turned up zero additional 16-bit DAC adapter boards. We could first check whether Rolf or Todd have any spares. If not, I think it would be relatively cheap and fast to have four new adapters fabricated.
DAQ network limitations and plan
To get deeper into the signal-integrity aspect of the testing, it is going to be critical to get the secondary DAQ network running in the teststand. Of all the CDS tools (Ndscope, Diaggui, DataViewer, StripTool), only StripTool can be used without a functioning NDS server (which, in turn, requires a functioning DAQ server). StripTool connects directly to the EPICS server run by the RTS process. As such, StripTool is useful for basic DC tests of the fast channels, but it can only access the downsampled monitor channels. Ian and Anchal are going to carry out some simple DAC-to-ADC loopback tests to the furthest extent possible using StripTool (using DC signals) and will document their findings separately.
We don't yet have a working DAQ network because we are still missing one piece of critical hardware: a 10G switch compatible with the older Myricom network cards. In the older RCG version 3.x used by the 40m, the DAQ code is hardwired to interface with a Myricom 10G PCIe card. I was able to locate a spare Myricom card, pictured in Attachment 3, in the old fb machine. Since it looks like it is going to take some time to get an old 10G switch from the sites, I went ahead and ordered one this week. I have not been able to find documentation on our particular Myricom card, so it might be compatible with the latest 10G switches but I just don't know. So instead I bought exactly the same older (discontinued) model as is used in the 40m DAQ network, the Netgear GSM7352S. This way we'll also have a spare. The unit I bought is in "like-new" condition and will unfortunately take about a week to arrive. |
16186 | Sun Jun 6 12:15:16 2021 | Jon | Update | CDS | Opto-isolator for c1auxey
Since this Ocean Controls optoisolator has been shown to be compatible, I've gone ahead and ordered 10 more:
- (1) to complete c1auxey
- (2) for the upgrade of c1auxex
- (7) for the upgrade of c1susaux
They are expected to arrive by Wednesday. |
16188 | Sun Jun 6 16:33:47 2021 | Jon | Update | CDS | BI channels on c1auxey
Quote:
There is still an open issue with the BI channels not read by EPICS. They can still be read by the Windows machine though.
|
I looked into the issue that Yehonathan reported with the BI channels. I found the problem was with the .cmd file which sets up the Modbus interfacing of the Acromags to EPICS (/cvs/cds/caltech/target/c1auxey1/ETMYaux.cmd).
The problem is that all the channels on the XT1111 unit are being configured in Modbus as output channels. While it is possible to break up the address space of a single unit, so that some subset of channels are configured as inputs and another as outputs, I think this is likely to lead to mass confusion if the setup ever has to be modified. A simpler solution (and the convention we adopted for previous systems) is just to use separate Acromag units for BI and BO signals.
Accordingly, I updated the wiring plan to include the following changes:
- The five EnableMon BI channels are moved to a new Acromag XT1111 unit (BIO01), whose channels are configured in Modbus as inputs.
- One new DB37M connector is added for the 11 spare BI channels on BIO01.
- The five channels freed up on the existing XT1111 (BIO00) are wired to the existing connector for spare BO channels.
So, one more Acromag XT1111 needs to be added to the c1auxey chassis, with the wiring changes as noted above. I have already updated the .cmd and EPICS database files in /cvs/cds/caltech/target/c1auxey1 to reflect these changes. |
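Once the new BIO01 unit is wired in and the updated .cmd file is loaded, a quick way to confirm that the relocated EnableMon BI channels are actually being read by EPICS would be something like the following (a sketch only; the channel names are hypothetical placeholders for the five EnableMon inputs, not the real database names):

```python
# Sketch: verify the five EnableMon binary-input channels read back over EPICS.
# Channel names are hypothetical placeholders, not the real c1auxey database names.
from epics import caget

ENABLE_MON_CHANNELS = [
    "C1:SUS-ETMY_UL_ENABLEMon",
    "C1:SUS-ETMY_UR_ENABLEMon",
    "C1:SUS-ETMY_LL_ENABLEMon",
    "C1:SUS-ETMY_LR_ENABLEMon",
    "C1:SUS-ETMY_SD_ENABLEMon",
]

for ch in ENABLE_MON_CHANNELS:
    val = caget(ch, timeout=2.0)
    status = "no response" if val is None else f"value = {val}"
    print(f"{ch:40s} {status}")
```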
16225
|
Fri Jun 25 14:06:10 2021 |
Jon | Update | CDS | Front-End Assembly and Testing | Summary
Here is the final summary (from me) of where things stand with the new front-end systems. With Anchal and Ian's recent scripted loopback testing [16224], all the testing that can be performed in isolation with the hardware on hand has been completed. We currently have no indication of any problem with the new hardware. However, the high-frequency signal integrity and noise testing remains to be done.
I detail those tests and link some DTT templates for performing them below. We have not yet received the Myricom 10G network card being sent from LHO, which is required to complete the standalone DAQ network. Thus we do not have a working NDS server in the test stand, so we cannot yet run any of the usual CDS tools such as Diaggui. Another option would be to just connect the new front-ends to the 40m Martian/DAQ networks and test them there.
Final Hardware Configuration
Due to the unavailability of the 18-bit DACs that were expected from the sites, we elected to convert all the new 18-bit AO channels to 16-bit. I was able to locate four unused 16-bit DACs around the 40m [16185], with three of the four found to be working. I was also able to obtain three spare 16-bit DAC adapter boards from Todd Etzel. With the addition of the three working DACs, we ended up with just enough hardware to complete both systems.
The final configuration of each I/O chassis is as follows. The full setup is pictured in Attachment 1.
Component | C1BHD Qty Installed | C1SUS2 Qty Installed
16-bit ADC | 1 | 2
16-bit ADC adapter | 1 | 2
16-bit DAC | 1 | 3
16-bit DAC adapter | 1 | 3
16-channel BIO | 1 | 1
32-channel BO | 0 | 6
This hardware provides the following breakdown of channels available to user models:
Channel Type | C1BHD Channel Count | C1SUS2 Channel Count
16-bit AI* | 31 | 63
16-bit AO | 16 | 48
BO | 0 | 192
*The last channel of the first ADC is reserved for timing diagnostics.
The chassis have been closed up and their permanent signal cabling installed. They do not need to be reopened, unless future testing finds a problem.
RCG Model Configuration
An IOP model has been created for each system reflecting its final hardware configuration. The IOP models are permanent and system-specific. When ready to install the new systems, the IOP models should be copied to the 40m network drive and installed following the RCG-compilation procedure in [15979]. Each system also has one temporary user model which was set up for testing purposes. These user models will be replaced with the actual SUS, OMC, and BHD models when the new systems are installed.
The current RCG models and the action to take with each one are listed below:
Model Name | Host | CPU | DCUID | Path (all paths local to chiara clone machine) | Action
c1x06 | c1bhd | 1 | 23 | /opt/rtcds/userapps/release/cds/c1/models/c1x06.mdl | Copy to same location on 40m network drive; compile and install
c1x07 | c1sus2 | 1 | 24 | /opt/rtcds/userapps/release/cds/c1/models/c1x07.mdl | Copy to same location on 40m network drive; compile and install
c1bhd | c1bhd | 2 | 25 | /opt/rtcds/userapps/release/isc/c1/models/c1bhd.mdl | Do not copy; replace with permanent OMC/BHD model(s)
c1su2 | c1su2 | 2 | 26 | /opt/rtcds/userapps/release/sus/c1/models/c1su2.mdl | Do not copy; replace with permanent SUS model(s)
Each front-end can support up to four user models.
Future Signal-Integrity Testing
Recently, the CDS group released a well-documented procedure for testing General Standards ADCs and DACs: T2000188. They've also automated the tests using a related set of shell scripts (T2000203). Unfortunately I don't believe these scripts will work at the 40m, as they require the latest v4.x RCG.
However, there is an accompanying set of DTT templates that could be very useful for accelerating the testing. They are available from the LIGO SVN (log in with username: "first.last@LIGO.ORG"). I believe these can be used almost directly, with only minor updates to channel names, etc. There are two classes of DTT-templated tests:
- DAC -> ADC loopback transfer functions
- Voltage noise floor PSD measurements of individual cards
The T2000188 document contains images of normal/passing DTT measurements, as well as known abnormalities and failure modes. More sophisticated tests could also be configured, using these templates as a guiding example.
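Once time-series data can be pulled from the test stand (e.g., over NDS), the same two measurements can also be made offline. Below is a minimal sketch of that analysis, not a substitute for the DTT templates; the 16 kHz sample rate, the channel pairing, and the fake data are all assumptions for illustration:

```python
# Sketch: offline DAC->ADC loopback transfer function and noise PSD from time series.
# Assumes the excitation and readback arrays have already been fetched (e.g. via NDS);
# the sample rate and channel pairing are assumptions, not the DTT template settings.
import numpy as np
from scipy import signal

fs = 16384           # assumed sample rate [Hz]
nperseg = 4096       # FFT segment length

def loopback_tf(exc, rbk):
    """Estimate the DAC->ADC transfer function H(f) = CSD(exc, rbk) / PSD(exc)."""
    f, Pxy = signal.csd(exc, rbk, fs=fs, nperseg=nperseg)
    _, Pxx = signal.welch(exc, fs=fs, nperseg=nperseg)
    return f, Pxy / Pxx

def noise_asd(rbk):
    """Amplitude spectral density of an undriven channel [counts/rtHz]."""
    f, Pxx = signal.welch(rbk, fs=fs, nperseg=nperseg)
    return f, np.sqrt(Pxx)

# Example with fake data: a white excitation looped back with gain 0.5 plus noise.
rng = np.random.default_rng(0)
exc = rng.normal(size=fs * 10)
rbk = 0.5 * exc + 0.01 * rng.normal(size=exc.size)
f, H = loopback_tf(exc, rbk)
print(f"Mean |H| between 10 Hz and 1 kHz: {np.mean(np.abs(H[(f > 10) & (f < 1000)])):.3f}")
```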
Hardware Reordering
Due to the unexpected change from 18- to 16-bit AO, we are now short on several pieces of hardware:
- 16-bit AI chassis. We originally ordered five of these chassis, and all are obligated as replacements within the existing system. Four of them are now (temporarily) in use in the front-end test stand. Thus four of the new 18-bit AI chassis will need to be retrofitted with 16-bit hardware.
- 16-bit DACs. We currently have exactly enough DACs. I have requested a quote from General Standards for two additional units to have as spares.
- 16-bit DAC adapters. I have asked Todd Etzel for two additional adapter boards to also have as spares. If no more are available, a few more should be fabricated.
|
16226
|
Fri Jun 25 19:14:45 2021 |
Jon | Update | Equipment loan | Zurich Instruments analyzer | I returned the Zurich Instruments analyzer I borrowed some time ago to test out at home. It is sitting on the first table across from Steve's old desk. |
13717
|
Thu Mar 29 12:03:37 2018 |
Jon Richardson | Summary | General | Proof-of-Concept SRC Gouy Phase Measurement | I've been developing an idea for making a direct measurement of the SRC Gouy phase at RF. It's a very different approach from what has been tried before. Prior to attempting this at the sites, I'm interested in making a proof-of-concept measurement demonstrating the technique on the 40m. The finesse of the 40m SRC will be slightly higher than at the sites due to its lower-transmission SRM. Thus if this technique does not work at the 40m, it almost certainly will not work at the sites.
The idea is, with the IFO locked in a signal-recycled Michelson configuration (PRM and both ETMs misaligned), to inject an auxiliary laser from the AS port and measure its reflection from the SRC using one of the pre-OMC pickoff RFPDs. At the sites, this auxiliary beam is provided by the newly-installed squeezer laser. Prior to injection, an AM sideband is imprinted on the auxiliary beam using an AOM and polarizer. The sinusoidal AOM drive signal is provided by a network analyzer, which sweeps in frequency across the MHz band and demodulates the PD signal in-phase to make an RF transfer function measurement. At the FSR, there will be an AM transmission resonance (reflection minimum). If HOMs are also present (created by either partially occluding or misaligning the injection beam), they too will generate transmission resonances, but at a frequency shift proportional to the Gouy phase. For the theoretical 19 deg one-way Gouy phase at the sites, this mode spacing is approximately 300 kHz. If the transmission resonances of two or more modes can be simultaneously measured, their frequency separation will provide a direct measurement of the SRC Gouy phase.

The above figure illustrates this measurement configuration. An attached PDF gives more detail and the expected response based on Finesse modeling of this IFO configuration. |
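As a quick numerical cross-check of the ~300 kHz spacing quoted above (my addition; the ~56 m SRC length is an assumed value for the site interferometers, not taken from this entry):

```python
# Sketch: HOM transmission-resonance spacing vs. one-way SRC Gouy phase.
# The higher-order-mode spacing is f_HOM = FSR * (round-trip Gouy phase) / (2*pi),
# with the round-trip Gouy phase taken as twice the one-way value for a linear cavity.
import numpy as np

c = 299_792_458.0          # speed of light [m/s]
L_src = 56.0               # assumed aLIGO SRC length [m] (not from this entry)
gouy_oneway_deg = 19.0     # theoretical one-way SRC Gouy phase quoted above

fsr = c / (2 * L_src)                                   # free spectral range [Hz]
gouy_roundtrip = 2 * np.radians(gouy_oneway_deg)        # round-trip Gouy phase [rad]
f_hom = fsr * gouy_roundtrip / (2 * np.pi)              # HOM spacing [Hz]

print(f"FSR         = {fsr / 1e6:.2f} MHz")
print(f"HOM spacing = {f_hom / 1e3:.0f} kHz")           # ~283 kHz, i.e. roughly 300 kHz
```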
13802
|
Tue May 1 08:04:13 2018 |
Jon Richardson | Configuration | Electronics | PSL-Aux. Laser Phase-Locked Loop | [Jon, Gautam, Johannes]
Summary: In support of making a proof-of-concept RF measurement of the SRC Gouy phase, we've implemented a PLL locking the aux. 700 mW NPRO laser frequency to the PSL. The lock was demonstrated to hold on minutes-long time scales, at which point the slow (currently uncontrolled) thermal drift of the aux. laser appears to exceed the PZT dynamic range. New (temporary) hardware is set up on an analyzer cart beside the PSL launch table.
Next steps:
- Characterize PLL stability and noise performance (transfer functions).
- Align and mode-match aux. beam from the AS table into the interferometer.
- With the IFO locked in a signal-recycled Michelson configuration, inject broadband (swept) AM sidebands via the aux. laser AOM. Coherently measure the reflection of the driven AM from the SRC.
- Experiment with methods of creating higher-order modes (partially occluding the beam vs. misaligning into, e.g., the output Faraday isolator). The goal is to identify a viable technique that is also possible at the sites, where the squeezer laser serves as the aux. laser.
The full measurement idea is sketched in the attached PDF.
PSL-Aux. beat note sensor on the PSL launch table.
Feedback signal to aux. laser PZT.
PLL electronics cart.
|
13814
|
Fri May 4 13:24:56 2018 |
Jon Richardson | Configuration | Electronics | AUX-PSL PLL Implementation & Characterization | Attached are final details of the phase-locked loop (PLL) implementation we'll use for slaving the AUX 700 mW NPRO laser to the PSL.
The first image is a schematic of the electronics used to create the analog loop. They are currently housed on an analyzer cart beside the PSL table. If this setup is made permanent, we will move them to a location inside the PSL table enclosure.
The second image is the measured transfer function of the closed loop. It achieves approximately 20 dB of noise suppression at low frequencies, with a UGF of 50 kHz. In this configuration, locks were observed to hold for tens of minutes. |
13858
|
Thu May 17 13:51:35 2018 |
Jon Richardson | Configuration | Electronics | Documentation & Schematics for AUX-PSL PLL | [Jon, Gautam]
Attached is supporting documentation for the AUX-PSL PLL electronics installed in the lower PSL shelf, as referenced in #13845.
Some initial loop measurements by Gautam and Koji (#13848) compare the performance of the LB1005 vs. an SR560 as the controller, and find the LB1005 to be advantageous (a higher UGF and phase margin). I have some additional measurements which I'll post separately.
Loop Design
Pickoffs of the AUX and PSL beams are routed onto a broadband-sensitive New Focus 1811 PD. The AUX laser temperature is tuned to place the optical beat note of the two fields near 50 MHz. The RF beat note is sensed by the AC-coupled PD channel, amplified, and mixed-down with a 50 MHz RF source to obtain a DC error signal. The down-converted term is isolated via a 1.9-MHz low-pass filter in parallel with a 50 Ohm resistor and fed into a Newport LB1005 proportional-integral (PI) servo controller. Controller settings are documented in the below schematic. The resulting control signal is fed back into the fast PZT actuator input of the AUX laser.
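To make the loop shape concrete, here is a minimal numerical model of the open-loop gain (a sketch with made-up gain values, not a fit to the measured loop): the PD/mixer act as a phase detector with gain K_phi [V/rad], the LB1005 is modeled as a P+I stage, and the NPRO PZT acts as a frequency actuator K_pzt [Hz/V], which contributes a 1/f integration when converting frequency to phase.

```python
# Sketch of the PLL open-loop gain G(f) = K_phi * K_PI(f) * K_pzt / (i*f).
# All gain values below are illustrative placeholders, not measured numbers.
import numpy as np

K_phi = 0.5       # phase-detector gain [V/rad] (placeholder)
K_pzt = 1e6       # NPRO PZT actuator gain [Hz/V] (placeholder)
f_PI  = 1e4       # P-I corner frequency [Hz] (placeholder)
K_p   = 0.03      # proportional gain [V/V] (placeholder)

def open_loop(f):
    f = np.asarray(f, dtype=float)
    k_pi = K_p * (1 + f_PI / (1j * f))     # P+I controller response
    freq_to_phase = 1.0 / (1j * f)         # Hz -> rad: 2*pi / (i*2*pi*f) = 1/(i*f)
    return K_phi * k_pi * K_pzt * freq_to_phase

f = np.logspace(2, 6, 2000)
G = open_loop(f)
idx = np.argmin(np.abs(np.abs(G) - 1.0))          # index where |G| crosses unity
ugf = f[idx]
margin = 180.0 + np.degrees(np.angle(G[idx]))
print(f"UGF ~ {ugf / 1e3:.0f} kHz, phase margin ~ {margin:.0f} deg")
```

With the real measured gains plugged in, a model like this should roughly reproduce the UGF and phase margin quoted in the loop measurements.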
Schematic diagram of the PLL.
Hardware Photos
Optical layout on the PSL table.
PLL electronics installed in the lower PSL shelf.
Close-up view of the phase detector electronics.
Slow temp. (left) and fast PZT signals into the AUX controller.
AUX-PSL beat note locked at 50 MHz offset, from the control room.
|
13867
|
Fri May 18 19:59:55 2018 |
Jon Richardson | Configuration | Electronics | AUX-PSL PLL Characterization Measurements | Below is analysis of measurements I had taken of the AUX-PSL PLL using an SR560 as the servo controller (1 Hz single-pole low-pass, gain varied 100-500). The resulting transfer function is in good agreement with that found by Gautam and Koji (#13848). The optimal gain is found to be 200, which places the UGF at 15 kHz with a 45 deg phase margin.
For now I have reverted the PLL to use the SR560 instead of the LB1005. The issue with the LB1005 is that the TTL input for remote control only "freezes" the integrator, but does not actually reset it. This is fine if the lock is disabled in a controlled way (i.e., via the medm interface). However, if the lock is lost uncontrollably, the integrator is stuck in a garbage state that prevents re-locking. The only way to reset this integrator is to manually flip a switch on the controller box (no remote reset). Rana suggests we might be able to find a workaround using a remote-controlled relay before the controller.


|
13876
|
Tue May 22 10:14:39 2018 |
Jon Richardson | Configuration | Electronics | Documentation & Schematics for AUX-PSL PLL |
Quote: |
[The quote reproduces the full text of entry 13858 above verbatim; see that entry for the loop design description and hardware photos.]
|
|
13891
|
Fri May 25 13:06:33 2018 |
Jon Richardson | Configuration | Electronics | Improved Measurements of AUX-PSL PLL | Attached are gain-variation measurements of the final, in situ AUX-to-PSL phase-locked loop (PLL).
Attachment 1: Figure of open-loop transfer function
Attachment 2: Raw network analyzer data
The figure shows the open-loop transfer function measured at several gain settings of the LB1005 PI servo controller. The shaded regions denote the 1-sigma sample variance inferred from 10 sweeps per gain setting. This analysis supersedes previous posts as it reflects the final loop architecture, which was slightly modified (now has a 90 dB low-frequency gain limit) as a workaround to make the LB1005 remotely operable. The measurements are also extended from 100 kHz to 1 MHz to resolve the PZT resonances of the AUX laser.
Conclusions:
- Gain variation confirms response linearity.
- At least two PZT resonances above the UGF (at 150 kHz and 500 kHz) have loop gain not far below unity.
- Recommend lowering the proportional gain by 3 dB. This will place the UGF at 30 kHz with 55 degrees of phase margin.
|
13893
|
Fri May 25 14:55:33 2018 |
Jon Richardson | Update | Cameras | Status of GigE Camera Software Fixes | There is an effort to switch to an all-digital system for the GigE camera feeds similar to the one running at LLO, which uses Joe Betzwieser's custom SnapPy package to interface with the cameras in Python and aggregate their feeds into a fancy GUI. Joe's code is a SWIG-wrapping of the commercial camera-driver API, Pylon, from Basler. The wrapping allows the low-level camera driver methods to be called from within Python, and their feeds are forwarded to a gstreamer stream also initiated from within Python. The problem is that his wrapping (and the underlying Pylon software itself) is only runnable on an older version of Ubuntu. Efforts to run his software on newer distributions at the 40m have failed.
I'm working on a fix to essentially rewrite his high-level SnapPy code (generators of GUIs, etc.) to use the newest version of Pylon (pylon5) to interface at a low level with the cameras. I discovered that since the last attempt to digitize the camera system, Basler has released their own official version of a Python wrapping for Pylon on github (PyPylon).
Progress so far:
- I've installed from source the newest version of Pylon, pylon5.0.12 on the SL7 machine (rossa). I chose that machine because LIGO is migrating to Scientific Linux, but I think this will also work for any distribution.
- I've installed from source the newest official Python wrapping of the Basler Pylon software, pypylon.
- I've tested the pypylon package and confirmed it can run our cameras.
The next and final step is to modify Joe's SnapPy package to import pypylon instead of his custom wrapping of an older version of the camera software, and update all of the Pylon calls to use the new methods. I'll hopefully get back to this early next week. |
13914
|
Mon Jun 4 11:34:05 2018 |
Jon Richardson | Update | Cameras | Update on GigE Cameras | I spent a day trying to modify Joe B.'s LLO camera client-server code, without ultimate success. His code now runs without throwing any errors, but something inside the black-box handoff of his camera source code to gstreamer appears to be SILENTLY FAILING. Gautam suggested a call with Joe B., which I think is worth a try.
In the meantime, I've implemented a simple Python video feed streamer which does work, and which students can use as a base framework to implement more complicated things (e.g., stream multiple feeds in one window, or save a video stream as a movie or animation).
It uses the same PyPylon API to interface with the GigE cameras as does Joe's code. However, it uses matplotlib instead of gstreamer to render the imaging. The matplotlib code is optimized for maximum refresh rate and I observed it to achieve ~5 Hz for a single video feed. However, this demo code does not set any custom cameras settings (it just initializes a camera with its defaults), so it's quite possible that the refresh rate is actually limited by, e.g., the camera exposure time.
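For reference, the core of such a streamer looks roughly like the following (a minimal sketch, not the installed script; it simply opens the first camera pypylon finds, with default settings, rather than selecting a camera by IP address):

```python
# Minimal sketch of a PyPylon + matplotlib live view (not the installed script).
# Opens the first Basler camera found and redraws its frames until the window is closed.
from pypylon import pylon
import matplotlib.pyplot as plt

camera = pylon.InstantCamera(pylon.TlFactory.GetInstance().CreateFirstDevice())
camera.Open()

plt.ion()
fig, ax = plt.subplots()
image = None

try:
    while plt.fignum_exists(fig.number):
        result = camera.GrabOne(1000)          # grab a single frame (1 s timeout)
        if not result.GrabSucceeded():
            continue
        frame = result.Array                   # numpy array of pixel values
        if image is None:
            image = ax.imshow(frame, cmap="gray")
        else:
            image.set_data(frame)
        fig.canvas.draw_idle()
        plt.pause(0.01)                        # let matplotlib process GUI events
finally:
    camera.Close()
```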
Location of the code (on the shared network drive):
/opt/rtcds/caltech/c1/scripts/GigE/demo_with_mpl/stream_camera_to_mpl.py
This demo initializes a single GigE camera with its default settings and continuously streams its video feed in a pop-up window until the window is closed. I installed PyPylon from source on the SL7 machine (rossa) and have only tested it on that machine. I believe it should work on all our versions of Linux, but if not, run the camera software on rossa for now.
Usage:
From within the above directory, the code is executed as
$python stream_camera_to_mpl.py [Camera IP address]
with a single argument specifying the IP address of the desired camera. At the time I tested, there was only one GigE camera on our network, at 192.168.113.152. |
13898
|
Wed May 30 16:12:30 2018 |
Jonathan Hanks | Summary | CDS | Looking at c1oaf issues | When c1oaf starts up there are 446 gain channels that should be set to 0.0 but which end up at 1.0. An example channel is C1:OAF-ADAPT_CARM_ADPT_ACC1_GAIN. The safe.snap file states that it should be set to 0. After model start up it is at 1.0.
We ran some tests, including modifying the safe.snap to make sure it was reading the snap file we were expecting. For this I set the setpoint to 0.5. After restarting the model we saw that the setpoint went to 0.5 but the EPICS value remained at 1.0. I then set the snap file back to its original setting. I ran the EPICS sequencer by hand in a gdb session and verified that the sequencer was setting the field to 0. I also built a custom sequencer that would catch writes by the SDF system to the channel. I only saw one write, the initial write that pushed a 0. I have reverted my changes to the sequencer.
The gain channel can be caput to the correct value and it is not pushed back to 1.0. So there does not appear to be a process actively pushing the value to 1.0. On Rolf's suggestion we ran the sequencer without the kernel object loaded, and saw the same behavior.
This will take some thought. |
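For anyone revisiting this, a quick way to re-check the live EPICS values against the snap file is sketched below (my addition; it assumes the usual one-record-per-line "CHANNEL COUNT VALUE" layout of a BURT snap file, and the file path is a placeholder):

```python
# Sketch: compare live EPICS values against a BURT safe.snap file.
# Assumes "CHANNEL COUNT VALUE" data lines and skips the BURT header block.
from epics import caget

snap_file = "safe.snap"   # path to the c1oaf safe.snap (placeholder)

mismatches = []
with open(snap_file) as f:
    in_header = False
    for line in f:
        line = line.strip()
        if line.startswith("--- Start BURT header"):
            in_header = True
            continue
        if line.startswith("--- End BURT header"):
            in_header = False
            continue
        if in_header or not line:
            continue
        parts = line.split()
        try:
            channel, value = parts[0], float(parts[-1])
        except (IndexError, ValueError):
            continue                          # skip string-valued or malformed records
        live = caget(channel, timeout=1.0)
        if live is not None and abs(live - value) > 1e-6:
            mismatches.append((channel, value, live))

for channel, expected, live in mismatches:
    print(f"{channel}: snap = {expected}, live = {live}")
print(f"{len(mismatches)} channels differ from the snap file")
```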
629
|
Thu Jul 3 12:36:05 2008 |
Jonh | Summary | SUS | ETMY watchdog | ETMY watchdog was tripped. I turned it off and re-enabled the outputs. |
3976
|
Tue Nov 23 11:32:03 2010 |
Joonho | Summary | Electronics | RF distribution unit. | Last time (Friday) I worked out an arrangement for the RF distribution unit.
I am building the RF distribution unit for the RF upgrade designed by Alberto.
To reduce noise from loose connections,
I tried to use as many rigid (hard) connections as possible while minimizing the number of connections made via cable.
This is why I put the splitters right next to the front panel, so that the connections between the panel plugs and the splitters can be made with hard joints.
I attached the arrangement that I made last Friday.
Next time, I will drill the Teflon (the supporting plate) for assembly.
Any suggestion would be really appreciated. |
4139
|
Tue Jan 11 21:08:19 2011 |
Joonho | Summary | Cameras | CCD cables upgrade plan. | Today I made the CCD Cable Upgrade Plan to improve the system.
There are ~60 VIDEO cables to be worked on for the upgrade, so I would like to ask everyone's help in replacing the cables.
1. Background
Currently, the VIDEO system is not working as we would like.
About 20 cables have an impedance of 50 or 52 ohm, which is not matched to the rest of the VIDEO system.
Moreover, some cameras and monitors are not connected.
2. What I have done so far
I have checked the impedance of all the cables, so I know which can be kept and which should be replaced.
I measured the cables' paths along the side tray, so we can see which cable is installed along which path.
I have made almost all of the cables needed for the VIDEO system upgrade, but they have not yet been labeled.
3. Upgrade plan (More details are shown in attached file)
0 : Cable for output ch#2 and input ch#16 is not available for now |
1 : First, we need to work on the existing cables. |
1A : Check the label on the both ends and replace to the new label if necessary |
1B : We need to move the existing cable's channel only for those currently connected to In #26 (from #26 to #25) |
2 : Second, we need to implement new cables into the system |
2A : Make two cable's label and attach those on the both ends |
2B : Disconnect existing cables at the channel assigned for new cables and remove the cables from the tray also |
2C : Move 4 quads into the cabinet containing VIDEO MUX |
2D : Implement the new cable into the system along the path described and connect the cables to the assgined channel and camera or monitor |
4. This is a first draft of the plan.
Any comments toward a better plan are always welcome.
Also, replacing all the cables indicated in the attached file is a large amount of work.
I would like to ask everyone's help in replacing the cables (steps 1. through 2D. above).
|
4328
|
Fri Feb 18 20:17:07 2011 |
Joonho | Summary | Electronics | Isolation of Voltage regulator | Today I was working on the RF distribution box.
So far I have almost finished electrically isolating the voltage regulators from the box wall by inserting mica sheets, sleeves, and washers.
The problem I found is that the resistance between the wall and the voltage regulators is only on the order of Mohms.
I checked my isolation parts (mica sheet, sleeve, and washer) and found no problem there.
But I found that the power switch is not completely isolated from the wall (around 800 kohm),
and that the resistance between a regulator and the wall is smaller for regulators closer to the power switch
and larger for those farther from it.
So I think we need to add a washer or sleeve to isolate the power switch electrically from the box wall.
Suresh or I will fix this problem.
[ To Suresh, I can finish the isolation when I come tomorrow. Or you can proceed to finish isolation.] |
3655
|
Tue Oct 5 18:27:18 2010 |
Joonho Lee | Summary | Electronics | CCD cable's impedance | Today I checked the CCD cables which are connected to the VIDEOMUX.
17 cables are RG59 and 8 are RG58. I have not yet identified the type of the other 23 cables.
The reason I am checking the cables is to replace those with 50 or 52 ohm impedance with 75 ohm cables.
Once I figure out which cables have the wrong impedance, I will make new cables and swap them in to match the impedance, which should give a better VIDEO signal.
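For reference, a quick worked number (my addition, not from the original entry) on why a 50 ohm cable in a 75 ohm video chain matters:

```python
# Sketch: voltage reflection at the junction of a mismatched cable and a 75 ohm load.
Z_cable = 50.0   # impedance of the suspect RG58-type cables [ohm]
Z_load  = 75.0   # video system impedance [ohm]

gamma = (Z_load - Z_cable) / (Z_load + Z_cable)
print(f"Reflection coefficient = {gamma:+.2f} "
      f"({abs(gamma) * 100:.0f}% of the voltage reflected)")
# -> +0.20, i.e. ~20% voltage reflection at each mismatched junction,
#    which shows up as ghosting/ringing in the composite video signal.
```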
To check the impedance of each CCD cable, I went to the VIDEOMUX and looked for the label on the cable's surface.
RG59 denotes a cable with 75 ohm impedance. I wrote down each cable's input or output channel number together with whether or not it is RG59.
The results of the observation are as follows.
Type | Channel number where it is connected
Type 59 | in#2, in#11, in#12, in#15, in#18, in#19, in#22, in#26, out#3, out#4, out#11, out#12, out#14, out#17, out#18, out#20, out#21
Type 58 | in#17, in#23, in#24, in#25, out#2, out#5, out#7, out#19
unknown type | others
For the 23 cables whose type I have not yet identified, the cables are too entangled to trace the label along each one.
I will try to identify more of them tomorrow. Any suggestion would be really appreciated. |
3694
|
Mon Oct 11 23:55:25 2010 |
Joonho Lee | Summary | Electronics | CCD cables for output signal | Today I checked all the CCD cables connected to the output channels of the VIDEOMUX.
Of the 22 output cables in total, 18 are RG59 and 4 are RG58.
The reason I am checking the cables is to replace those with 50 or 52 ohm impedance with 75 ohm cables.
Once I figure out which cables have the wrong impedance, I will make new cables and swap them in to match the impedance, which should give a better VIDEO signal.
Today I labeled all the cables connected to the output channels of the VIDEO MUX and disconnected them all, since last time it was hard to check every cable because they were too entangled.
With Yuta's kind help, I also checked which output channel sends its signal to which monitor while I was disconnecting the cables.
Then I checked the type of each cable and any existing label indicating where it is connected.
After I finished the check, I reconnected each cable to the output channel it had been connected to before.
4 of the 22 cables are RG58 and are expected to be replaced with RG59 cables.
The results of the observation are as follows.
Ch# | Where its signal is sent | Type
1 | unknown | 59
2 | Monitor#2 | 58
3 | Monitor#3 | 59
4 | Monitor#4 | 59
5 | Monitor#5 | 58
6 | Monitor#6 | 59
7 | Monitor#7 | 58
8 | unknown / labeled as "PSL output monitor" | 59
9 | Monitor#9 | 59
10 | Monitor#10 | 59
11 | Monitor#11 | 59
12 | Monitor#12 | 59
13 | Unknown | 59
14 | Monitor#14 | 59
15 | Monitor#15 | 59
16 | unknown / labeled as "10" | 59
17 | unknown | 59
18 | unknown / labeled as "3B" | 59
19 | unknown / labeled as "MON6 IR19" | 58
20 | unknown | 59
21 | unknown | 59
22 | unknown | 59
I could not figure out where 10 of the cables are sending their signals. They are not connected to any monitor turned on in the control room,
so I guess they are connected to monitors inside the lab. I will check these unknown cables when I check the unknown input cables.
Next time, I will check the cables connected to the input channels of the VIDEO MUX. Any suggestion would be really appreciated. |
3739
|
Mon Oct 18 22:11:32 2010 |
Joonho Lee | Summary | Electronics | CCD cables for input signal | Today I checked all the CCD cables connected to the input channels of the VIDEOMUX.
Of the 25 input cables in total, 12 are RG59, 4 are RG58, and 9 are of unknown type.
The reason I am checking the cables is to replace those with 50 or 52 ohm impedance with 75 ohm cables.
Once I figure out which cables have the wrong impedance, I will make new cables and swap them in to match the impedance, which should give a better VIDEO signal.
Today I checked the cables in a similar way to last time.
I labeled all the cables connected to the input channels of the VIDEO MUX and disconnected them all, since last time it was hard to check every cable because they were too entangled.
Then I checked the type of each cable and any existing label indicating where it is connected.
After I finished the check, I reconnected each cable to the input channel it had been connected to before.
4 of the 25 cables are RG58 and are expected to be replaced with RG59 cables.
9 of the 25 cables are of unknown type. These nine are all thick, orange-colored cables with no label on the jacket indicating the cable type.
The results of the observation are as follows.
Note that type 'TBD-1' is used for the orange-colored cables, since they all appear to be the same type of cable.
Channel number | Where its signal is coming from | Type
1 | C1:IO-VIDEO 1 MC2 | TBD-1
2 | FI CAMERA | 59
3 | PSL OUTPUT CAMERA | 59
4 | BS C:1O-VIDEO 4 | TBD-1
5 | MC1&3 C:1O-VIDEO 5 | 59
6 | ITMX C:1O-VIDEO 6 | TBD-1
7 | C1:IO-VIDEO 7 ITMY | TBD-1
8 | C1:IO-VIDEO 8 ETMX | TBD-1
9 | C1:IO-VIDEO 9 ETMY | TBD-1
10 | No cable is connected (spare channel) |
11 | C1:IO-VIDEO 11 RCR | 59
12 | C1:IO-VIDEO RCT | 59
13 | MCR VIDEO | 59
14 | C1:IO-VIDEO 14 PMCT | 59
15 | VIDEO 15 PSL IOO(OR IOC) | 59
16 | C1:IO-VIDEO 16 IMCT | TBD-1
17 | PSL CAMERA | 58
18 | C1:IO-VIDEO 18 IMCR | 59
19 | C1:IO-VIDEO 19 SPS | 59
20 | C1:IO-VIDEO 20 BSPO | TBD-1
21 | C1:IO-VIDEO 21 ITMXPO | TBD-1
22 | C1:IO-VIDEO 22 APS1 | 59
23 | ETMX-T | 58
24 | ETMY-T | 58
25 | POY CCD VIDEO CH25 | 58
26 | OMC-V | 59
Today I could not determine what impedance the TBD-1 (unknown) type has.
Next time, I will measure the orange-colored cables' impedance directly and find out where the unknown output signals are sent. Any suggestion would be really appreciated. |
3782
|
Tue Oct 26 01:53:21 2010 |
Joonho Lee | Update | Electronics | Function Generator removed. | Today I worked on how to measure cable impedance directly.
To measure the impedance in the RF range, I used a function generator which can generate a 50 MHz signal and which was originally connected to the table on the right of the decks.
The reason I am checking the cables is to replace those with 50 or 52 ohm impedance with 75 ohm cables.
Once I figure out which cables have the wrong impedance, I will make new cables and swap them in to match the impedance, which should give a better VIDEO signal.
To test the VIDEO cables, I need a function generator that can produce a 50 MHz signal.
In the deck to the right of the PSL table, there was only one such generator, and it was connected to the table to the right of the deck.
Therefore, I disconnected it from the cable and took it to the control room to use, since Rana said it was not in use.
Then I tried to work out how to measure the cable impedance directly, but I have not finished yet.
When I finished today's work, I put the generator back in the deck, but I did not reconnect it to the cable it had originally been connected to.
Next time, I will finalize a practical method for measuring the cable impedance, and then I will measure the cables with unknown impedance.
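One common bench approach (a sketch of the idea only; this is my addition, not what was done here): drive the cable through a known series resistor with a fast edge, and before the reflection from the far end returns, the cable presents its characteristic impedance, so the launch voltage divides between the series resistor and Z0:

```python
# Sketch: estimate a cable's characteristic impedance from the launch-voltage divider.
# Drive the cable through a known series resistor R_s; before reflections return,
# the cable looks like Z0, so V_launch = V_source * Z0 / (Z0 + R_s).
def estimate_z0(v_source, v_launch, r_series):
    """Solve the divider for Z0 = R_s * V_launch / (V_source - V_launch)."""
    return r_series * v_launch / (v_source - v_launch)

# Example numbers (made up): 1.0 Vpp drive, 0.60 Vpp seen at the cable input, 50 ohm series R.
print(f"Z0 ~ {estimate_z0(1.0, 0.60, 50.0):.0f} ohm")   # -> 75 ohm
```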
Any suggestion would be appreciated. |
|