ID | Date | Author | Type | Category | Subject
16185 | Sun Jun 6 08:42:05 2021 | Jon | Update | CDS | Front-End Assembly and Testing

Here is an update and status report on the new BHD front-ends (FEs).


The changes to the FE BIOS settings documented in [16167] do seem to have solved the timing issues. The RTS models ran for one week with no more timing failures. The IOP model on c1sus2 did die due to an unrelated "Channel hopping detected" error. This was traced back to a bug in the Simulink model, where two identical CDS parts were both mapped to ADC_0 instead of ADC_0/1. I made this correction and recompiled the model following the procedure in [15979].

Model naming standardization

For lack of a better name, I had originally set up the user model on c1sus2 as "c1sus2.mdl". This week I standardized the name to follow the three-letter subsystem convention, as the four-letter name led to some inconsistency in the naming of the auto-generated MEDM screens. I renamed the model c1sus2.mdl -> c1su2.mdl. The updated table of models is below.

Model | Host | CPU | DCUID | Path
c1x06 | c1bhd | 1 | 23 | /opt/rtcds/userapps/release/cds/c1/models/c1x06.mdl
c1x07 | c1sus2 | 1 | 24 | /opt/rtcds/userapps/release/cds/c1/models/c1x07.mdl
c1bhd | c1bhd | 2 | 25 | /opt/rtcds/userapps/release/isc/c1/models/c1bhd.mdl
c1su2 | c1sus2 | 2 | 26 | /opt/rtcds/userapps/release/sus/c1/models/c1su2.mdl

Renaming an RTS model requires several steps to fully propagate the change, so I've documented the procedure below for future reference.

On the target FE, first stop the model to be renamed:

controls@c1sus2$ rtcds stop c1sus2

Then, navigate to the build directory and run the uninstall and cleanup scripts:

controls@c1sus2$ cd /opt/rtcds/caltech/c1/rtbuild/release
controls@c1sus2$ make uninstall-c1sus2
controls@c1sus2$ make clean-c1sus2

Unfortunately, the uninstall script does not remove every vestige of the old model, so some manual cleanup is required. First, open the file /opt/rtcds/caltech/c1/target/gds/param/testpoint.par and manually delete the three-line entry corresponding to the old model:
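For illustration, the entry follows the standard GDS testpoint.par layout; this is a sketch, with the node number, hostname, and system shown here assumed to match the old model's installation:

[C-node24]
hostname=c1sus2
system=c1sus2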


If this is not removed, reinstallation of the renamed model will fail because its assigned DCUID will appear to already be in use. Next, find all relics of the old model using:

controls@c1sus2$ find /opt/rtcds/caltech/c1 -iname "*sus2*"

and manually delete each file and subdirectory containing the "sus2" name. Finally, rename, recompile, reinstall, and relaunch the model:

controls@c1sus2$ cd /opt/rtcds/userapps/release/sus/c1/models
controls@c1sus2$ mv c1sus2.mdl c1su2.mdl
controls@c1sus2$ cd /opt/rtcds/caltech/c1/rtbuild/release
controls@c1sus2$ make c1su2
controls@c1sus2$ make install-c1su2
controls@c1sus2$ rtcds start c1su2

Sitemap screens

I used a tool developed by Chris, mdl2adl, to auto-generate a set of temporary sitemap/model MEDM screens. This package parses each Simulink file and generates an MEDM screen whose background is an .svg image of the Simulink model. Each object in the image is overlaid with a clickable button linked to the auto-generated RTS screens. An example of the screen for the C1BHD model is shown in Attachment 1. Having these screens will make the testing much faster and less prone to user error.

I generated these screens following the instructions in Chris' README, running the script on the c1sim machine, where all the dependencies (including Matlab 2021) are already set up. I simply copied the target .mdl files to the root level of the mdl2adl repo, ran the script (./mdl2adl.sh c1x06 c1x07 c1bhd c1su2), and copied the output to /opt/rtcds/caltech/c1/medm/medm_teststand. Then I redefined the "sitemap" environment variable on the chiara clone to point to this new location, so the screens can be launched in the teststand via the usual "sitemap" command.
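For reference, the whole sequence amounts to something like the following (a sketch; the repo location on c1sim and the name of its output directory are assumptions):

controls@c1sim$ cd ~/mdl2adl                                # assumed repo location
controls@c1sim$ cp /path/to/teststand/models/*.mdl .        # copy target models to the repo root
controls@c1sim$ ./mdl2adl.sh c1x06 c1x07 c1bhd c1su2
controls@c1sim$ cp -r output/* /opt/rtcds/caltech/c1/medm/medm_teststand/   # output directory name assumed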

Current status and plans

Is it possible to convert 18-bit AO channels to 16-bit?

Currently, we are missing five 18-bit DACs needed to complete the c1sus2 system (the c1bhd system is complete). Since the first shipment, we have had no luck getting additional 18-bit DACs from the sites, and I don't know when more will become available. So, this week I took an inventory of all the 16-bit DACs available at the 40m. I located four 16-bit DACs, pictured in Attachment 2. Their operational states are unknown, but none were labeled as known not to work.

The original CDS design calls for 40 more 18-bit DAC channels. The four 16-bit DACs provide 64 channels in total, so if even three of the four work, we will have enough AO channels. However, my search turned up zero additional 16-bit DAC adapter boards. We should first check whether Rolf or Todd have any spares; if not, I think it would be relatively cheap and fast to have four new adapters fabricated.

DAQ network limitations and plan

To get deeper into the signal-integrity aspect of the testing, it is going to be critical to get the secondary DAQ network running in the teststand. Of all the CDS tools (Ndscope, Diaggui, DataViewer, StripTool), only StripTool can be used without a functioning NDS server (which, in turn, requires a functioning DAQ server). StripTool connects directly to the EPICS server run by the RTS process. As such, StripTool is useful for basic DC tests of the fast channels, but it can only access the downsampled monitor channels. Ian and Anchal are going to carry out some simple DAC-to-ADC loopback tests using DC signals, to the furthest extent possible with StripTool, and will document their findings separately.
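Since StripTool talks directly to EPICS, the same slow monitor channels can also be spot-checked from the command line with the standard EPICS tools; for example (channel name purely illustrative):

controls@c1sus2$ caget C1:X07-MADC0_EPICS_CH0    # hypothetical IOP ADC monitor channel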

We don't yet have a working DAQ network because we are still missing one piece of critical hardware: a 10G switch compatible with the older Myricom network cards. In the older RCG version 3.x used by the 40m, the DAQ code is hardwired to interface with a Myricom 10G PCIe card. I was able to locate a spare Myricom card, pictured in Attachment 3, in the old fb machine. Since it looks like it is going to take some time to get an old 10G switch from the sites, I went ahead and ordered one this week. I have not been able to find documentation on our particular Myricom card, so it may or may not be compatible with the latest 10G switches. To be safe, I bought exactly the same older (discontinued) model as is used in the 40m DAQ network, the Netgear GSM7352S; this way we'll also have a spare. The unit I bought is in "like-new" condition and will unfortunately take about a week to arrive.

Attachment 1: c1bhd.png
Attachment 2: 16bit_dacs.png
Attachment 3: myricom.png
16220 | Tue Jun 22 16:53:01 2021 | Ian MacMillan | Update | CDS | Front-End Assembly and Testing

The channels on both C1BHD and C1SUS2 seem to be frozen: they aren't updating and are holding one value. To fix this, Anchal and I tried:

  • restarting the computers
    • restarting basically everything, including the models
  • changing the matrix values
  • adding filters
  • messing with the offset
  • restarting the network ports (Paco suggested this; apparently it worked for him at some point)
  • checking to make sure everything was still connected inside the case (DAC, ADC, etc.)

I wonder if Jon has any ideas. 

16224 | Thu Jun 24 17:32:52 2021 | Ian MacMillan | Update | CDS | Front-End Assembly and Testing

Anchal and I ran tests on the two systems (C1-SUS2 and C1-BHD). Attached are the results, along with the code and data to recreate them.

We connected one DAC channel to one ADC channel, so all of the results represent a DAC/ADC pair. We then set the offset to different values from -3000 to 3000 counts and recorded the measured signal. I then plotted the response curve of every DAC/ADC pair, so each was tested at least once.
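A minimal command-line sketch of the same sweep, using the standard EPICS tools (the channel names here are hypothetical; the actual scripts and data are in Attachment 3):

for off in $(seq -3000 500 3000); do
    caput C1:SU2-TST_OFFSET $off      # hypothetical DAC-drive offset channel
    sleep 2                           # let the downsampled monitor settle
    caget -t C1:SU2-TST_ADC_MON       # hypothetical ADC readback channel
done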

There are two types of plots included in the attachments:

1) A summary plot, found on the last pages of the PDF files. This is a quick and dirty way to see if all of the channels are working. It is NOT a replacement for the other plots: it shows all the data quickly but sacrifices precision.

2) An in-depth look at each ADC/DAC pair. Here I show the measured value for a defined DC offset. The gain of the system should be 0.5 (put in an offset of 100 and measure 50); I included a line to show where this should be. I also plotted the difference between the 0.5-gain line and the measured data.

As seen in the provided plots, the channels saturate beyond roughly the -2000 to 2000 count range, which is why the difference graph is restricted to the -2000 to 2000 range.

Summary: all the channels look to be working; they all show very little deviation from the theoretical gain.

Note: ADC channel 31 is the timing signal, so it is the only channel that is wildly off. It is not a measurement channel; we measured it by mistake.

Attachment 1: C1-SU2_Channel_Responses.pdf
Attachment 2: C1-BHD_Channel_Responses.pdf
Attachment 3: CDS_Channel_Test.zip
16225 | Fri Jun 25 14:06:10 2021 | Jon | Update | CDS | Front-End Assembly and Testing


Here is the final summary (from me) of where things stand with the new front-end systems. With Anchal and Ian's recent scripted loopback testing [16224], all the testing that can be performed in isolation with the hardware on hand has been completed. We currently have no indication of any problem with the new hardware. However, the high-frequency signal integrity and noise testing remains to be done.

I detail those tests, and link some DTT templates for performing them, below. We have not yet received the Myricom 10G network card being sent from LHO, which is required to complete the standalone DAQ network. Thus we do not have a working NDS server in the test stand, and so cannot yet run any of the usual CDS tools such as Diaggui. Another option would be to simply connect the new front-ends to the 40m Martian/DAQ networks and test them there.

Final Hardware Configuration

Due to the unavailability of the 18-bit DACs that were expected from the sites, we elected to convert all the new 18-bit AO channels to 16-bit. I was able to locate four unused 16-bit DACs around the 40m [16185], three of which were found to be working. I was also able to obtain three spare 16-bit DAC adapter boards from Todd Etzel. With the addition of the three working DACs, we ended up with just enough hardware to complete both systems.

The final configuration of each I/O chassis is as follows. The full setup is pictured in Attachment 1.

Component | Qty Installed (C1BHD) | Qty Installed (C1SUS2)
16-bit ADC | 1 | 2
16-bit ADC adapter | 1 | 2
16-bit DAC | 1 | 3
16-bit DAC adapter | 1 | 3
16-channel BIO | 1 | 1
32-channel BO | 0 | 6

This hardware provides the following breakdown of channels available to user models:

Channel Type | Channel Count (C1BHD) | Channel Count (C1SUS2)
16-bit AI* | 31 | 63
16-bit AO | 16 | 48
BO | 0 | 192

*The last channel of the first ADC is reserved for timing diagnostics.

The chassis have been closed up and their permanent signal cabling installed. They do not need to be reopened, unless future testing finds a problem.

RCG Model Configuration

An IOP model has been created for each system reflecting its final hardware configuration. The IOP models are permanent and system-specific. When ready to install the new systems, the IOP models should be copied to the 40m network drive and installed following the RCG-compilation procedure in [15979]. Each system also has one temporary user model which was set up for testing purposes. These user models will be replaced with the actual SUS, OMC, and BHD models when the new systems are installed.

The current RCG models and the action to take with each one are listed below:

Model Name | Host | CPU | DCUID | Path (all paths local to chiara clone machine) | Action
c1x06 | c1bhd | 1 | 23 | /opt/rtcds/userapps/release/cds/c1/models/c1x06.mdl | Copy to same location on 40m network drive; compile and install
c1x07 | c1sus2 | 1 | 24 | /opt/rtcds/userapps/release/cds/c1/models/c1x07.mdl | Copy to same location on 40m network drive; compile and install
c1bhd | c1bhd | 2 | 25 | /opt/rtcds/userapps/release/isc/c1/models/c1bhd.mdl | Do not copy; replace with permanent OMC/BHD model(s)
c1su2 | c1sus2 | 2 | 26 | /opt/rtcds/userapps/release/sus/c1/models/c1su2.mdl | Do not copy; replace with permanent SUS model(s)

Each front-end can support up to four user models.

Future Signal-Integrity Testing

Recently, the CDS group released a well-documented procedure for testing General Standards ADCs and DACs: T2000188. They have also automated the tests using a related set of shell scripts (T2000203). Unfortunately, I don't believe these scripts will work at the 40m, as they require the latest v4.x RCG.

However, there is an accompanying set of DTT templates that could be very useful for accelerating the testing. They are available from the LIGO SVN (log in with username: "first.last@LIGO.ORG"). I believe these can be used almost directly, with only minor updates to channel names, etc. There are two classes of DTT-templated tests:

  1. DAC -> ADC loopback transfer functions
  2. Voltage noise floor PSD measurements of individual cards

The T2000188 document contains images of normal/passing DTT measurements, as well as known abnormalities and failure modes. More sophisticated tests could also be configured, using these templates as a guiding example.
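Retrieving the templates should be a standard SVN checkout; a sketch (the repository URL is left as a placeholder here):

$ svn checkout --username first.last@LIGO.ORG <svn-url-for-the-DTT-templates>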

Hardware Reordering

Due to the unexpected change from 18- to 16-bit AO, we are now short on several pieces of hardware:

  • 16-bit AI chassis. We originally ordered five of these chassis, and all are obligated as replacements within the existing system. Four of them are now (temporarily) in use in the front-end test stand. Thus four of the new 18-bit AI chassis will need to be retrofitted with 16-bit hardware.
  • 16-bit DACs. We currently have exactly enough DACs. I have requested a quote from General Standards for two additional units to have as spares.
  • 16-bit DAC adapters. I have asked Todd Etzel for two additional adapter boards to also have as spares. If no more are available, a few more should be fabricated.
Attachment 1: test_stand.JPG
17058 | Thu Aug 4 19:01:59 2022 | Tega | Update | Computers | Front-end machine in supermicro boxes

Koji and JC looked around the lab today and found some Supermicro boxes, which I was asked to look through to see whether they contain any useful computers.


Boxes next to Y-arm cabinets (3 boxes: one empty)

We were expecting to see a smaller machine in the first box - like the top machine in Attachment 1 - but it turns out to actually contain the front-end we need; see the bottom machine in Attachment 1. This is the same machine as c1bhd, currently on the teststand. Attachment 2 is an image of the machine in the second box (maybe a new machine for the framebuilder?). The third box is empty.


Boxes next to X-arm cabinets (3 boxes)

Attachment 3 shows the 3 boxes, each of which contains the same FE machine we saw earlier at the bottom of Attachment 1. The middle box contains the note shown in Attachment 4.


Box opposite Y-arm cabinets (1 empty box)


In summary, it looks like we have 3 new front-ends, 1 new front-end with a networking issue, and 1 new tower machine (possibly a framebuilder replacement).

Attachment 1: IMG_20220804_184444473.jpg
Attachment 2: IMG_20220804_191658206.jpg
Attachment 3: IMG_20220804_185336240.jpg
Attachment 4: IMG_20220804_185023002.jpg
17066 | Mon Aug 8 17:16:51 2022 | Tega | Update | Computers | Front-end machine setup

Added 3 FE machines - c1ioo, c1lsc, c1sus - to the teststand, following the instructions in elog 15947. Note that we also updated /etc/hosts on chiara, adding the names and IPs of the new FEs, since we want to ssh to them from there (chiara is where we land when we connect to c1teststand).

Two of the FE machines - c1lsc & c1ioo - have the 6-core X5680 @ 3.3 GHz processor, and their BIOS settings were already mostly configured, because they came from LLO I believe. The third machine - c1sus - has the 6-core X5650 @ 2.67 GHz processor and required a complete BIOS configuration according to the doc.

Next step: get the latest RTS working on the new fb1 (tower machine), then boot the front-ends from there.

KVM switch note:

All current front-ends have PS/2 keyboard and mouse connectors except for fb1, which only has USB ports. So we may not be able to connect to fb1 using a PS/2 KVM switch that works for all the current front-ends. The new tower machine does have a PS/2 connector, so if we decide to use it as the bootserver and framebuilder, we should be fine.

Attachment 1: IMG_20220808_170349717.jpg
15872 | Fri Mar 5 17:48:25 2021 | Jon | Update | CDS | Front-end testing

Today I moved the c1bhd machine from the control room to a new test area set up behind (west of) the 1X6 rack. The test stand is pictured in Attachment 1. I assembled one of the new IO chassis and connected it to the host.

I/O Chassis Assembly

  • LIGO-style 24V feedthrough replaced with an ATX 650W switching power supply
  • Timing slave installed
  • Contec DO-1616L-PE card installed for timing control
  • One 16-bit ADC and one 32-channel DO module were installed for testing

The chassis was then powered on, and LEDs illuminated, indicating that all the components have power. The assembled chassis is pictured in Attachment 2.

Chassis-Host Communications Testing

Following the procedure outlined in T1900700, the system failed the very first test of the communications link between chassis and host, which is to check that all PCIe cards installed in both the host and the expansion chassis are detected. The Dolphin host adapter card is detected:

07:06.0 PCI bridge: Stargen Inc. Device 0102 (rev 02) (prog-if 00 [Normal decode])
    Flags: bus master, fast devsel, latency 0
    Bus: primary=07, secondary=0e, subordinate=0e, sec-latency=0
    I/O behind bridge: 00002000-00002fff
    Prefetchable memory behind bridge: 00000000c0200000-00000000c03fffff
    Capabilities: [40] Power Management version 2
    Capabilities: [50] MSI: Enable- Count=1/1 Maskable- 64bit+
    Capabilities: [60] Express Downstream Port (Slot+), MSI 00
    Capabilities: [80] Subsystem: Device 0000:0000
    Kernel driver in use: pcieport

However, the OSS PCIe adapter card linking the host to the IO chassis was not detected, nor were any of the cards in the expansion chassis. Gautam previously reported that the OSS card was not detected by the host (though it was not connected to the chassis then). Even now, connected to the IO chassis, the card is still not detected. On the chassis-side OSS card, a red LED is illuminated indicating "HOST CARD RESET", as pictured in Attachment 3. This may indicate a problem with the card on the host side. Still more debugging to be done.
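For reference, the detection check amounts to scanning the PCIe tree and looking for the expansion-chassis devices; a sketch:

controls@c1bhd$ sudo lspci -tv                    # tree view of all detected PCIe devices
controls@c1bhd$ sudo lspci -v | grep -i stargen   # the Dolphin/Stargen bridge shown above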

Attachment 1: image_67203585.JPG
Attachment 2: image_67216641.JPG
Attachment 3: image_17185537.JPG
15890 | Tue Mar 9 16:52:47 2021 | Jon | Update | CDS | Front-end testing

Today I continued with assembly and testing of the new front-ends. The main progress is that the IO chassis is now communicating with the host, resolving the previously reported issue.

Hardware Issues to be Resolved

Unfortunately, though, it turns out one of the two (host-side) One Stop Systems PCIe cards sent from Hanford is bad. After some investigation, I ultimately resolved the problem by swapping in the second card, with no other changes. I'll try to procure another from Keith Thorne, along with some spares.

Also, two of the three switching power supplies sent from Livingston (250 W Channel Well PSG400P-89) appear to be incompatible with the Trenton BPX6806 PCIe backplanes in these chassis: the power supply cable has 20 conductors, while the connector on the board has 24. The third supply, a 650 W Antec EA-650, does have the correct cable and is currently powering one of the IO chassis. I'll confirm this situation with Keith and see whether they have any more Antecs. If not, I think these supplies can still be bought (they are not obsolete).

I've gone through all the hardware we've received, checking it against the procurement spreadsheet. There are still some missing items:

  • 18-bit DACs (Qty 14; but 7 are spares)
  • ADC adapter boards (Qty 5)
  • DAC adapter boards (Qty 9)
  • 32-channel DO modules (Qty 2/10 in hand)

Testing Progress

Once the PCIe communications link between host and IO chassis was working, I carried out the testing procedure outlined in T1900700. This performs a series of checks to confirm basic operation/compatibility of the hardware and PCIe drivers. All of the cards installed in both the host and the expansion chassis are detected and appear correctly configured, according to T1900700. In the tree below, there is one ADC, one 16-ch DIO, one 32-ch DO, and one DolphinDX card:

+-05.0-[05-20]----00.0-[06-20]--+-00.0-[07-08]----00.0-[08]----00.0  Contec Co., Ltd Device 86e2
|                               +-01.0-[09]--
|                               +-03.0-[0a]--
|                               +-08.0-[0b-15]----00.0-[0c-15]--+-02.0-[0d]--
|                               |                               +-03.0-[0e]--
|                               |                               +-04.0-[0f]--
|                               |                               +-06.0-[10-11]----00.0-[11]----04.0  PLX Technology, Inc. PCI9056 32-bit 66MHz PCI <-> IOBus Bridge
|                               |                               +-07.0-[12]--
|                               |                               +-08.0-[13]--
|                               |                               +-0a.0-[14]--
|                               |                               \-0b.0-[15]--
|                               \-09.0-[16-20]----00.0-[17-20]--+-02.0-[18]--
|                                                               +-03.0-[19]--
|                                                               +-04.0-[1a]--
|                                                               +-06.0-[1b]--
|                                                               +-07.0-[1c]--
|                                                               +-08.0-[1d]--
|                                                               +-0a.0-[1e-1f]----00.0-[1f]----00.0  Contec Co., Ltd Device 8632
|                                                               \-0b.0-[20]--
\-08.0-[21-2a]--+-00.0  Stargen Inc. Device 0101

Standalone Subnet

Before I start building/testing RTCDS models, I'd like to move the new front ends to an isolated subnet. This is guaranteed to prevent any contention with the current system, or inadvertent changes to it.

Today I set up another of the Supermicro servers sent by Livingston in the 1X6 test stand area. The intention is for this machine to run a cloned, bootable image of the current fb1 system, allowing it to function as a bootserver and DAQ server for the FEs on the subnet.

However, the hard disk containing the fb1 image appears to be corrupted and will not boot. It seems to have been sitting disconnected in a box since ~2018, which is not a stable way to store data long-term. I wasn't immediately able to recover the disk using fsck. I could spend some more time trying, but it might be most time-effective to just make a new clone of the fb1 system as it is now.
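For the record, the recovery attempt was along these lines (device name hypothetical):

$ sudo fsck -y /dev/sdb1    # attempt automatic repair of the fb1 image partition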

Attachment 1: image_72192707.JPG
15924 | Tue Mar 16 16:27:22 2021 | Jon | Update | CDS | Front-end testing

Some progress today towards setting up an isolated subnet for testing the new front-ends. I was able to recover the fb1 backup disk using the Rescatux disk-rescue utility and successfully booted an fb1 clone on the subnet. This machine will function as the boot server and DAQ server for the front-ends under test. (None of these machines are connected to the Martian network or, currently, even the outside Internet.)

Despite the success with the framebuilder, front-ends cannot yet be booted on the isolated subnet because we are still missing the DHCP and FTP servers required for network booting. On the Martian net, these processes run not on fb1 but on chiara. And to be able to compile and run models later in the testing, we will need the contents of the /opt/rtcds directory, also hosted on chiara.

For these reasons, I think it will be easiest to create another clone of chiara to run on the subnet. There is a backup disk of chiara, and I attempted to boot it on one of the LLO front-ends, but without success. The repair tool I used to recover the fb1 disk does not find a problem with the chiara disk. However, the chiara disk is an external USB drive, so I suspect there could be a compatibility problem with these old (~2010) machines; some of them don't even recognize USB keyboards before boot-up. I may try booting the USB drive from a newer computer.

Edit: I removed one of the new, unused Supermicros from the 1Y2 rack and set it up in the test stand. This newer machine is able to boot the chiara USB disk without issue. Next time I'll continue with the networking setup.

15925 | Tue Mar 16 19:04:20 2021 | gautam | Update | CDS | Front-end testing

Now that I think about it, I may only have backed up the root filesystem of chiara, and not /home/cds/ (symlinked to /opt/ over NFS). I think we never revived the rsync backup to LDAS after the FB fiasco of 2017, else that would have been the most convenient way to get the files. So you may have to resort to some other technique (e.g. configure the second network interface of the chiara clone to be on the Martian network, copy the files over to the local disk, and then disconnect the chiara clone from the Martian network, if we really want to keep this test stand completely isolated from the existing CDS network). The /home/cds/ directory is rather large IIRC, but with 2 TB on the FB clone, you may be able to get everything needed to get the rtcds system working. It may then be necessary to hook up a separate disk to write frames to, if you want to test that part of the system.

Good to hear the backup disk was able to boot though!



15947 | Fri Mar 19 18:14:56 2021 | Jon | Update | CDS | Front-end testing


Today I finished setting up the subnet for new FE testing. There are clones of both fb1 and chiara running on this subnet (pictured in Attachment 2), which are able to boot FEs completely independently of the Martian network. I then assembled a second FE system (Supermicro host and IO chassis) to serve as c1sus2, using a new OSS host adapter card received yesterday from LLO. I ran the same set of PCIe hardware/driver tests as were done on the c1bhd system in [15890]. All the PCIe tests pass.

Subnet setup

For future reference, below is the procedure used to configure the bootserver subnet.

  • Select "Network" as highest boot priority in FE BIOS settings
  • Connect all machines to subnet switch. Verify fb1 and chiara eth0 interfaces are enabled and assigned correct IP address.
  • Add c1bhd and c1sus2 entries to chiara:/etc/dhcp/dhcpd.conf:
host c1bhd {
  hardware ethernet 00:25:90:05:AB:46;
}
host c1sus2 {
  hardware ethernet 00:25:90:06:69:C2;
}
  • Restart DHCP server to pick up changes:
$ sudo service isc-dhcp-server restart
  • Add c1bhd and c1sus2 entries to fb1:/etc/hosts:    c1bhd    c1sus2
  • Power on the FEs. If all was configured correctly, the machines will boot; a quick check is sketched below.
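A quick way to confirm a successful netboot, from chiara or fb1 (assuming ssh keys are set up):

$ ping -c 3 c1bhd
$ ssh controls@c1bhd uptime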

C1SUS2 I/O chassis assembly

  • Installed in host:
    • DolphinDX host adapter
    • One Stop Systems PCIe x4 host adapter (new card sent from LLO)
  • Installed in chassis:
    • Channel Well 250 W power supply (replaces aLIGO-style 24 V feedthrough)
    • Timing slave
    • Contec DIO-1616L-PE module for timing control

Next time, on to RTCDS model compilation and testing. This will require first obtaining a clone of the /opt/rtcds disk hosted on chiara.

Attachment 1: image_72192707_(1).JPG
Attachment 2: image_50412545.JPG
15959 | Wed Mar 24 19:02:21 2021 | Jon | Update | CDS | Front-end testing

This evening I prepared a new 2 TB 3.5" disk to hold a copy of /opt/rtcds and /opt/rtapps from chiara. This is the final piece of setup before model compilation can be tested on the new front-ends. However, chiara does not appear to support hot-swapping of disks, as the disk is not recognized when connected to the live machine. I will await confirmation before rebooting it. The new disk is not currently connected.

15976 | Mon Mar 29 17:55:50 2021 | Jon | Update | CDS | Front-end testing

Cloning of chiara:/home/cvs underway

I returned today with a beefier USB-SATA adapter, which has an integrated 12 V supply for powering 3.5" disks. I used this to interface a new 6 TB 3.5" disk found in the FE supplies cabinet.

I decided to go with a larger disk and copy the full contents of chiara:/home/cds. Strictly, the FEs only need the RTS executables in /home/cvs/rtcds and /home/cvs/rtapps. However, to independently develop models, the shared Matlab binaries in /home/cvs/caltech/... also need to be exposed. And there may be others I've missed.

I began the clone around 12:30 pm today. To preserve bandwidth to the main disk, I am copying not the /home/cds disk directly, but rather its backup image at /media/40mBackup.
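The copy itself is a straightforward rsync from the backup image; a sketch (the destination mount point is an assumption):

$ sudo rsync -aH --info=progress2 /media/40mBackup/ /mnt/new6TB/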

Set up of dedicated SimPlant host

Although not directly related to the FE testing, today I added a new machine to the test stand which will be dedicated to running sim models. Chris has developed a virtual cymac which we plan to run on this machine. It will provide a dedicated testbed for SimPlant and other development, and can host up to 10 user models.

I used one of the spare 12-core Supermicro servers from LLO, which I have named c1sim, and assigned it an IP address on the Martian network. This machine will run in a self-contained way that does not depend on any 40m CDS services and should not interfere with them. However, if there are concerns about having it present on the network, it can be moved to the outside-facing switch in the office area. It is not currently running any RTCDS processes.

Set-up was carried out via the following procedure:

  • Installed Debian 10.9 on an internal 480 GB SSD.
  • Installed cdssoft repos following Jamie's instructions.
  • Installed RTS and Docker dependencies:
    $ sudo apt install cpuset advligorts-mbuf-dkms advligorts-gpstime-dkms docker.io docker-compose
  • Configured scheduler for real-time operation:
    $ sudo /sbin/sysctl kernel.sched_rt_runtime_us=-1
  • Reserved 10 cores for RTS user models (plus one for IOP model) by adding the following line to /etc/default/grub:
    GRUB_CMDLINE_LINUX_DEFAULT="isolcpus=nohz,domain,1-11 nohz_full=1-11 tsc=reliable mce=off"
    followed by the commands:
    $ sudo update-grub
    $ sudo reboot now
  • Downloaded virtual cymac repo to /home/controls/docker-cymac.

I need to talk to Chris before I can take the setup further.

15979 | Tue Mar 30 18:21:34 2021 | Jon | Update | CDS | Front-end testing

Progress today:

Outside Internet access for FE test stand

This morning Jordan and I ran an 85-foot Cat 6 Ethernet cable from the campus network switch in the office area (on the ligo.caltech.edu domain) to the FE test stand near 1X6. This is to allow the test-stand subnet to be accessed for remote testing, while keeping it invisible to the parallel Martian subnet.

Successful RTCDS model compilation on new FEs

The clone of the chiara:/home/cds disk completed overnight. Today I installed the disk in the chiara clone. The NFS mounts (/opt/rtcds, /opt/rtapps) shared with the other test-stand machines mounted without issue.

Next, I attempted to open the shared Matlab executable (/cvs/cds/caltech/apps/linux64/matlab/bin/matlab) and launch Simulink. The existing Matlab license (/cvs/cds/caltech/apps/linux64/matlab/licenses/license_chiara_865865_R2015b.lic) did not work on this new machine, as licenses are machine-specific, so I updated the license file. I linked this license to my personal license, so that the machine license for the real chiara would not get replaced. The original license file is saved in the same directory with a *.bak suffix. If this disk is ever used in the real chiara machine, this file should be restored. After the machine license was updated, Matlab and Simulink loaded and allowed model editing.

Finally, I tested RTCDS model compilation on the new FEs using the c1lsc model as a trial case. It encountered one path issue due to the model being located at /opt/rtcds/userapps/release/isc/c1/models/isc/ instead of /opt/rtcds/userapps/release/isc/c1/models/. This seems to be a relic of the migration of the 40m models from the SVN to a standalone git repo. This was resolved by simply symlinking to the expected location:

$ sudo ln -s /opt/rtcds/userapps/release/isc/c1/models/isc/c1lsc.mdl /opt/rtcds/userapps/release/isc/c1/models/c1lsc.mdl

The model compilation then succeeded:

controls@c1bhd$ cd /opt/rtcds/caltech/c1/rtbuild/release

controls@c1bhd$ make clean-c1lsc
Cleaning c1lsc...

controls@c1bhd$ make c1lsc
Cleaning c1lsc...
Parsing the model c1lsc...
Building EPICS sequencers...
Building front-end Linux kernel module c1lsc...
make[1]: Warning: File 'GNUmakefile' has modification time 28830 s in the future
make[1]: warning:  Clock skew detected.  Your build may be incomplete.
RCG source code directory:
The following files were used for this build:

Successfully compiled c1lsc
Compile Warnings, found in c1lsc_warnings.log:
[warnings suppressed]

As did the installation:

controls@c1bhd$ make install-c1lsc
Installing system=c1lsc site=caltech ifo=C1,c1
Installing /opt/rtcds/caltech/c1/chans/C1LSC.txt
Installing /opt/rtcds/caltech/c1/target/c1lsc/c1lscepics
Installing /opt/rtcds/caltech/c1/target/c1lsc
Installing start and stop scripts
Performing install-daq
Updating testpoint.par config file
-gds_node=42 -site_letter=C -system=c1lsc -host=c1lsc
Installing GDS node 42 configuration file
Installing auto-generated DAQ configuration file
Installing Epics MEDM screens
Running post-build script

safe.snap exists

We are ready to start building and testing models.

2772 | Mon Apr 5 13:52:45 2010 | Alberto | Update | Computers | Front-ends down. Rebooted

This morning, at about 12, Koji found all the front-ends down.


Then I burtrestored ISCEX, ISCEY, and ISCAUX to April 2nd, 23:07.

The front-ends are now up and running again.

2376 | Thu Dec 10 08:40:12 2009 | Alberto | Update | Computers | Front-ends down

I found all the front-ends, except for C1SUSVME1 and C0DCU1, down this morning. DAQAWG shows up green on the C0DAQ_DETAIL screen, but it is in a "bad" status.

I'll go for a big boot fest.

2378 | Thu Dec 10 08:50:33 2009 | Alberto | Update | Computers | Front-ends down



Since I wanted to single out the faulting system when these situations occur, I tried to reboot the computers one by one.

1) I reset the RFM Network by pushing the reset button on the bypass switch in the 1Y7 rack. Then I tried to bring C1SOSVME up by power-cycling and restarting it, following the procedure in the wiki. I repeated this a second time, but it didn't work. At some point in the restarting process I get the error message "No response from EPICS".
2) I also tried rebooting only C1DCUEPICS, but it didn't work: I kept getting the same response when restarting C1SOSVME.
3) I tried to reboot C0DAQCTRL and C1DCU1 by power-cycling their crate; power-cycled and restarted C1SOSVME. Nada. Same response from C1SOSVME.
4) I restarted the framebuilder; power-cycled and restarted C1SOSVME. Nothing. Same response from C1SOSVME.
5) I restarted the framebuilder, then rebooted C0DAQCTRL and C1DCU, then power-cycled and restarted C1SOSVME. Niente. Same response from C1SOSVME.
Then I did the so-called "Nuclear Option", the only solution that has so far proven to work in these circumstances. I executed the steps in the order they are listed, waiting for each step to complete before moving to the next one.
0) Switch off: the frame builder, the C0DAQCTRL and C1DCU crate, C1DCUEPICS
1) turn on the frame builder
2) reset the RFM Network switch on 1Y7 (although it's not certain whether this step is really necessary, it's costless)
3) turn on C1DCUEPICS
4) turn on the C0DAQCTRL and C1DCU crate
5) power-cycle and restart the single front-ends
6) burt-restore all the snapshots
When I tried to restart C1SOSVME by power-cycling it, I still got the same response: "No response from EPICS". But once I reset C1SUSVME1 and C1SUSVME2, I was able to restart C1SOSVME.
It turned out that while I was checking the efficacy of the steps of the Grand Reboot to single out the crucial one, I was getting fooled by C1SOSVME's status: C1SOSVME was stuck, hanging on C1SUSVME1 and C1SUSVME2.
So the Nuclear Option is still unproven as the only working procedure. It might not be necessary.
Maybe restarting BOTH RFM switches, the one in 1Y7 and the one in 1Y6, would be sufficient. Or maybe just power-cycling the C0DAQCTRL and C1DCU1 crate is sufficient. This has to be confirmed next time we run into the same problem.
2382 | Thu Dec 10 10:01:16 2009 | Jenne | Update | Computers | Front-ends down

All the front ends are back up.


2383 | Thu Dec 10 10:31:18 2009 | Jenne | Update | Computers | Front-ends down




 I burtrestored all the snapshots to Dec 9 2009 at 18:00.

16336 | Thu Sep 16 01:16:48 2021 | Koji | Update | General | Frozen 2

It happened again. Defrosting required.

Attachment 1: P_20210916_003406_1.jpg
10756 | Thu Dec 4 23:45:30 2014 | Jenne | Update | CDS | Frozen?

[Jenne, Q, Diego]

I don't know why, but everything in EPICS-land froze for a few minutes just now. It happened yesterday too (that I saw), but I was bad and didn't elog it.

Anyhow, the arms stayed locked (on IR) for the whole time it was frozen, so the fast things must have still been working. We didn't see anything funny going on on the frame builder, although that shouldn't have much to do with the EPICS service. The seismic rainbow on the wall went to zeros during the freeze, although the MC and PSL strip charts were still fine.

After a few minutes, while we were still trying to think of things to check, things went back to normal.  We're going to just keep locking for now....

3782 | Tue Oct 26 01:53:21 2010 | Joonho Lee | Update | Electronics | Function Generator removed.

Today I worked on how to measure cable impedance directly.

In order to measure the impedance in the RF range, I used a function generator which can generate a 50 MHz signal and was initially connected to the table on the right of the decks.

The reason I am checking the cables is to replace the cables with impedance of 50 or 52 ohm with ones with impedance of 75 ohm.

After I figure out which cables do not have the proper impedance, I will make new cables and substitute them in order to match the impedance, which should lead to a better VIDEO signal.


To test the VIDEO cables, I need a function generator generating a signal at 50 MHz.

In the deck on the right of the PSL table, there was only one such generator, and it was connected to the table on the right of the deck.

Therefore, I disconnected it from the cable and took it to the control room to use it, because Rana said it was not being used.

Then I tried to work out how to measure the impedance of a cable directly, but I have not finished yet.

When I finished today's work, I put the generator back on the deck, but I did not reconnect it to the cable it was initially connected to.


Next time, I will finalize the practical method of measuring cable impedance, and then I will measure the cables with unknown impedance.

Any suggestion would be appreciated.

8908 | Tue Jul 23 16:39:31 2013 | Koji | Update | General | Full IFO alignment recovered

[Annnalisa Koji]

Full alignment of the IFO was recovered. The arms were locked with the green beams first, and then locked with the IR.

In order to use the ASS with lower power, C1:LSC-OUTPUT_MTRX_9_6 and C1:LSC-OUTPUT_MTRX_10_7 were reduced to 0.05.
This compensates for the gain imbalance between the TRX/Y signals and the A2L component in the arm feedback signals.
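These are ordinary EPICS settings, so the change can be made (and later reverted) from the command line:

$ caput C1:LSC-OUTPUT_MTRX_9_6 0.05
$ caput C1:LSC-OUTPUT_MTRX_10_7 0.05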

Although the IFO was aligned, we did not touch the OPLEVs or the green beams for the vented IFO.

Attachment 1: alignment.png
8912 | Tue Jul 23 20:41:40 2013 | gautam | Configuration | endtable upgrade | Full range calibration and installation of PZT-mounted mirrors

Given that the green beam is to be used as the reference during the vent, it was decided to first test the PZT-mounted mirrors at the X-endtable rather than the Y-endtable as originally planned. Yesterday, I prepared a second PZT-mounted mirror, completed the full-range calibration, and, with Manasa, installed the mirrors on the X-endtable as mentioned in this elog. The calibration constants have been determined to be (see attached plots for the approximate range of actuation):

M1-pitch: 0.1106 mrad/V

M1-yaw: 0.143 mrad/V

M2-pitch: 0.197 mrad/V

M2-yaw: 0.27 mrad/V
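(For scale: over the -10 V to +10 V drive used in the calibration, 0.1106 mrad/V corresponds to roughly 2.2 mrad of total throw for M1 pitch, and 0.27 mrad/V to roughly 5.4 mrad for M2 yaw.)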

Second 2-inch mirror glued to tip-tilt and mounted:

  • The spot sizes on the steering mirrors at the X-end are fairly large, and so two 2-inch steering mirrors were required.
  • The mirrors already glued to the PZTs were a CVI 2-inch and a Laseroptik 1-inch mirror.
  • I prepared another Laseroptik 2-inch mirror (45 degree with HR and AR coatings for 532 nm) and glued it to a PZT mounted in a modified mount as before.
  • Another important point regarding mounting the PZTs: there are two perforated rings (see attached picture) that run around the PZT about 1 cm below the surface on which the mirror is to be glued. The PZT has to be pushed through the mount till these are clear of the mount, or the actuation will not be as desired. For the first CVI 2-inch mirror this was not the case, which probably explains the unexpectedly large pitch-yaw coupling that was observed during the calibration [thanks Manasa for pointing this out].

Full range calibration of PZT:

Having prepared the two steering mirrors, I calibrated them over the full range of input voltages, to get a rough idea of whether the tilt varies linearly and of the range of actuation.


  • The QPD setup described in my previous elogs was used for this calibration. 
  • The QPD was gauged to be in its linear range while the output voltage lay between -0.5V and 0.5V. The calibration constants are as determined during the QPD calibration, details of which are here.
  • In order to keep the spot always in the linear range of the QPD, I started with an input signal of -10V or +10V (i.e. one extreme), and moved both the X and Y micrometers on the translational stage till both these coordinates were at one end of the linear range (i.e. -0.5V or 0.5V). I then increased the input voltage in steps of ~1V through the full range from -10V to +10V DC. The signal was applied using an SR function generator with the signal amplitude kept at 0, and a DC offset in the range -5V to 5V, which gave the desired input voltages to the PZT driver board (between -10V DC and 10V DC).
  • When the output of the QPD amp reached the end of the linear regime (i.e. 0.5V or -0.5V), I moved the appropriate micrometer dial on the translational stage to take it to the other end of the linear range before continuing with the measurements. The distance moved was noted.
  • Both the X and Y coordinates were noted in order to investigate pitch-yaw coupling.

Analysis and remarks:

  • The results of the calibration are presented in the plots below. 
  • Though the measurement technique was crude (and maybe flawed because of a possible z-displacement while moving the translational stage), the calibration was meant to be rough, and I think the results obtained are satisfactory. 
  • Fitting the data linearly is only an approximation, as there is evidence of hysteresis. Also, the PZTs appear to have some drift, though I have not been able to quantify this (I did observe that the output of the QPD amp shifted by an amount equivalent to ~0.05 mm while I left the setup standing for an hour or so).
  • The range of actuation seems to be different for the two PZTs, and also for each degree of freedom, though the measured data is consistent with the minimum range given in the datasheet (3.5 mrad for input voltages in the range -20V to 120V DC). 


PZT Calibration Plots

The circles are datapoints for the degree of freedom to which the input is applied, while the 'x's are for the other degree of freedom. Different colours correspond to data measured at different fixed positions of the translational stage.

M1 Pitch: M1_Pitch_calib.pdf     M1 Yaw: M1_Yaw_calib.pdf
M2 Pitch: M2_Pitch_calib.pdf     M2 Yaw: M2_Yaw_calib.pdf


Installation of the mirrors at the X-endtable:

The calibrated mirrors were taken to the X-endtable for installation. The steering mirrors in place were swapped out for the PZT mounted pair. Manasa managed (after considerable tweaking) to mode-match the green beam to the cavity with the new steering mirror configuration. In order to fine tune the alignment, Koji moved ITMx and ETMx in pitch and yaw so as to maximise green TRX. We then got an idea of which way the input pointing had to be moved in order to maximise the green transmission.


Attachment 5: PI_S330.20L.pdf
8967 | Mon Aug 5 18:48:44 2013 | gautam | Configuration | endtable upgrade | Full range calibration of PZT mounted mirrors for Y-endtable

 I had prepared two more PZT mounted mirrors for the Y-end some time back. These are:

  • A 2-inch CVI mirror (45 degree, HR and AR for 532 nm; originally one of the steering mirrors at the X-endtable, removed while switching those out for the PZT-mounted mirrors).
  • A 1-inch Laseroptik mirror (45 degree, HR and AR for 532nm).

I used the same QPD set-up and the methodology described here to do a full-range calibration of these PZTs. Plots attached. The calibration constants have been determined to be:

CVI-pitch: 0.316 mrad/V

CVI-yaw:  0.4018 mrad/V

Laseroptik pitch: 0.2447 mrad/V

Laseroptik yaw:  0.2822 mrad/V


  • These PZTs, like their X-end counterparts, showed evidence of drift and hysteresis. We will just have to deal with this.
  • One of the PZTs (the one on which the CVI mirror is mounted) is a used one. While testing it, I thought its behaviour was a little anomalous, but the plots do not seem to suggest that anything is amiss.


CVI Yaw: 2-inch-CVI-Yawcalib.pdf     CVI Pitch: 2-inch-CVI-Pitchcalib.pdf
Laseroptik Yaw: 1-inch-Laseroptik-Yawcalib.pdf     Laseroptik Pitch: 1-inch-Laseroptik-Pitchcalib.pdf


2279 | Tue Nov 17 10:09:57 2009 | josephb | Update | Environment | Fumes

The smell of diesel is particularly bad this morning. It's concentrated enough to be causing me a headache. I'm heading off to Millikan and will be working remotely on Megatron.

6191 | Thu Jan 12 11:08:23 2012 | Leo Singer | Update | PEM | Funky spectrum from STS-2

I am trying to stitch together spectra from seismometers and accelerometers to produce a ground motion spectrum from Hz to 100's of Hz. I was able to retrieve data from two seismometers, GUR1 and STS_1, but not from any of the accelerometers. The GUR1 spectrum is qualitatively similar to other plots that I have seen, but the STS_1 spectrum looks strange: the X-channel spectrum falls off as ~1/f, but the Y and Z spectra are pretty flat. All three axes have a few lines that they may share in common, and that they may share with GUR1.

See attached plot.

Attachment 1: spectrum.jpg
932 | Fri Sep 5 09:56:14 2008 | josephb, Eric | Configuration | Computers | Funny channels, reboots, and ethernet connections
1) Apparently the IOO-ICS type channels had gotten into a funny state last night, where they were showing just noise, exactly when Rana changed the accelerometer gains and did major reboots. A power cycle of the c1ioo crate and appropriate restarts fixed this.

2) c1asc looks like it was down all night. When I walked out to look at the terminal, it claimed to be unable to read the input file from the command line I had entered the previous night (< /cvs/cds/caltech/target/c1asc/startup.cmd). In addition, we were unable to telnet in, suggesting an ethernet breakdown and an inability to mount the appropriate files. So we have temporarily run a new Cat 6 cable from the c1asc board to the ITMX ProSafe switch (since there's a nice knee-high cable tray right there). One last power cycle and we were able to telnet in and get it running.
687 | Thu Jul 17 00:59:18 2008 | Jenne | Summary | General | Funny signal coming out of VCO
While working on calibrating the MC_F signal, Rana and I noticed a funny signal coming out of the VCO. We expect the output to be a nice sine wave at about 80 MHz. What we see is the 80 MHz signal plus higher harmonics. The reason behind the craziness is to be determined. For now, here's what the signal looks like, in both the time and frequency domains.

The first plot is a regular screen capture of a 'scope. The second is the output of the SR spectrum analyzer, as seen on a 'scope screen. The leftmost tall peak is the 80 MHz peak, and the others are its harmonics.
Attachment 1: VCOout_time.PNG
Attachment 2: VCOout_freq.PNG
9583 | Tue Jan 28 22:24:46 2014 | ericq | Update | General | Further Alignment

[Manasa, ericq]

Having no luck doing things remotely, we went into the ITMX chamber and roughly aligned the IR beam. Using the little sliding alignment target, we moved the BS to get the IR beam centered on ITMX, then moved ITMX to get good Michelson fringes with ITMY. Using an IR card, we found the retroreflection and moved ETMX to make it overlap with the beam transmitted through the ITM. With the PRM flashing, X-arm cavity flashes could be seen. So, at that point, both the Y-arm and the X-arm were flashing low-order modes.

12107 | Thu May 5 14:03:52 2016 | ericq | Update | LSC | Further Aux X PDH tweaks

This morning I poked around with the green layout a bit. I found that the iris immediately preceding the viewport was clipping the ingoing green beam too much; opening it up allowed for better coupling to the arm. I also tweaked the positions of the mode-matching lenses and did some alignment, and have since been able to achieve GTRX values of around 0.5.

I also removed the 20 dB attenuator after the mixer, turned the servo gain way down, and was able to lock easily. I then adjusted the gain while measuring the CLG, and set it where the maximum gain peaking was 6 dB, which worked out to be a UGF of around 8 kHz. On the input monitor, the PDH horn-to-horn voltage going into the VGA is 2.44 V, which shouldn't saturate the G=4 preamp stage of the AD8336, so that seems ok.

The ALS sensitivity is now approaching its good nominal state:

There remain some things to be done, including comprehensive dumping of all beams at the end table (especially the reflections off of the viewport) and the new filters to replace the current post-mixer LPF, but things look pretty good.

Attachment 1: 2016-05-05_newals.pdf
4581 | Thu Apr 28 12:25:11 2011 | josephb | Update | CDS | Further adventures in Hyper-threading

First, I disabled front-end starts on boot-up and brought c1sus up. I rebuilt the models for the c1sus computer so they had new specific_cpu numbers, making the assumption that 0-1 were one real core, 2-3 were another, etc.

Then I ran the startc1SYS scripts one by one to bring up the models. Upon loading just the c1x02 IOP on "core 2", I saw its cycle time fluctuate from about 5 to 12 microseconds. After bringing up c1sus on "core 3", the IOP settled down to about 7 microseconds consistently. Prior to hyper-threading it was generally 5.

Unfortunately, the c1sus model was running between 60 and 70 microseconds, and was producing error messages a few times a second:

[ 1052.876368] c1sus: cycle 14432 time 65; adcWait 0; write1 0; write2 0; longest write2 0
[ 1052.936698] c1sus: cycle 15421 time 74; adcWait 0; write1 0; write2 0; longest write2 0

Bringing up the rest of the models (c1mcs on 4, c1rfm on 5, and c1pem on 6), I saw c1mcs occasionally jump above the 60-microsecond line, perhaps once a minute. It was generally hovering around 45 microseconds. Prior to hyper-threading it was around 25-28 microseconds.

c1rfm was rock solid at 38 microseconds, as it was prior to hyper-threading.  This is most likely because it does almost no calculation, with only RFM reads slowing it down.

c1pem continued to use negligible time, 3 microseconds out of its 480.

I tried moving c1sus to core 8 from core 3, which seemed to bring it to the 58 to 65 microsecond range, with long cycles every few seconds.


I built 5 dummy models (dua on 7, dub on 9, duc on 10, dud on 11, due on 1) to ensure that each virtual core had a model on it, to see if it helped with stabilizing things.  The models were basically copies of the c1pem model.

Interestingly, c1mcs seemed to get somewhat better, only taking 30-32 microseconds, although still not as good as its pre-hyper-threading 25-28.  Over the course of several minutes it no longer had any long cycles.

c1sus got worse again, and was running long cycles 4-5 times a second.


At this point, without surgery on which models control which optics (i.e. splitting the c1sus model up), I am not able to have hyper-threading on and have things working.  I am proceeding to revert the control models and the c1sus computer to the non-hyper-threading state.



  13741   Mon Apr 9 18:46:03 2018 gautamUpdateIOOFurther debugging
  1. I analyzed the data from the free-swinging MC test conducted over the weekend. Attachment #1 shows the spectra. The color scheme is the same for all panels.
    • I am suspicious of MC3: why does the LR coil see almost no Yaw motion?
    • The "equilibrium" values of all the sensor signals (at the IN1 of the coil input filters) are within 20% of each other (for MC3, but also MC1 and MC2).
    • The position resonance is also sensed more by the side coil than by the LR coil.
    • To rule out satellite box shenanigans, I just switched the SRM and MC3 satellite boxes. But the coherence between the frequency noise as sensed by the arms remains.
  2. I decided to clean up my IMC noise budget a bit more.
    • Attachment #2 shows the NB as of today. I'll choose a better color palette for the next update.
    • "Seismic" trace is estimated using the 40m gwinc file - the MC2 stack is probably different from the others and so its contribution is probably more, but I think this will suffice for a first estimate.
    • "RAM" trace is measured at the CM board input, with MC2 misaligned.
    • The unaccounted-for noise is evident above ~8 Hz.
    • More noises will be added as they are measured.
    • I am going to spend some time working on modeling the CM board noise and TF in LTspice. I tried getting a measurement of the transfer function from IN1 to the FAST output of the CM board with the SR785 (the motivation being to add the contribution of the input-referred CM board noise to the NB plot), but I suspect I screwed up something w.r.t. the excitation amplitude, as I am getting a totally nonsensical shape, which also seems to depend on my input excitation amplitude. I don't think the output is saturated (viewed during the measurement on a scope), but perhaps there are some subtle effects going on.
Attachment 1: MC_Freeswinging.pdf
Attachment 2: IMC_NB_20180409.pdf
  13744   Tue Apr 10 14:28:44 2018 gautamUpdateIOOFurther debugging

I am working on IMC electronics. IMC is misaligned until further notice.

  2654   Thu Mar 4 02:25:14 2010 JenneUpdateCOCFurther details on the magnet story, and SRM guiderod glued

[Koji, Jenne]

First, the easy story:  SRM got its guiderod & standoff glued on this evening.  It will be ready for magnets (assuming everything is sorted out....see below) as early as tomorrow.  We can also begin to glue PRM guiderods as early as tomorrow.

The magnet story is not as short.....

Problem: ITMX and ITMY's side magnets are not glued in the correct places along the z-axis of the optic (z-axis as in beam propagation direction). 

ITMX (as reported the other day) has the side magnet placement off by ~2 mm.  The ITMX side was glued using the magnet fixture from MIT and the teflon pads that Kiwamu and I improvised.

It was determined that the improvised teflon pads were too thin (maybe about 1 mm thick), so I took those out, and replaced them with the teflon pads stolen from the 40m's magnet gluing fixture.   (The teflon pad from the MIT fixture and the ones from the 40m fixture are the same within my measuring ability, using a flat surface and feeling for a step between them.  I haven't yet measured the MIT pad thickness with calipers.)  The pads from the 40m fixture, which were used in the MIT fixture to glue the ITMY side last night, were measured to be ~1.7 mm thick.

Today when Koji hung ITMY, he discovered that the side magnet is off by ~1 mm.  This improvement is consistent with the switch of the teflon pads to the ones from the 40m fixture.

We compared the 40m fixture with the one from MIT, and it looks like the distance from the edge of where the optic should sit to the center of the hole for the side magnet differs by ~1.1 mm.  This explains the remaining ~1 mm that ITMY is off by.

We should put the teflon pads back into the 40m fixture, and only use that one from now on, unless we find an easy way to make thicker teflon pads for the fixture we received from MIT.  (The pads that are in there are about the maximum thickness that will fit.)  I'm going to use my thickness measurements of the SRM (taken in the process of gluing the guiderods) to see what thickness of pads and which fixture we actually want to use, but I'm sure that the fixture we found in the 40m is correct.  We can't use this fixture, however, until we get some clean 1/4-28 screws.  I've emailed Steve and Bob, so hopefully they'll have something for us by ~lunchtime tomorrow.

The ITMX side magnet is so far off in the z-direction that we'll have to remove it and reglue it in the correct position for the shadow sensor to do anything.  For ITMY, we'll check tomorrow whether the magnet is in the LED beam at all.  If it's not blocking the LED beam enough, we'll have to remove and reglue it too.

Why someone made 2 almost identical fixtures, with a 1mm height difference and different threads for the set screws, I don't know.  But I don't think whoever that person was can be my friend this week. 

  12507   Mon Sep 19 22:03:10 2016 ericqUpdateGeneralFurther recovery progress

[ericq, Lydia, Teng]

Brief summary of this afternoon's activities:

  • PMC alignment adjusted (Transmission of 0.74)
  • IMC locked, hand aligned. Transmission slightly over 15k. Measured spot positions to be all under 2 mm.
  • Set DC offsets of MC2 Trans + WFS1 + WFS2 (WFS2 DC offsets had wandered so much that DC "centered" left some quadrants almost totally dark)
  • Set demod offsets of WFS1+WFS2
    • Note to self: WFS script area is a mess. I can never remember which scripts are the right ones to run. I should clean this up
  • WFS loops activated, tested. All clear.
  • Locked Yarm, dither aligned. Transmission 0.8
  • Moved BS to center ITMY reflection on AS camera
  • Misaligned ETMY, aligned PRM to make a flashing PRY AS beam. REFL camera spot confirmed to be on the screen, which is nice
  • Wandered ITMX around until its AS spot was found. ITMX OSEMs not too far from their half max. (todo: update with numbers)
  • Wandered SRM around until full DRMI flashes seen
  • Centered all vertex oplevs
  • Made a brief attempt at locking the X arm, but could only get some crazy high-order mode to lock. BS and ITMX alignments have changed substantially from the in-air locks, so we probably need to adjust ETMX much more.

Addendum: I had a suspicion that the alignment had moved so much that we were missing the TRX PDs. I misaligned the Y arm, and used AS110 as a proxy for X arm power, as we've done in the past for this kind of thing. Indeed, I could maximize the signal and lock a TM00 mode. Both the high gain PD and the QPD in the TRX path are totally dark. This needs realignment on the end table.

  12508   Tue Sep 20 10:45:06 2016 ranaUpdateGeneralFurther recovery progress

Rana suspicious. We had arms locked before pumpdown with beams on Transmon PDs. If they're off now, must be beams are far off on the mirrors. Try A2L to estimate spot positions before walking the beams too far.

  12510   Wed Sep 21 01:08:02 2016 ericqUpdateGeneralFurther recovery progress

The misalignment wasn't as bad as I had initially feared; the spot was indeed pretty high on ETMX at first. Both transmon QPDs did need a reasonable amount of steering to center once the dither had centered the beam spots on the optics.

Arms, PRMI and DRMI have all been locked and dither aligned. All oplevs and transmon QPDs have been centered. All AS and REFL photodiodes have been centered. 

Green TM00 modes are seen in each arm; I'll do ALS recovery tomorrow. 

  11414   Tue Jul 14 17:14:23 2015 EveSummarySummary PagesFuture summary pages improvements

Here is a list of suggested improvements to the summary pages. Let me know if there's something you'd like for me to add to this list!

  • A lot of plots are missing axis labels and titles, and I often don't know what to call these labels. I could use some help with this.
  • Check the weather and vacuum tabs to make sure that we're getting the expected output. Set the axis labels accordingly. 
  • Investigate past periods of missing data on DataViewer to see if the problem was with the data acquisition process, the summary page production process, or something else.
  • Based on trends in data over the past three months, set axis ranges accordingly to encapsulate the full data range.
  • Create a CDS tab to store statistics of our digital systems. We will use the CDS signals to determine when the digital system is running and when the minute trend is missing. This will allow us to exclude irrelevant parts of the data.
  • Provide duty ratio statistics for the IMC.
  • Set triggers for certain plots. For example, for channels C1:LSC-XARM_OUT_DQ and C1:LSC-YARM_OUT_DQ to be plotted in the Arm LSC Control signals figures, C1:LSC-TRX_OUT_DQ and C1:LSC-TRY_OUT_DQ must be higher than 0.5, thus acting as triggers (see the sketch after this list).
  • Include some flag or other marking indicating when data is not being represented at a certain time for specific plots.
  • Maybe include some cool features like interactive plots.
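As a sketch of the trigger idea above (placeholder data in place of a real NDS fetch; only the >0.5 condition comes from the list):

import numpy as np
import matplotlib.pyplot as plt

t = np.arange(0, 600, 1.0 / 16)         # placeholder time axis, 16 Hz trend
xarm = np.random.randn(t.size)          # placeholder for C1:LSC-XARM_OUT_DQ
trx = np.abs(np.random.randn(t.size))   # placeholder for C1:LSC-TRX_OUT_DQ

locked = trx > 0.5                      # the proposed trigger condition
plt.plot(t[locked], xarm[locked], ".", markersize=2)
plt.xlabel("Time [s]")
plt.ylabel("XARM control signal [counts]")
plt.savefig("xarm_triggered.png")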
  11437   Wed Jul 22 22:06:42 2015 EveSummarySummary PagesFuture summary pages improvements

- CDS Tab

We want to monitor the status of the digital control system.

1st plot
Title: EPICS DAQ Status
I wonder if we can plot the binary numbers as statuses of the data acquisition for the realtime codes.
We want to use the status indicators. Like this:

2nd plot
Title: IOP Fast Channel DAQ Status
These have two bits each. How can we handle them? If we need to shrink them to a single bit, take the AND of the two.
If we need to shrink it to a single bit take "AND" of them.
C1:FEC-40_FB_NET_STATUS (legend: c1x04, if a legend placable)
C1:FEC-20_FB_NET_STATUS (legend: c1x02)
C1:FEC-33_FB_NET_STATUS (legend: c1x03)
C1:FEC-19_FB_NET_STATUS (legend: c1x01)
C1:FEC-46_FB_NET_STATUS (legend: c1x05)

3rd plot
Title: C1LSC CPU Meters
C1:FEC-40_CPU_METER (legend: c1x04)
C1:FEC-42_CPU_METER (legend: c1lsc)
C1:FEC-48_CPU_METER (legend: c1ass)
C1:FEC-22_CPU_METER (legend: c1oaf)
C1:FEC-50_CPU_METER (legend: c1cal)
The range is from 0 to 75, except for c1oaf, which can go to 500.
Can we plot c1oaf with the value divided by 8? (Then the legend should be c1oaf /8)

4th plot
Title: C1SUS CPU Meters
C1:FEC-20_CPU_METER (legend: c1x02)
C1:FEC-21_CPU_METER (legend: c1sus)
C1:FEC-36_CPU_METER (legend: c1mcs)
C1:FEC-38_CPU_METER (legend: c1rfm)
C1:FEC-39_CPU_METER (legend: c1pem)
The range is from 0 to 75, except for c1pem, which can go to 500.
Can we plot c1pem with the value divided by 8? (Then the legend should be c1pem /8)

5th plot
Title: C1IOO CPU Meters
C1:FEC-33_CPU_METER (legend: c1x03)
C1:FEC-34_CPU_METER (legend: c1ioo)
C1:FEC-28_CPU_METER (legend: c1als)
The range is from 0 to 75.

6th plot
Title: C1ISCEX CPU Meters
C1:FEC-19_CPU_METER (legend: c1x01)
C1:FEC-45_CPU_METER (legend: c1scx)
C1:FEC-44_CPU_METER (legend: c1asx)
The range is from 0 to 75.

7th plot
Title: C1ISCEY CPU Meters
C1:FEC-46_CPU_METER (legend: c1x05)
C1:FEC-47_CPU_METER (legend: c1scy)
C1:FEC-91_CPU_METER (legend: c1tst)
The range is from 0 to 75.



We want a duty ratio plot for the IMC. C1:IOO-MC_TRANS_SUM >1e4 is the good period.
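A minimal sketch of that computation (the minute trend below is a random placeholder, not fetched data; only the >1e4 criterion comes from above):

import numpy as np

trans = np.abs(np.random.randn(24 * 60)) * 2e4  # placeholder: one day of minute trend
good = trans > 1e4                              # the "good period" criterion

hourly = good.reshape(24, 60).mean(axis=1)      # duty ratio per hour
for h, r in enumerate(hourly):
    print(f"hour {h:02d}: duty ratio {100 * r:5.1f} %")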

The duty ratio plot should look like the right-hand plot at the following link:



OL_PIT_INMON and OL_YAW_INMON are good for the slow drift monitor.
But their sampling rate is too slow for the PSDs.
Can you use

for the PSDs? They are 2 kHz sampling DQ channels. You would be able to plot
them up to ~1 kHz. In fact, we want to monitor the PSD from 100 mHz to 1 kHz.
How can you set up the resolution (=FFT length)?
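On the FFT-length question: with Welch averaging, the bin width is fs/nperseg, so a 100 mHz lowest bin on a 2048 Hz DQ channel (assuming the usual power-of-two CDS rate) needs nperseg >= 20480 samples, i.e. 10 s segments. A sketch with placeholder data:

import numpy as np
from scipy import signal

fs = 2048.0                          # assumed DQ channel sample rate
x = np.random.randn(int(600 * fs))   # placeholder: 10 minutes of fast OL data

f, pxx = signal.welch(x, fs=fs, nperseg=int(fs / 0.1))  # 0.1 Hz bins
asd = np.sqrt(pxx)                   # amplitude spectral density
print(f"lowest nonzero bin: {f[1]:.3f} Hz; top bin: {f[-1]:.0f} Hz")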


LSC / ASC / ALS tabs

Let's make new tabs LSC, ASC, and ALS


We should have a plot for
It's OK to use the minute trend for now.
You can check the range using dataviewer.


Let's use
as the status indicators. There is no YARM Green ASS yet.


Title: ALS Green transmission
We want a time series of

Title: ALS Green beatnote
Another time series

Title: Frequency monitor
We have frequency counter outputs, but I have to talk to Eric to know the channel names

  8045   Fri Feb 8 21:14:52 2013 ManasaUpdateOpticsG&H - AR Reflectivity

 Hours of struggle and still no data 

I tried to measure the AR reflectivity and the loss due to flipping of the G&H mirrors.

 With almost no wedge angle, separating the AR reflected beam from the HR reflected beam seems to need more tricks.


The separation between the two reflected beams is expected to be 0.8 mm. Even after putting a lens in the incident beam, this distance was still not enough for the beams to be separable with an iris.

The first trick: I found a prism and tried to refract the beams at the edge of the prism... but the edges weren't sharp enough to separate the beams. (In fact, I thought an axicon would do the job better, but I don't think we have any of those.)

Next from the bag of tricks: I installed a camera to see if the spots can actually be resolved.

The camera image shows the two sets of focal spots: the bright set to the left corresponds to the HR reflected beam, and the other comes from the AR surface. I expect the ghost images to arise from the 15 arcsec wedge of the mirror. I tried to mask one of the sets using a razor blade, to see if I could separate them and get some data using a PD. But it turns out that even the blade edge is not sharp enough to separate them.

If there are any more intelligent ideas...go ahead and suggest! 



  8046   Fri Feb 8 22:49:31 2013 KojiUpdateOpticsG&H - AR Reflectivity

How about measuring the AR reflectivity at larger (but still small) angles, then extrapolating the function to smaller angles,
or estimating an upper limit?

The spot separation is

D = 2 d Tan(\phi) Cos(\theta), where \phi = ArcSin(Sin(\theta) * n)

D = 2 d Tan(\phi) Cos(\theta), where \phi = ArcSin(Sin(\theta) / n)         (<== correction by Manasa's entry)

\theta is the angle of incidence. For a small \theta, D is proportional to \theta.

So if you double the incident angle, the beam separation will be doubled,
while the reflectivity is an even function of the incident angle (i.e. the lowest order is quadratic).

I am not sure up to how large an angle you can use the quadratic function rather than a quartic one.
But given the difficulty you are having, it might be worth a try.
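Plugging numbers into the corrected formula, with the substrate thickness only a guess for these optics:

import numpy as np

n, d = 1.45, 6.35e-3    # fused silica index; thickness assumed 1/4" (a guess)

def separation(theta_deg):
    th = np.radians(theta_deg)
    phi = np.arcsin(np.sin(th) / n)   # internal angle, \phi = ArcSin(Sin(\theta)/n)
    return 2 * d * np.tan(phi) * np.cos(th)

for th in (4, 8, 20, 40):
    print(f"theta = {th:2d} deg -> D = {1e3 * separation(th):.2f} mm")

With these assumed numbers, D grows nearly linearly out to a few tens of degrees, which is what makes the larger-angle measurement attractive.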

  8047   Fri Feb 8 23:04:40 2013 ManasaUpdateOpticsG&H - AR Reflectivity


D = 2 d Tan(\phi) Cos(\theta), where \phi = ArcSin(Sin(\theta) * n)

\theta is the angle of incidence. For a small \theta, D is proportional to \theta.

n1 Sin(\theta1) = n2 Sin(\theta2)

So it should be

\phi = ArcSin(Sin(\theta) / n)

I did check the reflected images at larger angles of incidence, about 20 deg, and visibly (on the IR card) I did not see much change in the separation. But I will check it with the camera again to confirm.

  8051   Sat Feb 9 19:34:34 2013 ranaUpdateOpticsG&H - AR Reflectivity


 Use the trick I suggested:

Focus the beam so that the beam size at the detector is smaller than the beam separation. Use math to calculate the beam size and choose the lens size and position. You should be able to achieve a waist size of < 0.1 mm for the reflected beam.
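For scale: the focused waist from a thin lens with a collimated Gaussian input is w_f = lambda f / (pi w_in). With an assumed 1 mm input beam radius:

import numpy as np

lam, w_in = 1064e-9, 1e-3            # wavelength; assumed input beam radius
for f_lens in (50e-3, 100e-3, 200e-3):
    w_f = lam * f_lens / (np.pi * w_in)
    print(f"f = {1e3 * f_lens:3.0f} mm -> waist {1e6 * w_f:5.1f} um")

All of these come out well below the ~0.8 mm spot separation, consistent with the < 0.1 mm estimate.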

  8063   Mon Feb 11 19:55:47 2013 ManasaUpdateOpticsG&H - AR Reflectivity


I changed the focal length of the focusing lens and reduced the beam size enough to mask one spot with the razor blade edge while watching the camera, and then made measurements using the PD.

I am still not satisfied with this data because the R of the HR surface measured after flipping seems totally unbelievable (at around 0.45).

G&H AR reflectivity

R (ppm):

11 ppm @ 4 deg
19.8 ppm @ 6 deg
20 ppm @ 8 deg
30 ppm @ 20 deg

  8075   Wed Feb 13 09:28:56 2013 SteveUpdateOpticsG&H - HR plots


 Gooch & Housego optics order specification from 03-13-2010

Side 1: HR Reflectivity >99.99 % at 1064 nm for 0-45 degrees for S & P polarization

Side 2: AR coat R <0.15 %

The HR coating scans were uploaded to 40mwiki / Aux optics today.

  8018   Wed Feb 6 20:19:52 2013 ManasaUpdateOpticsG&H and LaserOptik mirrors

[Koji, Manasa]

We measured the wedge angle of the G&H and LaserOptik mirrors at the OMC lab using an autocollimator and rotation stage.

The wedge angles:

G&H : 18 arc seconds (rough measurement)

LaserOptik : 1.887 deg

  12102   Mon May 2 17:06:58 2016 ranaSummaryCOCG&H optics to Fullerton/HWS for anneal testing

Steve sent 4 of our 1" diameter G&H HR mirrors to Josh Smith at Fullerton for scatter testing. Attached photo is our total stock before sending.

Attachment 1: 20160427_182305.jpg
  530   Wed Jun 11 15:30:55 2008 josephbConfigurationCamerasGC1280
The trial-use GC1280 has arrived. This is a higher resolution CMOS camera (similar to the GC750). Other than the higher resolution, it has a piece of glass covering and protecting the sensor, as opposed to the plastic piece used in the GC750. This may explain the reduced sensitivity to 1064 nm light that the camera seems to exhibit. For example, the image averages presented here required a 60,000 microsecond exposure time, compared to 1000-3000 microseconds for similar images from the GC750 (roughly a factor of 20-60 longer). This is an inexact comparison, and the actual sensitivity difference will be determined once we have identical beams on both cameras.

The attached pdfs (same image, different angles of view) are from 200 averaged images looking at 1064 nm laser light scattering from a piece of paper. The important thing to note is that there doesn't seem to be any definite structure of the kind seen in the GC750 scatter images.
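For reference, the pixel-wise averaging described here can be sketched as below (the frame stack is a Poisson-noise placeholder with arbitrary dimensions, not real camera data):

import numpy as np

frames = np.random.poisson(30.0, size=(200, 512, 512)).astype(float)
avg = frames.mean(axis=0)               # 200-frame average, per pixel
print("single-frame pixel std: %.2f, averaged: %.2f"
      % (frames[0].std(), avg.std()))   # noise falls roughly as 1/sqrt(N)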

One possibility is that too much power is reaching the CMOS detector, penetrating, and then reflecting back to the back side of the detector. Lower power and higher exposure times may avoid this problem, and the glass of the GC1280 is simply cutting down on the amount passing through.

This theory will be tested either this evening or tomorrow morning, by reducing the power on the GC750 to the point at which it needs to be exposed for 60,000 microseconds to get a decent image.

The other possibility is that the GC750 was damaged at some point by too much incident power, although it's unclear what kind of failure mode would generate the images we have seen recently from the GC750.
Attachment 1: GC1280_60000E_scatter_2d.pdf
Attachment 2: GC1280_60000E_scatter_3d.pdf
  649   Tue Jul 8 21:46:38 2008 YoichiConfigurationPSLGC650M moved to the PMC transmission
I moved a GC650M, which was monitoring the light coming out of the PSL, to the transmission port of the PMC to see the transmitted mode shape.
It will stay there unless someone finds another use for it.

Just FYI, you can see the picture from the control computers by the following procedure:

ssh -X mafalda
cd /cvs/cds/caltech/target/Prosilica/40mCode

Choose 02-2210A-06223 and click on the Live View icon.