Message ID: 9076     Entry time: Tue Aug 27 20:43:34 2013     In reply to: 9074     Reply to this: 9086
Author: Koji 
Type: Configuration 
Category: CDS 
Subject: front end IPC configuration 

The reason we had the PCIe/RFM system was to test this mixed configuration prior to the actual implementation at the sites.
Has this configuration been intensively tested at the site with a practical configuration?

Quote:

Attached is a graph of my rough accounting of the intended direct IPC connections between the front ends. 

It's hard to believe that c1lsc -> c1sus only has 4 channels. We actuate ITMX/Y/BS/PRM/SRM for length control.
In addition to these, we control the angles of ITMX/Y/BS/PRM (and SRM in the future) via the c1ass model on c1lsc.
So there should be at least 12 connections (and more, since I have ignored MCL); a rough tally is sketched below.
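
Here is a back-of-the-envelope tally, as a minimal sketch only: the channel names are illustrative, and I assume one length signal per optic plus pitch/yaw per angle-controlled optic (the real list lives in the RCG models):

# Rough tally of expected c1lsc -> c1sus IPC channels (illustrative sketch,
# not read from the actual front-end models).
length_optics = ["ITMX", "ITMY", "BS", "PRM", "SRM"]   # length control outputs
angle_optics  = ["ITMX", "ITMY", "BS", "PRM"]          # c1ass angle outputs (SRM later)
angle_dofs    = ["PIT", "YAW"]                         # assume pitch and yaw per optic

length_channels = ["{}_LSC".format(opt) for opt in length_optics]
angle_channels  = ["{}_ASS_{}".format(opt, dof)
                   for opt in angle_optics for dof in angle_dofs]

total = len(length_channels) + len(angle_channels)
print("length: {}, angle: {}, total: {}".format(
    len(length_channels), len(angle_channels), total))
# -> length: 5, angle: 8, total: 13  (>= 12; MCL and SRM angles push it higher)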

I personally prefer to give the PCIe card to c1ioo and move the RFM card to c1lsc.
But in either case, we want to quantitatively compare the current configuration (not omitting the bridging by c1rfm)
with the future configuration, including the additional channels we want to add in the near future,
because RFM connections are really costly, and moving the RFM card to c1lsc may simply move the timeouts from c1sus to c1lsc.
A sketch of this kind of comparison is given below.
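
To make the bookkeeping concrete, here is a minimal sketch of the comparison I mean; the per-host counts are placeholders, not measured values, and the real numbers would have to be counted from the IPC parts in the front-end models. Whichever host ends up carrying the RFM reads is the one exposed to cycle timeouts:

# Placeholder per-host tallies of IPC reads per cycle; RFM reads are the
# expensive operations that risk pushing a model past its cycle time.
configs = {
    "current (RFM card on c1sus, c1rfm bridging)": {
        "c1sus": {"RFM": 10, "PCIe": 0},
        "c1lsc": {"RFM": 0,  "PCIe": 12},
    },
    "proposed (RFM card moved to c1lsc)": {
        "c1sus": {"RFM": 0,  "PCIe": 12},
        "c1lsc": {"RFM": 10, "PCIe": 0},
    },
}

for name, hosts in configs.items():
    print(name)
    for host, reads in sorted(hosts.items()):
        print("  {}: {} RFM reads, {} PCIe reads per cycle".format(
            host, reads["RFM"], reads["PCIe"]))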
