I finally got around to building the LSC and IFO models on megatron. It turns out "ifo" can't be used as a model name; the build breaks. I have a feeling it has something to do with the find-and-replace routines ("ifo" is used for the C1, H1, etc. site replacements throughout the code). If you change the model name to something like "ifa", it builds fine. This means we need a new name for the ifo model.
I also learned that the model wants the cdsIPCx memory locations terminated on the input side when the part is used in an input role (i.e., it is bringing the channel into the model). However, when the same part is used in an output role (i.e., it is transmitting from this model to some other model), terminating the output side produces errors when you run make.
It uses the C1.ipc file (in /cvs/cds/caltech/chans/ipc/) just fine. If memory locations are missing from the C1.ipc file (i.e., you forgot to define something), you get a readable error message at compile time, which is good. The file appears to be parsed properly, so the era of writing "0x20fc" for block names is officially over.
I suggest "ITF" for the model name.
I'm currently working on a set of scripts which will be able to parse a "template" .mdl file, replace certain keywords with other keywords, and save the result to a new .mdl file.
For example, you pass it the template file scx.mdl (suspension controller ETMX) and the keyword ETMX, followed by an output list: scy.mdl ETMY, bs.mdl BS, itmx.mdl ITMX, itmy.mdl ITMY, prm.mdl PRM, srm.mdl SRM. It produces these new files with the keyword replaced, plus a few other minor tweaks to make the new files work (gds_node, specific_cpu, etc.). You can then do a couple of copy-paste actions to produce a combined sus.mdl file with all the BS, ITM, PRM, SRM controls. (There might be a way to handle this better so it automatically merges into a single file, but I'd have to do something fancy with the positioning of the modules - something to look into.)
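The replacement step itself is simple; a minimal sketch is below (the real script also tweaks gds_node, specific_cpu, etc., which is omitted here, and the demo template file is made up):

```python
# Sketch: generate per-optic .mdl files from a template by keyword replacement.
# The gds_node/specific_cpu adjustments the real script makes are omitted.

def make_models(template_path, keyword, outputs):
    """outputs: list of (new_filename, replacement_keyword) pairs."""
    with open(template_path) as f:
        template = f.read()
    for out_path, new_kw in outputs:
        with open(out_path, "w") as f:
            f.write(template.replace(keyword, new_kw))

# Tiny stand-in template (the real input would be scx.mdl with keyword ETMX):
with open("template_demo.mdl", "w") as f:
    f.write('Name "ETMX_SUSPOS"\n')

make_models("template_demo.mdl", "ETMX",
            [("demo_scy.mdl", "ETMY"), ("demo_bs.mdl", "BS")])
```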
I also have plans for a script which gets passed an .mdl file and updates the C1.ipc file, adding any new channels and incrementing the ipcNum appropriately. So when you make a change you want to propagate to all the suspensions, you run the two scripts and end up with an up-to-date copy of the memory locations - no additional typing required.
Similar scripts could be written for the DAQ screens as well, so as to have all the suspension screens look the same after changing one set.
So I finished writing a script which takes an .ipc file (the one which defines channel names and numbers for use with the RCG code generator), parses it, checks for duplicate channel names and ipcNums, then parses an .mdl file looking for channel names, and outputs a new .ipc file with all the new channels added (without modifying existing channels).
The script is written in python, and for the moment can be found in /home/controls/advLigoRTS/src/epics/simLink/parse_mdl.py
I still need to add all the nice command line interface stuff, but the basic core works. It already found an error in my previous .ipc file: apparently I had used the channel number 21 twice.
Right now it's hard-coded to read in C1.ipc and spy.mdl and to output to H1.ipc, but I should have that fixed tonight.
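The core logic is roughly the following (a sketch, not the actual parse_mdl.py; the stanza format here is schematic rather than the exact .ipc grammar, and the model-side channel scraping is reduced to a plain list):

```python
import re

def merge_ipc(ipc_text, mdl_channels):
    """Check an .ipc file for duplicate channel names/ipcNums, then append
    stanzas for any model channels not already defined."""
    names, nums = set(), set()
    for m in re.finditer(r"\[(C1:[^\]]+)\]", ipc_text):
        assert m.group(1) not in names, "duplicate channel: " + m.group(1)
        names.add(m.group(1))
    for m in re.finditer(r"ipcNum\s*=\s*(\d+)", ipc_text):
        n = int(m.group(1))
        assert n not in nums, "duplicate ipcNum: %d" % n
        nums.add(n)
    next_num = max(nums) + 1 if nums else 0
    out = ipc_text
    for chan in mdl_channels:  # channel names scraped from the .mdl file
        if chan not in names:
            out += "[%s]\nipcType=SHMEM\nipcNum=%d\n" % (chan, next_num)
            next_num += 1
    return out

demo = "[C1:LSC-DARM_IN]\nipcType=SHMEM\nipcNum=0\n"
merged = merge_ipc(demo, ["C1:LSC-DARM_IN", "C1:LSC-MICH_IN"])
```

Existing channels pass through untouched; only genuinely new names get stanzas appended with fresh ipcNums.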
This IPC stuff looks like a really nice improvement to CDS.
Please keep the wiki updated so that we have the latest procedures and scripts for building the models.
1) What is c1asc doing? What is ascaux used for? Where are the cables labeled "C1:ASC_QPD" in the 1X2 rack really going?
2) Put the 4600 machine (megatron) in 1Y3 (away from the analog electronics). It can be used as an OAF/IO machine. We need a Dolphin fiber link from this machine to the IO chassis, which will presumably be in 1Y1/1Y2 (we do not currently have this fiber at the 40m, although I think Rolf said something about having one).
3) Merge the PSL and IOO VME crates in 1Y1/1Y2 to make room for the IO chassis.
4) Put the LSC and SUS machines into 1Y4 and/or 1Y5 along with the SUS IO chassis. The dolphin switch would also go here.
5) Figure out space in 1X3 for the LSC chassis. The most likely option is pulling the asc or ascaux stuff, assuming it's not really being used.
6) Are we going to move the OMC computer out from under the beam tube and into an actual rack? If so, where?
Rolf will likely be back Friday, when we aim to start working on the "New" Y end and possibly the 1X3 rack for the LSC chassis.
I modified the /etc/rc.d/rc.local file on megatron, removing a bunch of the old test module names and adding the new lsc and lsp modules, as well as a couple of planned suspension models and plants, to shared memory so that they'll work. Basically I'm trying to move forward into the era of working on the actual models we're going to use long term, as opposed to continually tweaking "test" models.
The last line in the file is now: /usr/bin/setup_shmem.rtl lsc lsp spy scy spx scx sus sup&
I removed mdp mdc mon mem grc grp aaa tst tmt.
I modified /cvs/cds/caltech/target/fb and changed the line "set controller_dcu=10" to "set controller_dcu=13" (where 13 is the lsc dcu_id number).
I also changed the set gds_server line from having 10 and 11 to 13 and 14 (lsc and lsp).
The file /cvs/cds/caltech/fb/master was modified to use C1LSC.ini and C1LSP.ini, as well as tpchn_C2.par (LSC) and tpchn_C3.par (LSP).
testpoint.par in /cvs/cds/caltech/target/gds/param was modified to use C-node1 and C-node2 (1 less than the gds_node_id for lsc and lsp, respectively).
Note all the values of gds_node_id, dcu_id, and so forth are recorded at http://lhocds.ligo-wa.caltech.edu:8000/40m/Electronics/Existing_RCG_DCUID_and_gds_ids
I had a chat with Alex this morning and discovered that the dcu_ids 13, 14, 15, and 16 are currently reserved and should not be used. I was told 9-12 and 17-26 are fine to use. I pointed out that we will eventually have more modules than that. His response was that he is currently working on the framebuilder code and "modernizing" it, and that those restrictions will hopefully be lifted in the future, although he isn't certain at this time what the real maximum gds_id number is (he was only willing to vouch for up to 26 - although the OMC seems to be currently working and set to 30).
Alex also suggested running an IOP module to provide timing (since we are using the adcSlave=1 option in the models). Apparently these are the x00.mdl, x01.mdl, x11.mdl files in the /home/control/cds/advLigoRTS/src/epics/simLink/ directory. I saved x00.mdl as io1.mdl (I didn't want to use io0, as it's a pain to differentiate between a zero and an 'O'). This new IOP uses gds_node=1, dcu_id=9. I modified the appropriate files to include it.
I modified /etc/rc.d/rc.local, adding io1 to the shmem line. I modified /cvs/cds/caltech/target/fb/daqdrc to use dcu_id 9 as the controller (the new IOP model's dcu_id number). In that same directory I modified the master file, adding /cvs/cds/caltech/chans/daq/C1IO1.ini as well as uncommenting the tpchn_C1 line. I modified testpoint.par in /cvs/cds/caltech/target/gds/param to include C-node0, and modified the prognums for lsc and lsp to 0x31001003 and 0x31001005.
So I started the 3 processes with startio1, startlsc, and startlsp, then went to the fb directory and started the framebuilder. However, the lsc.mdl model is still having issues, although lsp and io1 seem to be working. At this point I just need to track down what is fundamentally different between lsc and lsp and correct it in the lsc model. I'm hoping it's not related to the fact that we had an actual lsc front end previously and some legacy stuff is getting in the way. One thing I can test is changing the name and seeing if that runs.
I'm currently in the process of tracking down what legacy code is interfering with the new lsc model.
It turns out that if you change the name of the lsc file to something else (say scx, as a quick test), it runs fine. In fact, the lsc and scx GDS_TP screens both work in that case (since they're looking at the same channels). As one would expect, running them both at the same time causes problems - note to self: make sure the other one is killed first. It does mean the lsc code gets loaded partway, but it doesn't seem to communicate over EPICS or with the other models. However, I don't know what existing code is interfering. Currently going through the target directories and so forth.
We placed 3 new computers in the racks. One in 1X4 (machine running SCX) and 2 in 1Y4 (LSC and SUS). These are 1U chassis, 4 core machines for the CDS upgrade. I will be bringing over 2 IO chassis and their rails over tomorrow, one to be placed in 1Y4, and 1 in 1X4.
We still need some more 40 pin adapter cables and will send someone over this week to make them. However, once we have those, we should be able to get two to three machines going, one end computer/chassis and the SUS computer/chassis.
After tomorrow we will still be owed 1 computer, another Dolphin fiber, a couple of blue boxes, and the LSC, IO, and Y-end IO chassis. We also realized we need more fiber for the timing system; we're going to need to get it and then run it to both ends, as well as to 1X3, where the LSC IO chassis will be.
After checking old possibilities and deciding I wasn't imagining the lsc.mdl file not working (but working under another name), I tracked Alex down and asked for help.
After scratching our heads, we finally tracked it down to the RCG code itself, as opposed to any existing code.
Apparently, the skeleton.st file (located in /home/controls/cds/advLigoRTS/src/epics/util/) has special additional behavior for models with the following names: lsc, asc, hepi, hepia, asc40m, ascmc, tchsh1, tchsh2.
Alex was unsure what this additional code was for. To disable it, we went into the skeleton.st file and changed the name "SEQUENCER_NAME_lsc" to "SEQUENCER_NAME_lsc_removed" wherever it occurred. These names were in #ifdef statements, so now that code will only be used if a model is named lsc_removed. This fixed the problem: running startlsc now runs the code as it should, and I can proceed to testing the communication with the lsp model.
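For the record, the rename amounts to the substitution below (we did it in an editor; the sed one-liner is just my shorthand, demoed here on a scratch file rather than the real skeleton.st):

```shell
# Same substitution we made by hand in skeleton.st, as a sed command.
# Demo on a scratch file so it can be run anywhere:
printf '#ifdef SEQUENCER_NAME_lsc\n/* lsc-only code */\n#endif\n' > skeleton_demo.st
sed -i 's/SEQUENCER_NAME_lsc/SEQUENCER_NAME_lsc_removed/g' skeleton_demo.st
grep SEQUENCER_NAME skeleton_demo.st
```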
Alex said he'd try to figure out what these special #ifdef code pieces are intended for, and hopefully remove them completely once we've determined we don't need them.
We have 2 new IO chassis with mounting rails and the boards necessary for communicating with the computers. We still need the boards that talk to the ADCs, DACs, etc., but it's a start. These two IO chassis are currently in the lab, but not in their racks.
They will be installed into 1X4 and 1Y5 tomorrow. In addition to the boards, we need some cables, and the computers need the appropriate real-time operating systems set up. I'm hoping to get Alex over sometime this week to help work on that.
So I discovered the hard way that the racks are not standard width, when I was unable to place a new IO chassis into them with its rails attached. The IO chassis is narrow enough to fit without the rails, however.
I've talked to Steve and we decided on having some shelves made. I've asked Steve to get us 6: 1 for each end (2), 1 for SUS, 1 for LSC, 1 for IO, and 1 extra.
In /cvs/cds/caltech/target/fb modified:
master: cleaned up so only io1 (IO processor), LSC, LSP, SCY, SPY were listed, along with their associated tpchan files.
daqdrc: fixed "dcu_rate 9 = 32768" to "dcu_rate 9 = 65536" (since the IO processor is running at 64k)
Added "dcu_rate 21 = 16384" and "dcu_rate 22 = 16384"
Changed "set gds_server = "megatron" "megatron" "megatron" 9 "megatron" 10 "megatron" 11;" to
set gds_server = "megatron" "megatron" 9 9;
The above change was made after reading Rolf's Admin guide: http://lhocds.ligo-wa.caltech.edu:8000/40m/Upgrade_09/CDS?action=AttachFile&do=get&target=RCG_admin_guide.pdf
The set gds_server is simply telling which computer the gds daemons are running on, and we don't need to do it 5 times.
In /cvs/cds/caltech/gds/params modified:
testpoint.par: added C-node7 and C-node8 for SCY and SPY respectively.
Alex brought over an ADC, a DAC, and a PCIe card which goes into the computer and talks to the IO chassis. We tried installing the new "small" IO chassis in 1X4, but initially it couldn't find the ADC/DAC boards, just the Contec binary output board.
We tried several different configurations (different computer, different IO chassis, the megatron chassis, the megatron IO chassis with new cards, a new IO chassis with megatron cards).
Two things were concluded once we got a new IO chassis talking with the new computer:
1) It's possible one of the slots in the IO chassis is bad, as we didn't see the ADC until we moved it to a different slot in the new chassis.
2) The card Alex brought over to put in the computer doesn't behave well with these IO chassis. He went back to Downs to try to figure out what's wrong and to test it with the chassis over there.
Currently, Megatron's IO chassis is sitting on a table behind racks 1Y5 and 1Y6, along with the new "large" IO chassis. Megatron's PCIe card for talking to the IO chassis was moved to the computer in 1X4. The computer in 1X4 is currently being called c1iscex with IP 192.168.113.80.
I tracked down the problems with the new models that use the new shared memory/Dolphin/RFM connections defined as names in a single .ipc file.
The first is that the no_oversampling flag should not be used. Since we have a single IO processor handling the ADCs and DACs at 64k while the models run at 16k, some oversampling occurs. This was causing sync problems between the models and the IOP.
It also didn't help that I had a typo in the two channels I happened to use as a test case to confirm they were talking. However, that has been fixed.
Now that we have multiple machines we'd like to run the new front end code on, I'm finding it annoying to have to constantly copy files back and forth to keep the latest models on different machines. So I've come to the conclusion that Rana was right all along, and I should be working somewhere in /cvs/cds/caltech, which gets mounted by everyone.
However, this leads to the SVN problem: I need recent code checked out from the RCG repository, but our current /cvs/cds/caltech/cds/advLigo directory is covered by the 40m SVN. So for the moment, I've checked out advLigoRTS from https://redoubt.ligo-wa.caltech.edu/svn/advLigoRTS/trunk into /cvs/cds/caltech/cds/advLigoRTS. This directory will be kept as up to date as I can manage, both by running svn update to get Alex/Rolf's changes and by keeping the new and updated models there. It will remain linked to the RCG repository, not the 40m repository. At some point a better solution is needed, but it's the best I can come up with for now.
Also, because we sometimes compile on different machines now, you may run into a problem where code won't run on a machine other than the one it was built on. This can be fixed by commenting out some lines in the startup script. Go to the /cvs/cds/caltech/scripts directory, then edit the associated startSYS file, commenting out the lines that look like:
if [ `hostname` != megatron ]; then
    echo Cannot run `basename $0` on `hostname` computer
    exit 1
fi
Unfortunately, this change gets reverted each time "make SYS" and "make install-SYS" are run.
The other issue this leads to is that some machines don't have as many CPUs available as others. For example, our new thin 1U machines have only 4 dual-core processors (8 CPUs total). This means the specific_cpu setting of any of the code cannot be higher than 7 (cores being numbered 0 through 7). Core 0 is reserved for the real-time kernel, and core 1 will be used on all machines for the IO processor. This leaves only cores 2 through 7 available for models, which include LSC, LSP, SUS, SUP, SPY, SCY, SPX, SCX, OMC, OMP, OAF, OAP?, IOC, IOP. Since there are more than 6 models, duplicate specific_cpu values will be necessary in the final production code. Code which all runs on megatron at one point will have to be rebuilt with new specific_cpu values when moved to its actual final machine.
I created the sus model, which is the suspension controller for ITMX, ITMY, BS, PRM, SRM. I also created sup, which is the suspension plant model for those same optics.
Updated /cvs/cds/caltech/target/fb master and daqdrc files to add SUS, SUP models. Megatron's /etc/rc.d/rc.local file has been updated to include all the necessary models as well.
The suspension controller's binary IO outputs need to be checked, and corrected if wrong by changing the constants connected to the exclusive-or gates. Right now it's using the end-suspension binary output values, which may not be correct.
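The exclusive-or-with-a-constant trick works like this: the constant is a bit mask, and XOR flips exactly the masked bits of the binary output word. A sketch (the mask value below is purely illustrative, not the real suspension BO constant):

```python
# XOR with a constant mask inverts the logic sense of selected
# binary-output bits while leaving the rest alone.
def apply_bo_polarity(word, invert_mask):
    """Invert the bits selected by invert_mask; leave the rest alone."""
    return word ^ invert_mask

flipped = apply_bo_polarity(0b0101, 0b0011)  # flips bits 0 and 1
```

Note that XOR is its own inverse, so applying the same mask twice returns the original word, which is why a wrong constant just inverts the affected channels rather than scrambling them.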
I've modified the lsc.mdl and lsp.mdl files back to an older configuration in which we do not use an IO processor. This seems to let things work for the time being on megatron while I try to figure out what is wrong with the "correct" setup, which includes the IO processor.
Basically I removed the adcSlave = 1 line in the cdsParameters block.
I've attached a screen shot of the desktop showing one filter bank in the LSP model passing its output correctly to a filter block in the LSC. I also put in a quick test filter (an integrator) and you can see it got to 80 before I turned off the offset.
So far this is only running on megatron, not the new machine in the new Y end.
The models being used for this are located in /cvs/cds/caltech/cds/advLigoRTS/src/epics/simLink
Talked with Alex and finally tracked down why the code was not working on the new c1iscex. The .bashrc and .cshrc files in /home/controls/ on c1iscex have the following lines:
setenv EPICS_CA_ADDR_LIST 184.108.40.206
setenv EPICS_CA_AUTO_ADDR_LIST NO
These were interfering with channel access and preventing reads and writes from working properly. We simply commented them out. After logging out and back in, things like ezcaread and ezcawrite started working, and we were able to get the models passing data back and forth.
Next up: testing RFM communications between megatron and c1iscex. To do this, I'd like to move megatron down to 1Y3 and set up a firewall for it and c1iscex, so I can test the framebuilder and testpoints on both machines at the same time.
Alex updated the awg.par file to handle all the testpoints. Basically it's very similar to testpoint.par, but the prognum lines have to be 1 higher than the corresponding prognum in testpoint.par. An entry looks like:
After running "diag -i" and seeing some RPC number conflicts, we went into /cvs/cds/caltech/cds/target/gds/param/diag_C.conf and changed the line
&chn * * 192.168.1.2 822087685 1
to
&chn * * 192.168.1.2 822087700 1
The number represents an RPC number, which was conflicting with the RPC number associated with the awgtpman processes. We then had to update the /etc/rpc file as well, changing chnconf 822087685 to chnconf 822087700. We then ran /usr/sbin/xinetd reload.
Lastly, we edited the /etc/xinetd.d/chnconf file, changing the line
server_args = /cvs/cds/caltech/target/gds/param/tpchn_C4.par /cvs/cds/caltech/target/gds/param/tpchn_C5.par
to
server_args = /cvs/cds/caltech/target/gds/param/tpchn_C1.par /cvs/cds/caltech/target/gds/param/tpchn_C2.par /cvs/cds/caltech/target/gds/param/tpchn_C3.par /cvs/cds/caltech/target/gds/param/tpchn_C4.par /cvs/cds/caltech/target/gds/param/tpchn_C5.par /cvs/cds/caltech/target/gds/param/tpchn_C6.par /cvs/cds/caltech/target/gds/param/tpchn_C7.par /cvs/cds/caltech/target/gds/param/tpchn_C8.par /cvs/cds/caltech/target/gds/param/tpchn_C9.par
Alex also recompiled the framebuilder code to handle more than 7 front ends. This involved tracking down a newer version of libtestpoint.so on c1iscex and moving it over to megatron, then going in and adding, by hand, the ability to have up to 10 front ends connected.
Alex has said he doesn't like this code and would like it to dynamically allocate properly for any number of servers rather than having a dumb hard coded limit.
Other changes he needs to make:
1) Get rid of the "dcu_rate ## = 16384" type lines in the daqdrc file. That information is available from the /caltech/chans/C1LSC.ini type files, which are automatically generated when you compile a model. This means not having to update these by hand in daqdrc.
2) Add some awg.par and testpoint.par rules so that these are automatically updated when you build a model. Make it automatically assign a prognum when read in, rather than having to hard-code them by hand.
3) Slave the awgtpmans to a single clock running from the IO processor x00. This ensures they are all in sync.
From what I understand, Alex rewrote portions of the framebuilder and testpoint codes and then recompiled them in order to get more than 1 testpoint per front end working. I've tested up to 5 testpoints at once so far, and it worked.
We also have a new noise component added to the RCG code. It uses the random number generator from chapter 7.1 of Numerical Recipes, Third Edition, to generate uniform numbers from 0 to 1. Placing a filter bank after it should give us sufficient flexibility in generating the necessary noise types. We did a coherence test between two instances of this noise part, and they looked quite incoherent. Valera will add a picture to this elog when it finishes 1000 averages.
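The general scheme is along these lines: a 64-bit integer generator whose output is scaled down to [0, 1). The sketch below uses a plain xorshift64 as a stand-in - it is NOT the actual Numerical Recipes algorithm or the RCG source, just an illustration of the state-update-then-scale structure:

```python
# Stand-in for the 64-bit uniform generator: Marsaglia-style xorshift64
# scaled to [0, 1).  Illustrative only; not the RCG/NR3 implementation.
MASK64 = (1 << 64) - 1

class Uniform01:
    def __init__(self, seed=88172645463325252):
        self.x = seed & MASK64

    def next(self):
        # Update the 64-bit state with three xorshift steps...
        self.x ^= (self.x << 13) & MASK64
        self.x ^= self.x >> 7
        self.x ^= (self.x << 17) & MASK64
        # ...then scale to a float in [0, 1).
        return self.x / 2.0**64

rng = Uniform01()
samples = [rng.next() for _ in range(1000)]
```

Two instances seeded differently give independent streams, which is what the coherence test between the two noise parts checks.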
I'm in the process of propagating the old suspension control filters to the new RCG filter banks to give us a starting point. Tomorrow Valera and I are planning to choose a subset of the plant filters and put them in, and then work out some initial control filters to correspond to the plant. I also need to think about adding the anti-aliasing filters and whitening/dewhitening filters.
Alex wrote new code implementing the LSP noise generator. The code is based on the 64-bit random number generator from Numerical Recipes, 3rd ed., ch. 7.1 (p. 343).
Joe made two instances in the LSP model.
The attached plot shows the spectra and coherence of two generators. The incoherence is ~1/Navg - statistically consistent with no coherence.
I put matlab files and a summary into the 40m wiki for the fitting of the 40m Optickle transfer functions and generating digital filters for the simulated plant:
The filters are not loaded yet. Joe and Alex will make RCG code for a matrix of filters (currently 5x15 = 75 elements) which will implement the simulated plant TFs.
Joe and I tried to put a signal through the DARM loop but the signal was not going through the memory location in the scx part of the simulated plant.
Edit by Joe:
I was able to track it down to the spx model not running properly: it needed the Burt Restore flag set to 1. I hadn't done that since the last rebuild, so it wasn't actually calculating anything until I flipped that flag. The data now circulates all the way around. If I turn on the final input (the same one with the initial 1.0 offset), the data circulates completely around and starts integrating up. So the loop has been closed, just without all the correct filters in place.
A new webview of the LSP model is available at:
This model includes a couple of example noise generators as well as the new matrix of filter banks (5 inputs x 15 outputs = 75 filters!). The attached png shows where these parts can be found in the CDS_PARTS library. I'm still working on the automatic generation of the matrix and filter bank MEDM screens for this part. The plan is to have a matrix screen similar to the current ones, except that the value entry points to the gain setting of the associated filter. In addition, underneath each value there will be a link to the full filter bank screen. Ideally, I'd like to have the filter .adl files located in a subdirectory of the system, to keep clutter down.
I've cut and pasted the new Foton file generated by the LSP model below. The first number following MTRX is the input the filter takes data from, and the second number is the output it pushes data to. This means that for the script parsing Valera's transfer functions, I need to specify which channel corresponds to which number, e.g. DARM = 0, MICH = 1, etc. So the next step is to write that script and populate the filter banks in this file.
# FILTERS FOR ONLINE SYSTEM
# Computer generated file: DO NOT EDIT
# MODULES DOF2PD_AS11I DOF2PD_AS11Q DOF2PD_AS55I DOF2PD_AS55Q
# MODULES DOF2PD_ASDC DOF2PD_POP11I DOF2PD_POP11Q DOF2PD_POP55I
# MODULES DOF2PD_POP55Q DOF2PD_POPDC DOF2PD_REFL11I DOF2PD_REFL11Q
# MODULES DOF2PD_REFL55I DOF2PD_REFL55Q DOF2PD_REFLDC Mirror2DOF_f2x1
# MODULES Mirror2DOF_f2x2 Mirror2DOF_f2x3 Mirror2DOF_f2x4 Mirror2DOF_f2x5
# MODULES Mirror2DOF_f2x6 Mirror2DOF_f2x7 DOF2PD_MTRX_0_0 DOF2PD_MTRX_0_1
# MODULES DOF2PD_MTRX_0_2 DOF2PD_MTRX_0_3 DOF2PD_MTRX_0_4 DOF2PD_MTRX_0_5
# MODULES DOF2PD_MTRX_0_6 DOF2PD_MTRX_0_7 DOF2PD_MTRX_0_8 DOF2PD_MTRX_0_9
# MODULES DOF2PD_MTRX_0_10 DOF2PD_MTRX_0_11 DOF2PD_MTRX_0_12 DOF2PD_MTRX_0_13
# MODULES DOF2PD_MTRX_0_14 DOF2PD_MTRX_1_0 DOF2PD_MTRX_1_1 DOF2PD_MTRX_1_2
# MODULES DOF2PD_MTRX_1_3 DOF2PD_MTRX_1_4 DOF2PD_MTRX_1_5 DOF2PD_MTRX_1_6
# MODULES DOF2PD_MTRX_1_7 DOF2PD_MTRX_1_8 DOF2PD_MTRX_1_9 DOF2PD_MTRX_1_10
# MODULES DOF2PD_MTRX_1_11 DOF2PD_MTRX_1_12 DOF2PD_MTRX_1_13 DOF2PD_MTRX_1_14
# MODULES DOF2PD_MTRX_2_0 DOF2PD_MTRX_2_1 DOF2PD_MTRX_2_2 DOF2PD_MTRX_2_3
# MODULES DOF2PD_MTRX_2_4 DOF2PD_MTRX_2_5 DOF2PD_MTRX_2_6 DOF2PD_MTRX_2_7
# MODULES DOF2PD_MTRX_2_8 DOF2PD_MTRX_2_9 DOF2PD_MTRX_2_10 DOF2PD_MTRX_2_11
# MODULES DOF2PD_MTRX_2_12 DOF2PD_MTRX_2_13 DOF2PD_MTRX_2_14 DOF2PD_MTRX_3_0
# MODULES DOF2PD_MTRX_3_1 DOF2PD_MTRX_3_2 DOF2PD_MTRX_3_3 DOF2PD_MTRX_3_4
# MODULES DOF2PD_MTRX_3_5 DOF2PD_MTRX_3_6 DOF2PD_MTRX_3_7 DOF2PD_MTRX_3_8
# MODULES DOF2PD_MTRX_3_9 DOF2PD_MTRX_3_10 DOF2PD_MTRX_3_11 DOF2PD_MTRX_3_12
# MODULES DOF2PD_MTRX_3_13 DOF2PD_MTRX_3_14 DOF2PD_MTRX_4_0 DOF2PD_MTRX_4_1
# MODULES DOF2PD_MTRX_4_2 DOF2PD_MTRX_4_3 DOF2PD_MTRX_4_4 DOF2PD_MTRX_4_5
# MODULES DOF2PD_MTRX_4_6 DOF2PD_MTRX_4_7 DOF2PD_MTRX_4_8 DOF2PD_MTRX_4_9
# MODULES DOF2PD_MTRX_4_10 DOF2PD_MTRX_4_11 DOF2PD_MTRX_4_12 DOF2PD_MTRX_4_13
# MODULES DOF2PD_MTRX_4_14
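The channel-to-number bookkeeping the parsing script needs is then just a lookup, e.g. (only DARM = 0 and MICH = 1 are stated above; the remaining indices here are placeholders, not the real assignment):

```python
# Map DOF names to matrix input indices so a filter module can be
# addressed as DOF2PD_MTRX_<input>_<output>.  Only DARM=0 and MICH=1
# come from the log; the rest of this ordering is a made-up example.
DOF_INDEX = {"DARM": 0, "MICH": 1, "PRC": 2, "SRC": 3, "CARM": 4}

def filter_module(dof, pd_index):
    assert 0 <= pd_index <= 14  # 15 outputs, numbered 0 through 14
    return "DOF2PD_MTRX_%d_%d" % (DOF_INDEX[dof], pd_index)
```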
I've finished the MEDM portion of the RCG FiltMuxMatrix part. It now generates an appropriate MEDM screen for the matrix, with links to all the filter banks. The filter bank .adl files are also generated and placed in a subdirectory named after the filter matrix.
The input is the first number and the output is the second number. This particular matrix has 5 inputs (0 through 4) and 15 outputs (0 through 14). Unfortunately, the filter names can't be longer than 24 characters, which forced me to use numbers instead of actual part names for the input and output.
The key to the numbers is:
To get this working required modifications to the feCodeGen.pl and the creation of mkfiltmatrix.pl (which was based off of mkmatrix.pl). These are located in /cvs/cds/caltech/cds/advLigoRTS/src/epics/util/
In related news, I asked Valera if he could load the simulated plant filters he had generated, and after several tries his answer was no. He says they have the same format as the filters they pass to the feed-forward banks down in Livingston, so he's not sure why they won't load.
I tested my script, FillFotonFilterMatrix.py, on some simple second-order-section filters (e.g. gain of 1, with b1 = 1.01, b2 = 0.02, a1 = 1.03, a2 = 1.04); it populated the Foton filter file correctly, and the result was parsed fine by Foton itself. So I'm going to claim the script is done and it's the fault of the filters we're trying to load. The script now lives in /cvs/cds/caltech/chans/, along with a name file called lsp_dof2pd_mtrx.txt, which tells the script that DARM is 0, CARM is 1, etc. To run it, you also need an SOS.txt file with the filters to load, similar to the one Valera posted here, but preferably loadable.
I also updated my progress on the wiki, here.
It appears that Foton does not like the unstable poles, which we need in order to model the transfer functions.
But one can try to load the filters into the front end by generating the filter file e.g.:
# MODULES DARM_ASDC
### DARM_ASDC ###
# SAMPLING DARM_ASDC 16384
# DESIGN DARM_ASDC
DARM_ASDC 0 21 6 0 0 darm 1014223594.005454063416 -1.95554205062071 0.94952075557861 0.06176931505784 -0.93823068494216
-2.05077577179611 1.05077843532639 -2.05854170261687 1.05854477394411
-1.85353637553024 0.86042048250739 -1.99996540107622 0.99996542454814
-1.93464836371852 0.94008893626414 -1.89722830906561 0.90024221050918
-2.04422931770060 1.04652211283968 -2.01120153956052 1.01152717233685
-1.99996545575365 0.99996548582538 -1.99996545573320 0.99996548582538
Unfortunately, if you open and later save this file with Foton, it will strip the LHP poles.
I was getting excess noise in the C1:IOO-MC_DRUM1 channel - a flat spectrum of 10 cts/rHz (corresponding to 600 uV/rHz).
I tried a few things, but eventually had to power cycle the crate with c1iovme in order to recover the standard ADC noise level of 3x10^-3 cts/rHz with a 1/sqrt(f) knee at 10 Hz.
I checked the gain of the channel by injecting a 2 Vpp sine wave at 137.035 Hz. 2Vpp as measured on a scope gives 31919 cts instead of the expected 32768, giving a 2.5% error from what we would have naively calculated.
Even so, the noise in this channel is very surprisingly good: 0.003 / (31919 / 2) = 187 nV /rHz. The best noise I have previously seen from an ICS-110B channel is 800 nV/rHz. What's going on here?
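For reference, the arithmetic behind those two numbers, in plain Python:

```python
# Gain check: a 2 Vpp sine gave 31919 cts instead of the ideal 32768.
ideal, measured = 32768.0, 31919.0
gain_error = (ideal - measured) / ideal      # fractional gain error, ~0.026

# Noise conversion: 31919 cts peak-to-peak for 2 Vpp means 31919/2 cts per
# volt of amplitude, so 0.003 cts/rtHz corresponds to:
noise_V = 0.003 / (31919 / 2.0)              # ~1.9e-7 V/rtHz
```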
I visited Downs and announced that I would keep showing up until all the 40m hardware is delivered.
I brought over 4 ADC boards and 5 DAC boards which slot into the IO chassis.
The DACs are General Standards Corporation, PMC66-16AO16-16-F0-OF, PCIe4-PMC-0 adapters.
The ADCs are General Standards Corporation, PMC66-16AI6455A-64-50M, PCIe4-PMC-0 adapters.
These new ones have been placed with the blue and gold adapter boards, under the table behind the 1Y4-1Y5 racks.
With the 1 ADC and 1 DAC we already have, we now have enough to populate the two ends and the SUS IO chassis. We have sufficient binary output boards for the entire 40m setup. I'm going back with a full itemized list of our current equipment to bring back the remainder of the ADC/DAC boards we're due. Apparently the ones bought for us are currently sitting in a test stand, so the ones I took today were from a different project; they'll move the test-stand ones to that project eventually.
I'm attempting to push them to finish testing the IO chassis and the remainder of those delivered as well.
I'd like to try setting up the SUS IO chassis and the related computer this week, since we now have sufficient parts for it. I'd also like to move megatron to 1Y3, to free up space to place the correct computer and IO chassis where it's currently residing.
Yesterday afternoon I went to downs and acquired the following materials:
2 100 ft long blue fibers, for use with the timing system. These need to be run from the timing switch in 1Y5/1Y6 area to the ends.
3 ADCs (PMC66-16AI6455A-64-50M) and 2 DACs (PMC66-16AO16-16-F0-OF), bringing our total of each to 8.
7 ADC adapter boards which go in the backs of the IO chassis, bringing our total for those (1 for each ADC) to 8.
There were no DAC adapter boards of the new style available. Jay asked Todd to build those in the next day or two (this was on Thursday), so hopefully by Monday we will have those.
Jay pointed out there are different styles of the blue-and-gold adapter boxes (for ADCs to DB44/37, for example). I'm re-examining the drawings of the system (although some drawings were never revised for the new system, so in some cases I'm trying to interpolate from the current system) to determine what adapter styles and numbers we need. In any case, those do not appear to have been finished yet (they are basically stuffed boards in a bag in Jay's office, which need to be put into the actual boxes with face plates).
When I asked Rolf if I could take my remaining IO chassis, there was some back and forth between him and Jay about the numbers they have and need for their test stands, and about having some more built. He needs some, Jay needs some, and the 40m still needs 3. More are being built; apparently when those are finished, I'll either get those or the ones that were built for the 40m and are currently in test stands.
Apparently Friday afternoon (when we were all at Journal Club), Todd dropped off the 7 DAC adapter boards, so we have a full set of those.
Things still needed:
1) 3 IO chassis (2 Dolphin style for the LSC and IO, and 1 more small style for the South end station (new X)). We already have the East end station (new Y) and SUS chassis.
2) 2 50+ meter Ethernet cables and a router for the DAQ system. The Ethernet cables are to go from the end stations to 1Y5-ish, where the DAQ router will be located.
3) I still need to finish understanding the old drawings to figure out what blue and gold adapter boxes are needed. At most 6 ADC and 3 DAC boxes are necessary, though it may be fewer, and the styles need to be determined.
4) 1 more computer for the South end station. If we're using Megatron as the new IO machine, then we're set on computers. If we're not using Megatron in the new CDS system, then we'll need an IO computer as well. The answer to this tends to depend on if you ask Jay or Rolf.
ORPHAN ENTRY FOUND ON ROSALBA:
We did svn update. Then Alex realized he missed adding some files, so added them to the svn, and then we checked out again.
We rebuilt awg, fb, nds.
We reloaded the xinetd service after editing /etc/xinetd.d/chnconf. We changed all the tpchn_c1 entries to tpchn_c1SYS.
There's a new naming scheme for the model files. They start with the site name. So, lsc becomes c1lsc.
On any machine you want code running, make a symbolic link to /cvs/cds/rtcds/ in /opt/
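The shared-path setup can be dry-run in a scratch directory before touching a real front end; here the real paths from the entry (/cvs/cds/rtcds and /opt) are stood in for by temp directories:

```shell
# Demo of the shared-path symlink in a scratch area; on a real front end the
# equivalent command would be:  ln -s /cvs/cds/rtcds /opt/rtcds
scratch=$(mktemp -d)
mkdir -p "$scratch/cvs/cds/rtcds"                  # stand-in for the NFS-mounted tree
ln -s "$scratch/cvs/cds/rtcds" "$scratch/rtcds"    # stand-in for /opt/rtcds
ls -ld "$scratch/rtcds"
```

Any machine with the link sees the code at the same /opt/rtcds path, which is what the RCG build products expect.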
This applies to SUS:
Two ICS 110Bs. Each has 2 (4 total) 44-conductor shielded cables going to the DAQ Interface Chassis (D990147-A). See pages 2 and 4.
Three Pentek 6102 Analog Outputs to LSC Anti-Image Board (D000186 Rev A). Each connected via 40 conductor ribbon cable (so 3 total). See page 5.
Eight XY220 to various whitening and dewhitening filters. 50 conductor ribbon cable for each (8 total). See page 10.
Three Pentek 6102 Analog Input to Op Lev interface board. 40 conductor ribbon cable for each (3 total). See page 13.
The following look to be part of the AUX crate, and thus don't need replacement:
Five VMIC113A to various Coil Drives, Optical Levers, and Whitening boards. 64 conductor ribbon cable for each (5 total). See page 11.
Three XY220 to various Coil boards. 50 conductor ribbon for each (3 total). See page 11.
This applies to WFS and LSC:
Two XY220 to whitening 1 and 2 boards. 50 conductor ribbon for each (2 total). See page 3.
Pentek 6102 to LSC Anti-image. 50 conductor ribbon. (1 total). See page 5.
The following are unclear if they belong to the FE or the Aux crate. Unable to check the physical setup at the moment.
One VMIC3113A to LSC I & Q, RFAM, QPD INT. 64 conductor ribbon cable. (Total 1). See page 4.
One XY220 to QPD Int. 50 conductor ribbon cable. (Total 1). See page 4.
The following look to be part of WFS, and aren't needed:
Two Pentek 6102 Analog Input to WFS boards. 40 conductor ribbon cables (2 Total). See page 1.
The following are part of the Aux crate, and don't need to be replaced:
Two VMIC3113A to Demods, PD, MC servo amp, PZT driver, Anti-imaging board. 64 conductor ribbon cable (2 Total). See page 3.
Two XY220 to Demods, MC Servo Amp, QPD Int boards. 50 conductor ribbon cable (2 Total). See page 3.
Three VMIC4116 to Demod and whitening boards. 50 conductor ribbon cable (3 Total). See page 3.
Last week Alex merged in the changes I had made to the local 40m copy of the Real Time Code Generator. These were to add a new part, called FiltMuxMatrix, which is a matrix of filter banks, as well as fixing the filter medm generation code so the filter banks actually have working time stamps.
I checked out a new version of the CDS SVN with these changes merged in. Changes that will be added in the near future by Rolf and Alex include the addition of "tags". These are pieces in simulink which act as a bridge between two points, so you can reduce the amount of wire clutter on diagrams. Otherwise they have no real effect on the generated C code. Also the ADC/DAC channel selector, and in fact the ADC/DAC parts themselves, will be changing. The MIT group has requested the channel selector be freed up for its original purpose in Matlab, so Rolf is working on that.
For the time being, Alex has created a directory /rtcds on Linux1 under /home/cds. He then created softlinks to that directory on megatron, c1iscex, and allegra in the /opt directory. This was an easy way to have a shared path.
After checking out the CDS SVN, we discovered there were some files missing that Alex had added to his version but not the main branch. Alex came over to the 40m and proceeded to get all those files checked in. We then checked it out again. Changes were made to the awg, framebuilder, and nds codes, which needed to be rebuilt.
Certain other file name conventions were also changed. Instead of tpchn_c1.par, tpchn_c2.par, etc., it's now tpchn_c1lsc.par, tpchn_c2lsp.par, etc. The system name is included at the end of the filename to make it clearer which file goes with which system.
This required an edit of the chnconf file, which has explicit calls to those file names. Once we edited that file, we had to reload the xinetd service, which it's apparently a subpart of (this can be accomplished by /etc/init.d/xinetd stop, then /etc/init.d/xinetd start).
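The substitution can be sketched like this; the server_args line format is an assumption (the real chnconf entries may differ), and it's done on a scratch copy here rather than the real /etc/xinetd.d/chnconf:

```shell
# Edit a scratch copy of chnconf the way described above; on the real machine
# the target is /etc/xinetd.d/chnconf, followed by bouncing the service:
#   /etc/init.d/xinetd stop && /etc/init.d/xinetd start
conf=$(mktemp)
echo 'server_args = -c /cvs/cds/caltech/gds/param/tpchn_c1.par' > "$conf"  # assumed entry format
sed -i 's/tpchn_c1\.par/tpchn_c1lsc.par/' "$conf"   # old name -> new per-system name
cat "$conf"
```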
/etc/rc.d/rc.local also had to be edited for the new model names (c1lsc, c1lsp, etc).
The daqdrc file (for the framebuilder) now parses which dcu_rate to use from the tpchn_c1lsc.par type files, so the dcu_rate 20 = 16384 lines have been removed. set gds_server has also been removed, and replaced with tpconfig "/opt/rtcds/caltech/c1/target/gds/param/testpoint.par"; from which it can get the hostname. This information is now derived from the c1SYS.mdl file.
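For reference, the before/after in daqdrc looks roughly like this; the path is from the entry, but the surrounding syntax is a best guess from memory, so check it against a working daqdrc:

```
# old style (removed) -- rates were hard-coded per DCU:
#   dcu_rate 20 = 16384;
#   set gds_server ...;
# new style -- rates come from the tpchn_*.par files, and the GDS hostname from:
tpconfig "/opt/rtcds/caltech/c1/target/gds/param/testpoint.par";
```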
After that Alex informed me the IOP processor needs to be running for the other models to work properly, as well as for the Framebuilder to work.
Initially there was a problem running on Megatron, because the IOP gets its timing signal from the IO chassis, and there was none connected to megatron. However, he has since modified the code so that if there's no IO chassis, the IOP processor just uses the system clock. It has been tested and runs on megatron now.
Those drawings are an OK start, but it's obvious that things have changed at the 40m since 2002. We cannot rely on these drawings to determine all of the channel counts, etc.
I thought we had already been through all this... If not, we'll have to spend an afternoon going around and marking it all up.
I talked with Rolf, and asked if we were using Megatron for IO. The gist boiled down to: we (the 40m) needed to use it for something, so yes, use it for the IO computer. In regards to the other end station computer, he said he just needed a couple of days to make sure it doesn't have anything on it they need before freeing it up.
I had a chat with Jay where he explained exactly what boards and cables we need. Adapter boards are 95% of the way there. I'll be stopping by this afternoon to collect the last few I need (my error this morning, not Jay's). However, it looks like we're woefully short on cables and we'll have to make them. I also acquired 2 D080281 (Dsub 44 x2 to SCSI).
For each 2 Pentek DACs plus a 110B, you need 1 DAC adapter board (D080303 with 2 connectors for IDC40 and a SCSI). You also need a D080281 to plug onto the back of the Sander box (going to the 110Bs) to convert the D-sub 44 pins to SCSI.
LSC will need none, SUS will need 3, IO will need 1, and the ends will need 1 each. We have a total of 6, so we're set on D080303s. We have 3 110Bs, so we need one more D080281 (Dsub44 to SCSI). I'll get that this afternoon.
For each XVME220, we'll need one D080478 binary adapter. We have 8 XVME220s and we have 8 boards, so we're set on D080478s.
For the ends, there's a special ADC to DB44/37 adapter, which we only have 1 of. I need to get them to make 1 more of these boxes.
We have 1 ADC to DB37 adapter, and we'll need 1 more of those as well: one for IO and one for SUS.
However, for each Pentek ADC we need an IDC40 to DB37 cable. For each Pentek DAC we need an IDC40 to IDC40 cable. We need a SCSI cable for each 110B. I believe the current XVME220 cables plug directly into the BIO adapter boxes, so those are set.
So we need to make or acquire 11 IDC40 to DB37 cables, 7 IDC40 to IDC40 cables, and 3 SCSI cables.
I picked up the ribbon cable connectors from Jay. It looks like I'll have to make the new cables for connecting the ADCs/DACs myself (or maybe with some help). We should be able to make enough ribbon cables for use now. However, I'm adding "Make nice shielded cables" to my long term to do list.
I pointed out the 2 missing adapter boxes we need to Jay. He has the parts (I saw them) and will try to get someone to put it together in the next day or so. I also picked up 2 more D080281 (DB44 to SCSI), giving us enough of those.
I once again asked Jay for an update on IO chassis, and expressed concern that without them the CDS effort can't really go forward, and that we really need this to come together ASAP. He said they still need to make 3 new ones for us.
So we're still waiting on a computer, 3 IO chassis, router + ethernet.
I spent this morning populating the SUS IO Chassis and getting it ready for installation into the 1Y4 rack. I discovered I'm lacking the internal cables to connect the ADC/DAC boards to the AdL adapter boards that also go in the chassis (D0902006 and D0902496). I'm not sure where Alex got the cables he used for the end IO chassis he had put in. I'll be going to Downs after the meeting today either to get cables or parts for cables, and get the SUS chassis wired up fully.
I'd also like to confirm with Alex that the OSS-MAX-EXP-ELB-C board that goes in the IO chassis matches the host interface board that goes in the computer (OSS-HIB2-PE1x4-1x4 Re-driver HIB), since we spent half a day the last time we installed an IO chassis determining that one of the pair was bad or didn't match.
The SUS chassis has been populated in the following way:
Slot 1 ADC PMC66-16AI6455A-64-50M
Slot 2 DAC PMC66-16AO16-16-F0-OF
Slot 3-6 BO Contec DIO-1616L-PE Isolated Digital IO board
Slot 7 ADC PMC66-16AI6455A-64-50M
Slot 8-9 DAC PMC66-16AO16-16-F0-OF
Slot 1 ADC adapter D0902006
Slot 2 DAC adapter D0902496-v1
Slot 7 ADC adapter D0902006
Slot 8-9 DAC adapter D0902496-v1
Thanks to Steve's work on some L brackets, and Kiwamu's lifting help, we now have a new SUS IO chassis in the new 1X4 rack (formerly the 1Y4 rack), just below the new SUS and LSC computers. I have decided to call the sus machine c1sus, and give it IP address 192.168.113.85. We also put in a host interface adapter, OSS-HIB2-PE1x4-1x4 Re-driver HIB, which connects the computer to the IO chassis.
The IP was added to the linux1 name server. However, the computer itself has not been configured yet. I'm hoping to come in for an hour or two tomorrow and get the computer hooked up to a monitor and keyboard and get its network connection working, mount /cvs/cds and get some basic RCG code running.
We also ran ethernet cables for the SUS machine to the router in 1X6 (formerly 1Y6) as well as a cable for megatron from 1X3 (formerly 1Y3) to the router, in anticipation of that move next week.
During the day, I realized we needed 2 more ADCs, one of which I got from Jay immediately. This is for two 110Bs and 4 Pentek ADCs. However, there's a 3rd 110B connected to c0dcu1 which goes to a BNC patch panel. Originally Jay thought we would merge that, in 4-pin lemo style, into the 2nd 110B associated with the sus front ends. We've decided to get another ADC and adapter instead. That will have to be ordered, and will generally take 6-8 weeks. However, it may be possible to "borrow" one from another project until ours comes in to "replace" it. This will leave us with our BNC patch panel and not force me to convert over 20 cables.
I also discovered we need one Contec DIO-1616L-PE Isolated Digital IO board for each chassis, which I wasn't completely aware of. This is used to control the ADC/DAC adapter boards in the chassis. It means we still need to put a Binary Output board in the c1iscex chassis. Hopefully the chassis coming from Downs continue to come with the Contec DIO-1616L-PE boards (they have so far).
The current loadout of the SUS chassis is as follows:
The far left slot, when looking from the front, has the OSS-MAX-EXP-ELB-C board, used to communicate with the c1sus computer.
Slot 1 ADC PMC66-16AI6455A-64-50M
Slot 3-6 BO Contec DIO-32L-PE Isolated Digital Output board
Slot 7 ADC PMC66-16AI6455A-64-50M
Slot 10-11 ADC PMC66-16AI6455A-64-50M
Slot 12 Contec DIO-1616L-PE Isolated Digital IO board
Slot 10-11 ADC adapter D0902006
Kiwamu and I went through and looked at the spare channels available near the PSL table and at the ends.
First, I noticed I need another 4-port DB37 ADC adapter box, since there are 3 Pentek ADCs there, which I don't think Jay realized.
Anyways, in the IOO chassis that will be put in, the ADC has a spare 8 channels, which come out in DB37 format. So one option is to build an 8 BNC converter that plugs into that box.
The other option is to build 4-pin Lemo connectors and go in through the Sander box which currently goes to the 110B ADC, which has some spare channels.
For DAC at the PSL, the IOO chassis will have 8 spare DAC channels since there's only 1 Pentek DAC. These would be in IDC40 cable format, since that's what the blue DAC adapter box takes. An 8 channel DAC to 40 pin IDC box would need to be built.
The ends have 8 spare DAC channels, again in 40 pin IDC cable format. A box similar to the 8 channel DAC box for the PSL would need to be built.
The ends also have spare 4-pin Lemo capacity. It looked like there were 10 or so channels still unused, so Lemo connections would need to be made. There don't appear to be any spare DB37 connectors available on the adapter box, so Lemo via the Sander box is the only way.
Joe needs to provide Kiwamu with cabling pin outs.
If Kiwamu makes a couple spares of the 8 BNC to 37DB connector boards, there's a spare 37DB ADC input in the SUS machine we could use up, providing 8 more channels for test use.
I connected a monitor and keyboard to the new c1sus machine and discovered it's not running RTL Linux. I changed the root password to the usual; however, without help from Alex I don't know where to get the right version or how to install it, since it doesn't seem to have an obvious CD-ROM drive or the like. Hopefully Tuesday I can get Alex to come over and help with the setup of it, and the other 1-2 IO chassis.
I went to talk to Rolf and Jay this morning. I asked Rolf if a chassis was available, so he went over and disconnected one of his test stand chassis and gave it to me. It comes with a Contec DIO-1616L-PE Isolated Digital IO board and an OSS-MAX-EXP-ELB-C, which is a host interface board. The OSS board means it has to go into the south end station: there's a very short maximum cable length associated with that style, and the LSC and IOO chassis will be further than that from their computers (we have dolphin connectors on optical fiber for those connections).
I also asked Jay for another 4 port 37 d-sub ADC blue and gold adapter box, and he gave me the pieces. While over there, I took 2 flat back panels and punched them with appropriate holes for the SCSI connectors that I need to put in them. I still need to drill 4 holes in two chassis to mount the boards, and then a bit of screwing. Shouldn't take more than an hour to put them both together. At that point, we should have all the adapter boxes necessary for the base design. We still need some stuff for the green locking, as noted on Friday.
I talked to Alex, and he explained the steps necessary to get the real-time Linux kernel installed. It basically goes: copy the files from c1iscex (the one he installed last month) in the directory /opt/rtlidk-2.2 to c1sus locally. Then go into rtlinux_kernel_2_6 and run make and make install (or something like that; need to look at the Makefile). Then edit the grub loader file to look like the one on c1iscex (located at /boot/grub/menu.lst).
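Written out, the recipe as relayed is roughly the following; this is a sketch from the conversation, not a verified procedure, and the make targets in particular need checking against the Makefile:

```
# on c1sus, copy the RTL development kit from the working end machine
scp -r controls@c1iscex:/opt/rtlidk-2.2 /opt/
# build and install the real-time kernel
cd /opt/rtlidk-2.2/rtlinux_kernel_2_6
make && make install            # exact targets: check the Makefile
# make the boot entry match the working machine's
vi /boot/grub/menu.lst          # compare against c1iscex:/boot/grub/menu.lst
```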
This will then hopefully let us try out the RCG code on c1sus and see if it works.
After talking with Rolf, and clarifying exactly which machine I wanted, he gave me a Sun 4600 machine (similar to our current megatron). I'm currently trying to find a good final place for it, but it's at least here at the 40m.
I also acquired 3 boards to plug our current VMIPMC 5565 RFM cards into, so they can be installed in the IO chassis. These require +/- 5V power connected to the top of the RFM board, which would not be possible in the 1U computers, so they have to go in the chassis. These boards prevent the top of the chassis from being put on (not that Rolf or Jay have given me tops for the chassis). I'm planning on using the RFM cards from the East End FE, the LSC FE, and the ASC FE.
I talked to Jay, and offered to upgrade the old megatron IO chassis myself if that would speed things up. They have most of the parts, the only question being if Rolf has an extra timing board to put in it. Todd is putting together a set of instructions on how to put the IO chassis together and he said he'd give me a copy tomorrow or Monday. I'm currently planning on assembling it on Monday. At that point I only need 1 more IO chassis from Rolf.
When I asked about the dolphin IO chassis, he said we're not planning on using dolphin connections between the chassis and computer anymore. Apparently there was some long-distance telecon with the Dolphin people and they said the Dolphin IO chassis connection and RFM don't work well together (or something like that; it wasn't very clear from Rolf's description). Anyways, the other style is now made in a fiber-connected version (it wasn't a year ago), so he's ordered one. When I asked why only 1, and what about the IOO computer and chassis, he said that would either require moving the computer/chassis closer or getting another fiber connection (not cheap).
So the current thought I hashed out with Rolf briefly was:
We use one of the thin 1U computers and place it in the 1Y2 rack, to become the IOO machine. This lets us avoid needing a fiber. Megatron becomes the LSC/OAF machine, either staying in 1Y3 or possibly moving to 1Y4 depending on the maximum length of a Dolphin connection, because the LSC and SUS machines are still supposed to be connected via the Dolphin switch, to test that topology.
I'm currently working on an update to my CDS diagram with these changes and will attach it to this post later today.
This is the machine which will be the new x end front end machine. Its IP is 192.168.113.86.
We changed the root and controls passwords to the usual. We modified the controls user's ID to 1001, using "usermod -u 1001 controls" (we had to use the non-RTL kernel to get that command to work).
We changed /etc/fstab to point to /cvs/cds on linux1 rather than some Downs machine. We added a link to /cvs/cds/rtcds in the local /opt directory.
We modified the /etc/rc.d/rc.local file to no longer run /opt/open-mx/sbin/omx_init start, /cvs/cds/geo/target/fb/mx_stream -d scipe12:0 om1, and /cvs/cds/geo/target/fb/mx_stream -d scipe12:0 -e2 -r2 om2. We modified the /usr/bin/setup_shmem.rtl to run only c1x00 c1scx and c1spx.
I also commented out a line: "/bin/rm -f /rtl*"
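As a checklist, the per-machine setup above amounts to the following; the exact file contents are from memory, so verify each against the working c1iscex:

```
usermod -u 1001 controls        # run from the non-RTL kernel
vi /etc/fstab                   # point /cvs/cds at linux1, not the Downs machine
ln -s /cvs/cds/rtcds /opt/rtcds
vi /etc/rc.d/rc.local           # drop the omx_init / mx_stream lines
vi /usr/bin/setup_shmem.rtl     # list only c1x00 c1scx c1spx
```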
The timing slave in the IO chassis on the new X end was not working, with symptoms of: no front "OK" green light, no "PPS" light, the 3.3V test point not working, and the ERROR test point bouncing between 5-6V.
We took the timing slave out of the X end IO chassis and put it into the new Y end IO chassis.
It worked perfectly there. We took the working one from the Y end and put it in the X end IO chassis.
We slowly added cables. First we added power; it worked fine and we saw the green "OK" light. Then we added the 1PPS signal by fiber, and it also worked.
We turned everything off and then added the 40pin IPC cable from the chassis and the infiniband cable from the computer.
When we turned it on, we didn't see the green light.
This means something might be wrong in the computer configuration, not in the timing card; we are now trying to make contact with Alex.
We are comparing the setup of the C1SCX machine and the working C1ISCEX machine.
The end IO chassis have small Trenton boards, which apparently only have 5 usable PCI slots, even though there are 6 on the board. This is because of the way the host interface board is set up and its closeness to the 2nd to last PCI slot.
The PMC to PCIe adapters I was handed by Jay for use with the RFM cards require a 4 pin power connection at the top, which are not available inside the thin 1U computers.
The only solution I can come up with is to swap the PMC to PCIe adapters for the RFM cards with the adapters for some of the already-installed ADCs and DACs, which do not require power directly from the power supply. This should make it possible to mount the RFM card in the computer, at least for the ends. Since the SUS and IOO chassis will have more slots available than needed, their RFM cards can be slotted into the chassis. The SUS one has to go in the chassis since the computer will have the InfiniBand host adapter and a dolphin connector for talking to the LSC machine.
There is still the problem of actually getting the RFM card into the computer, but that should be possible with a little bit of bending of the left side of the computer frame.
I re-routed around the c1lsc machine this morning. I turned the crate off, and disconnected the transmission fiber from c1lsc (which went to the receiver on c1asc). I then took the receiving fiber from c1lsc and plugged it into the receiver on c1asc.
I pulled out the c1lsc computer from the VME crate and pulled out the RFM card, which I needed for the CDS upgrade. I then replaced the lsc card back in the crate and turned it back on. Since there hasn't been a working version of the LSC code on linux1 since I overwrote it with the new CDS lsc code, this shouldn't have any significant impact on the interferometer.
I've confirmed that the RFM network seems to be in a good state (the only red lights on the RFM timing and status medm screen are LSC, ASC, and ETMX). Fast channels can still be seen with dataviewer and fb40m appears to still be happy.
The RFM card has found its new home in the SUS IO Chassis. The short fiber that used to go between c1asc and c1lsc is now on the top shelf of the new 1X3 rack.
Kiwamu and I strung an ethernet cable from the new 1X7 rack to the 1X3 rack. The cable is labeled c1iscex-daq on both ends. This cable will eventually connect c1iscex's second ethernet port to the daq router. However, for today, it plugged into the primary ethernet port and is going to a linksys router. This is the same linksys router we used to firewall megatron.
The idea is to place megatron, c1sus, and c1iscex behind the firewall to prevent any problems with the currently running system while doing RFM network tests.
The way to get into the firewalled sub-network is to ssh into megatron. The router will forward the ssh to megatron. Inside the network, the computers will have the following IPs. Router is 192.168.1.1, megatron is 192.168.1.2, c1sus is 192.168.1.3, and c1iscex is 192.168.1.4.
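Assuming the port forward works as described, reaching a machine inside the subnet is a two-hop ssh; the IPs are from the entry above, while the controls account name is an assumption:

```
ssh controls@megatron           # router forwards ssh to megatron (192.168.1.2)
ssh controls@192.168.1.3        # then hop from megatron to c1sus
```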
I just realized that an unfortunate casualty of this LSC work was the deletion of the slow controls for the LSC, which we still use (some sort of AUX processor). For example, the modulation depth slider for the MC is now in an unknown state.