Is this the correct status? Please directly update this entry.
LO1 [Glued] [Suspended] [Balanced] [Placed In Vacuum] [OSEM Tuned] [Damped]
LO2 [Glued] [Suspended] [Balanced] [Placed In Vacuum] [OSEM Tuned] [Damped]
AS1 [Glued] [Suspended] [Balanced] [Placed In Vacuum] [OSEM Tuned] [Damped]
AS4 [Glued] [Suspended] [Balanced] [Placed In Vacuum] [OSEM Tuned] [Damped]
PR2 [Glued] [Suspended] [Balanced] [Placed In Vacuum] [OSEM Tuned] [Damped]
PR3 [Glued] [Suspended] [Balanced] [Placed In Vacuum] [OSEM Tuned] [Damped]
SR2 [Glued] [Suspended] [Balanced] [Placed In Vacuum] [OSEM Tuned] [Damped]
Last updated: Thu Jan 20 17:16:33 2022
I have further updated my calculation. Please find the results in the attached pdf.
Following is the description of calculations done:
The reflection from the arm cavity is calculated with the simple Fabry-Perot cavity reflection formula, absorbing all round-trip cavity scattering losses (between 50 ppm and 200 ppm) into the ETM transmission loss.
So the effective reflection of the ETM is calculated as
The magnitude and phase of this reflection is plotted in page 1 with respect to different round trip loss and deviation of cavity length from resonance. Note that the arm round trip loss does not affect the sign of the reflection from cavity, at least in the range of values taken here.
The Michelson in the PRFPMI is assumed to be perfectly aligned, so one end of the PRC is taken as the arm cavity reflection calculated above at resonance. The other end of the cavity is modeled as a single mirror whose effective transmission is that of the PRM plus two passes each through PR2 and PR3. The effective reflectivity of the PRM is then calculated as:
Note that the field transmission of the PRM is calculated with the original PRM power transmission value, so the PR2 and PR3 transmission losses do not increase the field transmission of the PRM in our calculations. The field gain inside the PRC is then calculated using the following:
From this, the power recycling cavity gain is calculated as:
The variation of the PRC gain is shown on page 2 with respect to arm cavity round-trip losses and PR2 transmission. Note that a gain value of 40 is calculated for any PR2 transmission below 1000 ppm. The black vertical lines mark the optics whose transmission was measured. If V6-704 is used, the PRC gain would vary between 15 and 10 depending on the arm cavity losses. With the pre-2010 ITM, the PRC gain would vary between 30 and 15.
The LO power when the PRFPMI is locked is calculated by assuming 1 W of input power to the IMC. The IMC is assumed to pass 10% of that power. This power is then multiplied by the PRC gain and transmitted through PR2 to calculate the LO power.
Page 3 shows the result of this calculation. Note that for V6-704 the LO power would be between 35 mW and 15 mW, and for the pre-2010 ITM it would be between 15 mW and 5 mW, depending on the arm cavity losses.
The power available during alignment is simply given by:
If we remove PRM from the input path, we would have sufficient light to work with for both relevant optics.
I have attached the notebook used for these calculations. Please let me know if you find any mistakes in this calculation.
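As a sanity check on the numbers quoted above, the whole chain (arm reflection with lumped round-trip loss, effective PRM, PRC gain, LO power) can be sketched in a few lines. All the mirror transmissions below are illustrative values I have assumed for the sketch, not the measured ones:

```python
import numpy as np

# Illustrative parameters (assumptions, not the measured 40m values)
T_ITM = 0.014      # ITM power transmission
T_ETM = 14e-6      # ETM power transmission
T_PRM = 0.0565     # PRM power transmission (so 1/T_PRM ~ 17.7, as quoted)
T_PR2 = 500e-6     # PR2 power transmission
T_PR3 = 50e-6      # PR3 power transmission

def arm_reflection(L_rt, phi=0.0):
    """Arm cavity field reflectivity, with the round-trip loss L_rt
    absorbed into an effective ETM transmission, as described above."""
    T_etm_eff = T_ETM + L_rt
    r_itm = np.sqrt(1 - T_ITM)
    r_etm = np.sqrt(1 - T_etm_eff)
    return (-r_itm + r_etm * np.exp(1j * phi)) / (1 - r_itm * r_etm * np.exp(1j * phi))

def prc_gain(L_rt):
    """PRC power gain, with the PR2/PR3 losses lumped into an effective
    PRM reflectivity while the field transmission keeps the bare T_PRM."""
    r_arm = abs(arm_reflection(L_rt))            # arm held on resonance
    T_prm_eff = T_PRM + 2 * T_PR2 + 2 * T_PR3    # lumped compound mirror
    r_prm_eff = np.sqrt(1 - T_prm_eff)
    t_prm = np.sqrt(T_PRM)                       # bare field transmission
    g = t_prm / (1 - r_prm_eff * r_arm)          # field gain inside the PRC
    return g ** 2

for L in (50e-6, 200e-6):
    G = prc_gain(L)
    P_LO = 1.0 * 0.1 * G * T_PR2   # 1 W in, 10% through IMC, out through PR2
    print(f"L_rt = {L*1e6:.0f} ppm: PRC gain = {G:.1f}, LO power = {P_LO*1e3:.2f} mW")
```

With these placeholder numbers the 50 ppm case lands near the gain of ~40 quoted above; the LO power of course depends directly on whichever PR2 transmission is plugged in.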
Today I suspended AS1. Anchal helped me with the initial hanging of the optics. Attachments 1,2 show the roll balance and side magnet height. Attachment 3 shows the motion spectra.
The major peaks are at 668 mHz, 821 mHz, and 985 mHz.
For some reason, I was not able to balance the pitch with 2 counterweights as I did with the rest of the thin optics (and AS1 before). Inserting the weights all the way was not enough to bring the reflection up to the iris aperture used for preliminary balancing. I was able to do so with a single counterweight (Attachment 4). I'm afraid something is wrong here, but I couldn't find anything obvious. It is also worth noting that the yaw resonance of 668 mHz differs from the 755 mHz we got on all the other optics. Maybe one or more of the wires are not clamped correctly on the side blocks?
The OSEMs were pushed into the OSEM plate and the plates were adjusted such that the magnets are at the center of the face OSEMs. The wires were clamped and cut from the winches. The SOS is ready for installation.
Also, I added a link to the OSEM assignments spreadsheet to the suspension wiki.
I uploaded some pictures of the PEEK EQ stops, both on the thick and thin optics, to the Google Photos account.
The IMC is not that lossy. The IMC output is supposed to be ~1 W.
The critical coupling condition is G_PRC = 1/T_PRM = 17.7. If we really have L_arm = 50 ppm, we will be very close to critical coupling. Maybe we are OK with such a condition, as our testing time will be much longer in PRMI than in PRFPMI during the first phase. If the arm loss turns out to be higher, we'll be saved by falling into undercoupling.
When the PRC is close to critical coupling (as in the 50 ppm case), we roughly have T_PRC x 2 and T_arm almost equal, so each beam will have ~1/3 of the input power, i.e. ~300 mW. That's probably too much even for the two OMCs (i.e. 4 DCPDs). That's OK; we can reduce the input power by a factor of 3~5.
LO power when PRFPMI is locked is calculated by assuming 1 W of input power to the IMC. The IMC is assumed to pass 10% of the power.
I was wondering whether I should take AS1 down to redo the wire clamping on the side blocks. To be more certain, I took the OpLev spectrum again. Attachments 1, 2, and 3 show three spectra taken at different times.
They all show the same peaks: 744 mHz, 810 mHz, and 1 Hz. So I think something went wrong with yesterday's measurement. I will not take AS1 down for now. We still need to apply some glue to the counterweight.
Turns out, the shifting was likely due to the table level. Because I didn't take care the first time to "zero" the level of the table as I tuned the OSEMs, the installation was bogus. So today I took time to:
a) Shift AS4 close to the center of the table.
b) Use the clean level tool to pick a plane of reference. To do this, I iteratively placed two counterweights (from the ETMX flow bench) at two locations on the breadboard such that the table was nominally balanced in this configuration to some reference plane z0. The counterweight placement is of course temporary; as soon as we make further changes, such as the final placement of the AS4 SOS or the installation of AS1, their positions will need to change to recover z = z0.
c) Install OSEMs until I was happy with the damping. ** Here I noticed the new suspension screens had been misconfigured (probably c1sus2 rebooted and we don't have any BURT), so I quickly restored the input and output matrices.
SUSPENSION STATUS UPDATED HERE
The main issue with the SR2 OSEMs, now that I think of it, was that the BS table was very inclined due to the multiple things we removed (including counterweights). Today the first thing I did was level the BS table by placing some counterweights in the correct positions. I placed the level in two directions right next to SR2 (clamped in its planned place) and centered the bubble.
While doing so, at one point I was trying to reach the far south-west end of the table with the 3x heavy 6" cylindrical counterweight in my hand. The counterweight slipped out of my hand and fell from the table. See the photo in Attachment 1. It went to the bottommost place and is resting on its curved surface.
This counterweight needs to be removed, but one cannot reach it from over the table. To remove it, we'll have to open one of the blank flanges on the south-west end of the BS chamber and retrieve the counterweight from there. We'll ask Chub to help us with this. I'm sorry for the mistake; I'll be more careful with counterweights in the future.
Moving on, I tuned all the SR2 OSEMs. It was fairly simple today since the table was leveled. I closed the chamber with the optic free to move and damped in all degrees of freedom.
The PR2 candidate V6-704/705 mirrors (Qty 2) are now @Downs. Camille picked them up for the measurements.
To identify the mirrors, I labeled them (on the box) as M1 and M2. The HR side was also checked to be the side pointed to by an arrow mark on the barrel; e.g., Attachment 1 shows the HR side up.
AS4 is set to go through a free-swinging test at 10 pm tonight. We have used this script (Git/40m/scripts/SUS/InMatCalc/freeSwing.py) reliably in the past, so we expect no issues; it has an error-catching block that restores all changes at the end of the test or if something goes wrong.
To access the test, on allegra, type:
You can then kill the script if required with Ctrl-C; it will restore all changes while exiting.
SR2 is set to go through a free-swinging test at 3 am tonight. We have used this script (Git/40m/scripts/SUS/InMatCalc/freeSwing.py) reliably in the past, so we expect no issues; it has an error-catching block that restores all changes at the end of the test or if something goes wrong.
The free-swinging test was successful. I ran the input matrix diagonalization code (/opt/rtcds/caltech/c1/Git/40m/scripts/SUS/InMAtCalc/sus_diagonalization.py) on the SR2 free-swinging data collected last night. The logfile and results are stored in the /opt/rtcds/caltech/c1/Git/40m/scripts/SUS/InMAtCalc/SR2 directory. Attachment 1 shows the power spectral density of the DOF-basis data (POS, PIT, YAW, SIDE) before and after the diagonalization. Attachment 2 shows the fitted peaks.
The new matrix was loaded into the SR2 input matrix, and at least it resulted in no control loop oscillations. I'll compare the performance of the loops soon.
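For context, the idea behind the diagonalization is to read off each column of the sensing matrix from the free-swinging data at that DOF's resonance and then invert the result to get the input matrix. The toy sketch below is not the actual sus_diagonalization.py; the frequencies, the mixing matrix, and the single-FFT-bin readout are all made up for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
fs, dur = 256, 64                       # Hz, seconds (toy values)
t = np.arange(0, dur, 1 / fs)

# Toy free-swinging data: each DOF rings at its own resonance (made-up
# frequencies chosen to land on exact FFT bins of the 64 s record)
f_dof = {"POS": 0.75, "PIT": 0.8125, "YAW": 0.9375, "SIDE": 1.0}
dofs = np.array([np.sin(2 * np.pi * f * t) for f in f_dof.values()])

# Unknown sensing matrix mixing the DOFs into the 4 sensor signals
M_true = np.eye(4) + 0.2 * rng.standard_normal((4, 4))
sensors = M_true @ dofs + 0.01 * rng.standard_normal((4, t.size))

# Read off each sensing-matrix column from the FFT bin at that DOF's peak
spec = np.fft.rfft(sensors, axis=1)
freqs = np.fft.rfftfreq(t.size, 1 / fs)
cols = []
for f in f_dof.values():
    b = np.argmin(np.abs(freqs - f))
    col = spec[:, b]
    col = (col / col[np.argmax(np.abs(col))]).real   # fix overall phase/scale
    cols.append(col)
M_est = np.array(cols).T

# The input matrix is the inverse of the estimated sensing matrix
input_matrix = np.linalg.inv(M_est)
recovered = input_matrix @ sensors   # should look like the 4 pure DOF signals
```

Applying the estimated input matrix to the sensor data separates the ringing modes again, which is exactly the "before and after" comparison shown in the PSD attachments.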
Camille@Downs measured the surface of these M1 and M2 using Zygo.
The free-swinging test was successful. I ran the input matrix diagonalization code (/opt/rtcds/caltech/c1/Git/40m/scripts/SUS/InMAtCalc/sus_diagonalization.py) on the AS4 free-swinging data collected last night. The logfile and results are stored in the /opt/rtcds/caltech/c1/Git/40m/scripts/SUS/InMAtCalc/AS4 directory. Attachment 1 shows the power spectral density of the DOF-basis data (POS, PIT, YAW, SIDE) before and after the diagonalization. Attachment 2 shows the fitted peaks.
The new matrix was loaded into the AS4 input matrix, and at least it resulted in no control loop oscillations. I'll compare the performance of the loops soon.
% Seismic BLRMS Monitor
% RA 08-01-26
% 0 for no messages, 1 for debugging
debug_flag = 1;
% ------------ Build channel list
svn import tds https://40m.ligo.caltech.edu/svn/40m/tds --username rana
svn checkout https://40m.ligo.caltech.edu/svn/40m/tds --username rana
# FILTERS FOR ONLINE SYSTEM
# Computer generated file: DO NOT EDIT
# MODULES ULYAW
### ULYAW ###
# SAMPLING ULYAW 65536
See Adhikari eLOG entry: http://nodus.ligo.caltech.edu:8080/AdhikariLab/194
Peter and Koji,
We are constructing a setup for the new 40m CDS using Realtime Code Generator (RCG).
We are trying to put simulated suspensions and test suspension controllers on different processors of megatron in order to create a virtual control feedback loop. For now, these CDS processes communicate with each other via shared memory, not via reflective memory.
After some struggle, and with tremendous help from Alex, we succeeded in establishing communication between the two processes.
We also succeeded in making the ADC/DAC cards recognized by megatron, using the PCI Express extension card replaced by Jay. (This card runs multiple PCI-X cards in the I/O chassis.)
- Establish a firewall between the 40m network and megatron (Remember this)
- Make DTT and other tools available at megatron
- Try virtual feedback control loops and characterize the performance
- Enable reflective memory functionalities on megatron
- Construct a hybrid system by the old/new CDSs
- Controllability tests using an interferometer
Note on MATLAB/SIMULINK
o Each cdsIPC should have a correct shared memory address spaced by 8 bytes. (i.e. 0x1000, 0x1008, 0x1010, ...)
Note on MEDM
o At the initial state, garbage (e.g. NaN) can be running all around the feedback loops. It is invisible, as MEDM shows it as "0.0000".
To escape from this state, we needed to disconnect all the feedback, e.g. by turning off the filters.
Note on I/O chassis
o We needed to pull all of the power plugs from megatron and the I/O chassis once so that we could activate the PCIe-to-PCI-X extension card. When this succeeds, all (~30) LEDs turn green.
I took the output of the OMC DAC and plugged it directly into an OMC ADC channel to see if I could isolate the OMC DAC weirdness I'd been seeing. It looks like it may have something to do with DTT specifically.
Attachment 1 is a DTT transfer function of a BNC cable and some connectors (plus of course the AI and AA filters in the OMC system). It looks like this on both linux and solaris.
Attachment 2 is a transfer function using sweepTDS (in mDV), which uses TDS tools as the driver for interfacing with testpoints and DAQ channels.
Attachment 3 is a triggered time series, taken with DTT, of the same channels as used in the transfer functions, during a transfer function. I think this shows that the problem lies not with awg or tpman, but with how DTT is computing transfer functions.
I've tried soft reboots of the c1omc, which didn't work. Since the TDS version appears to work, I suspect the problem may actually be with DTT.
For the CDS upgrade preparation, I placed and moved the following items at rack 1Y9:
Placed 1Y9-12 ADC to DB44/37 Adapter LIGO D080397
Placed 1Y9-14 DAC to IDC Adapter LIGO D080303
Moved the ethernet switch from 1Y9-16 to 1Y9-24
Wiki has also been updated.
Joe, Peter, Jay, Koji, Rana
We put the new CDS stuff at Y end 1Y9 rack.
I've added the side coil to the model controller and plant, and the oplev quad to the model controller and plant. After the megatron wipe, the code now lives in /home/controls/cds/advLigo/src/epics/simLink. The files are mdc.mdl (controller) and mdp.mdl (plant). These RCG modules go at 16K with no decimation (no_oversampling=1 in the cdsParameters block) so hopefully will work with the old (16K) timing.
I've loaded many of the filters; there are some left to do. These filters are simply copied from the current front end.
Next I will port to the SUS module (which talks to the IO chassis). This means channel names will match the current system, which will be important when we plug in the RFM.
The .mdl code for the mdc and mdp development modules is finished. These modules need more filters and testing. Probably the most interesting piece left is putting in the gains and filters for the oplev model in mdp. It might be OK to simply ignore oplevs and first test damping of the real optic without them. However, it shouldn't be hard to get decent numbers for the oplevs, add them to the mdp (plant) module, and make sure the mdc/mdp pair is stable. In mdp, the oplev path starts with the SUSPIT and SUSYAW signals. Kakeru recently completed calibration of the oplevs from oplev counts to radians (elog 1403). From that work we should be able to find the conversion factors from PIT and YAW to oplev counts without making any new measurements. (The measurements wouldn't be hard either, if we can't simply pull numbers from a document.) These factors can be added to mdp as appropriate gains.
I've also copied mdc to a new module, which I've named "sas" to address fears of channel name collisions in the short term, and replaced the cpu-to-cpu connections with ADC and DAC connections. sas can be the guy for the phase I ETMY test. When we're happy with mdc/mdp, we hopefully can take the mdc filter file from chans, replace all the "MDC" strings with "SAS", and use it.
We started the test of the new CDS system at ETMY.
The plan is as follows:
We will do the ETMY test from 9:30 to 15:00 at ETMY from Nov 12~17. This disables ETMY during that period.
From 15:00 each day, we will restore the ETMY configuration and confirm that ETMY works properly.
Today we connected megatron to the existing AA/AI modules via designated I/F boxes. The status of the test was already reported by the other entry.
During the test, c1iscey was kept running. We disabled the ETMY actuation by WatchDog. We did not touch the RFM network.
After the test we disconnected our cables and restored the connection to ICS110B and the AI/AA boards.
The WatchDog switches were released.
The lock of the ETMY was confirmed. The full interferometer was aligned one by one. Left in the full configuration with LA=off.
Tobin & Keith pointed out in the LLO ilog that there was a bug in the autoburt.pl script.
I edited the autoburt.pl script so that it will work from now until 2099 (by which time we may no longer be using this version of perl):
nodus:autoburt>diff autoburt.pl~ autoburt.pl
< $thisyear = "200".$timestamp;
> $thisyear = "20".$timestamp;
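The failure mode is just a string concatenation that assumed a one-digit year field. A minimal illustration (the timestamp format here is my guess, not the actual autoburt format):

```python
# Hypothetical timestamp whose first field is the (now two-digit) year:
timestamp = "10_Jan_04"            # i.e. 2010
year_field = timestamp.split("_")[0]

old_thisyear = "200" + year_field  # "20010" -- the buggy concatenation
new_thisyear = "20" + year_field   # "2010"  -- the fix, good until 2099
```

The old prefix worked only while the year field was a single digit ("9" in 2009), which is why it broke exactly at New Year's.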
The autoburt has not been working ever since 11PM on New Year's eve.
I ran it by hand and it seems to run fine. I noticed along the way that it was running on op340m (our old Sun Blade 150 machine). autoburt.pl was pointing at /cvs/cds/bin/perl,
which is Perl v5.0. I changed it to use '/usr/bin/env', so it now points at '/usr/bin/perl', which is Perl 5.8. It runs fine with the new perl:
op340m:scripts>time perl /cvs/cds/scripts/autoburt.pl >> /cvs/cds/caltech/logs/autoburtlog.log
5.37u 6.29s 2:13.41 8.7%
Also ran correctly, via cron, at 9AM.
This is mostly a reminder to myself about what I discussed with Jay and Alex this morning.
The big black IO chassis are "almost" done, except for the missing parts. We have 2 Dolphin, 1 large, and 1 small I/O chassis due to us. One Dolphin is effectively done and is sitting in the test stand. However, 2 are missing timing boards, and 3 are missing the boards necessary for the connection to the computer. The parts were ordered a long time ago, but it's possible they were "sucked to one of the sites" by Rolf (remember, this is according to Jay). They need to either track them down at Downs (possibly they're floating around and were just misplaced in the recent move), get them sent back from the sites, or order new ones (I was told by one person that the supplier notoriously takes a long time, sometimes up to 6 weeks; I don't know whether that's an exaggeration). Other than the missing parts, they still need to wire up the fans and install new momentary power switches (apparently the Dolphin boards want momentary on/off buttons). Otherwise, they're done.
We are due another CPU; we just need to figure out which one it was in the test stand.
6 more BIO boards are done. When I went over the plans with Jay, we realized we needed 7 more, not 6, so they're putting another one together. Some ADC/DAC interface boards are done. I promised to do another count here to determine how many we have and how many we need, and then report back to Jay before I steal the ones which are complete. Unfortunately, he did not have a new drawing for the ASC/vertex wiring, so we don't have a solid count of what's needed for them. I'll be taking a look at the old drawings and also at what we physically have.
I did get Jay to place the new LSC wiring diagram into the DCC (apparently the old one was never put in, or we simply couldn't find it). It's located at: https://dcc.ligo.org/cgi-bin/private/DocDB/ShowDocument?docid=10985
I talked briefly with Alex, reminded him of feature requests and added a new one:
1) Single part representing a matrix of filter banks
2) Automatic generation of Simulated shared memory locations and an overall on/off switch for ADC/DACs
3) Individual excitation and test point pieces (as opposed to having to use a full filter bank). He says these already exist, so when I do the CVS checkout, I'll see if they work.
I also asked where the adl default files lived, and he pointed me at ~/cds/advLigo/src/epics/util/
In that directory are FILTER.adl, GDS_TP.adl, and MONITOR.adl; those are the templates. We also discovered that the timing signal at some point was changed from something like SYS-DCU_ID to FEC-DCU_ID, so I basically just need to modify the .adl files to fix the time stamp channel as well. I need to do a CVS checkout, put the fixes in, then commit back to the CVS. Hopefully I can do that sometime today.
I also brought over 9 Contec DO-32L-PE boards, which are PCIe isolated digital output boards that go into the IO chassis. These have been placed above the 2 new computers, behind the 1Y6 rack.
Alberto and I went to Downs and acquired the 3rd 4-processor (dual-core, so 8 cores total) computer. We also retrieved 6 BIO interface boards (thin blue front boxes), 4 DAC interface boards, and 1 ADC interface board. The tops have not been put on yet, but we have the tops and a set of screws for them. For the moment, these have been placed behind the 1Y6 rack and under the table behind the 1Y5 rack.
The 6 BIO boards have LIGO travelers associated with them: SN LIGO-S1000217 through SN LIGO-S1000222.
I've added a new page to the wiki which describes the current naming scheme for the .mdl model files used by the real-time code generator. Note that these model names do not necessarily have to be the names of the channels contained within; it's still possible to make all suspension-related channels start with C1:SUS-, for example. I'm also allocating 1024 8-byte channels of shared memory address space for each controller and each simulated plant.
The wiki page is here
Name suggestions, other front end models that are needed long term (HEPI is listed for example, even though we don't have it here, since in the long run we'd like to port the simulated plant work to the sites) are all welcome.
A while back we requested a feature for the RCG code where a single file would define a memory location's name as well as its explicit hex address. Alex told me it had been implemented in the latest code in SVN. After being unable to find said file, I went back and talked to him and Rolf. Rolf said it existed but had not been checked into the SVN yet.
I now have a copy of that file, called G1.ipc. It is supposed to live in /cvs/cds/caltech/chans/ipc/ , so I created the ipc directory there. The G1.ipc file is actually for a geo install, so we'll eventually make a C1.ipc file.
The first couple lines look like:
There are also sections using ipcType IPC:
Effectively, the ipcNum tells it which memory location to use, starting with 0x2000 (at least that's how I'm interpreting it). Every entry of a given ipcType has a different ipcNum, which seems to be correlated with its description, at least early on; later in the file many desc= lines repeat, which I think means people were copy/pasting and got tired of editing the file. Once I get a C1.ipc file going, it should make our .mdl files much more understandable, at least for communication between models. It also looks like it somehow interacts with the ADCs/DACs via ipcType PCI, although I'm hoping to get a full intro on how to use the file tomorrow from Rolf and Alex.
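For reference, entries in these .ipc files look roughly like the following. The field names are from memory of the RCG convention and the channel name is made up, so treat this as illustrative only:

```ini
[default]
ipcType=SHMEM
ipcRate=16384
ipcNum=0
desc=default entry

[C1:LSC-EXAMPLE_SIG]
ipcType=SHMEM
ipcRate=16384
ipcHost=c1lsc
ipcNum=1
desc=hypothetical channel sent between two models over shared memory
```

Each block names one inter-process channel; the sender and receiver models both refer to it by the block name, and the ipcNum picks the slot in the shared memory region.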
I've added a diagram to the wiki under IFO Upgrade 2009-2010 -> New CDS -> Diagram section: Joe_CDS_Plan.pdf (the .svg file I used to create it is also there). This was mostly an exercise in learning Inkscape, as well as putting out a diagram which lists control and model names and where they're running.
A direct link is: CDS_Plan.pdf
Talked with Jay briefly today. Apparently there are 3 IO chassis currently on the test stand at Downs undergoing testing (or at least they were when Alex and Rolf were around). They are being tested to determine which slots refer to which ADC, among other things. Apparently the numbering scheme isn't as simple as 0 on the left, then 1, 2, 3, 4, etc. As Rolf and Alex are away this week, it is unlikely we'll get them before they return.
Two other chassis (one more than the last time I talked with Jay) are still missing the cards for communicating between the computer and the IO chassis, although Gary thinks I may have taken them with me in a box. I've looked through all the CDS stuff I know of here at the 40m and have not seen the cards. I'll check in with him tomorrow to figure out whether (and when) I have the cards needed.
I've updated the LSC and IFO models that Rana created with new shared memory locations. I've used the C1:IFO- prefix for the ifo.mdl file outputs, which in turn are read by the lsc.mdl file. The LSC outputs, being LSC control signals, use C1:LSC-. Optics positions would presumably come from the associated suspension model; I am currently using SUP, SPX, and SPY for the suspension plant models (suspension vertex, suspension X end, suspension Y end).
I've updated the web view of these models on nodus. They can be viewed at: https://nodus.ligo.caltech.edu:30889/FE/
I've also created a C1.ipc file in /cvs/cds/caltech/chans/ipc which assigns ipcNum to each of these new channels in shared memory.
I got around to actually trying to build the LSC and IFO models on megatron. It turns out "ifo" can't be used as a model name and breaks the build. I have a feeling it has something to do with the find-and-replace routines (ifo is used for the C1, H1, etc. replacements throughout the code). If you change the model name to something like ifa, it builds fine. This does mean we need a new name for the ifo model.
I also learned that the model likes to have the cdsIPCx memory locations terminated on the inputs when the part is used in an input role (i.e., it brings the channel into the model). However, when the same part is used in an output role (i.e., it transmits from the model to some other model), terminating the output side gives errors when you try to make.
It's using the C1.ipc file (in /cvs/cds/caltech/chans/ipc/) just fine. If you have missing memory locations in the C1.ipc file (i.e., you forgot to define something), it gives a readable error message at compile time, which is good. The file seems to be parsed properly, so the era of writing "0x20fc" for block names is officially over.
I suggest "ITF" for the model name.
I'm currently working on a set of scripts which parse a "template" .mdl file, replace certain keywords with others, and save the result to a new .mdl file.
For example, you pass it the "template" file scx.mdl (suspension controller ETMX) and the keyword ETMX, followed by an output list: scy.mdl ETMY, bs.mdl BS, itmx.mdl ITMX, itmy.mdl ITMY, prm.mdl PRM, srm.mdl SRM. It produces these new files with the keyword replaced, plus a few other minor tweaks to make the new files work (gds_node, specific_cpu, etc.). You can then do a couple of copy-paste actions to produce a combined sus.mdl file with all the BS, ITM, PRM, and SRM controls (there might be a way to handle this better so it automatically merges into a single file, but I'd have to do something fancy with the positioning of the modules; something to look into).
I also have plans for a script which is passed an .mdl file and updates the C1.ipc file, adding any new channels and incrementing the ipcNum appropriately. So when you make a change you want to propagate to all the suspensions, you run the two scripts and have an up-to-date copy of the memory locations, with no additional typing required.
Similar scripts could be written for the DAQ screens as well, so as to have all the suspension screens look the same after changing one set.
So I finished writing a script which takes an .ipc file (the one which defines channel names and numbers for the RCG code generator), parses it, checks for duplicate channel names and ipcNums, then parses an .mdl file looking for channel names, and outputs a new .ipc file with all the new channels added (without modifying existing channels).
The script is written in python, and for the moment can be found in /home/controls/advLigoRTS/src/epics/simLink/parse_mdl.py
I still need to add all the nice command-line interface stuff, but the basic core works. It already found an error in my previous .ipc file, where I had apparently used channel number 21 twice.
Right now it's hard-coded to read in C1.ipc and spy.mdl and to output to H1.ipc, but I should have that fixed tonight.
This IPC stuff looks like a really nice improvement to the CDS.
Please keep the wiki updated so that we have the latest procedures and scripts for building the models.
1) What is c1asc doing? What is ascaux used for? Where do the cables labeled "C1:ASC_QPD" in the 1X2 rack really go?
2) Put the 4600 machine (megatron) in the 1Y3 (away from the analog electronics) This can be used as an OAF/IO machine. We need a dolphin fiber link from this machine to the IO chassis which will presumably be in 1Y1, 1Y2 (we do not currently have this fiber at the 40m, although I think Rolf said something about having one).
3) Merge the PSL and IOO VME crates in 1Y1/1Y2 to make room for the IO chassis.
4) Put the LSC and SUS machines into 1Y4 and/or 1Y5 along with the SUS IO chassis. The dolphin switch would also go here.
5) Figure out space in 1X3 for the LSC chassis. The most likely option is pulling the asc or ascaux stuff, assuming it's not really being used.
6) Are we going to move the OMC computer out from under the beam tube and into an actual rack? If so, where?
Rolf will likely be back Friday, when we aim to start working on the "New" Y end and possibly the 1X3 rack for the LSC chassis.
I modified the /etc/rc.d/rc.local file on megatron, removing a bunch of the old test module names and adding the new lsc and lsp modules, as well as a couple of planned suspension models and plants, to shared memory so that they'll work. Basically, I'm trying to move into the era of working on the actual models we're going to use long-term, as opposed to continually tweaking "test" models.
The last line in the file is now: /usr/bin/setup_shmem.rtl lsc lsp spy scy spx scx sus sup&
I removed mdp mdc mon mem grc grp aaa tst tmt.
I modified /cvs/cds/caltech/target/fb and changed the line "set controller_dcu=10" to "set controller_dcu=13" (where 13 is the lsc dcu_id number).
I also changed the set gds_server line from having 10 and 11 to 13 and 14 (lsc and lsp).
The file /cvs/cds/caltech/fb/master was modified to use C1LSC.ini and C1LSP.ini, as well as tpchn_C2.par (LSC) and tpchn_C3.par (LSP)
testpoint.par in /cvs/cds/caltech/target/gds/param was modified to use C-node1 and C-node2 (1 less then the gds_node_id for lsc and lsp respectively).
Note all the values of gds_node_id, dcu_id, and so forth are recorded at http://lhocds.ligo-wa.caltech.edu:8000/40m/Electronics/Existing_RCG_DCUID_and_gds_ids
I had a chat with Alex this morning and discovered that dcu_ids 13, 14, 15, and 16 are currently reserved and should not be used. I was told 9-12 and 17-26 are fine to use. I pointed out that we will eventually have more modules than that. His response was that he is currently working on the framebuilder code and "modernizing" it, and that those restrictions will hopefully be lifted in the future, although he isn't certain at this time what the real maximum gds_id number is (he was only willing to vouch for up to 26, although the OMC seems to be working while set to 30).
Alex also suggested running an IOP module to provide timing (since we are using the adcSlave=1 option in the models). Apparently these are the x00.mdl, x01.mdl, and x11.mdl files in the /home/control/cds/advLigoRTS/src/epics/simLink/ directory. I saved x00.mdl as io1.mdl (I didn't want to use io0, as it's a pain to differentiate between a zero and an 'O'). This new IOP uses gds_node=1, dcu_id=9. I modified the appropriate files to include it.
I modified /etc/rc.d/rc.local and added io1 to the shmem line. I modified /cvs/cds/caltech/target/fb/daqdrc to use dcu_id 9 as the controller (this is the new IOP model's dcu_id number). In that same directory, I modified the file 'master' by adding /cvs/cds/caltech/chans/daq/C1IO1.ini and uncommenting the tpchn_C1 line. I modified testpoint.par in /cvs/cds/caltech/target/gds/param to include C-node0, and modified the prognum for lsc and lsp to 0x31001003 and 0x31001005.
So I started the 3 processes with startio1, startlsc, and startlsp, then went to the fb directory and started the framebuilder. However, the lsc.mdl model is still having issues, although lsp and io1 seem to be working. At this point I just need to track down what is fundamentally different between lsc and lsp and correct it in the lsc model. I'm hoping it's not related to the fact that we actually had a previous lsc front end and some legacy stuff is getting in the way. One thing I can test is changing the name and seeing if that runs.
I'm currently in the process of tracking down what legacy code is interfering with the new lsc model.
It turns out that if you change the name of the lsc file to something else (say scx, as a quick test), it runs fine. In fact, the lsc and scx GDS_TP screens both work in that case (since they're looking at the same channels). As one would expect, running them both at the same time causes problems; note to self, make sure the other one is killed first. It does mean the lsc code gets loaded partway but doesn't seem to communicate over EPICS or with the other models. However, I don't know what existing code is interfering. I'm currently going through the target directories and so forth.